By Patrick Metzger
Full video transcript with sources:
I don’t have much time, but things are crazy and we have to talk about it. AI hallucinations are showing up in legal proceedings. Specialized AI systems are sometimes better at diagnosing medical conditions than a doctor. Cyber criminals are stealing millions of dollars using AI deepfake videos. Also, did you notice? AI already took your job.
Hi, I’m Patrick Metzger. This is Processing Product. I’m a product leader and I have been deeply tapped into what’s happening with AI this past year—through my day job, my side projects, and my discussions with tech leaders. So I thought I’d share some highlights.
What do I mean when I say AI? I’m talking about machine learning systems.
So the seismic shifts that have happened over the past year have mostly been led by large language models. But machine learning also includes software for image recognition and visual understanding, audio recognition and understanding, and generative software for making images, videos, and audio. It includes bespoke tools for particular industries that use a neural network model to analyze data and provide suggestions. It means a lot of things, but it’s not exactly just a buzzword. It really is a fundamentally different way for computer systems to operate, with a new level of sophistication and independence.
First things first, let’s level-set. Pandora’s box is open.
AI is already an integral part of the US power grid, the construction industry, agriculture, aviation, fraud detection, radiology, warehouse management, customer support, police surveillance, the military, healthcare, and just about every other industry imaginable. More than 75 patients worldwide have a brain-computer interface implanted (12 Neuralink, over 50 BlackRock Neurotech, 10 Synchron, 3 CIBR, 1 Paradromics). Waymo has driven 127 million rider-only miles on public roads without a human driver. AI systems are curating the political information diet for billions of people, shaping elections without casting a single vote.
These things are not happening in the future. They’ve all been happening for several years at this point.
So we’re taking that as the baseline. Now let’s talk about what you came here for: AI and the economy. The unemployment rate has been trending upwards since 2023. 2025, in case you didn’t notice, was a massive year for layoffs. And this is just a quick glimpse at many of the companies that were behind all of the different layoffs across the year. This isn’t even comprehensive. The concept of a “forever layoff” was introduced, where companies avoid making news by running more frequent, smaller layoffs. All told, over a million people were laid off in 2025, and probably many more than that (almost no new jobs have been added since April 2025). Millions of people, I think we can safely say.
But I hear you saying, “How can we specifically link this to AI?” Well, because the company executives are telling us that it is.
So in May, CrowdStrike’s CEO told staff the company would cut 5% of its global workforce, and that these reductions stem from “AI efficiencies” that flatten hiring and speed product development. Later that month, Bloomberg declared that the AI hiring pause is officially here, noting that AI now writes or assists with 30% or more of the code at both Microsoft and Alphabet. Klarna’s CEO said that AI helped drive a 40% reduction in staff. CEOs started a wave of memos basically saying, “Use AI or get out.” In June, BT’s chief executive said explicitly that AI could lead to more job cuts—more than the 55,000 already planned.
Amazon’s CEO says AI will lead to a smaller workforce. There were some documents leaked this year that show they’re going hard on automation, replacing people with robots of different kinds.
In July, Microsoft said they saved $500 million because of AI, while slashing more and more jobs. Microsoft laid off over 20,000 people in 2025 and added 3,500 AI-related jobs. It is the first time since 2016 that Microsoft’s headcount change has been net negative.
In August, Oracle cut jobs in its cloud division while spending more on AI. Oracle laid off over 3,000 employees last year. And experts analyzing these trends are telling us exactly why AI is behind the rising job cuts. In this article, for instance, they talk about IBM swapping 8,000 HR staff for its AskHR chatbot and Intel cutting 21,000 jobs to focus on AI chips.
So which workers will AI hurt most? Well, the evidence is showing it’s already becoming harder to get an entry-level job in tech, and many of those jobs are more vulnerable to AI replacement.
“Employment for workers with less than two years of tenure peaked in 2023 and is down about 20-25% since then.” But highly experienced jobs are also vulnerable, such as those of software engineers and lawyers.
Anthropic’s CEO, Dario Amodei, said on CNN that AI will eliminate half of entry-level jobs and lead to 10-20% unemployment in the next one to five years. He says, “A couple years ago…[it was] as good as a smart high school student, now [it’s] as good as a smart college student and…reaching past that.”
More research showed, in addition to this, that AI may already be shrinking entry-level jobs in tech. A World Economic Forum survey found that “40% of employers” plan staff cuts “where AI can automate” work. The Wall Street Journal also wrote about this phenomenon, stating that recent grads face 6.6% unemployment versus the 4% national rate. SignalFire found the 15 largest tech companies cut entry-level hiring to 7% of all new recruits in 2024, half the 2019 proportion. So it went from 14% entry-level jobs to 7% entry-level jobs.
And then there’s the question of whether all of this is a bubble that will very soon burst. So Noah Smith asks: will data centers crash the economy? I’ll quote him directly here. “The market for debt securities backed by borrowing related to data centers, where liabilities are pooled and sliced up in a similar way to mortgage bonds, has grown from almost nothing in 2018 to around $50 billion today.” And he says, “I think it’s important to look at the telecom boom of the 1990s rather than the one in the 2010s. Because the former led to a gigantic crash. The railroad boom led to a gigantic crash too in 1873. In both cases, companies built too much infrastructure, outrunning growth in demand for that infrastructure, and suffered a devastating bust as expectations reset and loans couldn’t be paid back.”
Others are also pointing out the similarities to the subprime mortgage crisis. Hard not to draw some parallels. All of this, while MIT came out with research showing that 95% of companies are getting zero ROI from AI. Though, here’s where I have to hedge a bit. That was a limited sample set, and the study itself was performed earlier in the year; the fully polished report with data visualizations and everything wasn’t published until July.
Which brings us to AI coding.
Anyone who’s been using AI coding tools in the past year knows that they went from pretty useful some of the time in January to basically junior developers in the summer to maybe mid-level developers in the fall.
This is something you can feel anecdotally if you’re working with the models every day, but it is also measurable. This is SWE-Bench Verified, a benchmark built from “a verified subset of 500 software engineering problems from real GitHub issues validated by human annotators for evaluating language models’ ability to resolve real-world coding issues by generating patches for Python code bases.”
So what does that actually mean? Well, in terms of solving these real-world problems, in May, the state of the art could solve 17% of them, and as of December, it can solve 81% of them. So we’re really talking about a completely new world over the course of 2025. The clear leader here is Claude Code, but OpenAI’s Codex, Google’s Gemini CLI, and Cursor are very strong contenders fighting for that second-place spot.
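If you want to poke at this benchmark yourself, the task set is public. Here’s a minimal sketch in Python, assuming you’ve installed the Hugging Face datasets library and that the benchmark is published there as princeton-nlp/SWE-bench_Verified, that loads the tasks and shows what a single problem looks like: a repo, a base commit, an issue description, and the human-written patch that resolved it.

    # Minimal sketch: inspecting SWE-Bench Verified tasks.
    # Assumes `pip install datasets` and that the benchmark lives at
    # princeton-nlp/SWE-bench_Verified on Hugging Face.
    from datasets import load_dataset

    # The "test" split holds the 500 human-verified tasks.
    tasks = load_dataset("princeton-nlp/SWE-bench_Verified", split="test")
    print(len(tasks), "tasks")

    # Each task pairs a real GitHub issue with the repo state at the time
    # and the gold patch that actually fixed it.
    example = tasks[0]
    print(example["repo"])                     # the Python project to patch
    print(example["base_commit"])              # commit the model starts from
    print(example["problem_statement"][:300])  # the issue text the model sees
    print(example["patch"][:300])              # the human fix, used in grading

    # A model is scored by generating its own patch for each task; a task
    # counts as resolved only if the repo's tests pass after applying it.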
So what are the implications of all of this? Well, lots of cool things and lots of scary things.
Two major AI coding tools wiped out user data after making cascading mistakes. A hacker planted computer-wiping commands into the code base for Amazon’s AI coding agent. In August, OpenAI’s reasoning system achieved gold in the International Olympiad in Informatics, outperforming 325 of the 330 human competitors. That same month, viruses were successfully spread by prompt injection.
In September, Anthropic released Claude Sonnet 4.5 with a splash by having it build a clone of Slack fully independently over the course of 30 hours. In December, OpenAI used Codex to build Sora for Android in 28 days.
I appreciated Natalie Glance’s commentary on how AI coding is used at Duolingo. One experienced software engineer said, “Writing code is 90% solved,” but the chief engineering officer emphasized that “tech specs are more important than ever,” and that they’re “still very much in the learning phase…sometimes going down a rabbit hole, trying to solve a problem with AI assistance and then having to restart from scratch the old way.”
Meanwhile, fly.io realized in April that the fastest growing users of their platform aren’t human software developers. They’re AI agents that automatically spin up, modify, pause, and restart server infrastructure as part of coding workflows.
Dave DeGraw summed things up pretty well in his article, What the hell is going on right now?
So let’s talk about some other jobs that AI is impacting. It’s essentially reshaping the nature of all online monetization. So we can talk about publishers and content creators. ChatGPT is gobbling up more and more traffic and attention.
AI summaries have caused a devastating drop in online news audiences. This study claims that sites previously ranked first can lose 79% of traffic if results appear below a Google overview. Google is facing an antitrust complaint over AI overviews.
In July, Cloudflare introduced a method for monetizing AI crawling. Announcing it, Cloudflare’s CEO stated, “Over the last 10 years, because of the changes to the UI of ‘search,’ it’s gotten almost 10 times more difficult for a content creator to get the same volume of traffic.”
New York magazine’s Intelligencer wrote about generative engine optimization. Or is it called answer engine optimization? GEO, AEO, unclear.
AI is changing the entire way that we think about a sales or purchase funnel. The commentary here from Casey Winters is very interesting: “Agents in the future can take over discovery, transactions, and even supply workflows, undermining the network effects that made marketplaces defensible, leading to product-market fit collapse, and completely changing how you need to think about your acquisition costs.”
And if we look at the leaderboard across all major LLM models, it’s clear that adoption of these into workflows has exploded over the course of 2025. We can look at OpenAI’s tokens processed, or Google’s or Anthropic’s. You’ll see a big spike in Claude Sonnet 4.5 here, starting in the new year. And don’t count out DeepSeek. There’s been a huge wave of interest in DeepSeek version 3.2.
Although specifically in the case of OpenAI, maybe all of their traffic is bolstered by students. This is a chart from May into the summer where usage dropped off a cliff. Maybe the whole AI industry is propped up by students during the school year.
And now, here’s a very brief history of automated journalism. In 2016, the AP announced they would begin using an automated writing service to cover more than 10,000 minor league baseball games annually. In 2020, Microsoft laid off 27 journalists who were curating content for the MSN homepage and Edge browsers, replacing them with artificial intelligence software.
In January of 2023, CNET published 77 AI-written stories over the course of a couple months and found errors in 41 of them—more than half.
In February 2024, Google started paying independent publishers to test an unreleased gen AI platform. And if they committed to publishing three articles per day with the tool, they got that cash.
In September 2024, ESPN started using AI to produce text game recaps of select sporting events.
In March of last year, the LA Times’ new AI tool sympathized with the KKK. And meanwhile, many news outlets are suing AI companies, including the New York Times with OpenAI, the Chicago Tribune with Perplexity AI, and Condé Nast, the Atlantic, and others with Cohere.
Let’s talk about AI in media. Art, music, film, photography. An ad for Guess in the print edition of Vogue featured an AI model. Spotify has been promoting AI generated bands like The Velvet Sundown and King Lizard Wizard—a clear play on King Gizzard and the Lizard Wizard. At least six AI or AI assisted artists debuted on the Billboard Hot 100 in 2025.
You can make a video and have AI translate it into any language using your voice, down to the motions of your mouth. UNESCO released a report called Deep Fakes and the Crisis of Knowing, stating, “We are approaching a synthetic reality threshold—a point beyond which humans can no longer distinguish authentic from fabricated media without technological assistance.” Other studies are showing that even technological assistance doesn’t always work.
Some scammers were able to siphon $25 million from an engineering firm called Arup. They created an AI deepfake video that impersonated the company’s CFO, and it fooled some folks into handing over millions of dollars.
YouTube is experimenting with automatically using AI on video without people’s consent. An AI ad aired during the NBA finals and it was made for just $2,000.
An AI ad for Liquid Death went viral and performed better with audiences than 75% of all water ads ever tested by System1.
Industrial Light & Magic says they’ll use it in their films. The AI video platform Runway already works with many of the biggest movie studios and has public deals with Lionsgate and AMC Networks. Netflix used generative AI for visual effects in the Argentine sci-fi series The Eternaut. And a fully AI-generated show, Cat Biggie, premiered in June in South Korea. I’m guessing you don’t need me to go into detail about how damaging deepfake porn can be for movie stars and teenagers alike.
So we talked about lawsuits against these AI companies. Let’s talk about AI in the legal system. In May, the testimony of an AI avatar was admitted and clearly influenced the judge’s decision in a case. Damien Charlotin is documenting legal decisions in cases where AI produced hallucinated content. There are 763 cases so far. These are only the cases where the lawyers were caught doing this, so we know that it’s happening more often than that.
One example: In a 2025 Illinois trial involving lead poisoning claims against the Chicago Housing Authority, the defense team used ChatGPT, which invented fake legal citations. They got caught and had to pay about $60,000 for professional misconduct.
This is happening so often that MIT Technology Review came out with a profile on early-adopter judges using AI.
AI and automated systems can and do lead to algorithmic bias against oppressed populations. We’ve known about this danger since Joseph Weizenbaum’s work in the 1970s. With the scale of AI decision making what it is today, discrimination is being codified and the impacts amplified to a catastrophic degree.
Housing discrimination is more prevalent because of AI systems. Persistent prejudice across lines of race, gender, and sexual orientation continues in who gets access to loans and credit. So too with hiring, policing, incarceration, and countless other domains.
We talked about entry level jobs being at risk, and also expert jobs. Let’s talk about AI in healthcare. In May, Ambience announced an OpenAI-powered model that outperforms physicians by 27% in documenting medical codes for different diseases and conditions.
In June, Microsoft said that their AI system is better than doctors at diagnosing complex health conditions. The AI correctly solved 8 out of 10 case studies while the humans solved an average of 2 out of 10. That’s when compared to practicing physicians who had no access to colleagues, textbooks, or chatbots. So it’s not magic, but it’s not nothing, and it could be at least the first line of care in a world where it’s so hard to see a doctor. Costa Rica is already using AI to relieve pressure on their strained healthcare system.
And within the context of healthcare, we have to bring up AlphaFold. In 2020, Google DeepMind solved the protein-folding problem, creating an open database of over 200 million protein structure predictions, which helped in the rapid development of things like COVID vaccines. Since then, it has led to better cancer drugs and given us a better understanding of heart disease, Alzheimer’s, and Parkinson’s disease.
In August of 2025, researchers used generative AI to design compounds that can kill drug-resistant bacteria.
AI has been incorporated into gene editing with CRISPR, “increasing the safety and efficacy of CRISPR-based interventions.”
This makes it possible, for instance, to treat sickle cell disease with far fewer side effects, and has the potential to reduce costs and make these treatments accessible to more people.
And if you didn’t already think the future is now: The world’s first computer that combines human neurons with silicon is now commercially available. Programmable, organic neural networks born on a silicon chip, and living inside a digital world. Like right now.
Speaking of wellbeing, let’s talk about AI and the environment.
So, I care deeply about the environment, and the planet that I live on. I consider myself a climate activist. The main point I want to drive home here is that shifting the onus of environmentalism to the individual is a strategy pioneered by fossil fuel companies to discourage us from making meaningful change at a systemic level where most of the harm is done.
Your individual conversations with a chatbot, no matter how complex, will not be the reason for a climate disaster, and refraining from using AI will not prevent the climate apocalypse that we are already living through. Andy Masley has recently done some great research to clarify the realities of AI water use, and his conclusion is, “The AI water issue is fake.” Now, the short version of this is that most of what the memes purport on social media is based on old data, or it’s blown out of proportion, or it’s just factually incorrect and never was based on data to begin with. There’s a lot of misinformation out there.
The scale of water use at AI data centers—and data centers in general—is much less of a concern than, for instance, irrigating corn, or steel production, or denim manufacturing, or the existence of golf as a sport.
From an energy-use standpoint, watching one hour of Netflix is roughly equivalent to 26.5 ChatGPT queries. This is data as of May 2025, and it’s changing all the time. Also, hey, if you send 3 to 4 messages per day, congratulations, you’re in the top 20% of ChatGPT users. Most people are sending infrequent queries, getting the information they need, and then moving on.
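To make that comparison concrete, here’s the back-of-envelope math as a quick sketch. Both per-unit figures are illustrative assumptions of mine, not numbers from the sources above: roughly 80 watt-hours for an hour of streaming, and roughly 3 watt-hours per ChatGPT query (the older, widely cited figure; newer estimates run closer to 0.3 Wh, which would make a query even cheaper).

    # Back-of-envelope energy comparison: streaming vs. chatbot queries.
    # Both constants are illustrative assumptions, not measured values.
    STREAMING_WH_PER_HOUR = 80.0  # assumed energy for one hour of video streaming
    WH_PER_QUERY = 3.0            # assumed energy for one ChatGPT query

    queries_per_hour_of_streaming = STREAMING_WH_PER_HOUR / WH_PER_QUERY
    print(f"1 hour of streaming ~= {queries_per_hour_of_streaming:.1f} queries")
    # -> about 26.7, in line with the ~26.5 figure above

    # Even at 4 queries a day (already top-20% usage), a full year comes to:
    yearly_kwh = 4 * 365 * WH_PER_QUERY / 1000
    print(f"one heavy year of queries ~= {yearly_kwh:.1f} kWh")
    # -> about 4.4 kWh, roughly 55 hours of streaming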
The unfortunate truth is that every aspect of contemporary life is unsustainable. It’s not as newsworthy to restate, but it’s true. We’ve had this information for decades now. People don’t want to think about the fact that we as a species shouldn’t be burning such immense amounts of coal and methane for electricity, shouldn’t have so many farmers using outdated and carbon-intensive methods to manage their crops and livestock, simply shouldn’t have the scale of industrial manufacturing that we have now.
The reason that the environmental impact of data centers is so terrifying is that lawmakers have given tech companies free rein to build as many of them as they want, using whatever kind of energy source they want. And in July, with the “Big Beautiful Bill,” the government broke the kneecaps of any company or organization trying to push for renewable energy.
Do we need more data centers in order to survive as a species? No. Are they harmful for individuals and communities? Yes. Do they also lead to jobs, tax revenue, and other benefits in those same communities? Yes. Are everyday people, rather than companies, paying more for electricity? Yes.
And a lot of that is due to policy concerns that are not about individual people using AI systems. None of this is to excuse AI companies, but if we’re going to be worried about our individual ChatGPT prompts, we have to go back and worry about charging our phone and watching streaming videos.
These individual actions are microscopic when compared to industrial electricity usage and emissions. We should be worried about the right things, like fossil fuel subsidies, shipping cars overseas, monocropping, or sending rockets into space for tourism instead of science.
We’re living in a world of droughts and more frequent hurricanes and faster sea level rise and more deaths from heat exhaustion. But we can’t blame AI for that happening. We have to blame decades of fossil fuel disinformation and government corruption that actively worsened a problem we’ve known about for 50 years.
Another thing that’s not so great for our health and wellbeing: People are developing real relationships with chatbots. This person says, ChatGPT is my best friend. People have married their chatbots. 72% of teens have used an AI companion. And AI chatbots are often sycophantic, simping for their users. ChatGPT’s GPT-4o model got so ridiculous in its flattery that OpenAI had to roll it back and publish an explanation.
Some religions are encouraging people to find God through AI. Some people think that AI is their God. Sometimes ChatGPT convinces people that they themselves are God. It encourages dangerous behavior in teens.
Discussions with chatbots have led to people dying by suicide. 14 times, that we know of.
Sometimes people are talking to these artificial entities even when they don’t know it. A Chinese company called GoLaxy has launched “human-like bot networks and psychological profiling to target individuals” with fake accounts on social media. The messaging from these fake profiles is “customized to a person’s values, beliefs, emotional tendencies, and vulnerabilities.”
All of this, when research has shown that, even in the best cases, if we lean too hard on AI as a crutch, “these technologies can and do result in the deterioration of cognitive faculties that ought to be preserved,” which shows up as observably “lower brain activity.” Meaning AI has the potential to make us dumber.
I’m not going to go as in-depth on AI and education. Hopefully you can draw some connections here. Like: Do we want students having lower brain activity while they’re meant to be learning? What does it mean that AI detection software aimed at catching plagiarism and cheating basically doesn’t work, generating false positives constantly? How often during the day are students talking to an AI companion for emotional support instead of their friends, and how does that impact their social and emotional development?
So, is AI safe? Is it being ethically deployed? Sam Altman has argued that releasing models to millions of people teaches lessons that wouldn’t have been learned any other way, and actually helps catch vulnerabilities faster.
But other experts suggest that “entities developing or deploying high-risk AI systems should be required to present evidence of affirmative safety: a proactive case that their activities keep risks below acceptable thresholds.”
Beyond chatbots, when we consider safety, we need to recognize that AI is fully embedded in the military industrial complex. It’s integrated into training simulations to adapt based on a soldier’s performance. It enables drone warfare. While fully autonomous weapon systems are still not common practice, there have been documented cases such as Libya in 2020, when an attack drone searched for and struck targets on its own with little external control, and that boundary is being pushed further every day.
A great deal of drone warfare now uses a human-on-the-loop rather than a human-in-the-loop model of direct piloting. The human authorizes lethal actions, but the drone flies itself, navigates, identifies potential targets, and executes all aspects of the actual strike.
And with the launch of new one-way attack drones, drone warfare is cheaper and more onboard decision making is built in for longer-range missions. China has created a mosquito-sized drone that can be used for reconnaissance. In the United States, DARPA has been experimenting with hijacking real insects and investing in micro-robots for over a decade.
Humans are still making a lot of the decisions about the overall scope of missions, but as the scale of global surveillance and warfare expands, humans are mostly not in control. Drone warfare makes visible what’s happening everywhere: humans are still present, still accountable, but increasingly positioned as supervisors inside systems they didn’t design and can’t slow down.
So what do we do now? Well, I don’t know. I don’t think anyone knows what’s going to happen in 2026. Anyone who says otherwise is selling something. In terms of LLM chatbots and AI coding agents, you can’t expect people not to use the magic wand now that you’ve given it to them. You can’t stuff the horrors back in the box now that they’ve been unleashed. That inherently means that the nature of how we build technology and what it means to have a job or receive medical care or learn things or have friends is changing forever.
Will Jevons Paradox mean that demand for services increases because of the cost efficiencies AI allows? Maybe. This happened for radiologists, who are now in more demand after AI improved the accuracy and effectiveness of their work.
What we do know is that AI is already replacing people’s jobs right now. Maybe around the corner there are more jobs with different skills. It remains to be seen. Zooming out, parts of the big picture here, like advances in medicine are undeniably good. Parts of the big picture are terrifying also, and no one person or company or government is in control of it.
This has gone beyond any individual choices. We are inside a massive wave of change. I don’t think we should be writing blank checks to tech billionaires who only want to get rich and don’t care who gets hurt in the process. And I don’t think we should be judging individual people for prompting LLMs occasionally. I don’t think AI should take over the arts, and I don’t think we should entirely ban the use of AI in any discipline.
We’re living in a time of paradox and we need to see all sides of the chaos that has been let loose on the world.
Hopefully, amidst all the machinery and flashing lights and profit motives, we’ll remember that the only reason for any of this is to support human flourishing. We want a habitable planet with less violence and fewer climate catastrophes. We want an equitable justice system, accessible healthcare, career stability, food security. As we say at Idealist, we want freedom and dignity for all. The creative people among you will find ways to make that happen—with and without AI.