Look, I get it. You’ve been hearing about the AI revolution for years now. AI was supposed to take our jobs, restructure the economy, and usher in a golden age of prosperity…or maybe destroy civilization as we know it. And yet…here you are. Still employed. Still driving to work every day, going to the same meetings, and spending too much on groceries. And I don’t know about you, but I haven’t seen any of those promised flying cars lately either.

The hype machine promised us the future, and what we got was chatbots that confidently make stuff up and bad art. Companies threw billions at AI initiatives, and the vast majority of them failed, spectacularly.

It might as well be a fact: most people still aren’t using AI in any meaningful way. The economy hasn’t restructured. Jobs haven’t disappeared. Daily life looks basically the same. And the public discourse about AI? It’s about video slop, ChatGPT making dumb mistakes, and how many water bottles AI data centers use. The most transformative technology in human history is being judged by its dumbest applications.

So the narrative writes itself: AI was overpromised. It’s not as powerful as they said. We can all relax.

I think this is completely, dangerously wrong. And I’d like to explain why.

The Packaging Problem

Here’s the thing about those corporate AI failures. I’ve read the case studies. The implementation efforts were, and I cannot stress this enough, embarrassingly bad. We’re talking about companies that chose substandard models, rolled them out with insufficient training, and then threw up their hands when Karen from accounting didn’t use ChatGPT to revolutionize her spreadsheet workflow.

This is not evidence that AI doesn’t work. This is evidence that most organizations are bad at adopting new technologies. Which, if you’ve ever watched your company attempt literally anything new, should not surprise you in the slightest.

And do you know what most of those companies were actually trying to do with AI? They were shoehorning chatbots in where no one wanted them. They were bolting “AI-powered” recommendations onto their existing products. They were giving every employee a ChatGPT license and measuring how many people logged in.

Don’t get me wrong, chatbots have their utility. But if we are going to talk about modern AI, we need to be talking about agents.

A chatbot answers your question about Skillshare’s refund policy. An agent scrapes 11,845 courses, cross-references them against your reading history, builds you a selection website, downloads your picks, and constructs a custom video viewer, all while you stir pasta. An agent doesn’t answer questions. It does things. It writes code, runs it, hits an error, fixes the error, and keeps going until the job is done.
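
That loop — propose an action, run it, observe the result, treat errors as feedback — is the whole trick. Here’s a toy sketch of the control flow, just to make it concrete. The `model` function is a stand-in for a real LLM call, not any particular API:

```python
import subprocess

def run_agent(task, model, max_steps=20):
    """Minimal agent loop: the model proposes shell commands, we execute
    them, and feed the output (including errors) back in as context,
    until the model declares the task done or we hit the step limit."""
    history = [f"TASK: {task}"]
    for _ in range(max_steps):
        action = model(history)   # the model sees everything so far
        if action == "DONE":
            break
        result = subprocess.run(action, shell=True,
                                capture_output=True, text=True)
        # Errors aren't fatal -- they're just more context for the next step.
        history.append(f"$ {action}\n{result.stdout}{result.stderr}")
    return history
```

Real agents add planning, file editing, and safety checks on top, but the core is exactly this: a loop that keeps going until the job is done.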

I don’t think most people realize this. If the talk I hear online is anything to go by, the average person is still busy being unimpressed by ChatGPT’s mistakes and tired of Google’s mediocre AI search summaries. Most people haven’t even heard of an agentic workflow.

The gap between what AI can do and what most people are doing with it is not a technology problem. It’s a packaging problem. My premise is that AI would already be taking all our jobs if the average person understood how to get the most out of it. The reason your neighbor isn’t using AI to automate half their job isn’t because the AI isn’t good enough, it’s because nobody has put it in front of them in a way that makes them realize how wildly easy it is. The interfaces haven’t caught up. The cultural understanding hasn’t caught up. We hear AI and we still think of mediocre chatbots.

Software Developer in Your Pocket

I’m a software engineer. Supposedly, I write code for a living. And yet I haven’t personally written a line of code in over 18 months. AI has completely changed my life. These days I open the Claude app on my phone, describe out loud what I want to build, and an AI agent on my computer writes it for me. I’m not sitting at a desk programming. I’m chatting while making dinner.

Let me give an example. Last Tuesday my free one-month trial of Skillshare was about to end, and I realized I hadn’t finished the course I’d signed up for. But I could just download the videos for later, right? So I opened Claude Code, my current agent of choice, and said: “Hey, download the last few videos of this course. Here’s the link.” A few minutes later, done.

But I had a bit of FOMO. I thought, if it’s that easy to download, are there any other courses on here I’d like to take? So I said, “Hey, go scrape all the descriptions of every course on the entire site so I can look at them.” It was 11,845 courses. Way too many to look at. How do I explain to the agent what kinds of courses I would be interested in? So I said, “Hey, here’s the link to my blog and all my book reviews, make a profile on my interests, and use that profile to choose which courses I’ll like.” So it looked at the eleven thousand courses and chose the ones that I would personally find the most interesting.

Sadly, it was still over 300 courses. Even if I read all the descriptions, how do I easily tell Claude which ones I want? So I said, “Hey, build me a quick website so I can scan through the final courses and click yes on my favorites.” I ended up picking 32 courses, which I had it download in parallel. But now that I had them, how was I going to watch them? So I asked it to build me a video viewing website. I didn’t describe the viewer. I didn’t spec a layout, didn’t mention colors or fonts or navigation. I just said “build me a viewer.” Three minutes later, I had a viewer.
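
To give you a feel for what the agent actually writes when I say “download them in parallel”: something like the sketch below, which uses only Python’s standard library. The URLs and filenames here are illustrative, not the real Skillshare endpoints:

```python
from concurrent.futures import ThreadPoolExecutor
import urllib.request

def download_one(url, dest):
    """Fetch a single file and write it to disk."""
    with urllib.request.urlopen(url) as response, open(dest, "wb") as out:
        out.write(response.read())

def download_all(urls, max_workers=8):
    """Download every URL concurrently. Downloads are I/O-bound, so a
    thread pool gives near-linear speedup up to the connection limit."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(download_one, url, f"course_{i}.mp4")
                   for i, url in enumerate(urls)]
        for future in futures:
            future.result()   # re-raise any download error
```

Fifteen lines. The point isn’t that this code is hard — it’s that I never had to think about it at all.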

And only then did I stop. Only after I had a personalized course catalog, a recommendation engine, a selection website, 32 downloaded courses, and a custom viewing app did I pause and ask myself: should I actually be doing this? Maybe I should just pay for Skillshare like a normal person.

I think we can all agree that the right thing to do is pay the company for the service they provide. But let’s set aside the ethics of scraping for a moment and think about the escalation. The incredible ease of creation. Each step was so quick and easy. Download the last few videos of a course I was already taking. Well, while I’m here, are there others I’d like? Well, if I’m going to browse, I might as well have AI pick the best ones. Well, if I’m picking, I’ll need a way to review them. Well, if I’m downloading a few, I might as well download them all at once. Well, if I have the files, I’ll need a way to watch them. Each step took minutes. In fact, the whole thing took less than 45 minutes start to finish. I went from “let me grab the last couple videos” to “I have a personalized replica of their entire platform” before I’d even had time to think about whether I should.

If you’ve never coded before, I need you to understand how fucking incredible this is. Three years ago this task would have been months of work and a PhD thesis on recommendation engines. I need you to understand: I didn’t write any of that code. Not a single line. Sure, I’m a software engineer by trade, but my technical background was completely irrelevant. I didn’t debug anything. I didn’t architect anything. I described things. That’s it. I said “build me a viewer” the same way you’d tell a contractor “build me a deck.”

We now live in an age where if you can describe what you want, an agent can build it for you.

This Isn’t a One-Off

But so what if agents can code now? Do you really have anything that you personally want coded for you?

Well, I’m a little biased. I’ve been coding my whole life, literally since before I hit puberty. I learned to code because I hate doing things by hand. I’m endlessly trying to make my life faster and easier. There’s a magic in being able to sit down at a computer and make anything you want. But you used to have to know an insane amount of technical details to actually code anything. For my previous example, you would have to know about CSS and HTML and JavaScript and React and AWS and Cloudflare and parallel processing and session management and local cookie storage and SQL databases and natural language processing…I mean the list is endless. But we now live in an age where the LLM already knows all that stuff.

My coding abilities used to be fundamentally limited by both my knowledge and my time. This is no longer the case, not just for me, but for anyone with a Claude Code subscription. The knowledge required to build software has effectively collapsed to zero.

If you’ve never built software before, you might struggle to imagine the unfathomable power of being able to code literally anything your heart desires. That lack of imagination is the only thing standing between you and being completely awestruck. Here’s what the last six months of my life have looked like:

I didn’t like any of the note-taking apps on the market, so over a weekend I built a personalized one. A full Android app with voice recognition that syncs to GitHub, where an AI agent automatically organizes my notes. OAuth, lock-screen capture, the works. I have never built an Android app in my life. Months of work in a coding language I don’t know, done in a weekend.

I wanted to control my camera gimbal remotely. I know nothing about electronics. So I had AI help me reverse engineer the gimbal’s serial protocol with an oscilloscope, tell me what hardware to buy, write the firmware, and build a web app to control the whole thing over Wi-Fi. Motorized keyframe animation with spline interpolation, controlled from a browser. It used to be you paid a company $1,000 for a product that does this. I bought maybe $100 in parts and talked to Claude for a couple afternoons.

I wanted better book recommendations. I had AI read all of my Goodreads reviews and build a profile of my taste, then scrape a Reddit thread with 4,000 book recommendations and cross-reference them. Since I started reading AI-recommended books instead of picking my own, my average rating has gone from 3.6 to 4.2. The machine literally knows my taste in books better than I do.

I built myself a research agent I can reuse. When I point it at a topic, it doesn’t just Google it and call it a day. It plans an initial round of searches to get a feel for the topic, logging every finding with citations, then it identifies follow-up questions it didn’t know to ask before, and uses those to plan the next round of searches. Each cycle builds on what the last one uncovered. It finishes by writing a full synthesis report in about 15-20 minutes. I often spawn half a dozen of them in parallel before I even start working on a new project. By the time I’m ready to start, I have six finished research reports waiting for me, backed by pages of fact-checkable citations. Sometimes I don’t even read the reports, I just give them to the next AI so it does a better job implementing whatever project I’m building.

I casually built a website to compare image compression formats, a password strength calculator that simulates every known attack strategy in parallel, a tool that identifies which of the six editions of Darwin’s Origin of Species a text snippet comes from, and a tool that analyzes high-speed video footage to measure the actual shutter speeds of vintage film cameras. Two years ago each of these would have been weeks or even months of work for a professional developer. Today? Each took less than an afternoon.

And it’s not just me. A friend of mine with zero engineering background spent a month building the food recommendation app she’d always dreamed of. Another friend built an AI-based system to help buy their first house. Another used it to study thousands of water quality reports and build an interactive dashboard to hold municipal governments accountable.

I used to proudly post every project I finished on my blog, because building things was exciting and rare enough to be worth writing about. I haven’t posted in months. Not because I stopped building, but because I’m building things so fast I can’t keep up with writing about them.

Do you think I sat down and carefully recalled all of these examples for you? Of course not. I asked Claude to go through my computer and make a compilation of my recent projects. Even the act of listing what AI built for me was done by AI.

Now Multiply By Everyone

I’ve been talking a lot about me.

I’m a software engineer. You might be tempted to write all of this off as “well sure, a tech guy figured out how to use a tech tool.” But remember what I told you: my technical background was irrelevant. I didn’t write code. I described things. The only skill involved was wanting something and being able to say it out loud.

Your mom could have done what I did last Tuesday. Not figuratively. Not with hand-holding. If she can say “download these videos and build me something to watch them,” she can have exactly what I have. The entire history of software development, the languages, the frameworks, the years of training, just got compressed into a single skill: describing what you want.

Now I want you to imagine that world. Not a world where I can do this. A world where everyone can do this.

Imagine every person on earth being able to casually scrape Skillshare’s entire catalog in 45 minutes. Not because they’re hackers. Not because they’re tech-savvy. Because they said “hey, download all of this for me” and their phone did it.

Now stop thinking about Skillshare and start thinking about every business, every system, every institution that relies on the fact that most people can’t build software.

SaaS companies charge you $50 a month for software that took a team of engineers years to build. That pricing model works because you can’t build it yourself. But what happens when you can? When anyone can say “build me a project management app that works exactly the way I want” and have it running by lunch? You won’t pay for a generic tool that sort of fits your workflow when you can have a custom one that fits it perfectly.

Digital content platforms survive because most people can’t bypass their paywalls, their DRM, their access controls. A streaming service has maybe a couple dozen engineers working on copy protection. The world has billions of people who want their content. Every defensive measure is a moving target, and you’re being chased by a planet. Defense was already losing this war when the only weapon was BitTorrent. Now every person on earth is about to get their own personal software engineer.

Middlemen exist because most people can’t do the research themselves. Remember my research agent? The one that reads sources, follows threads, and writes cited reports in 20 minutes? That’s a junior analyst at a consulting firm, except I run six of them in parallel and they’re done before my coffee. Regulatory complexity protects incumbents because most people can’t navigate it. Legal barriers work because most people can’t build the workaround.

The entire modern economy is built on friction. It’s built on the assumption that hard things are hard, that skilled work requires skills, that building software requires engineers. Strip that assumption away and you don’t just disrupt a few industries. You remove a load-bearing wall from the structure of how markets work.

And that’s just the people who aren’t trying to cause harm.

You Don’t Even Need to Wait

Everything I just described, the collapse of SaaS, the death of DRM, the slow-motion disintegration of friction-based business models, that’s the world where the packaging catches up and everyone figures it out. That’s what happens only after tons of people start adopting the technology. That’s coming, but it’s clearly not here yet.

Here’s what should really keep you up at night: you don’t need to wait for mass adoption. The world is already dangerous.

Throughout human history, individual output had a hard ceiling. Even the most brilliant, hardworking person on the planet was bounded by their own hours, their own hands, their own focus. A genius could outperform their peers by what, 5x? 10x? Whatever the number, humans were bounded by the number of hours in a day and the number of hands on their body.

AI obliterates that ceiling. One person can now spawn a thousand agents. Which means the most dangerous person on earth is no longer just dangerous — they’re a thousand of themselves.

And that means we’ve been measuring the threat wrong.

When people evaluate whether AI is a threat, they almost always look at general adoption. They look at the aggregate: are most people using it effectively? Are most companies seeing returns? Is the economy restructuring? And when the answers come back “no, not really,” they feel reassured.

This is the wrong metric. It is catastrophically, almost comically, the wrong metric.

We don’t measure the danger of nuclear weapons by asking whether the average citizen has a nuke in their garage. We don’t measure the danger of bioweapons by asking if the average college chem lab has anthrax. The question was never “can the average person wield this?” The question is “can the most dangerous person wield this?”

And the answer is already yes.

I used AI to download some courses and build a book recommendation engine. Cute. But the same capability, the ability to spawn agents that write code, find vulnerabilities, scrape data, and build tools autonomously, looks very different in different hands. The person who uses it to find and exploit security vulnerabilities in banking software. The person who builds custom tools to automate financial fraud at scale. The person who reverse-engineers proprietary systems or builds weapons that previously required state-level resources. These aren’t hypotheticals. This is what the technology can do today, in a world where most people haven’t even figured out how to use it for email.

Recently, Anthropic built a frontier model so powerful that they decided not to release it to the public. Instead, they did a closed release to major software companies and open source foundations, so those companies could patch their security vulnerabilities before the general public got access to a tool that could find and exploit them. When I shared this with friends, the response was exactly what you’d expect: “Yeah, sure, but AI companies always say their stuff is great.”

That reaction — that reflexive dismissal — is exactly what this entire essay is about. The “AI underwhelm” narrative has become so dominant that people now dismiss concrete, specific danger signals because they’ve been primed to believe AI is all hype. The boy who cried wolf is a cautionary tale about a boy. But the wolf was real.

We Have to Talk

AI works. It is awesome and it is terrifying. I have never had more fun in my entire life than I have had this year building anything I could dream up, but I have also never been more worried about the future. We need to recognize that just because most people online have never heard of an agentic workflow and the limit of most companies’ creativity is “let’s shove a chatbot on our website” does not mean that AI does not have world-changing implications.

AI is already phenomenal, most people just haven’t learned the best ways to use it yet. AI is already dangerous, and general adoption is not the correct threat metric. While the internet is busy tricking ChatGPT with riddles about car washes, individuals are quietly using these tools to do things that used to require entire engineering teams. Some of these people want to change the world, and some of them will succeed.

There are a lot of conversations we need to be having about all of this. Will AI change the world for the better, or will it ruin it? Do we need to regulate it, and how is that even possible? What happens to the economy when thought work isn’t just replaced, but is controlled by the billionaires with the most money to spend on tokens? How do we use this to accelerate scientific research, cure diseases, and solve problems that have stumped us for generations? How do we use it to make our own lives happier and more fulfilling? What does meaningful work even look like in a world where anyone can build anything?

These are huge questions. They deserve our best thinking.

But we can’t meaningfully have any of these conversations while most of the country is still going around acting like mediocre chatbots represent the state of modern AI. How do you explain to your friends how amazing agents are when they blindly hate LLMs because of an Instagram post about water bottles? How do we talk about economic disruption when most people’s experience of AI is using the free version of ChatGPT to badly write an email?

We can’t have a serious conversation about the most powerful technology in human history while half the room is still convinced it’s a toy.