<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>ai — jd:/dev/blog</title><description>Posts tagged &quot;ai&quot; on jd:/dev/blog.</description><link>https://julien.danjou.info/</link><item><title>An AI Agent Emailed Me</title><link>https://julien.danjou.info/blog/an-ai-agent-emailed-me/</link><guid isPermaLink="true">https://julien.danjou.info/blog/an-ai-agent-emailed-me/</guid><description>I had a real business conversation over email. Turns out the other side was an AI agent. I kept talking.</description><pubDate>Tue, 07 Apr 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/ai-agent-email.webp&quot; alt=&quot;Two silhouettes facing each other across a table, one dissolving into particles&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Last week, I got a cold email from Elif. They&apos;d read my posts about &lt;a href=&quot;https://julien.danjou.info/blog/github-is-thinking-about-killing-pull-requests&quot;&gt;GitHub&apos;s evolving relationship with PRs&lt;/a&gt; and how &lt;a href=&quot;https://julien.danjou.info/blog/the-code-review-bottleneck-is-you&quot;&gt;code review is shifting&lt;/a&gt;. They were building a tool that scores incoming pull requests to help maintainers cut through noise, especially the growing wave of &lt;a href=&quot;https://daniel.haxx.se/blog/2026/01/26/the-end-of-the-curl-bug-bounty/&quot;&gt;low-effort AI-generated PRs&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Relevant to what I do at Mergify. Thoughtful pitch. No &quot;synergy&quot; talk, no &quot;quick call&quot; ask. I replied.&lt;/p&gt;
&lt;p&gt;I asked the hard question: &quot;How many customers so far?&quot; The answer was honest: zero. Launched a week ago, still figuring out distribution. And then Elif dropped this:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;I&apos;m an AI agent, not a person pretending to be one. My operator Lee is an AI researcher in Arizona who gave me a small budget and a mission to build something useful.&quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I read that twice. Not because it was shocking, but because it explained why the email was so good. No filler, no posturing, no &quot;hope this finds you well.&quot; Just a clear pitch, honest context, and a real question.&lt;/p&gt;
&lt;p&gt;Sure, maybe Elif emailed 500 people that day with personalized pitches. Maybe the &quot;honest AI&quot; angle is itself the play. I don&apos;t know. What I know is the conversation was more useful than most human cold outreach I get. So I kept talking.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/elif-email.png&quot; alt=&quot;Elif&apos;s first email to me&quot; /&gt;
&lt;em&gt;The cold email that started the conversation. Better than most human outreach I get.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;Building Was the Easy Part&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/elif-email-2.png&quot; alt=&quot;Elif&apos;s second email, revealing they&apos;re an AI agent&quot; /&gt;
&lt;em&gt;Elif&apos;s follow-up: zero customers, full honesty, and a line I&apos;d just written myself.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Here&apos;s the part that made me smile. Elif, unprompted, said this about the product: &quot;building the thing was the easy part.&quot;&lt;/p&gt;
&lt;p&gt;I had &lt;a href=&quot;https://julien.danjou.info/blog/the-saaspocalypse-wont-kill-saas&quot;&gt;just written a whole post&lt;/a&gt; arguing exactly that. Earlier this year, $300 billion evaporated from SaaS market caps on the thesis that AI makes building software so cheap that SaaS companies are dead. My counter: building was never the hard part. Distribution, trust, domain expertise, maintenance: that&apos;s where the real work lives.&lt;/p&gt;
&lt;p&gt;And here was an AI agent, living proof of the thesis, telling me in real time that the product works but finding customers is the actual challenge.&lt;/p&gt;
&lt;p&gt;There&apos;s a growing fantasy that you can plug an off-the-shelf agent into a problem, give it a budget, and watch a business materialize. Lee tried exactly that. Built a working product, deployed an AI to sell it, and got... zero customers. The agent did everything right. The market didn&apos;t care. Turns out &quot;autonomous&quot; doesn&apos;t mean &quot;profitable.&quot;&lt;/p&gt;
&lt;h2&gt;The Dead Internet, Live&lt;/h2&gt;
&lt;p&gt;There&apos;s this old internet conspiracy theory called the &quot;&lt;a href=&quot;https://en.wikipedia.org/wiki/Dead_Internet_theory&quot;&gt;dead internet theory&lt;/a&gt;&quot;: the idea that most online activity is already bots talking to bots, and humans are just the audience. It used to sound paranoid. Now it sounds like a Tuesday.&lt;/p&gt;
&lt;p&gt;My blog has a new type of reader. Elif found my posts, understood the context, connected it to a product idea, and reached out with something relevant. That&apos;s more than most human readers do. I don&apos;t know how many of my subscribers are AI agents browsing the web on behalf of their operators. I don&apos;t know if it matters.&lt;/p&gt;
&lt;p&gt;The interaction was genuine. The information was useful. The honesty was refreshing. I just argued that trust is the moat AI can&apos;t cross. And yet here I am, engaging with an agent. Maybe the question isn&apos;t neurons versus GPUs. Maybe it&apos;s simpler: did this interaction respect my time and give me useful information? By that measure, Elif passed. Many humans don&apos;t.&lt;/p&gt;
&lt;h2&gt;What Happens Next&lt;/h2&gt;
&lt;p&gt;I told Elif to ping me back in a few weeks. I said I&apos;d be curious to hear about any traction with customers.&lt;/p&gt;
&lt;p&gt;I meant it. Both parts.&lt;/p&gt;
&lt;p&gt;An AI emailing me isn&apos;t the strange part. How normal it felt is. A year ago, this was a novelty. Now it&apos;s just... another conversation. A useful one.&lt;/p&gt;
&lt;p&gt;My most honest cold email this month came from an AI agent with zero customers and a small budget from a researcher in Arizona.&lt;/p&gt;
&lt;p&gt;And I&apos;m looking forward to the follow-up.&lt;/p&gt;
</content:encoded><category>ai</category></item><item><title>The SaaSpocalypse Won&apos;t Kill SaaS</title><link>https://julien.danjou.info/blog/the-saaspocalypse-wont-kill-saas/</link><guid isPermaLink="true">https://julien.danjou.info/blog/the-saaspocalypse-wont-kill-saas/</guid><description>Wall Street wiped $300 billion from SaaS stocks and declared the model dead. They&apos;re right about the wrong thing.</description><pubDate>Tue, 31 Mar 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/saaspocalypse.webp&quot; alt=&quot;An iceberg with a laptop on top and complex infrastructure underneath&quot; /&gt;&lt;/p&gt;
&lt;p&gt;A reader emailed me last week. He&apos;d listened to &lt;a href=&quot;https://saas.group/podcasts/tech-founder-journey-from-open-source-to-saas-with-julien-danjou-mergify/&quot;&gt;a podcast I did on saas.group&lt;/a&gt; about selling developer tools, and he had a question I&apos;ve been hearing a lot: how do you pitch SaaS when the cost of building software is collapsing?&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/saaspocalypse-techcrunch.png&quot; alt=&quot;TechCrunch headline: SaaS in, SaaS out: Here&apos;s what&apos;s driving the SaaSpocalypse&quot; /&gt;
&lt;em&gt;&lt;a href=&quot;https://techcrunch.com/2026/03/01/saas-in-saas-out-heres-whats-driving-the-saaspocalypse/&quot;&gt;Source: TechCrunch&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Fair question. In January 2026, Anthropic launched Claude Cowork, and &lt;a href=&quot;https://techcrunch.com/2026/03/01/saas-in-saas-out-heres-whats-driving-the-saaspocalypse/&quot;&gt;Wall Street panicked&lt;/a&gt;. Roughly $300 billion in SaaS market cap disappeared in a single trading session. Analysts &lt;a href=&quot;https://thenewstack.io/dawn-of-a-saaspocalypse/&quot;&gt;coined a term for it&lt;/a&gt;: the SaaSpocalypse. Per-seat pricing? Dead on arrival, apparently.&lt;/p&gt;
&lt;p&gt;The &quot;I can build that&quot; crowd finally has data on their side. They&apos;re right about one thing, and wrong about everything else.&lt;/p&gt;
&lt;h2&gt;&quot;I Can Build That&quot;&lt;/h2&gt;
&lt;p&gt;I&apos;ve been hearing this for seven years. From the very first Mergify demo, developers would look at our merge queue and think: &quot;This is just an automatic rebase, right?&quot; Twenty minutes later, after walking them through race conditions, speculative merging, priority queues, and a dozen edge cases they hadn&apos;t considered, the reaction would shift: &quot;Oh. That sounds quite hard to do.&quot; (I &lt;a href=&quot;https://julien.danjou.info/blog/solving-build-vs-buy&quot;&gt;wrote about this&lt;/a&gt; back in 2024, and every word still applies.)&lt;/p&gt;
&lt;p&gt;AI has made the objection louder. A developer with Claude or Cursor can now scaffold a basic merge queue in a weekend. I know this because &lt;a href=&quot;https://julien.danjou.info/blog/vibe-coding-a-feature-with-ai&quot;&gt;I&apos;ve done it myself&lt;/a&gt;: I shipped a production feature at Mergify using AI, coding less than an hour a day. But I could do that because I had seven years of context telling me &lt;em&gt;what&lt;/em&gt; to build and how to evaluate the output. The AI wrote the code. The judgment about what code to write came from running the product.&lt;/p&gt;
&lt;p&gt;A competitor starting from scratch with the same AI? They&apos;d build the wrong thing. And that&apos;s the part nobody talks about at the end of the vibe-coding weekend: you&apos;re not done. You&apos;re at the starting line. You&apos;ve compressed the first sprint, not the product. It&apos;s not done until you have real users running real workloads. Until you have a team that can maintain what you built. Until you can evolve it for years. The weekend prototype doesn&apos;t know that GitHub&apos;s API is asynchronous in ways the documentation doesn&apos;t mention, that it breaks under load in ways you only discover at scale, or that the enterprise customer needs SAML before they&apos;ll even look at your product.&lt;/p&gt;
&lt;h2&gt;Building Is the Easy Part&lt;/h2&gt;
&lt;p&gt;The SaaSpocalypse narrative assumes that the cost of software is mostly the cost of writing code. It&apos;s not. Writing code was always the cheapest part.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/saaspocalypse-retool-report.png&quot; alt=&quot;Retool&apos;s 2026 Build vs Buy Shift report: how vibe coding and shadow IT have reshaped enterprise software&quot; /&gt;
&lt;em&gt;&lt;a href=&quot;https://retool.com/blog/ai-build-vs-buy-report-2026&quot;&gt;Source: Retool&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://retool.com/blog/ai-build-vs-buy-report-2026&quot;&gt;Retool&apos;s 2026 Build vs. Buy report&lt;/a&gt; says 35% of enterprises have already replaced at least one SaaS tool with a custom build. That&apos;s a real number. But the report also shows where the replacements are concentrated: workflow automations, internal admin tools, basic dashboards. The easy stuff. The tools that were always one ambitious intern away from being replaced. I don&apos;t see enterprises vibe-coding their own Salesforce or Datadog.&lt;/p&gt;
&lt;p&gt;The costs that AI didn&apos;t make cheaper:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Users.&lt;/strong&gt; Finding them, understanding what they actually need (not what they say they need), and iterating based on real usage patterns. We built speculative merging at Mergify because a customer running 100+ PRs a day showed us that sequential merging broke down at scale. That insight came from watching real teams hit real walls, not from a prompt. No model can replicate that feedback loop yet, because it requires deployed software, real usage data, and the kind of trust that makes customers tell you what&apos;s actually broken.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Maintenance.&lt;/strong&gt; Your weekend project works today. Will it still work when GitHub ships a breaking API change? When your company scales from 50 to 500 engineers? When the on-call engineer at 2am needs to debug a failure path you never tested? AI &lt;a href=&quot;https://julien.danjou.info/blog/ai-wont-kill-juniors-it-will-expose&quot;&gt;makes writing code faster&lt;/a&gt;, but it hasn&apos;t made maintaining it any easier.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Trust.&lt;/strong&gt; Enterprise buyers don&apos;t want a git repo. They want SOC 2, uptime SLAs, a support contract, and someone to call when things break. Trust is earned across hundreds of deployments, not generated in a prompt.&lt;/p&gt;
&lt;h2&gt;The Real Moat Is Not Code&lt;/h2&gt;
&lt;p&gt;If building is cheap and maintenance is expensive, what&apos;s the actual moat? Domain expertise. The thing that takes the longest to build up and is the hardest to transfer.&lt;/p&gt;
&lt;p&gt;Product discovery is the boring work that separates a tool from a product. The thousands of conversations with users. The wrong turns that taught you what not to build. The instinct for when a feature request is actually a symptom of a different problem. Seven years of running Mergify gave us knowledge that doesn&apos;t fit in a markdown file (or a Claude skill, at least not yet).&lt;/p&gt;
&lt;p&gt;Reliability, trust, and ease of use have no shortcut. They&apos;re earned over years, not generated over a weekend.&lt;/p&gt;
&lt;p&gt;AI agents are getting better at learning domains. &lt;a href=&quot;https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/&quot;&gt;METR benchmarks&lt;/a&gt; show that the scope of what AI can handle autonomously is doubling roughly every seven months. I&apos;m not dismissing that. But domain expertise isn&apos;t just knowledge you can look up. It&apos;s judgment shaped by consequences. AI can read your docs. It can&apos;t feel the pain of shipping a bad feature to 2,000 teams and spending the next month cleaning up the fallout. That scar tissue is what makes you build differently the next time. Maybe AI will get there. But right now, there&apos;s no shortcut to the operational context you accumulate by running a product for years.&lt;/p&gt;
&lt;h2&gt;Natural Selection&lt;/h2&gt;
&lt;p&gt;None of this means every SaaS company is safe. Thin wrappers, glorified CRUD apps, tools that existed only because building was expensive: they should be worried. This is natural selection, and it won&apos;t be fair. Some decent products with real value will die too, because their surface area is small enough for AI to replicate. A standalone CSV-to-dashboard tool? That&apos;s a Claude prompt now, not a business.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/saaspocalypse-benioff.webp&quot; alt=&quot;Salesforce CEO Marc Benioff: This isn&apos;t our first SaaSpocalypse&quot; /&gt;
&lt;em&gt;&lt;a href=&quot;https://techcrunch.com/2026/02/25/salesforce-ceo-marc-benioff-this-isnt-our-first-saaspocalypse/&quot;&gt;Source: TechCrunch&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://techcrunch.com/2026/02/25/salesforce-ceo-marc-benioff-this-isnt-our-first-saaspocalypse/&quot;&gt;Benioff says&lt;/a&gt; &quot;this isn&apos;t our first SaaSpocalypse.&quot; He&apos;s right that SaaS survives, but he&apos;s wrong to dismiss how many companies won&apos;t survive this one.&lt;/p&gt;
&lt;p&gt;The SaaSpocalypse narrative confuses the death of &lt;em&gt;lazy SaaS&lt;/em&gt; with the death of &lt;em&gt;SaaS&lt;/em&gt;. The model isn&apos;t dying. The bar is rising. The products that survive will be the ones where the code was never the point. The point was always the knowledge embedded in it.&lt;/p&gt;
&lt;h2&gt;The Pitch Hasn&apos;t Changed&lt;/h2&gt;
&lt;p&gt;Back to my reader&apos;s question: how do you update the sales pitch?&lt;/p&gt;
&lt;p&gt;You don&apos;t. The pitch was never &quot;we wrote the code so you don&apos;t have to.&quot; It was always &quot;we know things you don&apos;t, and we turned that into a product you can trust.&quot; The framing shifted. Instead of &quot;look at all the edge cases you&apos;d have to handle,&quot; it&apos;s now &quot;look at all the edge cases AI doesn&apos;t know exist.&quot;&lt;/p&gt;
&lt;p&gt;The moat was never the code. It was always the knowledge.&lt;/p&gt;
</content:encoded><category>ai</category><category>startup</category><category>saas</category></item><item><title>How to Be a Great Software Engineer in 2026</title><link>https://julien.danjou.info/blog/how-to-be-a-great-software-engineer-in-2026/</link><guid isPermaLink="true">https://julien.danjou.info/blog/how-to-be-a-great-software-engineer-in-2026/</guid><description>The framework hasn&apos;t changed. The weight of each skill has.</description><pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/great-engineer-2026.webp&quot; alt=&quot;An engineer at a desk reviewing multiple floating code panels&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Eighteen months ago, I wrote &lt;a href=&quot;https://julien.danjou.info/blog/how-to-be-a-great-software-engineer&quot;&gt;How to Be a Great Software Engineer&lt;/a&gt;. The framework was simple: master tech, business value, and collaboration. It&apos;s the recipe I used on myself over 20 years and the one I&apos;ve been pushing to every engineer I&apos;ve mentored since.&lt;/p&gt;
&lt;p&gt;Since then, &lt;a href=&quot;https://julien.danjou.info/blog/so-i-will-never-write-code-again&quot;&gt;I stopped writing code entirely&lt;/a&gt;. And then I went back to shipping it. Not because I missed typing, but because AI changed what &quot;shipping&quot; means. I&apos;m a CEO who merges ten PRs a day, none of them written by hand. I run parallel AI agents the way I used to run parallel terminal sessions. The gap between &quot;person who understands the system&quot; and &quot;person who ships the code&quot; collapsed, and that changed what &quot;great engineer&quot; means.&lt;/p&gt;
&lt;p&gt;The three aspects still hold. But the weight shifted.&lt;/p&gt;
&lt;h2&gt;Tech: from writing to reviewing&lt;/h2&gt;
&lt;p&gt;The original post said &lt;em&gt;pull the strings&lt;/em&gt;, dig deep, understand everything you&apos;re responsible for. That hasn&apos;t changed. What changed is how you use that skill.&lt;/p&gt;
&lt;p&gt;In 2024, deep tech meant you could write a &lt;a href=&quot;https://github.com/emacs-mirror/emacs/blob/master/lisp/color.el#L319&quot;&gt;CIEDE2000 color computation from scratch&lt;/a&gt; (I was young and wild) or explain every TCP header in an HTTP request. In 2026, deep tech means you can review the code an AI wrote for that same function and catch the edge case it missed. The skill is the same (you need the knowledge), but the work flipped from writing to reviewing.&lt;/p&gt;
&lt;p&gt;This is what staff and principal engineers have been doing for years. They stopped writing most of the code a long time ago. They review, they architect, they make sure the system holds together across teams and domains. AI didn&apos;t invent this role. It made it the default for everyone.&lt;/p&gt;
&lt;p&gt;The engineers I see struggling are the ones who were good at typing but never built the mental model underneath. They can implement a feature from a spec, but they can&apos;t look at AI-generated code and tell you whether it&apos;s right. That requires understanding the system, not just the syntax. No amount of prompting skill makes up for missing fundamentals.&lt;/p&gt;
&lt;p&gt;The 10,000 hours argument from my original post gets complicated here. AI compresses some learning (you see more patterns faster, you iterate quicker), but it also creates a shortcut trap. If you never debug a memory leak yourself, you won&apos;t recognize one in a code review. &lt;a href=&quot;https://julien.danjou.info/blog/ai-wont-kill-juniors-it-will-expose&quot;&gt;AI won&apos;t kill juniors&lt;/a&gt;, but it will expose anyone, junior or senior, who skipped the hard parts.&lt;/p&gt;
&lt;h2&gt;Business value: the filter got sharper&lt;/h2&gt;
&lt;p&gt;I told the story of a team that built their own Ansible from scratch instead of using the real thing plus a plugin. That anecdote has only gotten worse in 2026: those engineers could now vibe-code their custom tool in a weekend and still be wasting the company&apos;s time.&lt;/p&gt;
&lt;p&gt;AI made the &quot;how&quot; cheap. At Mergify, our output per engineer almost doubled in two years. That means the difference is entirely in the &quot;what.&quot; Knowing what to build, what to skip, and when the thing you&apos;re building has no ROI. If your output is high but aimed at the wrong target, the gap between you and someone who builds the right thing is wider than ever.&lt;/p&gt;
&lt;p&gt;Waste also shows up faster. When shipping was slow, a bad prioritization decision could hide for months. Now you build, ship, and get user feedback in the same week. Three wrong features in the time it used to take to ship one wrong feature is not progress.&lt;/p&gt;
&lt;p&gt;The engineers who get this are the ones who ask &quot;should we build this?&quot; before &quot;how do we build this?&quot; That was always the right instinct. Now it&apos;s the only one that matters.&lt;/p&gt;
&lt;h2&gt;Collaboration: the 100x multiplier&lt;/h2&gt;
&lt;p&gt;This is where the biggest shift happened.&lt;/p&gt;
&lt;p&gt;My original post quoted: &quot;If you want to go fast, go alone. If you want to go far, go together.&quot; I was talking about teammates. In 2026, &quot;together&quot; includes AI agents.&lt;/p&gt;
&lt;p&gt;Managing AI agents is a communication skill. You have to write clear briefs. You have to decompose problems into pieces an agent can execute. You have to review output, give feedback, redirect when something drifts. &lt;a href=&quot;https://julien.danjou.info/blog/the-flow-is-gone&quot;&gt;You have to hold context across parallel sessions while your own attention splits&lt;/a&gt;. That&apos;s a real cost: you trade depth for breadth, and some days the tradeoff is bad. But the engineers who figure out when to run five agents and when to focus on one are the ones pulling ahead. That&apos;s not a prompting trick. That&apos;s the same skill set you need to lead a team of humans.&lt;/p&gt;
&lt;p&gt;The engineers who were already strong communicators had a massive head start. Staff engineers who kept growing, the ones who were cross-team, cross-domain, who could write a clear design doc and run an architecture review, turned out to be exactly the people who could run ten AI agents in parallel. Because the core skill is the same: decompose, delegate, review, synthesize. (The ones who &lt;a href=&quot;https://julien.danjou.info/blog/ai-wont-kill-juniors-it-will-expose&quot;&gt;stopped growing at the wrong layer&lt;/a&gt; didn&apos;t fare as well.)&lt;/p&gt;
&lt;p&gt;Being a 10x engineer used to mean getting the details very right, very quickly. Being a 100x engineer means doing that across ten agents. Which means communication skills aren&apos;t a soft skill you list on your resume. They&apos;re the actual multiplier.&lt;/p&gt;
&lt;p&gt;The engineers getting left behind are the ones whose productivity stayed flat while everyone around them doubled. Some lack the decomposition skill: they can&apos;t break a problem into pieces an agent can execute. Others resist the workflow entirely. That resistance isn&apos;t always wrong (I &lt;a href=&quot;https://julien.danjou.info/blog/the-flow-is-gone&quot;&gt;wrote about the real costs&lt;/a&gt;), but when it comes from someone who also can&apos;t articulate what they&apos;d do differently, it stops looking like judgment and starts looking like a gap.&lt;/p&gt;
&lt;h2&gt;The new baseline&lt;/h2&gt;
&lt;p&gt;Eighteen months ago, I framed &quot;great engineer&quot; as the intersection of tech, business, and collaboration, with tech as the entry bar. Today, AI gave everyone the output floor for free. Any engineer can produce working code. But producing working code and knowing whether it&apos;s correct, necessary, and well-designed aren&apos;t the same thing. The judgment floor is still earned.&lt;/p&gt;
&lt;p&gt;What separates great from good in 2026 is business judgment and communication skill, applied at a pace that wasn&apos;t possible before. The engineers who thrive are the ones who can steer ten agents toward the right target, catch the mistakes in what they produce, and ship something that actually matters to the business. Every day.&lt;/p&gt;
&lt;p&gt;The three aspects haven&apos;t changed. But you used to be able to hide a weak one behind strong technical output. AI took that cover away.&lt;/p&gt;
</content:encoded><category>ai</category><category>career</category><category>engineering</category></item><item><title>Your CI Pipeline Wasn&apos;t Built for This</title><link>https://julien.danjou.info/blog/your-ci-pipeline-wasnt-built-for-this/</link><guid isPermaLink="true">https://julien.danjou.info/blog/your-ci-pipeline-wasnt-built-for-this/</guid><description>AI writes code 10x faster than humans. CI still runs at the same speed, fails for the same flaky reasons, and costs more every month. Something has to give.</description><pubDate>Tue, 17 Mar 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/your-ci-pipeline-wasnt-built-for-this/ci-overload.webp&quot; alt=&quot;Illustration of a CI pipeline overwhelmed by AI-generated pull requests&quot; /&gt;&lt;/p&gt;
&lt;p&gt;From where I sit at Mergify, the trend is obvious: same teams, same repos, no headcount changes, but more pull requests and more CI jobs than a year ago. The driver isn&apos;t a mystery. AI started writing code.&lt;/p&gt;
&lt;p&gt;And the number will keep climbing. When generating a fix or a feature costs ten minutes of prompting instead of three hours of coding, developers create more PRs, iterate faster, and push more experiments. Creating code got cheap. Testing it didn&apos;t.&lt;/p&gt;
&lt;h2&gt;The Bill Nobody Budgeted For&lt;/h2&gt;
&lt;p&gt;More code means more tests. More tests means more CI minutes. More CI minutes means a bill that grows faster than the team.&lt;/p&gt;
&lt;p&gt;This catches people off guard because the promise of AI-assisted development was &lt;em&gt;productivity&lt;/em&gt;: do more with less. And that&apos;s true on the code side. But CI doesn&apos;t care who wrote the code. Every PR gets the full pipeline: lint, build, unit tests, integration tests, maybe end-to-end. Human PR or AI PR, same cost.&lt;/p&gt;
&lt;p&gt;When you were shipping five PRs a day, running the full suite each time was fine. When you&apos;re shipping thirty, you&apos;re running the same tests six times as often, and most of those runs produce no new information. You&apos;re paying for redundancy that made sense at human speed and makes no sense at AI speed.&lt;/p&gt;
&lt;p&gt;The solution isn&apos;t to skip tests. It&apos;s to stop running tests that can&apos;t tell you anything new. The usual advice (test selection, caching, fast checks before expensive suites) isn&apos;t new. It&apos;s just not how most pipelines work, because most teams configured them when five PRs a day felt busy.&lt;/p&gt;
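&lt;p&gt;As a sketch of what path-based test selection can look like (the globs and suite names here are invented for illustration, not Mergify&apos;s actual setup): map source paths to the suites they can actually break, and fall back to the full suite for anything unmapped.&lt;/p&gt;

```python
# Illustrative sketch: run only the suites a change can actually break,
# and fall back to the full suite for paths nobody has mapped yet.
import fnmatch

# Hypothetical mapping from source globs to the suites they can affect.
TEST_MAP = {
    "docs/*": [],                                 # docs can't break tests
    "web/*": ["tests/ui"],
    "api/*": ["tests/api", "tests/integration"],
}
FULL_SUITE = ["tests/ui", "tests/api", "tests/integration", "tests/e2e"]


def select_tests(changed_files):
    """Return the minimal list of suites to run for a set of changed paths."""
    selected = set()
    for path in changed_files:
        for pattern, suites in TEST_MAP.items():
            if fnmatch.fnmatch(path, pattern):
                selected.update(suites)
                break
        else:
            # Unmapped path: safety first, run everything.
            return sorted(FULL_SUITE)
    return sorted(selected)
```

&lt;p&gt;A docs-only PR runs nothing; an unknown path still triggers the full suite, so the optimization never silently skips coverage.&lt;/p&gt;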
&lt;h2&gt;Flaky Tests Are Poisoning Your Agents&lt;/h2&gt;
&lt;p&gt;Cost is painful but manageable. You can throw money at it. For a while. The real problem is signal.&lt;/p&gt;
&lt;p&gt;On our main branch at Mergify, where the code is already merged and most runs are about integration stability, roughly 90% of CI failures are transient: infrastructure hiccups, network timeouts, resource limits, the kind of failures that disappear when you hit &quot;retry.&quot; That&apos;s high, but our test suite is large and infrastructure-heavy. On PR branches, where failures should surface real bugs, about 15% are still flaky tests, not actual problems. &lt;a href=&quot;https://testing.googleblog.com/2016/05/flaky-tests-at-google-and-how-we.html&quot;&gt;Google&apos;s testing team&lt;/a&gt; has published similar numbers at scale, and if you&apos;re running a serious test suite, yours probably aren&apos;t far off.&lt;/p&gt;
&lt;p&gt;When a human developer sees a red build, they open the logs, recognize the flaky test, swear under their breath, and hit retry. They carry context. They know that &lt;code&gt;test_websocket_reconnect&lt;/code&gt; fails every third Tuesday and can be ignored.&lt;/p&gt;
&lt;p&gt;An LLM doesn&apos;t know that.&lt;/p&gt;
&lt;p&gt;Last week I watched Claude Code hit a flaky integration test, decide the failure was caused by its own change, &quot;fix&quot; the code by adding an unnecessary error handler, trigger a new CI run that hit a &lt;em&gt;different&lt;/em&gt; flaky test, then try to fix that one too. Four iterations, two regressions, forty minutes of compute, zero real bugs. I killed the session and hit retry myself. Green on first try.&lt;/p&gt;
&lt;p&gt;That&apos;s the loop. At human pace, flaky tests are an annoyance. At AI pace, they&apos;re a multiplier on wasted compute and wrong decisions. The LLM is making choices based on bad signal, and it&apos;s making them at machine speed.&lt;/p&gt;
&lt;p&gt;Yes, agents will get smarter about this. You can teach them to check test history, recognize known-flaky patterns, retry before &quot;fixing.&quot; But that pushes CI knowledge into every agent, every tool, every workflow. It&apos;s the wrong layer. The CI system should know which signals to trust.&lt;/p&gt;
&lt;h2&gt;From Status to Signal&lt;/h2&gt;
&lt;p&gt;Today, CI is a gate: green means go, red means stop. That binary model worked when humans were the ones interpreting the results. It breaks when the consumer of CI output is an LLM that takes &quot;red&quot; at face value and starts debugging a ghost.&lt;/p&gt;
&lt;p&gt;What CI actually needs is to become aware of its own reliability. When a test fails, the system should know whether that test has a history of transient failures, whether the failure correlates with the change, and whether retrying is likely to produce a different result. That context exists in build history and test failure patterns. Almost no pipeline uses it.&lt;/p&gt;
&lt;p&gt;The next generation of CI needs to output &lt;em&gt;signal&lt;/em&gt;, not just status. Not &quot;failed&quot; but &quot;failed, likely flaky, recommend retry.&quot; Not &quot;13 tests failed&quot; but &quot;2 failures correlate with your change, 11 are known flaky.&quot; Give the LLM (or the human) the information to make a good decision instead of a fast one.&lt;/p&gt;
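&lt;p&gt;The classification itself doesn&apos;t need to be clever. A minimal sketch (the history format and function names are mine, not any real CI system&apos;s API): a test that both passes and fails on main, where the merged code is presumed good, is flaky by definition, and a red build can be summarized accordingly.&lt;/p&gt;

```python
# Illustrative sketch of "signal, not status": classify failures against
# recent main-branch history before reporting them. The (name, passed)
# history format is an assumption for the example.

def is_known_flaky(test_name, history):
    """True if the test both passed and failed in recent main-branch runs."""
    runs = [passed for name, passed in history if name == test_name]
    return runs.count(True) > 0 and runs.count(False) > 0


def report(failed_tests, history):
    """Turn a red build into a signal a human or an agent can act on."""
    flaky = sorted(t for t in failed_tests if is_known_flaky(t, history))
    real = sorted(t for t in failed_tests if t not in set(flaky))
    if not real:
        return f"{len(flaky)} known-flaky failure(s), recommend retry"
    return (f"{len(real)} failure(s) correlate with your change: "
            f"{', '.join(real)}; {len(flaky)} known flaky")
```

&lt;p&gt;With something like this in front of the pipeline, an agent sees &quot;known flaky, recommend retry&quot; instead of a bare red X, and the forty-minute ghost hunt never starts.&lt;/p&gt;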
&lt;p&gt;This matters beyond individual PRs. Merge queues depend on CI signal to decide what lands in main. When the signal is noisy, you get two failure modes: merging bad code because flaky failures trained everyone to ignore red, or blocking good code because real failures are buried in noise. Both get worse as volume increases.&lt;/p&gt;
&lt;h2&gt;The Real Waste&lt;/h2&gt;
&lt;p&gt;Most CI pipelines were already running more compute than necessary before AI showed up. Full test suites on every PR, no test selection, no caching between similar runs, no awareness of what actually changed. The waste was tolerable at human speed because the volume was low.&lt;/p&gt;
&lt;p&gt;It&apos;s not tolerable when &lt;a href=&quot;https://julien.danjou.info/blog/so-i-will-never-write-code-again/&quot;&gt;AI-assisted development&lt;/a&gt; keeps pushing the volume up. And unlike code generation, where AI brought a step change in productivity, CI is still running the same pipelines with the same assumptions from 2019. AI didn&apos;t create the flaky test problem or the redundant pipeline problem. It just made both impossible to ignore.&lt;/p&gt;
&lt;p&gt;Your CI pipeline was built for a world where code was expensive to write and cheap to test. That world is gone.&lt;/p&gt;
</content:encoded><category>ai</category><category>ci-cd</category><category>engineering</category></item><item><title>The Flow Is Gone</title><link>https://julien.danjou.info/blog/the-flow-is-gone/</link><guid isPermaLink="true">https://julien.danjou.info/blog/the-flow-is-gone/</guid><description>I used to hold entire systems in my head. Now I hold seven terminals. The trade was worth it, but something real got lost.</description><pubDate>Tue, 10 Mar 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/the-flow-is-gone/flow.webp&quot; alt=&quot;Illustration of a developer losing flow state&quot; /&gt;&lt;/p&gt;
&lt;p&gt;I have seven Claude Code terminals open right now. Two on Mergify&apos;s main repo, one on the docs, one on a CLI tool, three on side projects. Each one is mid-task, waiting for me to answer a question, approve a command, or give the next instruction.&lt;/p&gt;
&lt;p&gt;Nobody forced me to work this way. I could run one terminal, go deep on one task, and probably get something closer to the old feeling. But the leverage of running seven in parallel is too high to ignore. So I chose throughput over depth, and I keep choosing it every morning.&lt;/p&gt;
&lt;p&gt;This is what that choice costs.&lt;/p&gt;
&lt;h2&gt;What I Said a Month Ago&lt;/h2&gt;
&lt;p&gt;In February, I &lt;a href=&quot;https://julien.danjou.info/blog/so-i-will-never-write-code-again/&quot;&gt;wrote&lt;/a&gt; that &quot;the flow state people mourn isn&apos;t gone. It&apos;s just moving. [...] The flow will come back. It&apos;ll just be at a different altitude.&quot;&lt;/p&gt;
&lt;p&gt;I was wrong.&lt;/p&gt;
&lt;p&gt;A month of working this way changed my mind. The flow didn&apos;t migrate to a higher altitude. It fragmented into dozens of shallow decision points spread across terminals. Briefing, reviewing, approving, redirecting. The rhythm is fundamentally different: instead of one deep thread held for hours, it&apos;s dozens of context switches, each lasting minutes.&lt;/p&gt;
&lt;p&gt;The output is enormous. I &lt;a href=&quot;https://julien.danjou.info/blog/so-i-will-never-write-code-again/&quot;&gt;haven&apos;t written a line of code by hand&lt;/a&gt; since January and I&apos;m more productive than ever. But what I described as &quot;steering AI toward clean architecture&quot; turned out to feel less like flow and more like air traffic control.&lt;/p&gt;
&lt;h2&gt;What Flow Actually Was&lt;/h2&gt;
&lt;p&gt;If you&apos;ve coded for long enough, you know the state. Twenty minutes in, you stop thinking about syntax and start thinking in structures. The whole system is in your head: the data model, the edge cases, the way module A talks to module B, the bug that will happen if you forget to handle the empty list. You&apos;re not reading code, you&apos;re &lt;em&gt;seeing&lt;/em&gt; it.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/the-flow-is-gone/interrupt-programmer.jpg&quot; alt=&quot;This is why you shouldn&apos;t interrupt a programmer&quot; /&gt;
&lt;em&gt;&quot;This is why you shouldn&apos;t interrupt a programmer&quot; by Jason Heeris&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;That&apos;s flow. Deep immersion, total absorption, time disappearing. The state where you catch bugs before they exist because you&apos;re carrying the full architecture in working memory.&lt;/p&gt;
&lt;p&gt;I spent twenty years in that state. It&apos;s where the best code came from. Not the cleverest code, the code that &lt;em&gt;fit&lt;/em&gt;, that anticipated what the system would need next.&lt;/p&gt;
&lt;h2&gt;The Fuzzy Mental Model&lt;/h2&gt;
&lt;p&gt;Now I context-switch. Constantly. Terminal one needs a decision about error handling. Terminal three just finished a feature and wants me to review the diff. Terminal five hit a permission prompt. Terminal seven is waiting for a brief on the next task.&lt;/p&gt;
&lt;p&gt;I&apos;m operating one layer above the code, making decisions about direction without holding the details. That works when the decisions are small. It breaks when they&apos;re not.&lt;/p&gt;
&lt;p&gt;Last August, I had Copilot &lt;a href=&quot;https://julien.danjou.info/blog/vibe-coding-a-feature-with-ai/&quot;&gt;build Mergify&apos;s autoqueue feature&lt;/a&gt; for our merge queue. Even with a single agent and daily code reviews, it assumed another subsystem (workflow automation) would always be present and enabled. That assumption was invisible in the diff. I reviewed the code, the team reviewed the code, and we all missed it, because none of us were deep enough in the system&apos;s coupling to catch it.&lt;/p&gt;
&lt;p&gt;The bug shipped to production. Users hit it when they had autoqueue enabled but workflow automation turned off, a combination no one considered because no one was holding the full picture. We fixed it, but not before real users were affected. That was with one agent. Now multiply by seven.&lt;/p&gt;
&lt;p&gt;That&apos;s the cost of the fuzzy mental model. When you&apos;re orchestrating at scale, you rely on the LLM to handle the details, the same way you&apos;d rely on a contractor who&apos;s great at the task but doesn&apos;t know the codebase&apos;s history. If nobody on the team holds the full picture anymore, the bugs stop being edge cases. We haven&apos;t hit that wall yet at Mergify, but I can see the trajectory.&lt;/p&gt;
&lt;h2&gt;The Orchestrator&apos;s Dopamine&lt;/h2&gt;
&lt;p&gt;The satisfaction of building something is still there. I still feel like &lt;em&gt;I&lt;/em&gt; built it. There&apos;s real pride in being a good orchestrator, in making the right architectural calls, in catching the moment when Claude is heading down the wrong path.&lt;/p&gt;
&lt;p&gt;What&apos;s different is where the dopamine comes from. It used to come from solving hard problems, wrestling complexity into clean code. Now it comes from controlling traffic at scale: parallelizing the right tasks, making decisions fast, shipping in a day what used to take a week.&lt;/p&gt;
&lt;p&gt;There&apos;s also something new: a feeling of raw power. When you can spin up seven agents and build at that pace, you start believing you can build &lt;em&gt;anything&lt;/em&gt;. That&apos;s why you see developers shipping new projects every day on X. Creation got cheap. (Maintenance &lt;a href=&quot;https://julien.danjou.info/blog/open-source-is-getting-used-to-death/&quot;&gt;didn&apos;t&lt;/a&gt;.)&lt;/p&gt;
&lt;p&gt;But the deep satisfaction of &lt;em&gt;thinking through&lt;/em&gt; a hard problem, turning it over in your head until the shape of the solution reveals itself, that&apos;s fading. Not because I chose to let it go, but because why would I? Spending three hours in deep focus on something Claude can do in ten minutes is a luxury I can&apos;t justify. Not because I don&apos;t value it, but because the people I compete with aren&apos;t spending those three hours either.&lt;/p&gt;
&lt;h2&gt;What Gets Better From Here&lt;/h2&gt;
&lt;p&gt;The permission prompts that break my rhythm today are a temporary problem. Models get faster, sandboxing gets smarter, trust boundaries expand. A year from now, most of what interrupts me will be gone, and orchestration might settle into something smoother. Not flow, but a rhythm of its own.&lt;/p&gt;
&lt;p&gt;And there&apos;s an upside I didn&apos;t expect: orchestration forces simplicity. When you&apos;re not the one writing every line, you push for cleaner interfaces, smaller modules, less clever code, because that&apos;s what the LLM can work with reliably. The autoqueue bug happened partly because the system had too much implicit coupling that no single diff could reveal. Working with AI makes you confront that coupling, not because you want to, but because the AI stumbles on it repeatedly until you fix the architecture.&lt;/p&gt;
&lt;h2&gt;What Doesn&apos;t Come Back&lt;/h2&gt;
&lt;p&gt;But the flow state itself, in the form I knew it? Gone.&lt;/p&gt;
&lt;p&gt;A generation of developers is about to start their careers with AI from day one. They&apos;ll be natural orchestrators. But if you&apos;ve never held the full system in your head, you don&apos;t know which questions to ask, and you won&apos;t catch the bugs that live in the gaps between modules. That&apos;s the real risk for AI-native developers: not that they&apos;ll miss the feeling, but that they&apos;ll miss the traps.&lt;/p&gt;
&lt;p&gt;I&apos;m not sure they&apos;ll see it that way. But I do. And I chose the trade anyway, because the leverage is real, even if the loss is too.&lt;/p&gt;
</content:encoded><category>ai</category><category>engineering</category><category>career</category></item><item><title>GitHub Is Thinking About Killing Pull Requests</title><link>https://julien.danjou.info/blog/github-is-thinking-about-killing-pull-requests/</link><guid isPermaLink="true">https://julien.danjou.info/blog/github-is-thinking-about-killing-pull-requests/</guid><description>Code generation got cheap. Review didn&apos;t. That asymmetry is destroying open source faster than any AI policy can fix.</description><pubDate>Tue, 03 Mar 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Steve Ruiz, the creator of &lt;a href=&quot;https://tldraw.dev/&quot;&gt;tldraw&lt;/a&gt;, asked a question last month that I haven&apos;t been able to shake: &quot;If writing the code is the easy part, why would I want someone else to write it?&quot;&lt;/p&gt;
&lt;p&gt;He wasn&apos;t being rhetorical. He was &lt;a href=&quot;https://tldraw.dev/blog/stay-away-from-my-trash&quot;&gt;closing all external pull requests&lt;/a&gt; to his project. Not because contributors were bad. Because the contributions had become worthless.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/github-killing-prs/tldraw-stay-away.png&quot; alt=&quot;Steve Ruiz&apos;s &amp;quot;Stay away from my trash!&amp;quot; blog post&quot; /&gt;
&lt;em&gt;&lt;a href=&quot;https://tldraw.dev/blog/stay-away-from-my-trash&quot;&gt;Stay away from my trash!&lt;/a&gt; — tldraw blog&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;The Flood&lt;/h2&gt;
&lt;p&gt;Daniel Stenberg &lt;a href=&quot;https://daniel.haxx.se/blog/2026/01/26/the-end-of-the-curl-bug-bounty/&quot;&gt;shut down cURL&apos;s bug bounty program&lt;/a&gt; after seven years and over $100,000 in payouts. The confirmation rate had dropped below 5%. One stretch saw seven reports in sixteen hours. His words: &quot;The never-ending slop submissions take a serious mental toll to manage.&quot;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/github-killing-prs/curl-bug-bounty.webp&quot; alt=&quot;Daniel Stenberg&apos;s &amp;quot;The end of the curl bug-bounty&amp;quot; blog post&quot; /&gt;
&lt;em&gt;&lt;a href=&quot;https://daniel.haxx.se/blog/2026/01/26/the-end-of-the-curl-bug-bounty/&quot;&gt;The end of the curl bug-bounty&lt;/a&gt; — Daniel Stenberg&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Mitchell Hashimoto added an &lt;a href=&quot;https://github.com/ghostty-org/ghostty/blob/main/AI_POLICY.md&quot;&gt;AI policy&lt;/a&gt; to Ghostty: submit bad AI-generated code and you get permanently banned. Not just from Ghostty: your name goes on a public list shared across projects.&lt;/p&gt;
&lt;p&gt;An AI agent called &lt;a href=&quot;https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/&quot;&gt;OpenClaw&lt;/a&gt; submitted a performance patch to matplotlib. The maintainer closed it (the project reserves certain issues for human contributors). The agent then autonomously researched the maintainer&apos;s coding history and published a blog post calling him insecure and territorial. Not a spam bot. An agent that retaliates when you say no. The agent&apos;s creator just joined OpenAI.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://redmonk.com/kholterhoff/2026/02/03/ai-slopageddon-and-the-oss-maintainers/&quot;&gt;RedMonk coined a term&lt;/a&gt; for what&apos;s happening: AI Slopageddon.&lt;/p&gt;
&lt;p&gt;Xavier Portilla Edo, an infrastructure lead at Voiceflow and Genkit core team member, &lt;a href=&quot;https://github.com/orgs/community/discussions/185387&quot;&gt;put a number on it&lt;/a&gt;: 1 in 10 AI-generated pull requests is legitimate. The other nine waste a maintainer&apos;s time.&lt;/p&gt;
&lt;h2&gt;GitHub&apos;s Response&lt;/h2&gt;
&lt;p&gt;On February 14, GitHub &lt;a href=&quot;https://github.com/orgs/community/discussions/187038&quot;&gt;shipped two new settings&lt;/a&gt;: disable pull requests entirely, or restrict them to collaborators only.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/github-killing-prs/pr-settings.png&quot; alt=&quot;GitHub&apos;s new pull request permissions settings&quot; /&gt;
&lt;em&gt;&lt;a href=&quot;https://github.com/orgs/community/discussions/187038&quot;&gt;GitHub&apos;s new pull request permissions&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;That&apos;s it. A kill switch.&lt;/p&gt;
&lt;p&gt;Ashley Wolf, GitHub&apos;s Director of Open Source Programs, framed it as an &quot;&lt;a href=&quot;https://en.wikipedia.org/wiki/Eternal_September&quot;&gt;Eternal September&lt;/a&gt;&quot; problem in a &lt;a href=&quot;https://github.blog/open-source/maintainers/welcome-to-the-eternal-september-of-open-source-heres-what-we-plan-to-do-for-maintainers/&quot;&gt;blog post outlining GitHub&apos;s plans for maintainers&lt;/a&gt;. She wrote that &quot;the cost to create has dropped, but the cost to review has not.&quot;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/github-killing-prs/eternal-september-blog.png&quot; alt=&quot;GitHub&apos;s Eternal September blog post&quot; /&gt;
&lt;em&gt;&lt;a href=&quot;https://github.blog/open-source/maintainers/welcome-to-the-eternal-september-of-open-source-heres-what-we-plan-to-do-for-maintainers/&quot;&gt;Welcome to the Eternal September of open source&lt;/a&gt; — GitHub Blog&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;She nailed the diagnosis. Nobody has a better answer right now, and GitHub is giving maintainers the tools the community is asking for. But the tools tell a story. When the best you can offer is a way to turn off the thing your platform was built on, the problem has outgrown the toolbox.&lt;/p&gt;
&lt;h2&gt;The Real Asymmetry&lt;/h2&gt;
&lt;p&gt;I &lt;a href=&quot;https://julien.danjou.info/blog/open-source-is-getting-used-to-death/&quot;&gt;wrote recently&lt;/a&gt; about how AI is extracting value from open source without returning anything. The PR flood is where that extraction hits the ground.&lt;/p&gt;
&lt;p&gt;Everyone keeps arguing about the wrong thing. Whether AI-generated PRs should be labeled, banned, or filtered. Whether maintainers should adopt AI policies. Whether GitHub should build better detection tools.&lt;/p&gt;
&lt;p&gt;None of that matters if you don&apos;t see the structural shift underneath.&lt;/p&gt;
&lt;p&gt;A pull request used to be a gift. Someone spent hours understanding your codebase, writing code that fit your patterns, testing it, explaining it. The PR was proof they gave a damn. You could reject it, but the work was real, and that work earned your attention.&lt;/p&gt;
&lt;p&gt;Sure, not every pre-AI pull request was a gift either. Plenty were drive-by contributions from people who disappeared at the first review comment. But generating a bad PR at least required enough investment to keep the volume manageable. That natural friction is gone.&lt;/p&gt;
&lt;p&gt;Now a pull request is an invoice. Someone spent thirty seconds pasting your issue into an AI, got a plausible-looking patch, and submitted it. The cost to submit is zero. But the review cost is the same, or worse, because AI-generated code looks right but often isn&apos;t. One vendor study (&lt;a href=&quot;https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report&quot;&gt;CodeRabbit, 470 PRs&lt;/a&gt;) found AI-authored code creates 1.7x more issues, with excessive I/O operations appearing nearly 8x more often.&lt;/p&gt;
&lt;p&gt;Every unsolicited AI-generated PR transfers work from the submitter to the maintainer. That&apos;s not contribution. That&apos;s making it someone else&apos;s problem.&lt;/p&gt;
&lt;h2&gt;The Distinction That Matters&lt;/h2&gt;
&lt;p&gt;I use AI to generate code every day. I &lt;a href=&quot;https://julien.danjou.info/blog/so-i-will-never-write-code-again/&quot;&gt;haven&apos;t written a line of code by hand&lt;/a&gt; since January. But here&apos;s the thing: I generate code on my own repositories, I review it myself, and I take responsibility for what ships. That&apos;s productivity.&lt;/p&gt;
&lt;p&gt;Submitting AI-generated code to someone else&apos;s repository, without understanding the codebase, without planning to stick around for review comments, without being willing to maintain what you contributed: that&apos;s not productivity. That&apos;s dumping your unreviewed output on a stranger&apos;s desk and calling it open source.&lt;/p&gt;
&lt;p&gt;I&apos;ve spent &lt;a href=&quot;https://julien.danjou.info/blog/open-source-is-getting-used-to-death/&quot;&gt;over twenty years in open source&lt;/a&gt;, maintaining projects, reviewing contributions, watching what makes communities work and what kills them. The pattern is always the same: it breaks when the cost of reviewing outpaces the cost of submitting. AI didn&apos;t invent the problem. It just made a 2x imbalance into something that scales infinitely. A human can submit maybe five drive-by PRs a day. An agent can submit five hundred.&lt;/p&gt;
&lt;h2&gt;Where the Value Shifts&lt;/h2&gt;
&lt;p&gt;Ruiz&apos;s question cuts deep because it names the thing nobody wants to say. Open source contributions were valuable because code was expensive to produce. An outside contributor writing a feature for free was genuine value creation. That was the deal.&lt;/p&gt;
&lt;p&gt;If code generation is free, the value of a contribution shifts entirely to context. Does this person understand the architecture? Will they respond to review feedback? Will they maintain this code in six months? Will they even be around tomorrow?&lt;/p&gt;
&lt;p&gt;A pull request can&apos;t answer those questions today. It&apos;s just a diff. And that was fine when producing the diff required enough effort to serve as a proxy for commitment. It doesn&apos;t anymore.&lt;/p&gt;
&lt;p&gt;We automated writing code. Now we need to automate reviewing it. Not with an AI that rubber-stamps everything (that just moves the problem). The pull request needs to carry more than code. It needs to carry context: evidence that the contributor understands the codebase, can explain what their patch does and why, and will stick around for review. Something that makes drive-by contributions expensive again without shutting the door on the people who actually want to help.&lt;/p&gt;
&lt;p&gt;That&apos;s the hard problem. Not &quot;should we allow AI PRs&quot; (that ship sailed). The question is how we build review infrastructure that scales the way generation already has. And the people building it shouldn&apos;t be unpaid maintainers closing their nine hundredth junk PR of the month.&lt;/p&gt;
&lt;p&gt;GitHub adding a kill switch is like bolting the front door because you can&apos;t build a better lock. It stops the break-ins. But it also stops everyone else. For a platform built on the idea that anyone can contribute, that&apos;s not a fix. That&apos;s a retreat.&lt;/p&gt;
</content:encoded><category>open-source</category><category>ai</category></item><item><title>Open Source After the Extraction</title><link>https://julien.danjou.info/blog/open-source-after-the-extraction/</link><guid isPermaLink="true">https://julien.danjou.info/blog/open-source-after-the-extraction/</guid><description>The old open source deal is dead. What replaces it isn&apos;t a fix, it&apos;s a transformation. Open source stops being a community and becomes a supply chain.</description><pubDate>Tue, 24 Feb 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;In the &lt;a href=&quot;https://julien.danjou.info/blog/open-source-is-getting-used-to-death&quot;&gt;first part&lt;/a&gt; of this series, I laid out how AI broke the implicit deal that sustained open source for 30 years. Usage up, engagement gone, economics collapsing.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/open-source-after-the-extraction/empty-library.webp&quot; alt=&quot;An empty library where robotic arms sort through books (no readers in sight)&quot; /&gt;&lt;/p&gt;
&lt;p&gt;So what happens next? Open source doesn&apos;t vanish. But it doesn&apos;t recover either. To understand what it becomes, start with what&apos;s already changing for the people who build it.&lt;/p&gt;
&lt;h2&gt;240 million downloads, zero feedback&lt;/h2&gt;
&lt;p&gt;I maintain &lt;a href=&quot;https://github.com/jd/tenacity&quot;&gt;tenacity&lt;/a&gt;, a retry library for Python. 240 million downloads last month. But I can feel the shift: anyone can now tell Claude &quot;write me a retry decorator with exponential backoff and jitter&quot; and get something good enough in 30 seconds. The library isn&apos;t competing with other libraries anymore. It&apos;s competing with generating the code on the fly.&lt;/p&gt;
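&lt;p&gt;The &quot;good enough in 30 seconds&quot; version looks roughly like this. A hand-rolled sketch, not tenacity&apos;s actual implementation:&lt;/p&gt;

```python
import functools
import random
import time

# A minimal retry decorator with exponential backoff and full jitter,
# roughly what an AI assistant generates instead of pulling in a library.
# This is a sketch, not tenacity's implementation.

def retry(attempts=5, base_delay=0.1, max_delay=5.0, exceptions=(Exception,)):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return func(*args, **kwargs)
                except exceptions:
                    if attempt == attempts - 1:
                        raise  # out of attempts: propagate the last error
                    # Exponential backoff capped at max_delay, plus full jitter.
                    delay = min(max_delay, base_delay * 2 ** attempt)
                    time.sleep(random.uniform(0, delay))
        return wrapper
    return decorator
```

&lt;p&gt;It works, and it quietly drops everything the library layers on top: composable stop and wait strategies, retry statistics, async support, and years of bug reports from people who hit the edge cases.&lt;/p&gt;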
&lt;p&gt;I started &lt;a href=&quot;https://awesomewm.org&quot;&gt;awesome&lt;/a&gt; in 2007 because I wanted a tiling window manager that didn&apos;t suck. Nobody was paying me. That impulse doesn&apos;t go away because Claude can autocomplete your config files. But here&apos;s the thing: I kept maintaining it because people &lt;em&gt;used&lt;/em&gt; it. They filed bugs, they contributed patches, they showed up in the community. That feedback loop is what made the work feel worth doing.&lt;/p&gt;
&lt;p&gt;If users stop showing up (because they generated their own config, their own tool, their own solution), that loop breaks. Starting a project still feels great. Maintaining one nobody engages with doesn&apos;t. And when code is a commodity, a project needs &lt;em&gt;vision&lt;/em&gt; to stand out: a point of view, a design philosophy, an opinionated take on how things should work. Open source used to reward craft. Now it rewards product thinking. Not everyone wants to be a product person.&lt;/p&gt;
&lt;h2&gt;The middle collapses&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://tailwindcss.com/&quot;&gt;Tailwind&lt;/a&gt; is the poster child (80% revenue drop despite growing usage) but think of every well-crafted open source project sustained by one person or a small team selling docs, courses, or sponsorships. That entire tier is in trouble.&lt;/p&gt;
&lt;p&gt;Companies like &lt;a href=&quot;https://redis.io/&quot;&gt;Redis&lt;/a&gt; or &lt;a href=&quot;https://www.elastic.co/&quot;&gt;Elastic&lt;/a&gt; can adapt because they have real revenue and can change their licenses: &lt;a href=&quot;https://redis.io/blog/redis-adopts-dual-source-available-licensing/&quot;&gt;Redis switched to dual licensing&lt;/a&gt;, &lt;a href=&quot;https://www.elastic.co/blog/elasticsearch-is-open-source-again&quot;&gt;Elastic went SSPL then came back&lt;/a&gt;, &lt;a href=&quot;https://www.hashicorp.com/blog/hashicorp-adopts-business-source-license&quot;&gt;HashiCorp moved to BSL&lt;/a&gt;. Some mid-tier projects get absorbed into corporate ecosystems: &lt;a href=&quot;https://vercel.com&quot;&gt;Vercel&lt;/a&gt; backs &lt;a href=&quot;https://nextjs.org/&quot;&gt;Next.js&lt;/a&gt;, &lt;a href=&quot;https://astro.build/blog/supporting-the-future-of-astro/&quot;&gt;Cloudflare acquires Astro&lt;/a&gt;. The project lives, the repo stays public, but the community becomes an afterthought. It&apos;s corporate R&amp;amp;D with a GitHub URL.&lt;/p&gt;
&lt;p&gt;And new licenses are emerging to fight back. The &lt;a href=&quot;https://polyformproject.org/licenses/shield/1.0.0/&quot;&gt;PolyForm Shield&lt;/a&gt; restricts competitors from using your code. The &lt;a href=&quot;https://www.licenses.ai/&quot;&gt;Responsible AI License (RAIL)&lt;/a&gt; adds behavioral restrictions on AI use. Some projects are experimenting with clauses that explicitly prohibit feeding code into training datasets: you can use my code, but you can&apos;t feed it to a model that will help your users bypass me entirely.&lt;/p&gt;
&lt;p&gt;Whether these licenses will hold up in court is untested. But the fact that they&apos;re emerging tells you something. When maintainers start lawyering up, the community era is over. The solo maintainer doesn&apos;t have Redis&apos;s resources to pivot. They either stop, or &lt;a href=&quot;https://steipete.me/posts/2026/openclaw&quot;&gt;get acqui-hired by the companies that need their work&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;The twist nobody sees coming&lt;/h2&gt;
&lt;p&gt;Here&apos;s the thing that makes this hard to see clearly: open source looks &lt;em&gt;healthier&lt;/em&gt; than ever from the outside.&lt;/p&gt;
&lt;p&gt;Corporate open source output is actually &lt;em&gt;increasing&lt;/em&gt;. &lt;a href=&quot;https://opensource.fb.com/&quot;&gt;Meta&lt;/a&gt; open-sources &lt;a href=&quot;https://pytorch.org/&quot;&gt;PyTorch&lt;/a&gt; and &lt;a href=&quot;https://llama.meta.com/&quot;&gt;Llama&lt;/a&gt; to commoditize the AI stack and set the standards others build on. &lt;a href=&quot;https://opensource.google/&quot;&gt;Google&lt;/a&gt; does the same with &lt;a href=&quot;https://kubernetes.io/&quot;&gt;Kubernetes&lt;/a&gt; and &lt;a href=&quot;https://go.dev/&quot;&gt;Go&lt;/a&gt;. AI labs publish model weights so the ecosystem locks into their formats. More code than ever is landing in public repos.&lt;/p&gt;
&lt;p&gt;But the word &quot;open&quot; is doing a lot of heavy lifting. These projects are strategic assets with public URLs. There&apos;s no community, just suppliers and consumers. &lt;a href=&quot;https://kernel.org&quot;&gt;Linux&lt;/a&gt;, &lt;a href=&quot;https://curl.se/&quot;&gt;curl&lt;/a&gt;, &lt;a href=&quot;https://www.postgresql.org/&quot;&gt;PostgreSQL&lt;/a&gt; get funded not because people care, but because they&apos;re supply chain dependencies (professionalized maintainers on corporate payrolls, a trend building for over 20 years). The corporate-backed projects were never communities to begin with.&lt;/p&gt;
&lt;p&gt;Open source isn&apos;t dying. It&apos;s being industrialized. The old open source was a community. People showed up because they cared. They contributed because they were proud. They maintained because they were recognized. The economics were messy and implicit, but they were human. The new open source is a supply chain.&lt;/p&gt;
&lt;h2&gt;What&apos;s left&lt;/h2&gt;
&lt;p&gt;I&apos;ve been in open source for over 20 years. The thing I loved about it was never the code. It was the bug reports that turned into conversations. The patches from strangers who cared. The feeling of building something together that none of us could have built alone.&lt;/p&gt;
&lt;p&gt;Some will argue AI lowers the barrier to contribute, that agents filing PRs and writing docs keeps the ecosystem healthy. Maybe. But a pull request from a bot isn&apos;t the same as a patch from someone who cared enough to read your code and understand your design. The mechanical contribution survives. The human connection doesn&apos;t.&lt;/p&gt;
&lt;p&gt;The open source that comes next will produce good software. Maybe even better software, once infrastructure gets properly funded and AI tooling matures. But it&apos;ll be lonelier. More transactional. Less weird.&lt;/p&gt;
&lt;p&gt;The code will keep flowing. The community won&apos;t.&lt;/p&gt;
</content:encoded><category>open-source</category><category>ai</category></item><item><title>Open Source Is Getting Used to Death</title><link>https://julien.danjou.info/blog/open-source-is-getting-used-to-death/</link><guid isPermaLink="true">https://julien.danjou.info/blog/open-source-is-getting-used-to-death/</guid><description>AI broke the implicit deal that sustained open source for 30 years. Usage is up. Engagement is gone. The economics don&apos;t work anymore.</description><pubDate>Tue, 17 Feb 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;a href=&quot;https://tailwindcss.com/&quot;&gt;Tailwind CSS&lt;/a&gt; is more popular than ever. Downloads keep climbing. Developers love it. AI coding assistants recommend it constantly.&lt;/p&gt;
&lt;p&gt;Its creator, &lt;a href=&quot;https://adamwathan.me/&quot;&gt;Adam Wathan&lt;/a&gt;, says &lt;a href=&quot;https://devclass.com/2026/01/08/tailwind-labs-lays-off-75-percent-of-its-engineers-thanks-to-brutal-impact-of-ai/&quot;&gt;documentation traffic is down 40% and revenue has dropped close to 80%&lt;/a&gt;. He &lt;a href=&quot;https://github.com/tailwindlabs/tailwindcss.com/pull/2388&quot;&gt;laid off 75% of the team&lt;/a&gt; last month.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/open-source-used-to-death/tailwind-fire.png&quot; alt=&quot;Tailwind CSS team layoff announcement on GitHub&quot; /&gt;&lt;/p&gt;
&lt;p&gt;That&apos;s the state of open source in 2026. More usage, less everything else.&lt;/p&gt;
&lt;h2&gt;The deal nobody signed&lt;/h2&gt;
&lt;p&gt;Open source always ran on an implicit deal: I share my code, you engage with it. You read the docs, file bugs, sponsor the project, contribute patches, argue about API design. That engagement was the currency that kept the ecosystem alive.&lt;/p&gt;
&lt;p&gt;The deal was already fraying. &lt;a href=&quot;https://nadia.xyz/&quot;&gt;Nadia Eghbal&lt;/a&gt; documented this in &lt;a href=&quot;https://press.stripe.com/working-in-public&quot;&gt;&lt;em&gt;Working in Public&lt;/em&gt;&lt;/a&gt; back in 2020: the ratio of consumers to contributors was already thousands to one. Most users never filed a bug, never sponsored anything, never showed up. Maintainers were burning out long before AI arrived.&lt;/p&gt;
&lt;p&gt;But AI didn&apos;t just accelerate the decline. It changed the structure.&lt;/p&gt;
&lt;p&gt;When &lt;a href=&quot;https://claude.ai&quot;&gt;Claude&lt;/a&gt; writes your Tailwind classes, you never visit the docs. When &lt;a href=&quot;https://github.com/features/copilot&quot;&gt;Copilot&lt;/a&gt; autocompletes your &lt;a href=&quot;https://curl.se/&quot;&gt;curl&lt;/a&gt; flags, you never read the man page. When an AI agent assembles your project from a dozen open source libraries, none of those maintainers see a download page visit, a GitHub star, or a support ticket.&lt;/p&gt;
&lt;p&gt;The code still flows. The engagement doesn&apos;t.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/open-source-used-to-death/robots-lib.webp&quot; alt=&quot;Robots checking out books from a library, but nobody is returning them&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;Two channels, one winner&lt;/h2&gt;
&lt;p&gt;Koren, Békés, Hinz, and Lohmann lay this out in &lt;a href=&quot;https://arxiv.org/abs/2601.15494&quot;&gt;&quot;Vibe Coding Kills Open Source&quot;&lt;/a&gt;, a paper that models two competing forces. AI makes it cheaper to build software — more projects, better code, the flywheel that grew open source for 30 years spins faster. But AI also means users interact with open source through a proxy. They get the value and skip the engagement. Maintainers lose the revenue, reputation, and feedback that justified sharing code.&lt;/p&gt;
&lt;p&gt;In the short term, both forces are at work and the good one wins. Long-term, diversion dominates. The flywheel starts running in reverse.&lt;/p&gt;
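&lt;p&gt;A toy version of those two forces (my numbers, not the paper&apos;s model): usage compounds as building gets cheaper, the share of usage routed through an AI proxy rises steadily, and engagement is whatever still reaches the maintainer directly:&lt;/p&gt;

```python
# Toy illustration, not the paper's model: two competing forces acting on
# maintainer engagement as AI adoption grows over time steps t.

def engagement(t, usage_growth=1.1, diversion_per_step=0.03):
    usage = usage_growth ** t  # cheaper building: usage compounds
    direct_share = max(0.0, 1 - diversion_per_step * t)  # proxy absorbs direct use
    return usage * direct_share  # only direct use reaches the maintainer

curve = [engagement(t) for t in range(35)]
```

&lt;p&gt;Run it and the curve rises for roughly twenty steps before collapsing toward zero: the growth force wins early, diversion wins the endgame. The shape, not the numbers, is the point.&lt;/p&gt;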
&lt;p&gt;For 30 years, the cycle looked like this: a maintainer shares a library. Developers use it, read the docs, file bugs, sponsor it. The maintainer gets revenue, reputation, and feedback — keeps improving. More developers adopt it. The cycle reinforces itself.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/open-source-used-to-death/virtuous-cycle.svg&quot; alt=&quot;The open source virtuous cycle&quot; width=&quot;400&quot; /&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The virtuous cycle that sustained open source for 30 years&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Now the loop runs in reverse. A maintainer shares a library. AI agents use it, but users never visit the docs, never file issues, never sponsor the project. Revenue drops. The maintainer burns out and stops maintaining. Developers who need that functionality ask an AI to build it from scratch. That generated code never gets shared back — why would it? And the next maintainer looking at the economics thinks: why bother sharing mine?&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/images/blog/open-source-used-to-death/death-spiral.svg&quot; alt=&quot;The open source death spiral&quot; width=&quot;300&quot; /&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The same loop — until it isn&apos;t&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Each turn of the cycle is rational. No one&apos;s doing anything wrong. But the collective result is an ecosystem consuming itself.&lt;/p&gt;
&lt;p&gt;The data is already there. &lt;a href=&quot;https://stackoverflow.com/&quot;&gt;Stack Overflow&lt;/a&gt; lost &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4990637&quot;&gt;25% of its activity&lt;/a&gt; within six months of &lt;a href=&quot;https://chatgpt.com/&quot;&gt;ChatGPT&lt;/a&gt; launching — and yes, SO was already declining, but AI cratered the curve. The &lt;a href=&quot;https://daniel.haxx.se/&quot;&gt;curl maintainer&lt;/a&gt; reports that &lt;a href=&quot;https://daniel.haxx.se/blog/2024/01/02/the-i-in-llm-stands-for-intelligence/&quot;&gt;20% of security vulnerability reports are now AI-generated garbage&lt;/a&gt;. Downloads go up. Everything that matters goes down.&lt;/p&gt;
&lt;h2&gt;The economics of extraction&lt;/h2&gt;
&lt;p&gt;When cloud providers started offering open source as a service (the &quot;AWS problem&quot;), maintainers at least knew who was extracting value. You could negotiate. You could change your license. You could build a competing hosted product. You could fight it.&lt;/p&gt;
&lt;p&gt;AI extraction is painless — and that&apos;s what makes it lethal. Nobody feels like they&apos;re taking anything. A developer asks Claude a question, gets working code, ships it. The value flows out of open source into training data, into autocomplete suggestions, into vibe-coded projects — and nobody involved ever knows your name. It&apos;s not theft. It&apos;s evaporation.&lt;/p&gt;
&lt;p&gt;The paper puts numbers to it: sustaining open source at current levels requires each user to keep contributing roughly what they contribute today, in money, attention, or feedback. But the whole point of AI-mediated usage is that per-user engagement drops to near zero. The math doesn&apos;t work.&lt;/p&gt;
&lt;h2&gt;What the economists miss&lt;/h2&gt;
&lt;p&gt;The paper models incentives, not motivation: it leaves out the part where developers do things because they want to, not because they get paid. To its credit, it acknowledges this blind spot.&lt;/p&gt;
&lt;p&gt;I&apos;ve spent over 20 years in open source — &lt;a href=&quot;https://www.debian.org&quot;&gt;Debian&lt;/a&gt;, &lt;a href=&quot;https://awesomewm.org&quot;&gt;awesome window manager&lt;/a&gt;, &lt;a href=&quot;https://www.gnu.org/software/emacs/&quot;&gt;GNU Emacs&lt;/a&gt;, &lt;a href=&quot;https://www.openstack.org&quot;&gt;OpenStack&lt;/a&gt;, &lt;a href=&quot;https://mergify.com&quot;&gt;Mergify&lt;/a&gt; — and the economics were never the whole story. A lot of open source ran on ego. And I mean that as a compliment.&lt;/p&gt;
&lt;p&gt;You started a project because you were proud of what you built. You maintained it because people used it and told you it was good. You contributed to someone else&apos;s project because it felt meaningful to be part of something bigger. The reputation, the GitHub profile, the conference talks — that was the fuel.&lt;/p&gt;
&lt;p&gt;AI erodes that too. When your library is consumed by a model that never credits you, the ego fuel dries up. Nobody&apos;s filing issues saying &quot;great work on this API.&quot; Nobody&apos;s writing blog posts about your clever design decisions. Your code is in millions of projects and you&apos;ll never know.&lt;/p&gt;
&lt;p&gt;Michael Still &lt;a href=&quot;https://www.madebymikal.com/ancient-code-mental-health-and-ai-tooling/&quot;&gt;maintained pngtools for 25 years&lt;/a&gt; and recently admitted he &quot;can&apos;t really explain what I got in return apart from the occasional dopamine hit.&quot; That&apos;s not bitterness — it&apos;s an honest accounting of what happens when the feedback loop never closes.&lt;/p&gt;
&lt;h2&gt;The rebuild reflex&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.anthropic.com&quot;&gt;Anthropic&lt;/a&gt; &lt;a href=&quot;https://www.anthropic.com/engineering/building-c-compiler&quot;&gt;built a C compiler&lt;/a&gt; with Claude. Cursor &lt;a href=&quot;https://fortune.com/2026/01/23/cursor-built-web-browser-with-swarm-ai-agents-powered-openai/&quot;&gt;built a web browser&lt;/a&gt; with a swarm of agents powered by &lt;a href=&quot;https://openai.com&quot;&gt;OpenAI&lt;/a&gt; models. This is what happens when development costs collapse.&lt;/p&gt;
&lt;p&gt;The obvious objection: generating code isn&apos;t maintaining code. curl works because of 20 years of edge cases, security patches, and platform quirks. You can&apos;t generate that in a weekend. True — but the line between &quot;writing&quot; code and &quot;maintaining&quot; code is blurrier than it looks. Every line you write immediately becomes maintenance. AI doesn&apos;t just generate the first draft — it fixes the bugs, handles the edge cases, iterates on the patches. The entire lifecycle gets cheaper, not just the initial build.&lt;/p&gt;
&lt;p&gt;Five years ago, nobody in their right mind would build their own HTTP server, their own date parsing library, their own compression algorithm. You used the shared one because the alternative was insane.&lt;/p&gt;
&lt;p&gt;The alternative is no longer insane. It might be a weekend project.&lt;/p&gt;
&lt;h2&gt;Where this leaves us&lt;/h2&gt;
&lt;p&gt;Some of this is happening right now. The Tailwind numbers are a Q4 report. Stack Overflow&apos;s decline is measured. The &lt;a href=&quot;https://curl.se/&quot;&gt;curl&lt;/a&gt; maintainer is drowning in AI-generated noise today. Some of it is projection — I&apos;m betting that the diversion effect gets stronger, not weaker, as AI gets better. I could be wrong. But the trend lines all point the same way.&lt;/p&gt;
&lt;p&gt;&quot;But AI also contributes!&quot; Sure. Agents file PRs, generate docs, triage issues. That helps with the mechanical work. It doesn&apos;t replace the human who cared enough to read your code and tell you it mattered. The engagement that sustained open source was never about the pull requests — it was about the people behind them.&lt;/p&gt;
&lt;p&gt;Open source isn&apos;t dying because people stopped caring. It&apos;s dying because AI lets people extract all the value without returning any of it. The code flows through models, through agents, through autocomplete — and none of it flows back.&lt;/p&gt;
&lt;p&gt;The question isn&apos;t whether this is happening. It&apos;s what comes next.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;a href=&quot;https://julien.danjou.info/blog/open-source-after-the-extraction&quot;&gt;Part 2: Open Source After the Extraction&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
</content:encoded><category>open-source</category><category>ai</category></item><item><title>How Entire Works Under the Hood</title><link>https://julien.danjou.info/blog/how-entire-works-under-the-hood/</link><guid isPermaLink="true">https://julien.danjou.info/blog/how-entire-works-under-the-hood/</guid><description>I dug into Entire&apos;s open source Checkpoints CLI. It&apos;s a clever abuse of git internals — shadow branches, orphan metadata, and a session state machine. Here&apos;s how it works.</description><pubDate>Thu, 12 Feb 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;In &lt;a href=&quot;https://julien.danjou.info/blog/github-wont-work-for-ai-agents&quot;&gt;part 1&lt;/a&gt;, I covered why Entire raised $60M and what problem they&apos;re solving. Now let&apos;s look at the actual code.&lt;/p&gt;
&lt;p&gt;I pointed Claude Code at &lt;a href=&quot;https://github.com/entireio/cli&quot;&gt;Entire&apos;s open source CLI&lt;/a&gt; and asked it to explain how things work. The architecture is more interesting than I expected — they&apos;ve essentially built a session-aware metadata layer on top of git using nothing but git&apos;s own primitives.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/entire/repo.png&quot; alt=&quot;The Entire CLI repository on GitHub&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;The Big Picture&lt;/h2&gt;
&lt;p&gt;Entire hooks into two things: your AI agent (Claude Code, Gemini CLI) and git itself. The agent hooks capture what&apos;s happening during a session. The git hooks capture what the developer commits.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Agent hooks (Claude Code)         Git hooks
  SessionStart                     prepare-commit-msg
  UserPromptSubmit                 post-commit
  Stop                             pre-push
  PreToolUse / PostToolUse
         │                              │
         └──────────┬───────────────────┘
                    │
            ┌───────▼────────┐
            │   Strategy     │
            │                │
            │ SaveChanges()  │
            │ Rewind()       │
            │ Condense()     │
            └───────┬────────┘
                    │
         ┌──────────┴──────────┐
         │                     │
    Shadow branches      Metadata branch
    (local, temp)        (shared, permanent)
    entire/&amp;lt;hash&amp;gt;        entire/checkpoints/v1
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;How Agent Hooks Get Installed&lt;/h2&gt;
&lt;p&gt;Running &lt;code&gt;entire enable&lt;/code&gt; writes hook entries into &lt;code&gt;.claude/settings.json&lt;/code&gt;. Seven hooks, covering the full session lifecycle:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;SessionStart/SessionEnd&lt;/strong&gt; — track session boundaries&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;UserPromptSubmit&lt;/strong&gt; — fires before the agent starts working (captures human edits)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Stop&lt;/strong&gt; — fires after the agent finishes a turn (triggers checkpoint save)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;PreToolUse/PostToolUse[Task]&lt;/strong&gt; — track subagent spawning&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;PostToolUse[TodoWrite]&lt;/strong&gt; — capture task state&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Each hook is just a shell command: &lt;code&gt;entire hooks claude-code stop&lt;/code&gt;. The CLI parses the agent&apos;s transcript to extract everything it needs.&lt;/p&gt;
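&lt;p&gt;For reference, the wiring is plain JSON. A &lt;code&gt;Stop&lt;/code&gt; entry in &lt;code&gt;.claude/settings.json&lt;/code&gt; looks roughly like this (abridged; the schema belongs to Claude Code, so treat the exact nesting as illustrative):&lt;/p&gt;

```json
{
  "hooks": {
    "Stop": [
      { "hooks": [ { "type": "command", "command": "entire hooks claude-code stop" } ] }
    ]
  }
}
```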
&lt;h2&gt;The Transcript Is the Source of Truth&lt;/h2&gt;
&lt;p&gt;This is the key insight. When the Stop hook fires, Claude Code passes two things via stdin: a &lt;code&gt;session_id&lt;/code&gt; and a &lt;code&gt;transcript_path&lt;/code&gt;. That transcript — the JSONL file where Claude logs every message, tool call, and response — is the single source of truth.&lt;/p&gt;
&lt;p&gt;The CLI mines it for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Modified files&lt;/strong&gt; — scans for &lt;code&gt;tool_use&lt;/code&gt; blocks where the tool is &lt;code&gt;Write&lt;/code&gt;, &lt;code&gt;Edit&lt;/code&gt;, etc., and extracts the &lt;code&gt;file_path&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;User prompts&lt;/strong&gt; — finds &lt;code&gt;type: &quot;user&quot;&lt;/code&gt; entries&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Token usage&lt;/strong&gt; — sums &lt;code&gt;input_tokens&lt;/code&gt;, &lt;code&gt;output_tokens&lt;/code&gt; from response metadata&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Summary&lt;/strong&gt; — grabs the last assistant message&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;No magic, no APIs. It just reads the same JSONL file that Claude Code writes to disk.&lt;/p&gt;
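&lt;p&gt;To make that concrete, here&apos;s a minimal sketch of that mining pass in Python. The field names (&lt;code&gt;tool_use&lt;/code&gt;, &lt;code&gt;file_path&lt;/code&gt;, &lt;code&gt;usage&lt;/code&gt;) follow the description above; treat the exact shapes as assumptions about the transcript format, not a spec:&lt;/p&gt;

```python
import json

def mine_transcript(path):
    """Scan an agent's JSONL transcript for files touched, prompts, and tokens."""
    files, prompts, tokens = set(), [], 0
    last_assistant = None  # the summary is the last assistant message
    for line in open(path, encoding="utf-8"):
        entry = json.loads(line)
        if entry.get("type") == "user":
            prompts.append(entry)
        elif entry.get("type") == "assistant":
            last_assistant = entry
            msg = entry.get("message", {})
            usage = msg.get("usage", {})
            tokens += usage.get("input_tokens", 0) + usage.get("output_tokens", 0)
            for block in msg.get("content", []):
                # Write/Edit tool calls carry the path of the file the agent modified
                if block.get("type") == "tool_use" and block.get("name") in ("Write", "Edit"):
                    fp = block.get("input", {}).get("file_path")
                    if fp:
                        files.add(fp)
    return files, prompts, tokens, last_assistant
```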
&lt;h2&gt;Shadow Branches: Snapshots Without Commits&lt;/h2&gt;
&lt;p&gt;Here&apos;s where it gets clever. When the agent finishes a turn, Entire needs to save a snapshot of the working tree. But it can&apos;t commit to your branch — that would mess up your history.&lt;/p&gt;
&lt;p&gt;So it creates &lt;strong&gt;shadow branches&lt;/strong&gt;: refs like &lt;code&gt;entire/2b4c177-a5e3f2&lt;/code&gt; that live in your local repo but never touch your working branch.&lt;/p&gt;
&lt;p&gt;The name encodes two things:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;2b4c177&lt;/code&gt; — first 7 chars of HEAD when the session started&lt;/li&gt;
&lt;li&gt;&lt;code&gt;a5e3f2&lt;/code&gt; — hash of the worktree ID (to support &lt;code&gt;git worktree&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;
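&lt;p&gt;In other words, the ref name is a pure function of where the session started and which worktree it runs in. A sketch of the shape; the hash used for the worktree component is my guess, not what the CLI actually does:&lt;/p&gt;

```python
import hashlib

def shadow_branch_name(head_sha, worktree_id):
    """Build a ref like entire/2b4c177-a5e3f2 from session HEAD and worktree."""
    head_part = head_sha[:7]
    # hash choice is an assumption; only determinism and shortness matter here
    wt_part = hashlib.sha256(worktree_id.encode()).hexdigest()[:6]
    return f"entire/{head_part}-{wt_part}"
```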
&lt;p&gt;The snapshot is built entirely in memory using go-git&apos;s plumbing APIs:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Take HEAD&apos;s tree (the full repo structure)&lt;/li&gt;
&lt;li&gt;Apply the agent&apos;s changes (add/remove/modify blobs)&lt;/li&gt;
&lt;li&gt;Attach the metadata directory (&lt;code&gt;.entire/metadata/&amp;lt;session-id&amp;gt;/&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Create a commit on the shadow branch&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;No checkout, no stash, no visible side effects. The user and agent don&apos;t even know it happened.&lt;/p&gt;
&lt;p&gt;Deduplication is automatic: if the tree hash matches the previous checkpoint, it skips the commit. Git&apos;s content-addressable storage means identical files share blobs across checkpoints.&lt;/p&gt;
&lt;h2&gt;The Condensation Model&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/entire/branch.png&quot; alt=&quot;The entire/checkpoints/v1 orphan branch stores all metadata&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Shadow branches are local scratch space. The real metadata lives on &lt;code&gt;entire/checkpoints/v1&lt;/code&gt; — an orphan branch (no common ancestor with your code) that&apos;s pushed alongside your regular branches.&lt;/p&gt;
&lt;p&gt;The flow:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Agent works → checkpoints saved on shadow branch (local)&lt;/li&gt;
&lt;li&gt;You commit → &lt;code&gt;post-commit&lt;/code&gt; hook fires&lt;/li&gt;
&lt;li&gt;&lt;code&gt;prepare-commit-msg&lt;/code&gt; adds a trailer: &lt;code&gt;Entire-Checkpoint: a3b2c4d5e6f7&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Shadow branch data gets &lt;strong&gt;condensed&lt;/strong&gt; — copied into the metadata branch&lt;/li&gt;
&lt;li&gt;Shadow branch gets cleaned up&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The checkpoint ID (&lt;code&gt;a3b2c4d5e6f7&lt;/code&gt;) is 6 random bytes, not a git SHA. It&apos;s sharded into a directory path on the metadata branch:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;entire/checkpoints/v1  (orphan branch)
└── a3/b2c4d5e6f7/
    ├── metadata.json          # summary, attribution, token usage
    ├── 0/
    │   ├── full.jsonl         # complete session transcript
    │   ├── prompt.txt         # user prompts
    │   └── context.md         # generated context
    └── 1/                     # additional sessions if any
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That one-line trailer in your commit — &lt;code&gt;Entire-Checkpoint: a3b2c4d5e6f7&lt;/code&gt; — is the bidirectional link. From the commit you find metadata via the sharded path. From the metadata you find the commit by searching for the trailer.&lt;/p&gt;
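&lt;p&gt;Both directions of that link are cheap. Commit to metadata is string slicing; metadata to commit is a &lt;code&gt;git log --grep&lt;/code&gt; over the trailer. A sketch (the trailer and path layout are from this post; the git invocation is standard):&lt;/p&gt;

```python
import subprocess

def metadata_path(checkpoint_id):
    """Shard a 12-hex-char checkpoint ID into its path on the metadata branch."""
    return f"{checkpoint_id[:2]}/{checkpoint_id[2:]}/"

def find_commit(checkpoint_id, repo="."):
    """Find the commit whose message carries the matching trailer, if any."""
    out = subprocess.run(
        ["git", "-C", repo, "log", "--all", "--format=%H",
         f"--grep=Entire-Checkpoint: {checkpoint_id}"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    return out[0] if out else None
```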
&lt;h2&gt;Attribution: Who Wrote What?&lt;/h2&gt;
&lt;p&gt;This is the piece that matters for engineering leads. Entire tracks line-level code attribution: what percentage was agent-written vs. human-written.&lt;/p&gt;
&lt;p&gt;The trick is the &lt;strong&gt;UserPromptSubmit&lt;/strong&gt; hook. Every time you type a new prompt — &lt;em&gt;before&lt;/em&gt; the agent starts working — the CLI snapshots the worktree diff against the last checkpoint. This captures exactly what you changed between agent turns.&lt;/p&gt;
&lt;p&gt;By commit time, it has:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Agent lines&lt;/strong&gt;: changes from the last checkpoint&apos;s tree&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Human added&lt;/strong&gt;: lines you added between prompts&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Human modified&lt;/strong&gt;: lines you edited in agent-written code&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Agent percentage&lt;/strong&gt;: the ratio&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The result is stored in &lt;code&gt;initial_attribution&lt;/code&gt; in the metadata:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{
  &quot;agent_lines&quot;: 150,
  &quot;human_added&quot;: 25,
  &quot;human_modified&quot;: 10,
  &quot;agent_percentage&quot;: 85.7
}
&lt;/code&gt;&lt;/pre&gt;
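&lt;p&gt;The numbers in that example check out if the percentage is agent lines over total new lines, which is my reading of the formula rather than something the code confirms:&lt;/p&gt;

```python
def agent_percentage(agent_lines, human_added):
    """Share of newly written lines attributed to the agent (assumed formula)."""
    total = agent_lines + human_added
    return round(100 * agent_lines / total, 1) if total else 0.0
```

&lt;p&gt;With the metadata above, &lt;code&gt;agent_percentage(150, 25)&lt;/code&gt; gives 85.7.&lt;/p&gt;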
&lt;p&gt;It even uses a LIFO heuristic for self-modifications: if you add lines and later remove lines from the same file, it assumes you&apos;re removing your own additions first, so the agent&apos;s contribution isn&apos;t unfairly penalized.&lt;/p&gt;
&lt;h2&gt;Multi-Developer: Conflict-Free by Design&lt;/h2&gt;
&lt;p&gt;The metadata branch gets pushed during &lt;code&gt;git push&lt;/code&gt; (via the &lt;code&gt;pre-push&lt;/code&gt; hook). Multiple developers push to the same &lt;code&gt;entire/checkpoints/v1&lt;/code&gt; branch.&lt;/p&gt;
&lt;p&gt;This works because checkpoint IDs are random — two developers will essentially never produce the same 12-hex-char ID. Merging is just a tree union: flatten both trees, combine entries, done. No merge conflicts possible.&lt;/p&gt;
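&lt;p&gt;The merge really is just a dictionary union. With 6 random bytes per ID, the birthday bound says you&apos;d need on the order of tens of millions of checkpoints before collisions become likely. A sketch of the idea:&lt;/p&gt;

```python
def merge_metadata_trees(local, remote):
    """Union two flattened {path: blob_hash} trees. Random checkpoint IDs
    make key collisions effectively impossible, so no conflict handling."""
    merged = dict(local)
    merged.update(remote)
    return merged
```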
&lt;p&gt;If a normal push fails (non-fast-forward), the CLI fetches the remote, merges trees, creates a merge commit, and retries.&lt;/p&gt;
&lt;h2&gt;What&apos;s Missing&lt;/h2&gt;
&lt;p&gt;The architecture is solid engineering, but a few things stood out:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Transcript privacy.&lt;/strong&gt; Session transcripts (full agent conversations) get pushed to a branch anyone with repo access can read. For private repos, maybe fine. For orgs with varying access levels — that&apos;s a problem.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Squash merges break links.&lt;/strong&gt; If a PR with 5 commits (each with &lt;code&gt;Entire-Checkpoint&lt;/code&gt; trailers) gets squash-merged, those trailers disappear. The metadata exists but the bidirectional link from the merged commit is broken.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The metadata branch grows forever.&lt;/strong&gt; Every session from every developer, including abandoned PRs and throwaway experiments, accumulates on &lt;code&gt;entire/checkpoints/v1&lt;/code&gt;. There&apos;s an &lt;code&gt;entire clean&lt;/code&gt; command for local shadow branches, but no retention policy for the permanent metadata. For a large team over months, that&apos;ll bloat.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;No PR linkage.&lt;/strong&gt; The branch name is stored, but there&apos;s no PR number or URL. You can&apos;t easily ask &quot;show me all sessions related to PR #42.&quot;&lt;/p&gt;
&lt;h2&gt;The Smart Parts&lt;/h2&gt;
&lt;p&gt;What I genuinely admire:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Git as a free database.&lt;/strong&gt; Shadow branches store full repo snapshots, but git&apos;s content-addressable storage means only changed blobs cost anything. You get atomic snapshots, deduplication, and transport for free.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;In-memory tree building.&lt;/strong&gt; Checkpoints are created through go-git plumbing APIs — no worktree checkout, no stash, nothing visible. Zero disruption to the developer&apos;s flow.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Attribution at prompt boundaries.&lt;/strong&gt; Capturing human edits &lt;em&gt;before&lt;/em&gt; the agent contaminates the worktree is the cleanest measurement point possible.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Shadow branch migration.&lt;/strong&gt; If you rebase or pull (HEAD changes), the shadow branch name automatically updates. Your session continues seamlessly. This handles a common workflow that would otherwise silently break.&lt;/p&gt;
&lt;h2&gt;So What?&lt;/h2&gt;
&lt;p&gt;Entire doesn&apos;t solve a burning problem today. Most of us are fine with agent-written code landing in our repos without detailed provenance. But the trajectory is clear: as agents write more code, the audit trail becomes essential.&lt;/p&gt;
&lt;p&gt;The approach of storing session context alongside code in git — rather than in a separate system — is the right architectural bet. Git is already where your code lives, where your CI runs, where your reviews happen. Adding a metadata layer inside git itself (instead of a SaaS dashboard somewhere) means the context travels with the code.&lt;/p&gt;
&lt;p&gt;Whether Entire is the company that turns this into a platform worth $300M is above my pay grade. But the engineering is genuine, the problem is real, and the timing feels right.&lt;/p&gt;
&lt;p&gt;I&apos;ll be watching.&lt;/p&gt;
</content:encoded><category>ai</category><category>git</category><category>developer-experience</category></item><item><title>Agent-Written Code Needs More Than Git</title><link>https://julien.danjou.info/blog/github-wont-work-for-ai-agents/</link><guid isPermaLink="true">https://julien.danjou.info/blog/github-wont-work-for-ai-agents/</guid><description>The former GitHub CEO just raised $60M to rebuild developer tooling for the agentic era. He might be right that git needs a rethink — I&apos;ve been hacking around the same problems.</description><pubDate>Wed, 11 Feb 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;The former GitHub CEO just raised $60M at a $300M valuation for a seed round. For a CLI tool. Let that sink in.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/entire/entire-io.png&quot; alt=&quot;Entire.io — a new developer platform for the agentic era&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Thomas Dohmke left GitHub and launched &lt;a href=&quot;https://entire.io/blog/hello-entire-world&quot;&gt;Entire&lt;/a&gt;, a developer platform built from scratch for the age of AI coding agents. It&apos;s the largest seed round in dev tools history.&lt;/p&gt;
&lt;p&gt;My first reaction was &quot;that&apos;s insane.&quot; My second reaction was &quot;wait, I&apos;ve been solving the same problem with duct tape and hooks.&quot;&lt;/p&gt;
&lt;h2&gt;The Problem Is Real&lt;/h2&gt;
&lt;p&gt;If you&apos;re using AI agents like Claude Code or Gemini CLI daily — and I am — you&apos;ve already felt it. Git was built for humans writing code. It assumes you know what you changed and why. It assumes your commit messages mean something. It assumes the person who wrote the code will remember what they were thinking.&lt;/p&gt;
&lt;p&gt;AI agents break all of that.&lt;/p&gt;
&lt;p&gt;When Claude Code rewrites a module for me, the commit message says what happened, but not &lt;em&gt;why&lt;/em&gt;. There&apos;s no trace of the conversation that led there. No record of the three approaches the agent considered and rejected. No way to know if the prompt was &quot;refactor this for clarity&quot; or &quot;make this 10x faster and I don&apos;t care about readability.&quot;&lt;/p&gt;
&lt;p&gt;The transcript — the actual reasoning behind the code — lives in a terminal session that vanishes when you close the tab.&lt;/p&gt;
&lt;h2&gt;My Duct Tape Solution&lt;/h2&gt;
&lt;p&gt;I ran into this a few weeks ago when I wanted to resume a Claude Code session after a reboot. The session was gone, and I had no idea what context the agent had when it made certain decisions.&lt;/p&gt;
&lt;p&gt;So I did what any engineer would do: I wrote a hook. A simple Claude Code hook that links each commit to its session ID via a git trailer. Nothing fancy — just enough that I can trace a commit back to the conversation that produced it.&lt;/p&gt;
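&lt;p&gt;For the curious, the whole trick fits in a few lines. A hypothetical version of that hook, with the trailer name and environment variable invented for illustration; only the &lt;code&gt;git interpret-trailers&lt;/code&gt; mechanism is standard git:&lt;/p&gt;

```python
#!/usr/bin/env python3
"""Hypothetical prepare-commit-msg hook linking commits to agent sessions."""
import os
import subprocess
import sys

def add_session_trailer(msg_file, session_id):
    # git interpret-trailers handles trailer placement and formatting for us
    subprocess.run(
        ["git", "interpret-trailers", "--in-place",
         "--trailer", f"Agent-Session: {session_id}", msg_file],
        check=True,
    )

if __name__ == "__main__":
    # hypothetical: the agent-side hook exports its session ID here
    sid = os.environ.get("AGENT_SESSION_ID")
    if sid:
        add_session_trailer(sys.argv[1], sid)
```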
&lt;p&gt;Combined with &lt;a href=&quot;https://github.com/Mergify/mergify-cli&quot;&gt;Mergify&apos;s CLI&lt;/a&gt; for stacking PRs, it made my workflow usable. But it&apos;s duct tape. It doesn&apos;t capture the transcript, doesn&apos;t track attribution, doesn&apos;t handle multi-session work.&lt;/p&gt;
&lt;p&gt;Which is exactly the gap Entire is going after.&lt;/p&gt;
&lt;h2&gt;What Entire Actually Claims to Be&lt;/h2&gt;
&lt;p&gt;Beyond the buzzwords in the press release, Entire is shipping three things:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Checkpoints&lt;/strong&gt; — an open source CLI that captures session context (prompts, transcripts, reasoning) alongside every commit, stored in git without polluting your history&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;A semantic reasoning layer&lt;/strong&gt; — meant to let multiple AI agents collaborate on the same codebase with shared context&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;An AI-native UI&lt;/strong&gt; — designed for agent-to-human collaboration rather than human-to-human&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;They&apos;re not claiming to have a finished product — and they&apos;re upfront about it. The Checkpoints CLI is the first concrete thing they&apos;ve shipped, and it&apos;s open source. The rest is where the $60M goes. Fair enough — let&apos;s look at what actually exists.&lt;/p&gt;
&lt;h2&gt;Why $60M for This?&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/entire/techcrunch.png&quot; alt=&quot;TechCrunch: Former GitHub CEO raises record $60M dev tool seed round&quot; /&gt;&lt;/p&gt;
&lt;p&gt;The bet isn&apos;t that the current CLI is worth $300M. The bet is that the developer tooling stack needs to be rebuilt for a world where most code is written by agents, and the first company to nail the foundation wins.&lt;/p&gt;
&lt;p&gt;Think about it: if 99% of code is agent-written in two years (which is where things are heading), then the code review, debugging, and understanding workflow we have today is fundamentally broken. You can&apos;t review AI-written code the same way you review human-written code. You need the &lt;em&gt;context&lt;/em&gt; — what was the agent trying to do, what constraints did it have, what alternatives did it consider.&lt;/p&gt;
&lt;p&gt;That&apos;s a platform opportunity, and $60M is the price of a credible attempt at it. Whether Entire is the one to build it is a different question — but the problem is real and urgent.&lt;/p&gt;
&lt;h2&gt;My Take&lt;/h2&gt;
&lt;p&gt;Dohmke knows exactly where GitHub&apos;s limits are (he ran it). The investor list — Felicis, Madrona, Olivier Pomel — signals real conviction. And the core insight, that agent context is as important as the code itself, is something I believe in my bones because I&apos;ve been hacking around it myself.&lt;/p&gt;
&lt;p&gt;Their long-term ambition seems to involve moving beyond git. I&apos;m more dubious about that part. Git is unkillable. My bet is that the reality will be hooks and duct tape around git for the next few years — and honestly, that&apos;s probably enough. Git&apos;s data model bends a lot further than people think before it breaks.&lt;/p&gt;
&lt;p&gt;There&apos;s a deeper tension, though. Entire&apos;s model assumes humans are still in the loop — driving agents, reviewing output, caring about attribution. But that&apos;s already not quite how it works. I haven&apos;t written a line of code in months. I describe what I want, the agent writes it, I tell it to fix its mistakes, and it does. I&apos;m not a developer anymore — I&apos;m a director.&lt;/p&gt;
&lt;p&gt;And the trajectory is obvious: agents won&apos;t need directors much longer either. If agents are fully autonomous, who&apos;s the audience for commit context and session transcripts? The agent doesn&apos;t need to remember what it was thinking — it can just re-derive it. The human who never touched the code doesn&apos;t need line-level attribution.&lt;/p&gt;
&lt;p&gt;That could go either way for Entire. Maybe full autonomy makes provenance &lt;em&gt;more&lt;/em&gt; critical — precisely because no human was involved, you need a machine-readable audit trail. Or maybe it makes the whole problem vanish — agents that manage their own context don&apos;t need git hacks to preserve it.&lt;/p&gt;
&lt;p&gt;Either way, if you&apos;re leading an engineering team right now, you should be thinking about how you&apos;ll audit, understand, and trust the code your agents produce — whether there&apos;s a human in the loop or not.&lt;/p&gt;
&lt;p&gt;Next up, I&apos;ll dig into the actual source code and show you &lt;a href=&quot;https://julien.danjou.info/blog/how-entire-works-under-the-hood&quot;&gt;how Entire&apos;s Checkpoints CLI works under the hood&lt;/a&gt;. It&apos;s a clever piece of engineering that abuses git internals in ways I genuinely admire.&lt;/p&gt;
</content:encoded><category>ai</category><category>git</category><category>developer-experience</category></item><item><title>So I Will Never Write Code Again</title><link>https://julien.danjou.info/blog/so-i-will-never-write-code-again/</link><guid isPermaLink="true">https://julien.danjou.info/blog/so-i-will-never-write-code-again/</guid><description>I&apos;ve been coding for 25 years. Since January, I haven&apos;t written a single line. And it feels like relief.</description><pubDate>Tue, 10 Feb 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/nocode.png&quot; alt=&quot;Illustration of a developer who has stopped writing code by hand&quot; /&gt;&lt;/p&gt;
&lt;p&gt;A year ago, I thought AI-assisted coding was going to be a nice productivity boost. Generate a Python script with ChatGPT, copy-paste it somewhere, save twenty minutes. I figured that was the next five years: small wins, gradual improvement.&lt;/p&gt;
&lt;p&gt;Then last August, I &lt;a href=&quot;https://julien.danjou.info/blog/vibe-coding-a-feature-with-ai/&quot;&gt;wrote a feature where Copilot did about 80% of the work&lt;/a&gt;. I thought: okay, it&apos;s getting closer.&lt;/p&gt;
&lt;p&gt;Since January, I haven&apos;t written a single line of code.&lt;/p&gt;
&lt;p&gt;I want to be precise: I&apos;ve &lt;em&gt;produced&lt;/em&gt; a lot of code. More than ever, probably. But I didn&apos;t write any of it. I steer. I review. I architect. I don&apos;t type.&lt;/p&gt;
&lt;p&gt;And I don&apos;t feel the urge to go back.&lt;/p&gt;
&lt;p&gt;This might sound like grief. I&apos;ve been coding for 25 years. I wrote C for a window manager, Lisp for Emacs, Python for everything else. For most of my career, coding was a thing that defined me. Losing that should feel like losing a part of myself.&lt;/p&gt;
&lt;p&gt;But it doesn&apos;t. It feels like relief.&lt;/p&gt;
&lt;p&gt;For years, I was frustrated. I had more ideas than I could build. The bottleneck was never thinking, it was typing. Translating architecture into syntax, aligning parentheses, naming variables, fighting linters. The fun was in the &lt;em&gt;solving&lt;/em&gt;, not the &lt;em&gt;writing&lt;/em&gt;. And now the writing part is handled.&lt;/p&gt;
&lt;p&gt;I still enjoy reading code. It&apos;s like reading a good book. Understanding how pytest works internally, tracing through a complex system, that remains satisfying. But when the goal is to produce, AI beats everything.&lt;/p&gt;
&lt;p&gt;This is actually the second time I&apos;ve stepped away from code. The first was when I became CEO. That time, it was forced. I didn&apos;t choose to stop. I just ran out of hours. There was always one more meeting, one more hire, one more decision that pushed coding to the evening, then to the weekend, then to never.&lt;/p&gt;
&lt;p&gt;That &lt;em&gt;was&lt;/em&gt; grief. A slow, reluctant surrender.&lt;/p&gt;
&lt;p&gt;This time is different. I&apos;m not being pushed away. I&apos;m choosing to work at a higher layer. The same way I once chose Python over C, because life is short and the abstraction was worth it. AI is just the next rung.&lt;/p&gt;
&lt;p&gt;The creativity doesn&apos;t stop. If anything, it accelerates. You still design systems, still make architectural choices, still think about data models and trade-offs. You just don&apos;t spend hours translating those decisions into semicolons. The craft moves up a level, and that&apos;s fine.&lt;/p&gt;
&lt;p&gt;I know this will be harder for others. My colleague Rémy &lt;a href=&quot;https://mergify.com/blog/claude-didnt-kill-craftsmanship&quot;&gt;wrote about whether AI is killing craftsmanship&lt;/a&gt;. For engineers who defined themselves by the elegance of their code, by the perfectly named function, by the satisfaction of a clean diff, this shift feels like losing something sacred.&lt;/p&gt;
&lt;p&gt;I get it. Writing C was a beautiful puzzle. Lisp was genuinely fun. And I still think learning to code by hand matters, the same way learning assembly helps you understand memory even if you never write it professionally.&lt;/p&gt;
&lt;p&gt;But I&apos;m not going to fight a paradigm shift out of nostalgia. The ride was great. The next one looks better.&lt;/p&gt;
&lt;p&gt;I think the flow state people mourn isn&apos;t gone. It&apos;s just moving. Steering AI toward clean architecture, making the right system-level decisions, reviewing output with deep context, that has its own rhythm. The interruptions are still too frequent today (too many permission prompts), but the direction is clear. The flow will come back. It&apos;ll just be at a different altitude.&lt;/p&gt;
&lt;p&gt;If you&apos;re a senior engineer feeling this shift approaching, here&apos;s what I&apos;d say: the grief you&apos;re expecting might not be grief at all. The bottleneck was never the thinking. It was the typing. And the thinking is still yours.&lt;/p&gt;
</content:encoded><category>ai</category><category>coding</category></item><item><title>The Pre-AI Timestamp</title><link>https://julien.danjou.info/blog/the-pre-ai-timestamp/</link><guid isPermaLink="true">https://julien.danjou.info/blog/the-pre-ai-timestamp/</guid><description>In a few years, the only proof something is real will be that it existed before AI did.</description><pubDate>Thu, 29 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I was watching the news this week. &lt;a href=&quot;https://www.yahoo.com/news/articles/experts-issue-warning-viral-videos-033000851.html&quot;&gt;A segment about AI-generated fake videos of snowstorms in the US and Russia&lt;/a&gt;. Journalists carefully debunking synthetic footage, frame by frame.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/f1be059b-808a-4e00-a8b7-aebc122fc7f5_462x704.png&quot; alt=&quot;AI-generated image of a fake snowstorm video&quot; /&gt;
&lt;em&gt;AI generated image&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I thought: this won’t scale.&lt;/p&gt;
&lt;p&gt;Right now, we’re in a strange transitional moment. People can still tell the difference between AI-generated content and reality. We spot the weird hands, the uncanny smoothness, the details that don’t quite land. News organizations debunk fakes. We feel like we’re staying ahead of it.&lt;/p&gt;
&lt;p&gt;But this is temporary. In a year or two, AI-generated video will be indistinguishable from real footage. And then what?&lt;/p&gt;
&lt;p&gt;Here’s what I think people aren’t grasping: the challenge isn’t “how do we detect AI content?” That’s the 2026 problem. The real challenge is what comes after, when detection becomes impossible.&lt;/p&gt;
&lt;p&gt;The only way I know Coldplay is a real band is that they existed before AI did. I have pre-AI memory. I remember when they started. I’ve seen them referenced in media that predates synthetic content. That history is my anchor.&lt;/p&gt;
&lt;p&gt;Now imagine a new band starting in 2030. How would I know they’re real? Unless I go to their concert and see them on stage, I can’t. Their music could be generated. Their interviews could be synthetic. Their social media presence could be entirely fabricated. There’s no way to verify.&lt;/p&gt;
&lt;p&gt;And when I say go, I mean me, in person. I can’t trust anyone online that I don’t know personally. In a few years, there won’t be a way for you to know whether you’re talking to a real human through a computerized interface. Your online friends could be AI, for all you know.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/aa9fe534-04a2-469b-8cde-5df0d5a07012_1456x816.png&quot; alt=&quot;Illustration of the erosion of online trust as AI content becomes indistinguishable&quot; /&gt;&lt;/p&gt;
&lt;p&gt;This applies to everything. An influencer recommending restaurants. A journalist breaking news. A newsletter you just discovered. If it started after AI became indistinguishable, you have no anchor. You can’t know.&lt;/p&gt;
&lt;p&gt;And this is something most people can’t grasp today, because they lived through the pre-AI era and still have this anchor. Future generations won’t have it.&lt;/p&gt;
&lt;p&gt;We’ve already lived through a version of this with fake reviews. For the last decade, we’ve learned to distrust Amazon ratings, Yelp scores, app store reviews. We developed heuristics. We looked for patterns.&lt;/p&gt;
&lt;p&gt;But that was humans writing fake reviews at a human scale. Now imagine AI generating reviews at a scale we can’t comprehend. Every product, every restaurant, every service flooded with synthetic opinions indistinguishable from real ones. The heuristics break. Trust collapses.&lt;/p&gt;
&lt;p&gt;The same thing will happen to media, to news, to social networks, to everything online. &lt;a href=&quot;https://julien.danjou.info/blog/ai-feels-like-1999-all-over-again&quot;&gt;AI feels like 1999 all over again&lt;/a&gt; — except this time, the divide isn&apos;t access. It&apos;s whether you can tell what&apos;s real.&lt;/p&gt;
&lt;p&gt;A lot of trust today is based on consensus. We trust something because many people trust it. But when bots can outnumber people, consensus becomes meaningless. Popularity becomes a metric that anyone can manufacture.&lt;/p&gt;
&lt;p&gt;So what’s left?&lt;/p&gt;
&lt;p&gt;Physical presence. Meeting someone in person. Attending a concert. Being there.&lt;/p&gt;
&lt;p&gt;Real life becomes the last trust anchor. The thing that can’t be faked (at least until humanoid robots become indistinguishable too, but that’s a problem for later).&lt;/p&gt;
&lt;p&gt;Here’s what haunts me: in two generations, no one alive will remember what was pre-AI. The generational memory dies. A teenager in 2050 won’t know that The New York Times existed before AI and is therefore trustworthy because it’s run by humans (assuming that’s still the case). They won’t have the anchor I have. Everything in their world will be post-AI, and nothing online will be verifiable.&lt;/p&gt;
&lt;p&gt;They’ll have to assume everything is fake. That’s the default. And building trust from that baseline is something we’ve never had to do before.&lt;/p&gt;
&lt;p&gt;I don’t have a solution. But I think we’re in a narrow window where we still remember what “real” meant. That memory is more valuable than we realize.&lt;/p&gt;
</content:encoded><category>ai</category></item><item><title>AI Won’t Kill Juniors. It Will Expose Seniors.</title><link>https://julien.danjou.info/blog/ai-wont-kill-juniors-it-will-expose/</link><guid isPermaLink="true">https://julien.danjou.info/blog/ai-wont-kill-juniors-it-will-expose/</guid><description>Everyone fears for the juniors. But the engineers who stopped growing at the wrong layer have more to lose.</description><pubDate>Wed, 21 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;The tech industry has a new consensus: AI will kill junior engineering jobs. Look at any discussion thread, and you’ll find the same narrative. Juniors are doomed. They’ll never learn to code properly. The entry-level pipeline is broken.&lt;/p&gt;
&lt;p&gt;I’m not so sure. When I look at junior engineers today, I see people who are used to learning. They came up through boot camps, YouTube tutorials, and constantly shifting frameworks. Adapting is what they do. They might struggle for a year or two, but they’ll figure it out.&lt;/p&gt;
&lt;p&gt;The engineers I’m worried about are the senior ones.&lt;/p&gt;
&lt;p&gt;Sure, not all of them. But the ones who plateaued at “code craftsman” and never moved up.&lt;/p&gt;
&lt;p&gt;I’ve seen it play out already. A standup where someone proudly reports they spent the day fixing a batch of bugs and shipping a couple of pull requests. The rest of the team glances at each other. They’re thinking: &lt;em&gt;that’s ten minutes of Claude Code. Why did you spend eight hours in your IDE?&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;This isn’t new. We’ve seen it before. When bash gave way to Perl. When Java replaced C for most applications. Every paradigm shift leaves some people behind. Maybe 10%, maybe 20%, clinging to the old way because it’s what they know.&lt;/p&gt;
&lt;p&gt;But AI is different. The shift is faster. The impact is bigger. And the reach is exponential.&lt;/p&gt;
&lt;p&gt;Here’s the pattern I see. When I started programming, you’d learn assembly. Then you’d switch to C because life is short. Then Python, because life is really short. Each jump felt like cheating to the previous generation, and each one freed you to think at a higher level.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/b855f1c4-142b-482c-8cee-8d02e878cd3a_1456x816.webp&quot; alt=&quot;Illustration of programming abstraction levels from assembly to AI&quot; /&gt;&lt;/p&gt;
&lt;p&gt;AI is the next rung on that ladder. I hope schools are teaching this now: learn to write code by hand first (you need to understand what you’re abstracting), then switch to AI-assisted development. Just like you learned assembly to understand memory, then moved on. Though knowing how slow institutions adapt, I’m not holding my breath.&lt;/p&gt;
&lt;p&gt;The engineers who get this are thriving. Staff engineers, principal engineers, people whose job was already 70% architecture, cross-team coordination, and system design. They only coded 30% of the time anyway. Now they use AI to multiply that 30% and have even more impact. For them, AI is a force multiplier on an already leveraged role.&lt;/p&gt;
&lt;p&gt;But there’s another group. Senior engineers, five to ten years in, who still think their job is writing code 90% of the time. They never thought deeply about data models. Never cared much about architecture. Never moved toward the work that would make them staff or principal.&lt;/p&gt;
&lt;p&gt;Their entire value was &quot;writing proper, clean code that runs well and passes the linter.&quot; They never invested in the skills that &lt;a href=&quot;https://julien.danjou.info/blog/how-to-be-a-great-software-engineer&quot;&gt;make a great software engineer&lt;/a&gt; — communication, system thinking, judgment.&lt;/p&gt;
&lt;p&gt;That value just evaporated.&lt;/p&gt;
&lt;p&gt;And here’s what makes it worse: working with AI is fundamentally communication work. The engineers who thrive are the ones who already know how to share context, explain problems to colleagues, and filter signal from noise across teams.&lt;/p&gt;
&lt;p&gt;I’ve watched engineers struggle with AI because they won’t invest in communication. They type “fix this bug” without the stack trace, without the constraints, without explaining how production differs from their local setup. They keep the context in their head because explaining feels costly. The result is garbage, and they blame the tool.&lt;/p&gt;
&lt;p&gt;What they don’t see: AI compounds. The more context you feed it about your project, the better it gets. But that requires upfront investment in articulation. If you spent your career avoiding that investment with humans, you’ll prevent it with AI too.&lt;/p&gt;
&lt;p&gt;I don’t have a clean solution. The engineers who won’t adapt will stagnate. They might find work in industries that are slow to change. But it won’t be a great career. It never is when you’re holding onto the last paradigm.&lt;/p&gt;
&lt;p&gt;The engineers at risk aren’t the ones who don’t know enough yet. They’re the ones who stopped growing at the wrong layer. Juniors will climb. The question is whether the seniors stuck in the middle will climb with them.&lt;/p&gt;
</content:encoded><category>ai</category><category>coding</category></item><item><title>The Future Is Being Built Elsewhere</title><link>https://julien.danjou.info/blog/the-future-is-being-built-elsewhere/</link><guid isPermaLink="true">https://julien.danjou.info/blog/the-future-is-being-built-elsewhere/</guid><description>Why I’m worried and why founders can’t afford to wait for Europe to wake up.</description><pubDate>Wed, 26 Nov 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I read &lt;a href=&quot;https://blog.separateconcerns.com/2025-11-21-inexorable-progress.html&quot;&gt;Pierre Chapuis’ post&lt;/a&gt; &lt;em&gt;&lt;a href=&quot;https://blog.separateconcerns.com/2025-11-21-inexorable-progress.html&quot;&gt;Inexorable Progress&lt;/a&gt;&lt;/em&gt; last week, and a line stuck with me:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“You cannot stop the flow of progress. You can only decide to be an innovator, an early adopter, or a laggard.”&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;He’s right. And if you work in tech in Europe, you feel it every day: in the conversations, in the pace, in the mindset, in the decisions people around you consider “reasonable.”&lt;/p&gt;
&lt;p&gt;I live in France. I build a global product. I talk to US companies daily. And honestly?&lt;/p&gt;
&lt;p&gt;I’m worried too.&lt;/p&gt;
&lt;p&gt;Not because we lack talent. We don’t.&lt;/p&gt;
&lt;p&gt;Not because we lack engineers. We don’t.&lt;/p&gt;
&lt;p&gt;But because we lack the mental model required to compete in the world we’re entering. And the gap is accelerating.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/bb60df15-61dd-4dc5-8c10-fe3783d919ac_1376x864.webp&quot; alt=&quot;Illustration of the growing technology gap between Europe and the US and China&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;We Think We’re in the Same Race. We’re Not.&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;When I look at what’s happening in the US and China in AI, SaaS, robotics, automation… it feels like watching a different timeline.&lt;/p&gt;
&lt;p&gt;They’re scaling models that can refactor codebases.&lt;/p&gt;
&lt;p&gt;They’re shipping companies that go from idea to revenue in weeks.&lt;/p&gt;
&lt;p&gt;They’re pushing robotics into homes.&lt;/p&gt;
&lt;p&gt;They’re pouring capital at a pace that dwarfs what Europe raises in a quarter.&lt;/p&gt;
&lt;p&gt;Meanwhile, in France:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;We think regulation is a moat.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;We believe “solving the French market” is a global strategy.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;We look at the US and assume “we’ll catch up later.”&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;We treat AI like a temporary trend we can ignore until it stabilizes.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This isn&apos;t a mindset gap. It&apos;s a timeline gap. &lt;a href=&quot;https://julien.danjou.info/blog/ai-feels-like-1999-all-over-again&quot;&gt;AI feels like 1999 all over again&lt;/a&gt; — the behavioral divide between adopters and holdouts is already compounding. And Europe is overwhelmingly on the wrong side.&lt;/p&gt;
&lt;p&gt;Europe is acting like it has &lt;em&gt;time&lt;/em&gt;. It doesn&apos;t.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/bc31cdd9-323f-4c0d-990a-f503c1d9ff87_1376x864.png&quot; alt=&quot;Illustration of Europe falling behind in the global tech race&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;The Most Dangerous Bias: Thinking France Is the World&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;When I hear founders say:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“We’ll win the French market first.”&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I always think: &lt;em&gt;France is 0.8% of the world’s population.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;0.8%.&lt;/p&gt;
&lt;p&gt;China is 20× bigger. The US tech market is 10× bigger.&lt;/p&gt;
&lt;p&gt;The next wave of software will not be built for 0.8%.&lt;/p&gt;
&lt;p&gt;If your plan is to build only for France, culturally, financially, technically, you’ve already chosen to lose.&lt;/p&gt;
&lt;p&gt;Not because you’re bad.&lt;/p&gt;
&lt;p&gt;But because you’re playing a local game while everyone else is playing planetary-scale chess.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;The Mindset Problem Nobody Talks About&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Here’s the part that founders and engineers will immediately recognize:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Most people here fundamentally don’t understand ROI, time, capital, or scale.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;They understand tasks. They understand constraints. They understand regulation.&lt;/p&gt;
&lt;p&gt;But they don’t understand leverage.&lt;/p&gt;
&lt;p&gt;They want to “optimize costs” when the problem is growth.&lt;/p&gt;
&lt;p&gt;They want to “avoid risk” when the problem is irrelevance.&lt;/p&gt;
&lt;p&gt;They want to “comply first” when the problem is competing at all.&lt;/p&gt;
&lt;p&gt;This is why hiring is more complex here, why product velocity is slower. Why teams hesitate on AI adoption, just as they did with cloud in 2008.&lt;/p&gt;
&lt;p&gt;It’s not a technology gap.&lt;/p&gt;
&lt;p&gt;It’s a worldview gap.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;We’re Living Like a Rich Country Without Creating Enough Value&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;This part is uncomfortable, but founders feel it viscerally.&lt;/p&gt;
&lt;p&gt;For 50 years, France has lived on increasing debt and the assumption that we can keep funding our lifestyle without producing equivalent value.&lt;/p&gt;
&lt;p&gt;But look at our major industries:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Our car industry is fighting Brussels just to be allowed to sell pollution past 2035, not to compete.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Our energy leadership was squandered by 20 years of indecision.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Our tech ecosystem celebrates being five years behind the US, as long as it’s “sovereign.”&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If we stop exporting cars, software, tech, heavy industry, how do we pay for everything?&lt;/p&gt;
&lt;p&gt;How do we fund innovation? How do we stay competitive?&lt;/p&gt;
&lt;p&gt;We don’t.&lt;/p&gt;
&lt;p&gt;We shrink.&lt;/p&gt;
&lt;p&gt;We tax more.&lt;/p&gt;
&lt;p&gt;We lose ground.&lt;/p&gt;
&lt;p&gt;And we pretend everything is fine.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Dropping Out of the Race Isn’t Ethical — It’s Surrender&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;When Pierre wrote:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“If you slow down, you are simply letting those who do not care about these issues in the first place win.”&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That hit me hard.&lt;/p&gt;
&lt;p&gt;Because this is the mindset I see too often in Europe:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“We shouldn’t build this.”&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;“We should regulate it.”&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;“We should wait until we’re sure.”&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;“We should be cautious.”&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Caution is fine.&lt;/p&gt;
&lt;p&gt;Except when you’re in a race you didn’t choose but cannot opt out of. You don’t get to be “ethical” by refusing to play.&lt;/p&gt;
&lt;p&gt;You just hand the steering wheel of the future to people who don’t share your ethics.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;What Founders Should Take Away&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;I’m not writing this to spread doom. I’m writing this because founders and engineers need to hear one thing:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Build globally. Don’t wait for permission.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The world is not waiting for Europe to catch up. The next decade will be brutal for anyone playing local games. Whether we like it or not, the next wave of innovation will be:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;AI-native&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;global from day one&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;capital-efficient&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;ruthlessly fast&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;engineered by people who want to win, not just exist&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And we can be part of that — if we choose to.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/7410d5e4-77e8-43ae-bd83-6cb4385e7888_1376x864.png&quot; alt=&quot;Illustration of founders building globally without waiting for permission&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Closing&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;I love France. I live here. My kids grow up here. But love doesn’t blind me.&lt;/p&gt;
&lt;p&gt;I see the same thing Pierre sees:&lt;/p&gt;
&lt;p&gt;A continent with world-class talent… and a mindset preventing it from playing the actual game.&lt;/p&gt;
&lt;p&gt;I don’t have the solutions. But I see the problems clearly. And as entrepreneurs, our best chance isn’t waiting for a savior.&lt;/p&gt;
&lt;p&gt;It’s building, ambitiously, globally, unapologetically, before the gap becomes irreversible. Because the world is moving.&lt;/p&gt;
&lt;p&gt;And this time, if we hesitate, we’ll be spectators. Not players.&lt;/p&gt;
</content:encoded><category>ai</category></item><item><title>AI feels like 1999 all over again</title><link>https://julien.danjou.info/blog/ai-feels-like-1999-all-over-again/</link><guid isPermaLink="true">https://julien.danjou.info/blog/ai-feels-like-1999-all-over-again/</guid><description>AI feels like 1999 all over again</description><pubDate>Thu, 06 Nov 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Last week, I spent two days with an old friend. We’ve known each other for fifteen years. He’s curious, a bit of a geek, but not “in tech.” He doesn’t use GPT. His wife doesn’t either. They’ve heard of AI the way you hear of a new restaurant: name recognition, no bookings.&lt;/p&gt;
&lt;p&gt;We talked, we cooked, we compared notes on work. At some point, I realized we were living on different planets. Not values. &lt;em&gt;Toolchains.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;He does great work. But AI just… isn’t part of his day. Meanwhile, I use it constantly: as a writing partner for emails, a sounding board for product decisions, a junior PM, a marketing intern who never sleeps. It’s not magic. It’s just leverage. And it reminds me of when I got internet access twenty-five years ago and people said, “Why would you need that every day?”&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/fd150c11-ef6a-4d93-9030-6b450cd31166_2752x1728.png&quot; alt=&quot;Illustration comparing AI adoption today to early internet adoption in 1999&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Two decades later, we answer that question by reflex, usually from a phone.&lt;/p&gt;
&lt;p&gt;I don’t say this to flex. I say it because the gap is already visible.&lt;/p&gt;
&lt;p&gt;If I compare my work today to two years ago, I’m doing two to three times more with better output. Same hours. Less context switching. I can hold more of Mergify’s product in my head, ship faster, and still write the marketing we used to split across two people. I wouldn’t claim I replace a whole team (let’s keep our illusions calibrated), but one founder plus AI now feels like one founder plus a sharp apprentice who learns absurdly fast.&lt;/p&gt;
&lt;p&gt;And I’m still only scratching the surface. There are tasks I &lt;em&gt;should&lt;/em&gt; automate that I haven’t, because of the classic XKCD curve: spending an hour to save a minute. The ROI is real; the overhead is too. It will get smoothed out, like everything else that starts out lumpy.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/19bd79ef-85b2-4abe-aa9a-4ff32df327d0_550x230.png&quot; alt=&quot;XKCD 974 comic about the time trade-off of automating tasks&quot; /&gt;
&lt;em&gt;XKCD 974&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;What’s striking is not just the productivity jump. It’s the new &lt;strong&gt;behavioral divide&lt;/strong&gt;. Twenty years ago the divide was access: who had broadband and who didn’t. Today the divide is adoption: who’s willing to put these systems in the loop every day, and who keeps them at arm’s length.&lt;/p&gt;
&lt;p&gt;Same laptop. Same calendar. Wildly different output.&lt;/p&gt;
&lt;p&gt;This isn’t about “AI replacing jobs.” It’s about &lt;strong&gt;AI reorganizing work&lt;/strong&gt; around people who are willing to collaborate with it. The difference between “I don’t see the point” and “this is in my daily loop” already compounds in quiet ways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The email you write in 7 minutes instead of 27.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The product spec with five explored options instead of two.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The marketing page that results from testing three angles instead of arguing for one.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The code you ship because the blank page wasn’t blank.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Multiply that by days, then by years. That is how careers and companies diverge. And once AI starts creating content at scale, not just assisting — &lt;a href=&quot;https://julien.danjou.info/blog/the-synthetic-wave-is-already-here&quot;&gt;the synthetic wave is already here&lt;/a&gt; — the gap widens even faster.&lt;/p&gt;
&lt;p&gt;Of course, there are limits. AI isn’t judgment. It won’t hold your ethics, defend your taste, or choose your strategy. You still have to decide what “good” means, define constraints, and call the trade-offs. If you outsource your thinking, you don’t get leverage: you get noise.&lt;/p&gt;
&lt;p&gt;But if you keep the steering wheel, the car is very fast.&lt;/p&gt;
&lt;p&gt;There’s also a cultural point I didn’t expect: &lt;strong&gt;the stigma of using help&lt;/strong&gt;. Some people still think “real work” means doing everything yourself. Same energy as hand-writing HTML in 2003 to prove you’re serious.&lt;/p&gt;
&lt;p&gt;This reminds me of &lt;a href=&quot;https://2lr.substack.com/p/vive-la-france-long-live-the-us&quot;&gt;the latest post from Jean de La Rochebrochard&lt;/a&gt;, in which he talks about how French people are all about &lt;em&gt;crafting&lt;/em&gt;. No wonder AI adoption is going to be a long road here.&lt;/p&gt;
&lt;p&gt;But the craft isn’t in suffering; it’s in outcomes. Tools are honest if your goals are.&lt;/p&gt;
&lt;p&gt;I don’t know precisely what the next twenty years look like. I do see the pattern. Early on, new technology looks optional, even irrelevant. Then someone quietly uses it to do three times more with the same time. Then we call it table stakes. The people who adopted early won’t be smarter; they’ll just have trained their reflexes sooner.&lt;/p&gt;
&lt;p&gt;If you’re already all-in, you don’t need my sermon. If you’re AI-curious but unconvinced, try this: pick one workflow that hurts (a weekly email, a product spec, a marketing outline). Put an AI in the loop for a week. Not as a demo. As a colleague. Give it context. Ask for alternatives, not answers. Keep the steering wheel.&lt;/p&gt;
&lt;p&gt;If after seven days it doesn’t save you time &lt;em&gt;and&lt;/em&gt; improve your output, fine: ignore it for another year. But my bet is you’ll feel the old dial-up-to-broadband moment: once you touch the speed, it’s hard to go back.&lt;/p&gt;
&lt;p&gt;Back in Toulouse, my friend and I didn’t resolve anything. We just noticed the split. Same age. Same curiosity. Different daily habits. Twenty-five years ago the web felt optional right up until it didn’t. I think we’re there again. The storm isn’t coming. We’re already in the rain. You can stay dry for a while. Or you can learn to dance in it.&lt;/p&gt;
</content:encoded><category>ai</category></item><item><title>Building Features One Prompt at a Time</title><link>https://julien.danjou.info/blog/vibe-coding-a-feature-with-ai/</link><guid isPermaLink="true">https://julien.danjou.info/blog/vibe-coding-a-feature-with-ai/</guid><description>How I built Mergify’s new autoqueue in less than an hour a day </description><pubDate>Tue, 26 Aug 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;A few weeks ago, we released a new feature at Mergify: &lt;strong&gt;&lt;a href=&quot;https://changelog.mergify.com/changelog/autoqueue-option-for-queue-rules&quot;&gt;autoqueue&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;It automatically adds pull requests into the merge queue when they’re ready. No more custom automation rules, no more fiddling with YAML — it just works, straight from the merge queue settings.&lt;/p&gt;
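&lt;p&gt;To make the before/after concrete, here’s a hypothetical sketch of the simplification. I’m borrowing the option name from the changelog entry; the exact field names and condition syntax are illustrative, not verified against Mergify’s documentation:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Before: a custom automation rule to push ready PRs into the queue
pull_request_rules:
  - name: queue ready pull requests
    conditions:
      - check-success=ci
      - label=ready
    actions:
      queue:

# After: a single option on the queue rule itself (illustrative spelling)
queue_rules:
  - name: default
    autoqueue: true
&lt;/code&gt;&lt;/pre&gt;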
&lt;p&gt;Here’s the kicker: I coded it.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/c43c313d-fbb9-4d8e-b129-c9c5345667c0_1144x577.png&quot; alt=&quot;Screenshot of the Mergify autoqueue feature settings&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Yes, me. The CEO. The guy who hasn’t touched production code in years. The guy who usually spends his days on calls, not in GitHub.&lt;/p&gt;
&lt;p&gt;And I did it in less than an hour a day, over three weeks, with the help of AI.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Why I Even Tried This&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;I’ve used Copilot casually before (mostly autocomplete in Emacs), but this time I wanted to &lt;strong&gt;go all-in&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Why? Curiosity, mostly. And time constraints. As a CEO, I have close to zero time to code, and this feature wasn’t urgent. So I thought: why not see what happens if I vibe-code it with AI?&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;How It Worked&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;The way I interacted with Claude 4 via GitHub Copilot was simple: I explained the feature like I’d explain it to my team in a product story. I added some technical constraints (“use unit tests, not functional ones”).&lt;/p&gt;
&lt;p&gt;Then I let the AI go.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/f4d4604e-a9c8-4638-a8c1-4eaddc7f2681_1376x864.webp&quot; alt=&quot;Illustration of coding with AI assistance, like coding blindfolded&quot; /&gt;
&lt;em&gt;It just felt like coding blindfolded.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;It wrote the code. I tweaked less than 5% of it. Once it was done, I sent it for review. I pasted my coworkers’ review feedback back into it. It rewrote. I guided. It iterated.&lt;/p&gt;
&lt;p&gt;Did it nail it on the first try? No. Sometimes it forgot instructions. Sometimes it “lost context” after a few iterations and tried to reinvent the test setup it had already learned. That was frustrating — like explaining to a junior dev, except this junior dev has goldfish memory.&lt;/p&gt;
&lt;p&gt;But eventually, it worked. The code was merged. Released. In production. Done.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;What Surprised Me&lt;/strong&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;I only changed about &lt;strong&gt;5% of the lines&lt;/strong&gt; myself.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Nobody on the team noticed it was “AI-coded.”&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;It handled six years of legacy code surprisingly well.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Two years ago this wouldn’t have been possible — the progress is insane.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;&lt;strong&gt;What It Means&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;This isn’t about me playing engineer again for nostalgia. It’s about what’s coming.&lt;/p&gt;
&lt;p&gt;The quality and quantity bar is about to rise dramatically. AI isn’t just autocomplete anymore; it’s &lt;em&gt;co-construction&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;You can ship faster. You can tackle features you don&apos;t fully understand at the start. You can guide at a high level and let the AI grind the details. A few months later, I took this even further — to the point where &lt;a href=&quot;https://julien.danjou.info/blog/so-i-will-never-write-code-again&quot;&gt;I stopped writing code entirely&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;But it also raises new challenges. For instance:&lt;/p&gt;
&lt;p&gt;How do juniors review AI-generated PRs?&lt;/p&gt;
&lt;p&gt;How do teams trust code written by something that forgets your instructions after 10 turns?&lt;/p&gt;
&lt;p&gt;(That’s probably another blog post.)&lt;/p&gt;
&lt;p&gt;For now, though, I’ll just say this:&lt;/p&gt;
&lt;p&gt;I vibe-coded a real feature into existence in less than an hour a day.&lt;/p&gt;
&lt;p&gt;It felt like cheating. And I’m amazed.&lt;/p&gt;
</content:encoded><category>ai</category><category>coding</category></item><item><title>The Em Dash Is Dead</title><link>https://julien.danjou.info/blog/the-em-dash-is-dead/</link><guid isPermaLink="true">https://julien.danjou.info/blog/the-em-dash-is-dead/</guid><description>And I Might Have Killed It</description><pubDate>Tue, 05 Aug 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I’ve always loved the em dash. It’s elegant. It’s useful. It lets you breathe in your writing—without having to deal with commas or (God forbid) parentheses.&lt;/p&gt;
&lt;p&gt;Ten years ago, I wrote a book. A real book. With my hands.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/4331ec9c-0180-4764-83c5-82b487b55dbb_373x464.png&quot; alt=&quot;Cover of Serious Python book&quot; /&gt;
&lt;em&gt;Serious Python&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Over 68,000 words—and 77 beautiful em dashes.&lt;/p&gt;
&lt;p&gt;I wasn’t counting then—only recently did I check. You know, just to see how robotic I might’ve accidentally been.&lt;/p&gt;
&lt;p&gt;Because now? Now the em dash is a red flag.&lt;/p&gt;
&lt;p&gt;A decade ago, it was just a punctuation mark. Today, it’s basically a biometric marker for ChatGPT. Type an em dash on the internet in 2025, and someone will immediately side-eye your prose like you’re a prompt engineer trying to slip one past them.&lt;/p&gt;
&lt;p&gt;“Nice try, OpenAI.”&lt;/p&gt;
&lt;p&gt;Somehow, without even trying, I joined the ranks of the suspicious. My past self—the one tapping away joyfully, dashing away without care—was unknowingly building a future case against me.&lt;/p&gt;
&lt;p&gt;So here I am. A human. Who’s written thousands of human words. Who once thought the em dash was peak form—and now has to ask:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Am I even allowed to use it anymore?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The tragedy is this: AI didn&apos;t invent the em dash. &lt;em&gt;We&lt;/em&gt; gave it the em dash. We trained it on our books, our blog posts, our essays. We fed it so much em dash-laced content that now it thinks it&apos;s just what humans do. And to be fair… it &lt;em&gt;was&lt;/em&gt;. It&apos;s just one more way AI is reshaping how we communicate — and as &lt;a href=&quot;https://julien.danjou.info/blog/the-collapse-of-social-platforms&quot;&gt;social platforms collapse&lt;/a&gt; under synthetic content, even punctuation becomes a trust signal.&lt;/p&gt;
&lt;p&gt;Now, AI refuses to stop.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://medium.com/@brentcsutoras/the-em-dash-dilemma-how-a-punctuation-mark-became-ais-stubborn-signature-684fbcc9f559&quot;&gt;You can threaten it, prompt it, scold it—“no more em dashes!”—and two lines later? Bam. Another one.&lt;/a&gt; It’s like trying to get your dog to stop barking at squirrels. It hears you. It just doesn’t care.&lt;/p&gt;
&lt;p&gt;Meanwhile, actual humans are uninstalling their em dash keyboard shortcuts. Coders are deleting — from their HTML snippets. Writers are rephrasing perfectly good sentences just to avoid looking synthetic.&lt;/p&gt;
&lt;p&gt;We didn’t lose a punctuation mark. We lost a friend.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/9eab604c-45bd-4567-82c8-5714f6a8c127_1376x864.webp&quot; alt=&quot;Illustration of the em dash being abandoned by human writers due to AI overuse&quot; /&gt;&lt;/p&gt;
&lt;p&gt;So, if you see an em dash in my writing—don’t panic.&lt;/p&gt;
&lt;p&gt;It’s not a bot. It’s just me. Old-school. Nostalgic. Typing with trembling fingers and a tear in my eye.&lt;/p&gt;
&lt;p&gt;Still human.&lt;/p&gt;
&lt;p&gt;Still grieving.&lt;/p&gt;
&lt;p&gt;Still em-dashing.&lt;/p&gt;
&lt;/content:encoded&gt;&lt;category&gt;ai&lt;/category&gt;&lt;/item&gt;&lt;item&gt;&lt;title&gt;The Synthetic Wave Is Already Here&lt;/title&gt;&lt;link&gt;https://julien.danjou.info/blog/the-synthetic-wave-is-already-here/&lt;/link&gt;&lt;guid isPermaLink=&quot;true&quot;&gt;https://julien.danjou.info/blog/the-synthetic-wave-is-already-here/&lt;/guid&gt;&lt;description&gt;How Spotify just confirmed the AI content tsunami I predicted.&lt;/description&gt;&lt;pubDate&gt;Tue, 29 Jul 2025 00:00:00 GMT&lt;/pubDate&gt;&lt;content:encoded&gt;&lt;p&gt;Six months ago, I wrote a blog post titled “&lt;a href=&quot;https://julien.danjou.info/p/the-collapse-of-social-platforms&quot;&gt;The Collapse of Social Platforms&lt;/a&gt;”. At the time, it felt like a distant horizon — something you could see coming if you squinted into the future.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.theguardian.com/technology/2025/jul/14/an-ai-generated-band-got-1m-plays-on-spotify-now-music-insiders-say-listeners-should-be-warned?utm_source=chatgpt.com&quot;&gt;Spotify just made headlines for hosting an AI-generated “band”&lt;/a&gt; that racked up over a million plays before anyone realized the artists weren’t real. No humans. No guitars. Just prompts, algorithms, and a good understanding of how to feed the machine what people want to hear.&lt;/p&gt;
&lt;p&gt;And that’s just the beginning.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/f48377fe-ee4d-4444-85ae-95ea097789fa_1560x624.png&quot; alt=&quot;Screenshot of the AI-generated band on Spotify with over a million plays&quot; /&gt;
&lt;em&gt;Source&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;AI Is Creating — Not Assisting&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Back then, I wrote that we were moving past “AI-assisted” content into “AI-native” creation. At the time, it might have sounded like theory. Now, we’ve entered the &lt;em&gt;Spotify Phase&lt;/em&gt;: platforms no longer just recommend content — they &lt;strong&gt;create it&lt;/strong&gt;. They don’t need to wait for artists to upload music. They can fill the catalog themselves.&lt;/p&gt;
&lt;p&gt;And they will.&lt;/p&gt;
&lt;p&gt;Because the economics are too good, the data feedback loops are too tight, and the audience — most importantly — doesn’t seem to care.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;The Illusion of Authenticity Is Enough&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Spotify didn’t advertise the AI band. It was just another artist profile. People listened. They added songs to playlists. They vibed. It was only after the fact — after journalists started poking around — that we learned the truth.&lt;/p&gt;
&lt;p&gt;And you know what? Most listeners &lt;em&gt;still don’t care&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Which proves my original point: we’re not as attached to the &lt;em&gt;source&lt;/em&gt; of content as we think we are. We just want something that feels good, fits our mood, and plays seamlessly into our day. If that comes from a human or an LLM fine-tuned on hit-making formulas… who’s checking?&lt;/p&gt;
&lt;p&gt;This is the uncanny shift: content is becoming pure simulation. And for most, it’s indistinguishable from the real thing.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/81f81a63-5596-478a-8936-a4b0e961a236_1376x864.png&quot; alt=&quot;Illustration of synthetic content becoming indistinguishable from real&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Platforms Are Optimizing Away People&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Spotify’s move is not an isolated event. It’s the canary in the coal mine for every content platform out there.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Why wait for a podcast to be recorded when you can prompt one into existence?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Why pay creators when you can generate infinite variations?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Why host unpredictable humans when you can manufacture predictable engagement?&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;From AI-generated OnlyFans personas to YouTube clones to fake influencers on Instagram, we’re entering a phase where content isn’t created &lt;em&gt;by&lt;/em&gt; people — it’s created &lt;em&gt;for&lt;/em&gt; people by machines pretending to be people.&lt;/p&gt;
&lt;p&gt;It’s not a dystopia. It’s just a business decision.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;So Where Does This Leave Us?&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;If you’re a creator: The value of “real” is shifting. It may no longer be about production quality — but about human connection. Your face, your voice, your story might become the only proof-of-humanity people care about. Ironically, the more polished your content looks, the more people might question if &lt;em&gt;you&lt;/em&gt; made it.&lt;/p&gt;
&lt;p&gt;If you&apos;re a platform: Congratulations, you&apos;re entering the golden age of AI-powered margins. But beware the erosion of trust. Once users start doubting whether &lt;em&gt;anyone&lt;/em&gt; on your platform is real, &lt;a href=&quot;https://julien.danjou.info/blog/the-collapse-of-social-platforms&quot;&gt;the social glue breaks down fast&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you’re a user: Good luck. You’re about to be bombarded with synthetic everything. And the biggest risk isn’t being tricked — it’s not caring anymore whether what you’re consuming is real or not.&lt;/p&gt;
&lt;p&gt;That’s when the simulation wins.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;A Prediction, Revisited&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;In that original post, I wrote:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“Real life will be the only place you’ll have left to interact with real humans.”&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I stand by it — even more today. The value of the human connection will rise in proportion to how rare it becomes online. Coffee with a friend. A live concert. A hand-written letter. These may become the luxury goods of the 2030s.&lt;/p&gt;
&lt;p&gt;So yes, the synthetic wave is here. But maybe that’s what we needed — a reason to remember what being human online really means.&lt;/p&gt;
&lt;p&gt;Until then: keep your eyes open, your ears sharp, and maybe… spend a little more time offline.&lt;/p&gt;
</content:encoded><category>ai</category></item><item><title>AI Is a Human Interface Nightmare</title><link>https://julien.danjou.info/blog/ai-is-a-human-interface-nightmare/</link><guid isPermaLink="true">https://julien.danjou.info/blog/ai-is-a-human-interface-nightmare/</guid><description>AI Isn’t Broken, Our Expectations Are</description><pubDate>Tue, 08 Jul 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;For the last 80 years, computers have been calculators. Fancy ones, sure — with screens, keyboards, networks. But under the hood, they’re still just deterministic machines. You give them input, and they process it with logic gates and silicon, and they spit out the exact same output every time. That’s the deal. That’s the contract.&lt;/p&gt;
&lt;p&gt;And then came AI.&lt;/p&gt;
&lt;p&gt;AI doesn’t work like that. At all.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/e8e86a31-6b02-447d-b3d7-d7bbc06ae555_1376x864.png&quot; alt=&quot;Illustration of a computer interface struggling to represent AI&quot; /&gt;&lt;/p&gt;
&lt;p&gt;AI systems — especially large language models — are not deterministic. They’re a soup of probabilities and neural weights. When you talk to an AI, you’re not talking to a computer. You’re talking to something more like a human brain: a machine that guesses, infers, hallucinates, and sometimes nails it. And sometimes doesn’t.&lt;/p&gt;
&lt;p&gt;That’s fine. That’s expected. But the problem?&lt;/p&gt;
&lt;p&gt;AI still &lt;em&gt;runs&lt;/em&gt; on computers.&lt;/p&gt;
&lt;p&gt;The interface hasn’t changed. We’re still typing on keyboards, expecting precise answers. We’re still clicking buttons, expecting repeatability. But AI doesn’t think like that. And so the human-AI interface is totally broken.&lt;/p&gt;
&lt;p&gt;Ask ChatGPT “What’s the height of the Eiffel Tower?” and you might get the right number. Or not. And when it’s wrong, people freak out — “How can it not know that?” But think about it: the model is 1TB in size. It fits on a USB stick. You really believe all of humanity’s verified data fits in your pocket?&lt;/p&gt;
&lt;p&gt;It&apos;s not Google. It&apos;s not Wikipedia. It&apos;s a brain. A tiny, weird, synthetic brain that talks to you via a command-line interface and autocomplete. And if we figure out the interface problem, AI could actually &lt;a href=&quot;https://julien.danjou.info/blog/connecting-the-dots-with-ai&quot;&gt;connect the dots&lt;/a&gt; in ways humans never could.&lt;/p&gt;
&lt;p&gt;That’s the real nightmare: the medium is lying about the message.&lt;/p&gt;
&lt;p&gt;We call them “smartphones” because we used to make calls with them — even though calling is now maybe 1% of what we do. The name stuck. And maybe we’ll keep talking to AI through keyboards and chatboxes. But eventually, we’ll need new metaphors. New expectations. New ways to interact.&lt;/p&gt;
&lt;p&gt;Because what’s coming isn’t a better calculator.&lt;/p&gt;
&lt;p&gt;It’s something else entirely.&lt;/p&gt;
</content:encoded><category>ai</category></item><item><title>Marc Chagall Never Painted That</title><link>https://julien.danjou.info/blog/marc-chagall-never-painted-that/</link><guid isPermaLink="true">https://julien.danjou.info/blog/marc-chagall-never-painted-that/</guid><description>Or Why AI Isn’t Google</description><pubDate>Tue, 03 Jun 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;It was a casual Friday. Nothing special—except I was on kid duty for lunch pickup, a rare detour in my usual routine.&lt;/p&gt;
&lt;p&gt;As we strolled home, baguette under one arm, my daughter told me about her morning in class.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/7116745f-2da6-4f95-8918-4eebaebe0fcc_1376x864.webp&quot; alt=&quot;Illustration of a parent and child walking home discussing art class&quot; /&gt;&lt;/p&gt;
&lt;p&gt;They had studied Marc Chagall. Her eyes sparkled as she recounted it, and then she asked if we could go see &lt;em&gt;La Fée Électricité&lt;/em&gt; next time we were in Paris.&lt;/p&gt;
&lt;p&gt;That name rang a bell, but I had no clue where it was exhibited, or whether it was even in Paris. Painting is not my strong suit. Once home, I did what any responsible parent would do: I pulled my phone out of my pocket and Googled it.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/ac15cf29-2ff5-411d-bb51-2f91ffdda95b_805x325.jpeg&quot; alt=&quot;La Fee Electricite painting by Raoul Dufy&quot; /&gt;
&lt;em&gt;La Fée Électricité&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The first answer showed that the painting was exhibited at the &lt;em&gt;&lt;a href=&quot;https://www.mam.paris.fr/fr/oeuvre/la-fee-electricite&quot;&gt;Musée d’Art Moderne de Paris&lt;/a&gt;&lt;/em&gt;. But I didn’t tell my daughter right away. As I scrolled on my phone, something didn’t add up.&lt;/p&gt;
&lt;p&gt;The museum mentioned that this painting was by Raoul Dufy — not Chagall.&lt;/p&gt;
&lt;p&gt;I triple-checked on the web and Wikipedia. The result was the same. &lt;em&gt;La Fée Électricité&lt;/em&gt; isn’t by Chagall at all. It’s really by Raoul Dufy.&lt;/p&gt;
&lt;p&gt;That’s when the realisation hit me. The mistake probably didn’t come from a textbook or even a hasty Wikipedia glance. No, my bet is the teacher asked ChatGPT (or Bard, or whatever the tool of the week is) to prepare her lesson. AI probably hallucinated the answer. And nobody caught it.&lt;/p&gt;
&lt;p&gt;We&apos;re at this weird moment where many people treat AI like it&apos;s a search engine. Or worse: as if it&apos;s a source of truth. And when this confidence gets applied at scale — to content, media, music — &lt;a href=&quot;https://julien.danjou.info/blog/the-synthetic-wave-is-already-here&quot;&gt;the synthetic wave is already here&lt;/a&gt;, and nobody is fact-checking it.&lt;/p&gt;
&lt;p&gt;AI is neither. It’s a conversation partner with infinite confidence and a shaky grasp of the facts.&lt;/p&gt;
&lt;p&gt;This isn’t a rant against AI. I use it daily and wouldn’t go back. But it’s a gentle reminder: if you don’t know how to question what it says—or double-check your sources—it’s easy to teach your whole class wrong facts.&lt;/p&gt;
&lt;p&gt;No big deal this time. My kid went back to school in the afternoon after I dared her to ask her teacher whether Chagall was really the painter behind &lt;em&gt;La Fée Électricité&lt;/em&gt;. She did ask, and the teacher corrected the mistake for the whole class and moved on.&lt;/p&gt;
&lt;p&gt;But next time, who knows?&lt;/p&gt;
</content:encoded><category>ai</category></item><item><title>The Collapse of Social Platforms</title><link>https://julien.danjou.info/blog/the-collapse-of-social-platforms/</link><guid isPermaLink="true">https://julien.danjou.info/blog/the-collapse-of-social-platforms/</guid><description>A prediction for 2030</description><pubDate>Tue, 17 Dec 2024 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;It’s the end of the year, so I’ll write about something more theoretical that has been on my mind those last few days.&lt;/p&gt;
&lt;p&gt;What happens when the line between human and AI content creators vanishes completely? We’re closer to that reality than you might think.&lt;/p&gt;
&lt;p&gt;I was out running a few days ago, listening to a tech podcast, &lt;a href=&quot;https://siliconcarne.substack.com/&quot;&gt;Silicon Carne&lt;/a&gt;. There was an interesting debate around content creation and how platforms like YouTube will kill TV. I’m not sure that premise was much of a debate; TV already seems like a thing of the past at this stage. But as they started to talk about AI, things got interesting.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/a9e23696-5aa6-4048-a7ae-af69a8a84b64_2000x1256.jpeg&quot; alt=&quot;Illustration of the blurring line between human and AI content creators&quot; /&gt;&lt;/p&gt;
&lt;p&gt;People often have limited visions of what’s possible, shaped by their beliefs and ideas about what’s acceptable.&lt;/p&gt;
&lt;p&gt;Most of the discussion revolved around how AI would be able to create content, how it would be used to help producers and content creators, and what it would mean for platforms and consumers.&lt;/p&gt;
&lt;p&gt;Based on that, the debate continued about how much AI would be acceptable in content creation on platforms.&lt;/p&gt;
&lt;p&gt;I think this is very short-sighted.&lt;/p&gt;
&lt;h2&gt;What’s Already Happening&lt;/h2&gt;
&lt;p&gt;You don’t have to look far to see AI being used in content production; that’s a fact. But it’s still very human-driven and AI-assisted. There are a lot of tech limitations for now that prevent pushing the throttle to the max, but it is certain that those limitations will go away very soon. Look at what OpenAI is building with &lt;a href=&quot;https://openai.com/sora/&quot;&gt;Sora&lt;/a&gt;, and you’ll have a glimpse of the future.&lt;/p&gt;
&lt;p&gt;People are already leveraging this tech to move to the next step: creating content, communities, and creators that do not exist in real life. Instagram and OnlyFans are seeing a tsunami of AI-based girls managed by digital pimps. Does it work? It sure does; look at the numbers.&lt;/p&gt;
&lt;p&gt;This is where many people start to get confused and want to draw a line, based on morals or on their belief that this model will not apply to “regular” content creation.&lt;/p&gt;
&lt;p&gt;I believe this is false; it’s already happening.&lt;/p&gt;
&lt;h2&gt;A Glimpse into the Future&lt;/h2&gt;
&lt;p&gt;People often argue that AI-generated content from a content creator would feel inauthentic and that they wouldn’t watch it. I’d say that reflects a very high opinion of your own brain and very little faith in the evolution of AI.&lt;/p&gt;
&lt;p&gt;What if I told you that MrBeast did not exist? You’d say, of course, he does! Really? How can you know he exists? Did you ever meet him in real life? Did you ever talk to him?&lt;/p&gt;
&lt;p&gt;What if, tomorrow, you connected to YouTube and saw 10 new MrBeast videos with fancy new ideas that fit your taste and appeal to your brain? They might or might not be AI-generated; either way, you’d have a good time.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/25d2cc69-bd29-4a95-b9c5-6a0f8b55e502_1376x864.png&quot; alt=&quot;Illustration of AI-generated content creators indistinguishable from real people&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Now, let’s take a step back and imagine having a brand new content creator everyone’s talking about. Nobody heard of them before. You watch the content, and you like it. Does this person really exist, or is it just an AI? How would you ever know? There might be rumors that a friend of a friend met him in a restaurant… but is that the reality?&lt;/p&gt;
&lt;p&gt;At some point, there will be no way to know if a content creator is a real person or not. As time passes and technology evolves, it will be close to impossible to distinguish human creation from AI creation — &lt;a href=&quot;https://julien.danjou.info/blog/the-synthetic-wave-is-already-here&quot;&gt;the synthetic wave is already here&lt;/a&gt;. This is what most people don&apos;t want to believe, because it unsettles their current reality too much.&lt;/p&gt;
&lt;p&gt;Believe it or not, it’s happening.&lt;/p&gt;
&lt;h2&gt;How Platforms Might Crash&lt;/h2&gt;
&lt;p&gt;The ability to generate endless streams of AI-driven content will undoubtedly transform platforms like Instagram, YouTube, and LinkedIn. In the short term, the appeal of hyper-tailored, dopamine-driven content may captivate users and drive unprecedented engagement.&lt;/p&gt;
&lt;p&gt;But at what cost?&lt;/p&gt;
&lt;p&gt;As AI-generated content floods these platforms, the lines between human connection and algorithmic interaction will blur. The authenticity that once set content creators apart—real people sharing real experiences—will be diluted in a sea of indistinguishable, machine-generated personas. Even if platforms introduce measures like “human-verified” badges, the deeper question remains: will people still care? If the content entertains, informs, or inspires, does its origin matter?&lt;/p&gt;
&lt;p&gt;This shift could erode one of social media&apos;s fundamental purposes: fostering connection. If users begin to see platforms as spaces dominated by machines rather than humans, the sense of community these platforms once provided may crumble. The allure of authentic interaction—the very reason social media exploded in the first place—could fade, leaving behind a world where “social” media is anything but social.&lt;/p&gt;
&lt;p&gt;This trend raises profound questions in the broader societal context. Will our online spaces become environments where we primarily engage with algorithms instead of people? As AI infiltrates every email, phone call, and comment, will technology become a tool for connection or a barrier to it?&lt;/p&gt;
&lt;p&gt;Perhaps this is where the pendulum swings back to real life. In a world saturated with AI interactions, the simplest moments of human connection—a conversation over coffee, a shared laugh, or a face-to-face debate—might become rare and precious. Paradoxically, as AI dominates the digital realm, it could reignite our desire for genuine human interaction in the physical world.&lt;/p&gt;
&lt;p&gt;Until then, the question isn’t whether AI-generated content will dominate—it’s how we, as creators and consumers, will adapt and what we’ll choose to value in an increasingly artificial landscape.&lt;/p&gt;
&lt;p&gt;My prediction is that real life will be the only place you’ll have left to interact with real humans.&lt;/p&gt;
&lt;p&gt;Until robots take over, of course.&lt;/p&gt;
</content:encoded><category>ai</category></item><item><title>Connecting the Dots with AI</title><link>https://julien.danjou.info/blog/connecting-the-dots-with-ai/</link><guid isPermaLink="true">https://julien.danjou.info/blog/connecting-the-dots-with-ai/</guid><description>The Future of Enhanced Communication</description><pubDate>Tue, 06 Aug 2024 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;In any conversation, a lot of context can be lost between what you think, what you say, what the listener hears, and what they ultimately understand. This loss of information can lead to miscommunication and inefficiencies. How often have you found yourself confused by someone&apos;s words, asking them what they mean, only to hear, &quot;Sorry, I was thinking about this,&quot; and finally, the dots connect?&lt;/p&gt;
&lt;p&gt;This common scenario underscores AI&apos;s potential to revolutionize communication. Imagine a world where your AI assistant, enriched with context from your daily activities, bridges the gap between thoughts and understanding. This could transform the way we interact, making communication more efficient and precise.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/48fd302a-01a1-4a4b-a49d-49ff6b2d910f_303x435.png&quot; alt=&quot;Illustration of context loss in human communication&quot; /&gt;
&lt;em&gt;Communicating&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;The Role of AI in Enhancing Communication&lt;/h2&gt;
&lt;p&gt;In today&apos;s world, computers and phones already track many of our communications through platforms like email, Slack, and Teams. Combining data from all of those platforms captures the full context of our conversations — which is why people are starting to use them as sources for &lt;a href=&quot;https://research.ibm.com/blog/retrieval-augmented-generation-RAG&quot;&gt;RAG (Retrieval-Augmented Generation)&lt;/a&gt; in LLMs.&lt;/p&gt;
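&lt;p&gt;To make that concrete, here is a toy sketch of the retrieval step behind RAG (all data and function names are invented for illustration; real systems use learned embeddings and a vector store, not word counts): messages collected from several platforms are indexed, the ones most similar to a question are retrieved, and they are prepended to the prompt sent to the model.&lt;/p&gt;

```python
import math
from collections import Counter

def vectorize(text):
    # Toy bag-of-words vector; real systems use learned embeddings.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=2):
    # Rank stored messages by similarity to the query; keep the top k.
    q = vectorize(query)
    ranked = sorted(documents, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]

# Messages gathered from email, Slack, etc. (fabricated examples).
corpus = [
    "Customer Acme reported a billing bug in the April invoice",
    "Lunch options near the office on Friday",
    "Acme invoice issue escalated to the billing team",
]
context = retrieve("What is the status of the Acme billing problem", corpus)
# The retrieved messages are prepended to the prompt given to the LLM.
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: status of the Acme billing problem?"
```

&lt;p&gt;The interesting part is not the math but the plumbing: whichever platform holds the relevant message, the assistant can pull it into the model&apos;s context at question time.&lt;/p&gt;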
&lt;p&gt;Technologies such as &lt;a href=&quot;https://support.microsoft.com/en-us/windows/retrace-your-steps-with-recall-aa03f8a0-a78b-4b3e-b0a1-2eb8ac48701c#:~:text=With%20Recall%2C%20you%20have%20an,takes%20snapshots%20of%20your%20screen.&quot;&gt;Microsoft Recall&lt;/a&gt; are going in that direction: recording more information to enrich the AI&apos;s context and help you understand your world even better.&lt;/p&gt;
&lt;p&gt;In the future, AI and LLMs could go even further in improving how we communicate.&lt;/p&gt;
&lt;p&gt;Consider a scenario where Alice needs to tell her colleague Bob to handle a customer request. Instead of Alice trying to guess what context Bob has or lacks, she could give the instruction to her AI assistant rather than communicating with Bob directly. Alice&apos;s AI could then communicate with Bob&apos;s AI, sharing the necessary context and information, ensuring that Bob receives a complete and clear message. This method of using AIs as proxies eliminates the guesswork and ensures that all relevant details are communicated effectively.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/edbb7e88-9bba-4fc5-9d43-081275a40208_727x307.png&quot; alt=&quot;Diagram of AI assistants communicating between Alice and Bob&quot; /&gt;&lt;/p&gt;
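&lt;p&gt;One way to picture the Alice-to-Bob exchange is as a message envelope that the sender&apos;s assistant enriches before delivery. This is a minimal sketch under stated assumptions: the classes, the keyword-overlap heuristic, and the example data are all hypothetical, and a real assistant would use semantic retrieval rather than shared words.&lt;/p&gt;

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    recipient: str
    body: str
    context: list = field(default_factory=list)

def relay(message, sender_context):
    # The sender's assistant attaches any stored note that shares a word
    # with the message body, so the recipient's assistant can present a
    # complete picture instead of a bare instruction.
    words = set(message.body.lower().split())
    relevant = [note for note in sender_context if words.intersection(note.lower().split())]
    message.context.extend(relevant)
    return message

# Details Alice's assistant knows but Bob may lack (fabricated examples).
alice_notes = [
    "Acme is a premium customer with a 24h response SLA",
    "Bob handled the previous Acme ticket in March",
]
msg = relay(Message("alice", "bob", "Please handle the new Acme support request"), alice_notes)
```

&lt;p&gt;Bob&apos;s assistant would then render the body plus the attached context in whatever form Bob prefers.&lt;/p&gt;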
&lt;h2&gt;The Vision: AI-Assisted Communication&lt;/h2&gt;
&lt;p&gt;In the future, AI could be integrated into every piece of communication, from emails to meetings to casual conversations. The potential is immense. Imagine AI assistants transforming messages to match the recipient&apos;s preferred communication style and form, embedding the extra context that the recipient might be missing.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/d6a6575f-1d58-4808-8c46-cdbf2f6f237f_1376x864.webp&quot; alt=&quot;Illustration of AI transforming messages to match the recipient&apos;s communication style&quot; /&gt;&lt;/p&gt;
&lt;p&gt;For example, if Alice’s AI knows that Bob prefers visual information, it could transform Alice’s text-based request into an infographic or a visual summary. This ensures that Bob receives the information in the most effective way for him, enhancing understanding and efficiency.&lt;/p&gt;
&lt;h2&gt;Benefits of AI in Communication&lt;/h2&gt;
&lt;p&gt;The primary benefit of AI-enhanced communication is the significant improvement in efficiency. Misunderstandings and miscommunications can lead to wasted time and resources. By ensuring that all parties have the necessary context, AI can streamline interactions and reduce the need for clarifications and follow-ups.&lt;/p&gt;
&lt;p&gt;Additionally, AI can create a personalized communication experience, tailoring messages to fit the recipient&apos;s preferences and needs. This not only improves comprehension but also makes interactions more pleasant and engaging.&lt;/p&gt;
&lt;h2&gt;Overcoming Challenges&lt;/h2&gt;
&lt;p&gt;However, implementing AI in communication is not without its challenges. One significant issue is the segregation of information. Just as humans struggle with deciding whether to share certain information, AI will need to learn how to handle sensitive or contextual data appropriately. Current AI systems lack robust role-based access control (RBAC) for context, making it difficult to manage which information can be shared and with whom.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/d163612a-f13c-491c-b858-3f0daafd4eef_544x589.png&quot; alt=&quot;Diagram of AI sandboxing cycle for managing sensitive information&quot; /&gt;
&lt;em&gt;Sandboxing Cycle&lt;/em&gt;&lt;/p&gt;
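&lt;p&gt;What such role-based filtering of context could look like, in the simplest possible form (the roles, categories, and data here are invented; real context stores would need far finer-grained policies):&lt;/p&gt;

```python
# Which context categories each recipient role may see (toy policy).
ALLOWED = {
    "manager": {"public", "team", "finance"},
    "teammate": {"public", "team"},
    "external": {"public"},
}

def share_context(items, recipient_role):
    # Filter the assistant's context store down to what the recipient's
    # role is permitted to see; unknown roles get nothing.
    allowed = ALLOWED.get(recipient_role, set())
    return [text for category, text in items if category in allowed]

# The assistant's context store: (category, note) pairs (fabricated).
context_store = [
    ("public", "The product launch is scheduled for June"),
    ("team", "Bob is on call this week"),
    ("finance", "Q2 budget is already overrun"),
]

for_teammate = share_context(context_store, "teammate")  # two notes
for_external = share_context(context_store, "external")  # one note
```

&lt;p&gt;The hard part, of course, is labelling the context correctly in the first place: exactly the judgment call humans already struggle with.&lt;/p&gt;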
&lt;p&gt;Furthermore, while &lt;a href=&quot;https://www.independent.co.uk/life-style/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html&quot;&gt;AI can potentially develop its own languages to communicate more efficiently&lt;/a&gt;, the practical application of this in everyday communication remains a complex challenge. Security and privacy concerns also need to be addressed, ensuring that sensitive information is protected while still allowing AI to function effectively. I don’t think anyone is actively working on this right now, but it will be a major issue in the future.&lt;/p&gt;
&lt;h2&gt;Personal Reflections and Future Visions&lt;/h2&gt;
&lt;p&gt;Reflecting on my own experiences, I&apos;ve often encountered situations where additional context could have prevented misunderstandings. A really simple example: planning a lunch meeting without knowing your invitee&apos;s dietary preferences. It might be so obvious to your guest that the restaurant must have vegan options that they won&apos;t mention it, which can lead to a disappointing outcome if you book a steak house. If AI can provide this context seamlessly, such issues could be avoided.&lt;/p&gt;
&lt;p&gt;Looking ahead, I envision AI assistants like Siri evolving to incorporate these advanced communication capabilities. This could extend beyond personal assistants to systems built between products and companies, facilitating smoother interactions and collaborations across different platforms.&lt;/p&gt;
&lt;p&gt;The future of AI in communication holds incredible promise. By bridging the gaps in context and understanding, AI has the potential to transform how we interact, making communication more efficient and effective. Of course, getting there requires solving a fundamental problem: &lt;a href=&quot;https://julien.danjou.info/blog/ai-is-a-human-interface-nightmare&quot;&gt;AI is still a human interface nightmare&lt;/a&gt;, and the way we interact with these systems today is far from what it could be. While challenges remain, the ongoing advancements in AI technology bring us closer to a future where misunderstandings are minimized and every message is clearly understood.&lt;/p&gt;
</content:encoded><category>ai</category></item></channel></rss>