<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>coding — jd:/dev/blog</title><description>Posts tagged &quot;coding&quot; on jd:/dev/blog.</description><link>https://julien.danjou.info/</link><item><title>So I Will Never Write Code Again</title><link>https://julien.danjou.info/blog/so-i-will-never-write-code-again/</link><guid isPermaLink="true">https://julien.danjou.info/blog/so-i-will-never-write-code-again/</guid><description>I&apos;ve been coding for 25 years. Since January, I haven&apos;t written a single line. And it feels like relief.</description><pubDate>Tue, 10 Feb 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/nocode.png&quot; alt=&quot;Illustration of a developer who has stopped writing code by hand&quot; /&gt;&lt;/p&gt;
&lt;p&gt;A year ago, I thought AI-assisted coding was going to be a nice productivity boost. Generate a Python script with ChatGPT, copy-paste it somewhere, save twenty minutes. I figured that was the next five years: small wins, gradual improvement.&lt;/p&gt;
&lt;p&gt;Then last August, I &lt;a href=&quot;https://julien.danjou.info/blog/vibe-coding-a-feature-with-ai/&quot;&gt;wrote a feature where Copilot did about 80% of the work&lt;/a&gt;. I thought: okay, it&apos;s getting closer.&lt;/p&gt;
&lt;p&gt;Since January, I haven&apos;t written a single line of code.&lt;/p&gt;
&lt;p&gt;I want to be precise: I&apos;ve &lt;em&gt;produced&lt;/em&gt; a lot of code. More than ever, probably. But I didn&apos;t write any of it. I steer. I review. I architect. I don&apos;t type.&lt;/p&gt;
&lt;p&gt;And I don&apos;t feel the urge to go back.&lt;/p&gt;
&lt;p&gt;This might sound like grief. I&apos;ve been coding for 25 years. I wrote C for a window manager, Lisp for Emacs, Python for everything else. For most of my career, coding was a thing that defined me. Losing that should feel like losing a part of myself.&lt;/p&gt;
&lt;p&gt;But it doesn&apos;t. It feels like relief.&lt;/p&gt;
&lt;p&gt;For years, I was frustrated. I had more ideas than I could build. The bottleneck was never thinking, it was typing. Translating architecture into syntax, aligning parentheses, naming variables, fighting linters. The fun was in the &lt;em&gt;solving&lt;/em&gt;, not the &lt;em&gt;writing&lt;/em&gt;. And now the writing part is handled.&lt;/p&gt;
&lt;p&gt;I still enjoy reading code. It&apos;s like reading a good book. Understanding how pytest works internally, tracing through a complex system, that remains satisfying. But when the goal is to produce, AI beats everything.&lt;/p&gt;
&lt;p&gt;This is actually the second time I&apos;ve stepped away from code. The first was when I became CEO. That time, it was forced. I didn&apos;t choose to stop. I just ran out of hours. There was always one more meeting, one more hire, one more decision that pushed coding to the evening, then to the weekend, then to never.&lt;/p&gt;
&lt;p&gt;That &lt;em&gt;was&lt;/em&gt; grief. A slow, reluctant surrender.&lt;/p&gt;
&lt;p&gt;This time is different. I&apos;m not being pushed away. I&apos;m choosing to work at a higher layer. The same way I once chose Python over C, because life is short and the abstraction was worth it. AI is just the next rung.&lt;/p&gt;
&lt;p&gt;The creativity doesn&apos;t stop. If anything, it accelerates. You still design systems, still make architectural choices, still think about data models and trade-offs. You just don&apos;t spend hours translating those decisions into semicolons. The craft moves up a level, and that&apos;s fine.&lt;/p&gt;
&lt;p&gt;I know this will be harder for others. My colleague Rémy &lt;a href=&quot;https://mergify.com/blog/claude-didnt-kill-craftsmanship&quot;&gt;wrote about whether AI is killing craftsmanship&lt;/a&gt;. For engineers who defined themselves by the elegance of their code, by the perfectly named function, by the satisfaction of a clean diff, this shift feels like losing something sacred.&lt;/p&gt;
&lt;p&gt;I get it. Writing C was a beautiful puzzle. Lisp was genuinely fun. And I still think learning to code by hand matters, the same way learning assembly helps you understand memory even if you never write it professionally.&lt;/p&gt;
&lt;p&gt;But I&apos;m not going to fight a paradigm shift out of nostalgia. The ride was great. The next one looks better.&lt;/p&gt;
&lt;p&gt;I think the flow state people mourn isn&apos;t gone. It&apos;s just moving. Steering AI toward clean architecture, making the right system-level decisions, reviewing output with deep context, that has its own rhythm. The interruptions are still too frequent today (too many permission prompts), but the direction is clear. The flow will come back. It&apos;ll just be at a different altitude.&lt;/p&gt;
&lt;p&gt;If you&apos;re a senior engineer feeling this shift approaching, here&apos;s what I&apos;d say: the grief you&apos;re expecting might not be grief at all. The bottleneck was never the thinking. It was the typing. And the thinking is still yours.&lt;/p&gt;
</content:encoded><category>ai</category><category>coding</category></item><item><title>AI Won’t Kill Juniors. It Will Expose Seniors.</title><link>https://julien.danjou.info/blog/ai-wont-kill-juniors-it-will-expose/</link><guid isPermaLink="true">https://julien.danjou.info/blog/ai-wont-kill-juniors-it-will-expose/</guid><description>Everyone fears for the juniors. But the engineers who stopped growing at the wrong layer have more to lose.</description><pubDate>Wed, 21 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;The tech industry has a new consensus: AI will kill junior engineering jobs. Look at any discussion thread, and you’ll find the same narrative. Juniors are doomed. They’ll never learn to code properly. The entry-level pipeline is broken.&lt;/p&gt;
&lt;p&gt;I’m not so sure. When I look at junior engineers today, I see people who are used to learning. They came up through boot camps, YouTube tutorials, and constantly shifting frameworks. Adapting is what they do. They might struggle for a year or two, but they’ll figure it out.&lt;/p&gt;
&lt;p&gt;The engineers I’m worried about are the senior ones.&lt;/p&gt;
&lt;p&gt;Sure, not all of them. But the ones who plateaued at “code craftsman” and never moved up.&lt;/p&gt;
&lt;p&gt;I’ve seen it play out already. A standup where someone proudly reports they spent the day fixing a batch of bugs and shipping a couple of pull requests. The rest of the team glances at each other. They’re thinking: &lt;em&gt;that’s ten minutes of Claude Code. Why did you spend eight hours in your IDE?&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;This isn’t new. We’ve seen it before. When bash gave way to Perl. When Java replaced C for most applications. Every paradigm shift leaves some people behind. Maybe 10%, maybe 20%, clinging to the old way because it’s what they know.&lt;/p&gt;
&lt;p&gt;But AI is different. The shift is faster. The impact is more massive. And the reach is exponential.&lt;/p&gt;
&lt;p&gt;Here’s the pattern I see. When I started programming, you’d learn assembly. Then you’d switch to C because life is short. Then Python, because life is really short. Each jump felt like cheating to the previous generation, and each one freed you to think at a higher level.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/b855f1c4-142b-482c-8cee-8d02e878cd3a_1456x816.webp&quot; alt=&quot;Illustration of programming abstraction levels from assembly to AI&quot; /&gt;&lt;/p&gt;
&lt;p&gt;AI is the next rung on that ladder. I hope schools are teaching this now: learn to write code by hand first (you need to understand what you’re abstracting), then switch to AI-assisted development. Just like you learned assembly to understand memory, then moved on. Though knowing how slow institutions adapt, I’m not holding my breath.&lt;/p&gt;
&lt;p&gt;The engineers who get this are thriving. Staff engineers, principal engineers, people whose job was already 70% architecture, cross-team coordination, and system design. They only coded 30% of the time anyway. Now they use AI to multiply that 30% and have even more impact. For them, AI is a force multiplier on an already leveraged role.&lt;/p&gt;
&lt;p&gt;But there’s another group. Senior engineers, five to ten years in, who still think their job is writing code 90% of the time. They never thought deeply about data models. Never cared much about architecture. Never moved toward the work that would make them staff or principal.&lt;/p&gt;
&lt;p&gt;Their entire value was &quot;writing proper, clean code that runs well and passes the linter.&quot; They never invested in the skills that &lt;a href=&quot;https://julien.danjou.info/blog/how-to-be-a-great-software-engineer&quot;&gt;make a great software engineer&lt;/a&gt; — communication, system thinking, judgment.&lt;/p&gt;
&lt;p&gt;That value just evaporated.&lt;/p&gt;
&lt;p&gt;And here’s what makes it worse: working with AI is fundamentally communication work. The engineers who thrive are the ones who already know how to share context, explain problems to colleagues, and filter signal from noise across teams.&lt;/p&gt;
&lt;p&gt;I’ve watched engineers struggle with AI because they won’t invest in communication. They type “fix this bug” without the stack trace, without the constraints, without explaining how production differs from their local setup. They keep the context in their head because explaining feels costly. The result is garbage, and they blame the tool.&lt;/p&gt;
&lt;p&gt;What they don’t see: AI compounds. The more context you feed it about your project, the better it gets. But that requires upfront investment in articulation. If you spent your career avoiding that investment with humans, you’ll avoid it with AI too.&lt;/p&gt;
&lt;p&gt;I don’t have a clean solution. The engineers who won’t adapt will stagnate. They might find work in industries that are slow to change. But it won’t be a great career. It never is when you’re holding onto the last paradigm.&lt;/p&gt;
&lt;p&gt;The engineers at risk aren’t the ones who don’t know enough yet. They’re the ones who stopped growing at the wrong layer. Juniors will climb. The question is whether the seniors stuck in the middle will climb with them.&lt;/p&gt;
</content:encoded><category>ai</category><category>coding</category></item><item><title>Building Features One Prompt at a Time</title><link>https://julien.danjou.info/blog/vibe-coding-a-feature-with-ai/</link><guid isPermaLink="true">https://julien.danjou.info/blog/vibe-coding-a-feature-with-ai/</guid><description>How I built Mergify’s new autoqueue in less than an hour a day </description><pubDate>Tue, 26 Aug 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;A few weeks ago, we released a new feature at Mergify: &lt;strong&gt;&lt;a href=&quot;https://changelog.mergify.com/changelog/autoqueue-option-for-queue-rules&quot;&gt;autoqueue&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;It automatically adds pull requests into the merge queue when they’re ready. No more custom automation rules, no more fiddling with YAML — it just works, straight from the merge queue settings.&lt;/p&gt;
&lt;p&gt;Here’s the kicker: I coded it.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/c43c313d-fbb9-4d8e-b129-c9c5345667c0_1144x577.png&quot; alt=&quot;Screenshot of the Mergify autoqueue feature settings&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Yes, me. The CEO. The guy who hasn’t touched production code in years. The guy who usually spends his days on calls, not in GitHub.&lt;/p&gt;
&lt;p&gt;And I did it in less than an hour a day, over three weeks, with the help of AI.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Why I Even Tried This&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;I’ve used Copilot casually before (mostly autocomplete in Emacs), but this time I wanted to &lt;strong&gt;go all-in&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Why? Curiosity, mostly. And time constraints. As a CEO, I have close to zero time to code, and this feature wasn’t urgent. So I thought: why not see what happens if I vibe-code it with AI?&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;How It Worked&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;The way I interacted with Claude 4 via GitHub Copilot was simple: I explained the feature like I’d explain it to my team in a product story. I added some technical constraints (“use unit tests, not functional ones”).&lt;/p&gt;
&lt;p&gt;Then I let the AI go.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/f4d4604e-a9c8-4638-a8c1-4eaddc7f2681_1376x864.webp&quot; alt=&quot;Illustration of coding with AI assistance, like coding blindfolded&quot; /&gt;
&lt;em&gt;It just felt like coding blindfolded.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;It wrote the code. I tweaked less than 5% of it. Once it was done, I sent it for review. I pasted my coworkers’ review feedback back into it. It rewrote. I guided. It iterated.&lt;/p&gt;
&lt;p&gt;Did it nail it on the first try? No. Sometimes it forgot instructions. Sometimes it “lost context” after a few iterations and tried to reinvent the test setup it had already learned. That was frustrating — like explaining to a junior dev, except this junior dev has goldfish memory.&lt;/p&gt;
&lt;p&gt;But eventually, it worked. The code was merged. Released. In production. Done.&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;What Surprised Me&lt;/strong&gt;&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;I only changed about &lt;strong&gt;5% of the lines&lt;/strong&gt; myself.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Nobody on the team noticed it was “AI-coded.”&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;It handled six years of legacy code surprisingly well.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Two years ago this wouldn’t have been possible — the progress is insane.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;&lt;strong&gt;What It Means&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;This isn’t about me playing engineer again for nostalgia. It’s about what’s coming.&lt;/p&gt;
&lt;p&gt;The quality and quantity bar is about to rise dramatically. AI isn’t just autocomplete anymore; it’s &lt;em&gt;co-construction&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;You can ship faster. You can tackle features you don&apos;t fully understand at the start. You can guide at a high level and let the AI grind the details. A few months later, I took this even further — to the point where &lt;a href=&quot;https://julien.danjou.info/blog/so-i-will-never-write-code-again&quot;&gt;I stopped writing code entirely&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;But it also raises new challenges. For instance:&lt;/p&gt;
&lt;p&gt;How do juniors review AI-generated PRs?&lt;/p&gt;
&lt;p&gt;How do teams trust code written by something that forgets your instructions after 10 turns?&lt;/p&gt;
&lt;p&gt;(That’s probably another blog post.)&lt;/p&gt;
&lt;p&gt;For now, though, I’ll just say this:&lt;/p&gt;
&lt;p&gt;I vibe-coded a real feature into existence in less than an hour a day.&lt;/p&gt;
&lt;p&gt;It felt like cheating. And I’m amazed.&lt;/p&gt;
</content:encoded><category>ai</category><category>coding</category></item><item><title>Why We Still Care About Quality</title><link>https://julien.danjou.info/blog/why-we-still-care-about-quality/</link><guid isPermaLink="true">https://julien.danjou.info/blog/why-we-still-care-about-quality/</guid><description>Quality is slow, hard, and totally worth it</description><pubDate>Tue, 24 Jun 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I recently read &lt;a href=&quot;https://linear.app/blog/why-is-quality-so-rare&quot;&gt;Linear’s excellent blog post on why quality is so rare&lt;/a&gt;, and it resonated deeply with me. Craft, quality, care — these aren’t buzzwords. They’re a way of working, a way of thinking, and frankly, the only way I’ve ever known how to build things.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/d3224e88-3b48-4bbd-ad4c-04f364308e0d_809x394.png&quot; alt=&quot;Screenshot of Linear&apos;s blog post on why quality is so rare&quot; /&gt;
&lt;em&gt;Linear: Why is quality so rare?&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;For me, it started with &lt;a href=&quot;https://julien.danjou.info/blog/open-source-is-getting-used-to-death&quot;&gt;open source&lt;/a&gt;. When you put your code out in the open, you naturally want to make it good. Maybe even beautiful. I started more than 20 years ago, polishing my Debian packages, making sure they were clean, understandable, and useful. Later I poured that same mindset into building &lt;em&gt;&lt;a href=&quot;https://awesomewm.org&quot;&gt;awesomewm&lt;/a&gt;&lt;/em&gt;, striving to write the best C code I could — because that code was me, visible to anyone curious enough to look.&lt;/p&gt;
&lt;p&gt;Open source taught me that quality is not an accident. It’s a habit. And a commitment.&lt;/p&gt;
&lt;p&gt;Even though &lt;a href=&quot;https://blog.mergify.com/why-mergify-codebase-isnt-open-source-anymore-a-tale-of-growth-change-and-adaptation/&quot;&gt;Mergify is no longer open source&lt;/a&gt;, the ethos never left. We still build like our code is going to be read by thousands, because, well, it is at least read by our own engineers. Our team ships work we’re proud of. Whether we win a deal or not, it’s common to hear people tell us:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The quality of Mergify stands out.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That never gets old.&lt;/p&gt;
&lt;p&gt;I know I’m not alone in this. Mehdi, my cofounder, and I have been building together for over 15 years. It’s in our DNA: we hate mediocrity. We won’t ship something that we wouldn’t use ourselves — joyfully.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/c6432d0e-0b74-4b90-bbfa-b9f7e4b83516_1376x864.webp&quot; alt=&quot;Illustration of craftsmanship and quality in software engineering&quot; /&gt;&lt;/p&gt;
&lt;p&gt;That said, I’ve also seen the flip side. Back when I worked on &lt;a href=&quot;https://openstack.org&quot;&gt;OpenStack&lt;/a&gt;, a massive open-source project, there was a lot of code… and not always a lot of care. Many contributors came from companies that didn’t value quality — and it showed. Open source can be beautiful, but when it’s driven by quantity instead of pride, it becomes exhausting. I hated that part.&lt;/p&gt;
&lt;p&gt;Quality isn’t just aesthetic. It’s a business strategy. Linear nailed that in their post. When you build something that feels right — fast, polished, thoughtful — users notice. They stay. They tell others. We’ve seen this at Mergify: our growth has been fueled not just by features but by how those features feel to use.&lt;/p&gt;
&lt;p&gt;But quality is more than just a great UI or bug-free code.&lt;/p&gt;
&lt;p&gt;It’s also:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;A fast, reliable, intuitive product.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Clean code that enables long-term agility.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Thoughtful defaults and edge-case handling.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Being able to say “no” when something adds complexity without enough payoff.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Getting there isn’t easy. You need judgment — to know what’s worth doing and what can wait. That comes with experience and the humility to know you’ll never get everything right. We aim for 80/20, not 100/0. Sometimes that means leaving the last 20% for another day — or maybe never. Not because we don’t care, but because we care about the whole system staying healthy and fast.&lt;/p&gt;
&lt;p&gt;Quality isn’t free. But it pays back. In speed, trust, and joy.&lt;/p&gt;
&lt;p&gt;So yes, it’s a choice. One you make every day.&lt;/p&gt;
&lt;p&gt;You can take the shortcut, or you can make something that lasts.&lt;/p&gt;
&lt;p&gt;We still choose the latter.&lt;/p&gt;
</content:encoded><category>coding</category><category>mergify</category></item><item><title>“It’s Complicated” Is Not an Excuse</title><link>https://julien.danjou.info/blog/its-complicated-is-not-an-excuse/</link><guid isPermaLink="true">https://julien.danjou.info/blog/its-complicated-is-not-an-excuse/</guid><description>“It’s Complicated” Is Not an Excuse</description><pubDate>Tue, 11 Mar 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I spend a lot of time talking to engineers.&lt;/p&gt;
&lt;p&gt;I ask them about &lt;strong&gt;design choices&lt;/strong&gt;, &lt;strong&gt;technical decisions&lt;/strong&gt;, and &lt;strong&gt;why something is built a certain way&lt;/strong&gt;. I try to understand &lt;strong&gt;why this feature is so cumbersome to use&lt;/strong&gt;, &lt;strong&gt;why this API is so convoluted&lt;/strong&gt;, or &lt;strong&gt;why the user experience feels unnecessarily difficult&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;And more often than not, the response I get is:&lt;/p&gt;
&lt;p&gt;💬 &lt;strong&gt;“Well… it’s complicated.”&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Sure. Everything is complicated.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;That’s why you’re here. That’s why you’re an &lt;strong&gt;engineer&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;But &lt;strong&gt;“it’s complicated” should never be an excuse for bad design.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/51d36525-99b3-4bc2-bfc6-71543082d6b4_1376x864.png&quot; alt=&quot;Illustration of engineers using complexity as an excuse for bad design&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Imagine If Other Professions Worked Like This&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Let’s take &lt;strong&gt;a bakery&lt;/strong&gt;, for example.&lt;/p&gt;
&lt;p&gt;You walk in and ask for &lt;strong&gt;a loaf of bread&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The baker hands you a cup of flour and some water.&lt;/p&gt;
&lt;p&gt;🫤 &lt;strong&gt;“Uhh… I was expecting actual bread.”&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;💬 &lt;strong&gt;“Well… it’s complicated.”&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;💬 &lt;strong&gt;“We’d have to mix the dough, let it rise, bake it for a while…”&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;💬 &lt;strong&gt;“That’s a lot of steps, so we just decided to give you the raw ingredients instead.”&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;This is exactly how software feels sometimes.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;When users interact with your product, they don’t want to assemble the damn bread. They just want &lt;strong&gt;something that works&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Your job as an engineer is to handle complexity—not push it onto the user.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;The Difference Between Good and Bad Engineering&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;Look, I get it. Engineering &lt;strong&gt;is&lt;/strong&gt; hard.&lt;/p&gt;
&lt;p&gt;Making things simple &lt;strong&gt;is&lt;/strong&gt; difficult.&lt;/p&gt;
&lt;p&gt;Abstracting complexity &lt;strong&gt;takes effort&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;But &lt;strong&gt;great engineers&lt;/strong&gt; don’t just write code—they design experiences.&lt;/p&gt;
&lt;p&gt;A &lt;strong&gt;bad engineer&lt;/strong&gt; builds something difficult and says, &lt;strong&gt;“Well, it’s complicated.”&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;A &lt;strong&gt;great engineer&lt;/strong&gt; builds something difficult and makes it look &lt;strong&gt;simple.&lt;/strong&gt; (More on what makes &lt;a href=&quot;https://julien.danjou.info/blog/how-to-be-a-great-software-engineer&quot;&gt;a great software engineer&lt;/a&gt;.)&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;🔹 &lt;strong&gt;Bad engineering forces users to deal with complexity.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;🔹 &lt;strong&gt;Good engineering hides the complexity behind smart design.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Take Apple, for example. You know what’s &lt;strong&gt;actually complicated&lt;/strong&gt;?&lt;/p&gt;
&lt;p&gt;🔹 Compressing a 4K video into a tiny file.&lt;/p&gt;
&lt;p&gt;🔹 Rendering realistic lighting effects in real-time on an iPhone.&lt;/p&gt;
&lt;p&gt;🔹 Syncing all your messages, contacts, and photos seamlessly across devices.&lt;/p&gt;
&lt;p&gt;But do Apple users &lt;strong&gt;ever have to think about any of that?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;No. It &lt;strong&gt;just works&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;That’s &lt;strong&gt;good engineering.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/dad2f44b-6a86-4552-b29c-d08dce3d0ea3_1376x864.png&quot; alt=&quot;Illustration of good engineering making complex things feel simple&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;&lt;strong&gt;Stop Saying “It’s Complicated”—Start Making It Simple&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;When you hear yourself saying, &lt;strong&gt;“It’s complicated”&lt;/strong&gt;, stop for a second and think:&lt;/p&gt;
&lt;p&gt;🛑 &lt;strong&gt;Are you solving a hard problem in the simplest way possible?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;🛑 &lt;strong&gt;Or are you just passing the complexity to the user?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;If it’s the latter, &lt;strong&gt;you haven’t finished the job yet.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Because real engineering isn’t about making things work.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;It’s about making things work… simply.&lt;/strong&gt;&lt;/p&gt;
</content:encoded><category>coding</category><category>management</category></item><item><title>How To Test Your API Integration</title><link>https://julien.danjou.info/blog/how-to-test-with-an-api/</link><guid isPermaLink="true">https://julien.danjou.info/blog/how-to-test-with-an-api/</guid><description>The Three Rules That Should Govern Your Testing</description><pubDate>Tue, 24 Sep 2024 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;As I was publishing last week&apos;s post on whether &lt;a href=&quot;https://julien.danjou.info/p/is-github-the-future-or-becoming&quot;&gt;GitHub is becoming obsolete or the future of development platforms&lt;/a&gt;, they decided to trigger &lt;a href=&quot;https://blog.mergify.com/post-mortem-of-incident-2024-09-17/&quot;&gt;a two-hour interruption on Mergify in retaliation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Just kidding. I am sure they did not do that on purpose.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/0f8eda71-7106-45e0-8d33-e0530cd77668_1536x720.jpeg&quot; alt=&quot;Illustration of API integration testing challenges&quot; /&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://blog.mergify.com/post-mortem-of-incident-2024-09-17/&quot;&gt;Read my post-mortem&lt;/a&gt; if you want the whole story. The summary is that they broke their API for several hours until people started to complain, and they finally rolled back their change. Bringing down our service in the meantime.&lt;/p&gt;
&lt;p&gt;That event forces me to talk about APIs this week.&lt;/p&gt;
&lt;h2&gt;API Definitions Are Just Definitions&lt;/h2&gt;
&lt;p&gt;I won’t go into the definition of an API per se; it’d be boring. You can Google it if you need to.&lt;/p&gt;
&lt;p&gt;The real question is what &lt;em&gt;having&lt;/em&gt; an API &lt;em&gt;means&lt;/em&gt;. Offering an API to your users means authorizing them to interact with your service. This implies many rules, such as the data model of your API, the behavior of your API, the rules of usage, etc. Some can be encoded in a machine-readable format; others cannot. Engineers like to talk about contracts, and I think it’s an almost-good analogy.&lt;/p&gt;
&lt;p&gt;To describe this contract, you need multiple specifications.&lt;/p&gt;
&lt;p&gt;Developers have been ecstatic over &lt;a href=&quot;https://swagger.io/specification/&quot;&gt;OpenAPI&lt;/a&gt; over the last decade as the go-to medium for describing their API. I want to emphasize here how little this documents your API. It describes the data model used but does not encode much of the behavior the system might exhibit.&lt;/p&gt;
&lt;p&gt;Hey, I can confirm that GitHub did not break its OpenAPI schema when it broke its API last week. Formidable.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/8d49cdca-7d99-4c42-a00f-c772ceea9087_2500x714.svg&quot; alt=&quot;Diagram showing the gap between OpenAPI schema and actual API behavior&quot; /&gt;&lt;/p&gt;
&lt;p&gt;However, assuming that OpenAPI is enough, many engineers mock their API consumption against that part of the contract and think they’re done.&lt;/p&gt;
&lt;p&gt;In that situation, the minimum you should do is validate that your mocking follows the OpenAPI schema you’re using. Even that is not enough because sometimes the schema changes—and sometimes it’s just not respected.&lt;/p&gt;
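&lt;p&gt;As a sketch of what that minimum looks like, here is a deliberately tiny checker, not a real JSON Schema validator (in practice you&apos;d reach for a library like jsonschema): it only handles type, required, and properties, and the pull-request schema below is hypothetical, purely for illustration.&lt;/p&gt;

```python
# Minimal sketch: check that a mocked API response still matches a
# simplified OpenAPI/JSON-Schema-style definition. This only handles
# "type", "required", and "properties", which is already enough to
# catch a mock that has drifted from the schema.

TYPES = {"object": dict, "array": list, "string": str,
         "integer": int, "number": (int, float), "boolean": bool}

def matches(instance, schema):
    expected = TYPES.get(schema.get("type"))
    if expected is not None and not isinstance(instance, expected):
        return False
    if isinstance(instance, dict):
        # Every required key must be present...
        if any(key not in instance for key in schema.get("required", [])):
            return False
        # ...and every described property must match its sub-schema.
        props = schema.get("properties", {})
        return all(matches(instance[k], props[k]) for k in props if k in instance)
    if isinstance(instance, list):
        item_schema = schema.get("items", {})
        return all(matches(item, item_schema) for item in instance)
    return True

# Hypothetical schema fragment for a pull-request object.
PR_SCHEMA = {
    "type": "object",
    "required": ["number", "merged"],
    "properties": {"number": {"type": "integer"},
                   "merged": {"type": "boolean"},
                   "labels": {"type": "array", "items": {"type": "string"}}},
}

good_mock = {"number": 42, "merged": False, "labels": ["bug"]}
bad_mock = {"number": "42", "merged": False}  # wrong type for "number"
```

&lt;p&gt;Run this against every mock in your test suite and a schema drift shows up as a failing assertion instead of a production incident.&lt;/p&gt;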
&lt;p&gt;Let’s take GitHub again as an example. Their API is so legacy that the &lt;a href=&quot;https://json-schema.org/blog/posts/github-case-study&quot;&gt;JSON schemas were crafted manually&lt;/a&gt;, and they might still be, for all I know. It’s fine; it’s better than nothing, and it’s not easy to change a legacy API that’s been there for 15 years.&lt;/p&gt;
&lt;p&gt;We know first-hand that their system does not always respect the GitHub API JSON Schema.&lt;/p&gt;
&lt;h2&gt;APIs Have Side-effects&lt;/h2&gt;
&lt;p&gt;Again, this approach covers only the data model, which makes it insufficient and of limited value.&lt;/p&gt;
&lt;p&gt;Most of an API&apos;s value is in the behavior it triggers. Unless your API is a basic CRUD and does storage only, it will have side effects that might or might not be visible through the API.&lt;/p&gt;
&lt;p&gt;For example, creating an asynchronous job on any REST API will return nothing except a unique identifier, which can be used later to identify the work. You might receive the data via a webhook or have to poll the API to get the job’s status. This kind of behavior cannot be documented in OpenAPI as it’s not part of the data model; there’s nothing to tell you to expect a webhook.&lt;/p&gt;
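&lt;p&gt;To make that concrete, here is a toy simulation of the pattern; the endpoint shapes, and the detail that the job finishes by the second poll, are invented purely for illustration.&lt;/p&gt;

```python
import itertools

class JobAPI:
    """Toy stand-in for a REST API whose jobs finish asynchronously."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._jobs = {}

    def create_job(self, payload):
        # POST /jobs: the response carries nothing but an identifier.
        job_id = next(self._ids)
        self._jobs[job_id] = {"status": "pending", "payload": payload, "polls": 0}
        return {"id": job_id}

    def get_job(self, job_id):
        # GET /jobs/{id}: simulate a backend worker that happens to
        # finish the job by the second poll. Nothing in an OpenAPI data
        # model tells a consumer this is how the endpoint behaves.
        job = self._jobs[job_id]
        job["polls"] += 1
        if job["status"] == "pending" and job["polls"] == 2:
            job["status"] = "done"
            job["result"] = job["payload"].upper()
        return job

def wait_for_job(api, job_id, max_polls=10):
    # Client side: the only way to learn the outcome is to keep polling.
    for _ in range(max_polls):
        job = api.get_job(job_id)
        if job["status"] == "done":
            return job["result"]
    raise TimeoutError(f"job {job_id} never finished")
```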
&lt;h2&gt;API Invisible Parts&lt;/h2&gt;
&lt;p&gt;Now, let’s discuss all the invisible parts of running an API. There are many. The first that come to mind are RBAC, quota, and rate limits. Most APIs have to implement those items, and they also have a direct impact on the API behavior and access.&lt;/p&gt;
&lt;p&gt;Those features will massively impact the quality and quantity of API use. Again, they are pretty hard to test in a black box. There’s no way you can easily mock a full RBAC implementation or real-life rate limits.&lt;/p&gt;
&lt;h2&gt;Testing the Hard Way&lt;/h2&gt;
&lt;p&gt;Consuming many different APIs over the last five years at Mergify, especially GitHub’s, which we know by heart, has given us a few ideas about what you can and cannot test.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Rule number one: do not mock. Record your tests.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We leverage &lt;a href=&quot;https://vcrpy.readthedocs.io/en/latest/usage.html&quot;&gt;vcrpy&lt;/a&gt; in Python to do that: the idea is to run your test in a &lt;em&gt;record mode&lt;/em&gt; where real HTTP requests are done against a service. Once the recording is done, you can replay the test when running it locally or in the CI.&lt;/p&gt;
&lt;p&gt;If any of your code tries to make a different HTTP call, the test will fail, and you will have to re-record it. This ensures that no change is made to the application without being noticed.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/ca786a70-ec3e-43f4-93dd-86c8ab02b27a_942x111.png&quot; alt=&quot;Screenshot of vcrpy test recording detecting a changed HTTP call&quot; /&gt;&lt;/p&gt;
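&lt;p&gt;To show the mechanics without pulling in vcrpy itself, here is a stdlib-only sketch of the cassette idea; the class and its interface are mine, not vcrpy&apos;s.&lt;/p&gt;

```python
class Cassette:
    """Stdlib sketch of the record/replay idea behind vcrpy."""

    def __init__(self, fetch=None):
        # `fetch` is the real HTTP function (e.g. a urllib wrapper);
        # while it is set, we are in record mode.
        self._fetch = fetch
        self._recorded = {}

    def request(self, method, url):
        key = (method, url)
        if self._fetch is not None:
            # Record mode: perform the real call and save the response.
            self._recorded[key] = self._fetch(method, url)
        elif key not in self._recorded:
            # Replay mode: a request that was never recorded means the
            # application changed its HTTP behavior. Fail loudly.
            raise AssertionError(
                f"unrecorded request {method} {url}: re-record the cassette")
        return self._recorded[key]

    def replay_only(self):
        """Switch to replay mode, which is what CI would run."""
        self._fetch = None
```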
&lt;p&gt;Now, that prevents your application from changing its API usage unnoticed, but it does not prevent the API provider from breaking your app. The only way to catch that is to regularly re-record all the tests and see if they break.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;So, rule number two: re-record your tests regularly — every day if possible.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;For example, we have a test that plays with GitHub pull request labels. When re-recording it a few months ago, we noticed that it failed. It turned out that GitHub had changed its API to become case-sensitive overnight (that was not in the OpenAPI schema!).&lt;/p&gt;
&lt;p&gt;In that case, we preferred to ask GitHub to fix their API rather than fix our code, but hey, &lt;em&gt;your mileage may vary&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Rule number three: be ready to fix the code.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;No amount of testing will cover every edge case. For example, request quotas or rate limits might be hit in real scenarios but never in testing, meaning you’ll have to handle those specific cases without being able to record them. It’s fine — you can actually mock part of the responses here.&lt;/p&gt;
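&lt;p&gt;For instance, a retry path for rate-limit responses can be exercised with a hand-written fake client. A minimal sketch (the status codes and the fake client are illustrative, not Mergify’s actual code):&lt;/p&gt;

```python
import time


class RateLimitedError(Exception):
    pass


def get_with_retry(client, url, max_attempts=3, sleep=time.sleep):
    """Retry the request when the API answers with a rate-limit status."""
    for attempt in range(max_attempts):
        response = client.get(url)
        if response["status"] not in (403, 429):
            return response
        # Back off before retrying; real code would honor the Retry-After
        # or X-RateLimit-Reset headers instead of a blind exponential wait.
        sleep(2**attempt)
    raise RateLimitedError(url)


class FakeClient:
    """Mocked client that replays a canned sequence of responses."""

    def __init__(self, responses):
        self._responses = iter(responses)

    def get(self, url):
        return next(self._responses)


client = FakeClient([{"status": 403}, {"status": 200, "body": "ok"}])
assert get_with_retry(client, "https://api.example.com", sleep=lambda s: None) == {
    "status": 200,
    "body": "ok",
}
```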
&lt;p&gt;For this, we leverage &lt;a href=&quot;https://sentry.io&quot;&gt;Sentry&lt;/a&gt; to obtain evidence of the problem, replicate it in a test, and fix it. No amount of testing can fix all scenarios, so having a way to &lt;em&gt;hotfix&lt;/em&gt; your code is a must.&lt;/p&gt;
&lt;p&gt;In the end, mixing API test recording for safety and error tracking for fast action is the best combination we’ve seen for dealing with external systems.&lt;/p&gt;
&lt;p&gt;If we map those rules to last week&apos;s incident: rule number three helped us fix the issue quickly, rule number one would technically have caught it, and rule number two would have done so in less than 24 hours. Even so, in our case, reality kicked in before the tests did.&lt;/p&gt;
&lt;p&gt;So use that. And retry mechanisms.&lt;/p&gt;
&lt;p&gt;I guess that’ll be for another post.&lt;/p&gt;
</content:encoded><category>coding</category><category>mergify</category></item><item><title>How to Be a Great Software Engineer</title><link>https://julien.danjou.info/blog/how-to-be-a-great-software-engineer/</link><guid isPermaLink="true">https://julien.danjou.info/blog/how-to-be-a-great-software-engineer/</guid><description>There is more than one way.</description><pubDate>Tue, 03 Sep 2024 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I did not write for the last few weeks as I enjoyed taking a break. Ha! That’s probably the first point I could write about being a great software engineer: taking breaks.&lt;/p&gt;
&lt;p&gt;Nevermind.&lt;/p&gt;
&lt;p&gt;What do I know, after all? I’m not a software engineer anymore. I’m a CEO, god damn it.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/f4dfeb44-ffa2-4a3f-baf2-0dcf75586f06_320x195.jpeg&quot; alt=&quot;Illustration of a CEO dispensing advice&quot; /&gt;
&lt;em&gt;Not my role model.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;At least being a CEO gives me some excuse to dispense some pieces of advice regularly. It turns out that over the last couple of years, I had to become a &lt;em&gt;manager&lt;/em&gt; of people — and many people in my team are software engineers. The only thing I knew about management so far was &lt;em&gt;being managed&lt;/em&gt;, which taught me many things, such as:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;how to manage;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;how not to manage;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;how to be managed.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I don’t want to talk about the first two points here, but I’d like to write about the last one. I regularly have to give feedback to people on my team, and that feedback often relies on the Great Engineer Framework that I built in my mind.&lt;/p&gt;
&lt;p&gt;It’s time to write that down.&lt;/p&gt;
&lt;h3&gt;Expectations&lt;/h3&gt;
&lt;p&gt;Since I started my career as a software engineer 20 years ago, I have always wondered how to improve. What drew me to this career in the first place was typing code on a keyboard, so I decided that the best way to become a great engineer was to be the best at the technical stuff.&lt;/p&gt;
&lt;p&gt;I coded days and nights, learned everything I could, and became amazing. I Debian-packaged hundreds of pieces of software, wrote C code for a window manager, Linux, and CPython, wrote &lt;a href=&quot;https://github.com/emacs-mirror/emacs/blob/master/lisp/color.el#L301&quot;&gt;CIEDE2000 color space computation functions in Lisp&lt;/a&gt;, wrote thousands of lines of Python to do crazy stuff, implemented an XML binding for the X11 protocol, built a scalable time-series database on top of object storage, etc. You name it. I did many tech-crazy things and thought I was a great engineer.&lt;/p&gt;
&lt;p&gt;It turns out I was only 33% good. As I grew into the tech and startup ecosystem, I started to understand what was around me: the industry, the business, the people. And I soon realized that being among the best engineers you could probably find (sorry for bragging) was not enough.&lt;/p&gt;
&lt;h3&gt;Aspects&lt;/h3&gt;
&lt;p&gt;After a few years, I built a mental model that I still use nowadays to give feedback to engineers in my team, based on 3 aspects that you must master to become a great engineer — like the 10x engineer they all talk about:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Tech&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Business value&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Collaboration&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h4&gt;Tech&lt;/h4&gt;
&lt;p&gt;I just discussed tech. You have to be &lt;em&gt;amazing&lt;/em&gt; at it, which means you have to dig &lt;em&gt;deep&lt;/em&gt; into it. As my co-founder Mehdi says, great engineers &lt;em&gt;pull the strings&lt;/em&gt;. This means that you’re not just there to paper over the problem; you’re here to understand it fully, to grasp it entirely, from top to bottom, and to fix it forever because you &lt;em&gt;understand&lt;/em&gt; it.&lt;/p&gt;
&lt;p&gt;Many junior engineers are not able to do that. They just tinker with their code until “well, it works, tests pass, whatever.” The rise of AI tooling reinforces that habit, and engineers working this way will have to step up their game, or they’ll disappear.&lt;/p&gt;
&lt;p&gt;It takes a huge amount of time to achieve this expertise; as common wisdom says, maybe 10,000 hours. This is actually a major issue for people switching to tech after another career: at 25 hours of coding a week (if you only do it on the job) over a typical 45-week year, 10,000 hours is more than 8 years before you start to “know what you’re talking about.” If you start at 18, tinkering with computers 60 hours a week for fun, you’ll be pretty good at it by 21. I know that’s not fair, but I see this as a major roadblock for hiring tech talent coming from a career change.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/4c43b2e1-118b-4c39-9a31-4b09a1adbad1_1536x768.webp&quot; alt=&quot;Illustration of deep technical expertise and pulling the strings as an engineer&quot; /&gt;&lt;/p&gt;
&lt;p&gt;So: do tech. Don’t stop until you understand everything you are responsible for. I remember, 15 years ago, being screened by a recruiter at Google who asked me what happened when I typed google.com into my web browser. Being able to explain everything, from the keyboard input to the DNS requests and the TCP headers of the packets sent, all the way to the HTTP server, made me pass without a blink.&lt;/p&gt;
&lt;h4&gt;Business Value&lt;/h4&gt;
&lt;p&gt;This sounds totally stupid, and I might be slightly biased by my French experience, but there are too many engineers who do not understand &lt;em&gt;business value&lt;/em&gt;. It actually took me a few years to understand this, probably because I was only focused on tech. Let me give you a good anecdote to illustrate this.&lt;/p&gt;
&lt;p&gt;Ten years ago, I was called by a senior manager to help with a Python project in a media company. I go to the meeting and meet the manager. He comes from a famous French tech school — one where they learn the C standard library from scratch in their first year — and so do most of the engineers in his team. They’re managing hundreds of servers, and after evaluating various software to do that (Puppet, Ansible, etc) they didn’t find anything that suited 100% of their needs, so they built their own. They invested hundreds of hours in it, and now they’d need help maintaining it.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/93fa670c-6f32-494d-9ef1-d9c0eac35899_1536x768.webp&quot; alt=&quot;Illustration of engineers building custom tools instead of using existing solutions&quot; /&gt;&lt;/p&gt;
&lt;p&gt;It turns out that what they needed was probably Ansible plus a custom plugin, which would have covered 98% of their needs in a tenth of the time, but they didn’t see it that way. They had built an entire tech project, unrelated to their core business, from the ground up, investing hundreds of hours. This is similar to the experience I described in my previous post, &lt;a href=&quot;https://julien.danjou.info/p/solving-build-vs-buy&quot;&gt;Solving Build vs Buy&lt;/a&gt;. I skipped that project and moved on to other things. I had no interest in maintaining a project that provided no &lt;em&gt;core value&lt;/em&gt; to the business. That would have been a great way to get ditched as soon as somebody smarter in the company realized how much time the project had wasted.&lt;/p&gt;
&lt;p&gt;This kind of behaviour shows up everywhere. Engineers will spend hours trying to implement &lt;em&gt;perfect&lt;/em&gt; systems that will scale to millions of users, while the business has no users — yet. Engineers will spend hours building a feature or solving a problem that impacts 0.1% of users. It’s true that engineering might not directly own the roadmap, but engineers are responsible for the time they spend and for how far they go in implementing systems and features.&lt;/p&gt;
&lt;p&gt;We live in a world where the economy is the driver, which means you have to maximize output and minimize input. Input is your coding time, and output is the (extra) money the company that hires you can make from your work.&lt;/p&gt;
&lt;h4&gt;Collaboration&lt;/h4&gt;
&lt;p&gt;I could probably summarize this aspect with just:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;If you want to go fast, go alone. If you want to go far, go together.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/20afac45-c6a6-45f7-b50d-22529146569d_1536x768.png&quot; alt=&quot;Illustration of teamwork and collaboration as key to going far&quot; /&gt;&lt;/p&gt;
&lt;p&gt;That’s entirely true. Dealing with the team is a struggle for many engineers, who get frustrated by their teammates. It does take time to deal with people, and they are not as easy to understand as computers. However, in the long run, they are the best source of &lt;em&gt;leverage&lt;/em&gt; for achieving amazing things. Maybe another secret of 10x engineers, who knows?&lt;/p&gt;
&lt;p&gt;Therefore, you’ll need to understand the dynamics that make your team work. You have to make sure your work is not isolated, not just a correct piece of code hidden in its own corner. You need to connect both your software and your brain to other pieces of software and other people. I know this requires a lot of effort for some people, especially because it can feel annoying and inefficient to talk or write things down so other engineers understand what you’re achieving.&lt;/p&gt;
&lt;p&gt;But until we can all Neuralink, yes, you’ll have to pause and do something that seems like a waste of time: talking to your teammates, your manager, or customers.&lt;/p&gt;
&lt;p&gt;Always remember: this is an investment. In the long run, it &lt;em&gt;will pay off.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;These three aspects are the ones I always use to frame my feedback during performance reviews with engineers on the team. Nobody is 10/10 in every aspect, which makes it easier to give feedback and to point each engineer to where they should improve next. The aspects are probably not exhaustive, but they are a great way to spot both great and inadequate behaviors.&lt;/p&gt;
</content:encoded><category>career</category><category>coding</category></item><item><title>Navigating SQL Migrations with Confidence: Introducing sql-compare</title><link>https://julien.danjou.info/blog/navigating-sql-migrations-with-confidence/</link><guid isPermaLink="true">https://julien.danjou.info/blog/navigating-sql-migrations-with-confidence/</guid><description>Delivering SQL schema change at scale.</description><pubDate>Tue, 16 Jul 2024 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;As long as I can remember, SQL has been a cornerstone of my engineering journey. My early days at university were filled with monotonous Oracle-based SQL courses, which I found uninspiring. Knowing I would likely never use Oracle, I shifted my focus to &lt;a href=&quot;https://mysql.com&quot;&gt;MySQL&lt;/a&gt;. Over time, I discovered the limitations of MySQL and was introduced to &lt;a href=&quot;https://www.postgresql.org/&quot;&gt;PostgreSQL&lt;/a&gt;, thanks to &lt;a href=&quot;https://tapoueh.org/about/&quot;&gt;Dimitri&lt;/a&gt;. I even organized a few meetups in Paris and encouraged Dimitri to publish &quot;&lt;a href=&quot;https://theartofpostgresql.com/&quot;&gt;The Art of PostgreSQL&lt;/a&gt;,&quot; arguably the best book on SQL (&lt;a href=&quot;https://julien.danjou.info/blog/the-art-of-postgresql-is-out&quot;&gt;I reviewed it here&lt;/a&gt;). Eventually, I embraced PostgreSQL wholeheartedly.&lt;/p&gt;
&lt;p&gt;SQL databases are a timeless technology that continues to evolve. From &lt;a href=&quot;https://www.timescale.com/&quot;&gt;Timescale&lt;/a&gt; to &lt;a href=&quot;https://github.com/pgvector/pgvector&quot;&gt;pgvector&lt;/a&gt;, new advancements are continually emerging. However, one persistent challenge has been managing database migrations. Modifying your data model is crucial for evolving your application, but it’s often a daunting task. At Mergify, like many companies, we’ve faced this challenge head-on.&lt;/p&gt;
&lt;p&gt;We&apos;ve tried various solutions, from custom Python scripts to using &lt;a href=&quot;https://github.com/djrobstep/migra&quot;&gt;migra&lt;/a&gt;, an open-source project that is unfortunately no longer maintained. Each solution had its drawbacks, leading us to a crossroads where we had to decide on our next move.&lt;/p&gt;
&lt;h2&gt;The Initial Struggle&lt;/h2&gt;
&lt;p&gt;At &lt;a href=&quot;https://mergify.com&quot;&gt;Mergify&lt;/a&gt;, PostgreSQL is the backbone of our data handling, from managing the state of GitHub objects to maintaining our event log. From the beginning, we’ve interacted with the database exclusively using an ORM, choosing &lt;a href=&quot;https://www.sqlalchemy.org/&quot;&gt;SQLAlchemy&lt;/a&gt; for its maturity, framework agnosticism, and support for asynchronous I/O since version 2.0.0.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/81ae39c3-f945-42cc-80c0-99ade9f0bc9f_1456x816.webp&quot; alt=&quot;Illustration of SQL database migration workflow&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Given our frequent production deployments, a robust CI/CD pipeline is essential to handle database evolution smoothly. Every schema modification must be rigorously tested and automatically applied to the production database, adhering to the principles outlined in Martin Fowler&apos;s &quot;&lt;a href=&quot;https://martinfowler.com/articles/evodb.html&quot;&gt;Evolutionary Database Design.&lt;/a&gt;&quot; Version-controlling each database artifact and scripting every change as a migration are critical steps in this process.&lt;/p&gt;
&lt;p&gt;We chose &lt;a href=&quot;https://alembic.sqlalchemy.org/&quot;&gt;Alembic&lt;/a&gt; to manage our database migrations. Maintained by the SQLAlchemy team, Alembic is a command-line tool that can automatically create migration scripts from your SQLAlchemy models. Each script is version-controlled alongside your source code. Alembic applies these migrations to the database, recording the latest revision number in the &lt;code&gt;alembic_version&lt;/code&gt; table so that only newer migrations are applied on subsequent runs. The upgrade command is typically executed in the continuous delivery pipeline to keep the production database up to date.&lt;/p&gt;
&lt;h2&gt;A Naive Beginning&lt;/h2&gt;
&lt;p&gt;Our initial approach to testing migration scripts was straightforward: create two databases—one using SQLAlchemy models and the other using only the migration scripts—and ensure they have identical schemas. This involved:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Creating PostgreSQL servers using Docker:&lt;/strong&gt; On a new server, create two empty databases.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Generating schemas:&lt;/strong&gt; Use the first database to create artifacts with SQLAlchemy models, and use Alembic to run migration scripts on the second database.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Comparing schemas:&lt;/strong&gt; Dump each database schema into SQL files using &lt;code&gt;pg_dump&lt;/code&gt; and compare them using Python’s &lt;code&gt;filecmp&lt;/code&gt; and &lt;code&gt;difflib&lt;/code&gt; builtin libraries.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Here’s an example command to dump a database schema into an SQL file:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;pg_dump \
    --dbname=postgresql://user:password@host:port/database \
    --schema-only \
    --exclude-table=alembic_version \
    --format=p \
    --encoding=UTF8 \
    --file /path/to/dump.sql
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To compare the files:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;assert filecmp.cmp(schema_dump_creation_path, schema_dump_migration_path, shallow=False)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If the test fails, use &lt;code&gt;difflib&lt;/code&gt; to display the differences:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;def filediff(path1: pathlib.Path, path2: pathlib.Path) -&amp;gt; str:
    with path1.open() as f1, path2.open() as f2:
        diff = difflib.unified_diff(
            f1.readlines(),
            f2.readlines(),
            path1.name,
            path2.name,
        )
        return &quot;Database dump differences: \n&quot; + &quot;&quot;.join(diff)
&lt;/code&gt;&lt;/pre&gt;
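&lt;p&gt;Putting the pieces together, a failing comparison can report the exact schema drift. A standalone sketch with made-up dump content:&lt;/p&gt;

```python
import difflib
import filecmp
import pathlib
import tempfile


def filediff(path1: pathlib.Path, path2: pathlib.Path) -> str:
    # Same helper as above, repeated so this snippet is self-contained.
    with path1.open() as f1, path2.open() as f2:
        diff = difflib.unified_diff(
            f1.readlines(), f2.readlines(), path1.name, path2.name
        )
        return "Database dump differences: \n" + "".join(diff)


# Two dumps that differ by a single column type (illustrative content):
tmp = pathlib.Path(tempfile.mkdtemp())
schema_from_models = tmp / "from_models.sql"
schema_from_migrations = tmp / "from_migrations.sql"
schema_from_models.write_text("CREATE TABLE users (\n    id bigint\n);\n")
schema_from_migrations.write_text("CREATE TABLE users (\n    id integer\n);\n")

if not filecmp.cmp(schema_from_models, schema_from_migrations, shallow=False):
    print(filediff(schema_from_models, schema_from_migrations))
```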
&lt;p&gt;While effective, this test had limitations, such as sensitivity to column order. PostgreSQL doesn’t easily allow changing column positions, necessitating consistent column order in models and production databases.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/f7a3603f-e06b-45d7-97c4-99f9bbb3da76_1456x816.png&quot; alt=&quot;Illustration of comparing database schemas side by side&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;The Complexity Grows&lt;/h2&gt;
&lt;p&gt;As our models grew more complex, our naive test struggled to keep up. Consider the following example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import datetime

import sqlalchemy
from sqlalchemy import orm

class Base(orm.DeclarativeBase):
    updated_at: orm.Mapped[datetime.datetime] = orm.mapped_column(
        sqlalchemy.DateTime(timezone=True),
        server_default=sqlalchemy.func.now(),
    )

class User(Base):
    __tablename__ = &quot;user&quot;

    id: orm.Mapped[int] = orm.mapped_column(
        sqlalchemy.BigInteger,
        primary_key=True,
    )
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In this setup, the &lt;code&gt;updated_at&lt;/code&gt; column is added to every child model, such as &lt;code&gt;User&lt;/code&gt;. Adding a new column to &lt;code&gt;User&lt;/code&gt;, like &lt;code&gt;name&lt;/code&gt;, would misalign the order, causing schema mismatches.&lt;/p&gt;
&lt;p&gt;To address this, we needed to compare schemas while ignoring column order. We explored various tools:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Alembic&lt;/strong&gt;: Can compare schemas to generate migration scripts but misses some differences.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Migra&lt;/strong&gt;: An unmaintained tool that compares database schemas effectively.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;SQL dumps&lt;/strong&gt;: The most reliable format but challenging to parse and compare directly.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Building the Solution: sql-compare&lt;/h2&gt;
&lt;p&gt;It was clear that our current solutions were insufficient. We needed a hero to rescue us from the perils of SQL migration management, so we developed &lt;strong&gt;&lt;a href=&quot;https://github.com/Mergifyio/sql-compare&quot;&gt;sql-compare&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;sql-compare is a Python library that uses &lt;a href=&quot;https://pypi.org/project/sqlparse/&quot;&gt;sqlparse&lt;/a&gt; to parse SQL files and compare schemas, ignoring irrelevant differences like comments, whitespace, and column order. This new tool became an integral part of our workflow, catching migration issues that other tools might miss.&lt;/p&gt;
&lt;p&gt;The main challenge was filtering and grouping tokens by column definition before sorting them. Despite these complexities, sql-compare emerged victorious, enabling us to ensure seamless migrations and maintain schema integrity.&lt;/p&gt;
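&lt;p&gt;The underlying idea can be sketched with plain string handling. This is a deliberately naive version (it breaks on commas inside types, defaults, or constraints); sql-compare does this properly by walking sqlparse token trees:&lt;/p&gt;

```python
def normalized_columns(create_table_sql: str) -> set[str]:
    """Extract the column definitions of a single CREATE TABLE statement and
    normalize whitespace, so that comparing the sets ignores column order."""
    body = create_table_sql.split("(", 1)[1].rsplit(")", 1)[0]
    return {" ".join(column.split()) for column in body.split(",")}


# The same table with its columns in a different order compares equal:
a = "CREATE TABLE users (id bigint, updated_at timestamptz, name text)"
b = "CREATE TABLE users (name text, id bigint, updated_at timestamptz)"
assert normalized_columns(a) == normalized_columns(b)
```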
&lt;h2&gt;The Journey Forward&lt;/h2&gt;
&lt;p&gt;We’ve open-sourced sql-compare to help others facing similar challenges. You can try it by running &lt;code&gt;pip install sql-compare&lt;/code&gt;. We plan to enhance sql-compare, such as creating functions to retrieve all schema differences for better test results. If you have suggestions or want to contribute, feel free to submit issues or pull requests on our GitHub repository.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Managing database migrations is a complex but essential task for evolving applications. With sql-compare, we found our solution, ensuring seamless migrations, maintaining schema integrity, and continuing to deliver high-quality software. Our journey through the challenges of SQL migrations has taught us valuable lessons, and with sql-compare, we’re better equipped to face the future.&lt;/p&gt;
</content:encoded><category>coding</category><category>mergify</category></item><item><title>A Journey of Embracing Linters</title><link>https://julien.danjou.info/blog/the-journey-of-embracing-linters/</link><guid isPermaLink="true">https://julien.danjou.info/blog/the-journey-of-embracing-linters/</guid><description>perl -e &apos;use strict;&apos;</description><pubDate>Tue, 09 Jul 2024 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Recently, I found myself in a spirited debate with one of our front-end developers at &lt;a href=&quot;https://mergify.com&quot;&gt;Mergify&lt;/a&gt;. This discussion, revolving around the usage of linters, reminded me of my long and storied history with these &quot;advisor tools.&quot; Having been confronted with linters for the past 25 years, I believe it&apos;s time to share some of that accumulated wisdom.&lt;/p&gt;
&lt;p&gt;My first encounter with a linter was with &lt;code&gt;use strict&lt;/code&gt; in &lt;a href=&quot;https://www.perl.org/&quot;&gt;Perl&lt;/a&gt;. Although I can&apos;t recall the specifics of what it did, I do remember it being an essential tool for writing better code. Later on, I encountered the &lt;code&gt;gcc -W&lt;/code&gt; and &lt;code&gt;-pedantic&lt;/code&gt; options, which I enabled religiously in all my projects. These early experiences set the stage for my ongoing relationship with linters.&lt;/p&gt;
&lt;h2&gt;Warnings&lt;/h2&gt;
&lt;p&gt;Fast forward to today, my recent discussion centered around &lt;a href=&quot;https://eslint.org/&quot;&gt;eslint&lt;/a&gt; and enabling all the checks for the Playwright plugin, treating every drift as an error rather than a warning. This distinction is crucial: an error causes the CI to fail, while a warning merely generates noise. Not all linters have this warning level, but in my experience, warnings will be disruptive if left unaddressed in your development workflow. An error should be a clear-cut issue: either ignore it or fix it.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Having unresolved warnings in your CI logs creates ambiguity and inefficiency. Make a decision. Commit to it.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/2966edeb-e81d-45e4-b2c4-a82215b7812b_1382x304.png&quot; alt=&quot;Screenshot of the eslint warning that triggered the linter discussion&quot; /&gt;
&lt;em&gt;The original warning that triggered our discussion.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;Picking Errors&lt;/h2&gt;
&lt;p&gt;Despite not being a JavaScript expert, my 25 years of experience with various linters gives me some perspective on this matter. Our debate also touched on which approach to use with respect to linters, either:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;stick to the recommended and default settings;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;be stricter by promoting certain warnings to errors for checks we deemed useful;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;enable everything as errors and explicitly ignore the checks that don&apos;t apply to our project or that we consider incorrect.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Every linter, from &lt;code&gt;gcc -W&lt;/code&gt; flags to &lt;a href=&quot;https://docs.astral.sh/ruff/&quot;&gt;ruff&lt;/a&gt; in Python, starts with a set of &quot;recommended&quot; settings. These are designed to throw a manageable number of errors on a typical project, making the linter easy to adopt for teams. This doesn&apos;t mean the disabled options are bad; they are simply considered &quot;too much for beginners&quot; and can be enabled later.&lt;/p&gt;
&lt;p&gt;This incremental approach is how we adopted &lt;a href=&quot;https://mypy-lang.org/&quot;&gt;mypy&lt;/a&gt; at Mergify. The default typing checks are relatively light, allowing us to enable it without much friction. We spent a few weeks fixing typing issues, caught a few bugs in the process, and were satisfied. Gradually, we enabled more checks until we reached the point of enabling &lt;code&gt;strict = true&lt;/code&gt; (a nostalgic nod to Perl) and caught even more (potential) bugs.&lt;/p&gt;
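&lt;p&gt;That end state, plus per-module escape hatches for code still being migrated, looks like this in a mypy configuration file (the module name is hypothetical; the options come from mypy’s documented config format):&lt;/p&gt;

```ini
[mypy]
strict = true

# Legacy modules can temporarily opt out of the strictest rules:
[mypy-legacy_package.*]
disallow_untyped_defs = false
ignore_missing_imports = true
```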
&lt;p&gt;On the flip side, having a poorly calibrated set of default recommendations is why I never adopted &lt;a href=&quot;https://www.pylint.org/&quot;&gt;pylint&lt;/a&gt;. Running pylint on our otherwise impeccable Python code, which passes ruff with most checks enabled, results in 13,000 errors for 140,000 SLOC. (I wrote about similar code quality tools in &lt;a href=&quot;https://julien.danjou.info/blog/the-best-flake8-extensions&quot;&gt;The Best Flake8 Extensions&lt;/a&gt;.) This is an insurmountable barrier for any developer. The prospect of ignoring all these non-critical errors, such as missing docstrings or line lengths, seems daunting.&lt;/p&gt;
&lt;h2&gt;Eslint and Playwright&lt;/h2&gt;
&lt;p&gt;Returning to eslint and &lt;a href=&quot;https://playwright.dev/&quot;&gt;Playwright&lt;/a&gt;, we used the following code to enable all Playwright rules as errors:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;...Object.keys(playwrightPlugin.configs[&apos;flat/recommended&apos;].plugins.playwright.rules).reduce(
  (acc, rule) =&amp;gt; {
    acc[`playwright/${rule}`] = &apos;error&apos;;
    return acc;
  },
  {}
),
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This approach ensures we don&apos;t miss any linting recommendations from the Playwright team. With &lt;a href=&quot;https://docs.github.com/en/code-security/getting-started/dependabot-quickstart-guide&quot;&gt;Dependabot&lt;/a&gt; automatically updating our dependencies, new errors introduced by updates appear in brand-new pull requests, allowing us to improve our code continuously.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://julien.danjou.info/images/blog/b46b3de6-8fa1-4445-9434-da42a5fbe88b_1536x768.png&quot; alt=&quot;Illustration of enabling all linter checks and treating warnings as errors&quot; /&gt;&lt;/p&gt;
&lt;p&gt;In conclusion, &quot;recommended&quot; settings in linters are designed for ease of adoption, striking a balance between &quot;best practices&quot; and &quot;practicality,&quot; but they should not be treated as the final word on what you enforce.&lt;/p&gt;
&lt;p&gt;Striving for perfection (assuming your linter is robust and not crazy) is always the goal. Make deliberate choices about which checks to ignore, and remember that linters are here to help you write better, more reliable code.&lt;/p&gt;
</content:encoded><category>coding</category></item><item><title>Atomic lock-free counters in Python</title><link>https://julien.danjou.info/blog/atomic-lock-free-counters-in-python/</link><guid isPermaLink="true">https://julien.danjou.info/blog/atomic-lock-free-counters-in-python/</guid><description>At Datadog, we&apos;re really into metrics. We love them, we store them, but we also generate them. To do that, you need to juggle with integers that are incremented, also known as counters.</description><pubDate>Mon, 06 Jan 2020 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;At &lt;a href=&quot;https://datadog.com&quot;&gt;Datadog&lt;/a&gt;, we&apos;re really into metrics. We love them, we store them, but we also &lt;em&gt;generate&lt;/em&gt; them. To do that, you need to juggle with integers that are incremented, also known as &lt;em&gt;counters&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;While having an integer that changes its value sounds dull, it might not be without some surprises in certain circumstances. Let&apos;s dive in.&lt;/p&gt;
&lt;h2&gt;The Straightforward Implementation&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;class SingleThreadCounter(object):
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Pretty easy, right?&lt;/p&gt;
&lt;p&gt;Well, not so fast, buddy. As the class name implies, this works fine with a single-threaded application. Let&apos;s take a look at the instructions in the &lt;code&gt;increment&lt;/code&gt; method:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt;&amp;gt;&amp;gt; import dis
&amp;gt;&amp;gt;&amp;gt; dis.dis(&quot;self.value += 1&quot;)
  1           0 LOAD_NAME                0 (self)
              2 DUP_TOP
              4 LOAD_ATTR                1 (value)
              6 LOAD_CONST               0 (1)
              8 INPLACE_ADD
             10 ROT_TWO
             12 STORE_ATTR               1 (value)
             14 LOAD_CONST               1 (None)
             16 RETURN_VALUE
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;self.value += 1&lt;/code&gt; line of code compiles to several bytecode operations, and execution can be interrupted between any of them to switch to a different thread that is also incrementing the counter.&lt;/p&gt;
&lt;p&gt;Indeed, the &lt;code&gt;+=&lt;/code&gt; operation is not atomic: one needs to do a &lt;code&gt;LOAD_ATTR&lt;/code&gt; to read the current value of the counter, then an &lt;code&gt;INPLACE_ADD&lt;/code&gt; to add 1, to finally &lt;code&gt;STORE_ATTR&lt;/code&gt; to store the final result in the &lt;code&gt;value&lt;/code&gt; attribute.&lt;/p&gt;
&lt;p&gt;If another thread executes the same code at the same time, you could end up with adding 1 to an old value:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Thread-1 reads the value as 23
Thread-1 adds 1 to 23 and get 24
Thread-2 reads the value as 23
Thread-1 stores 24 in value
Thread-2 adds 1 to 23
Thread-2 stores 24 in value
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Boom. Your &lt;code&gt;Counter&lt;/code&gt; class is not thread-safe. 😭&lt;/p&gt;
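&lt;p&gt;You can reproduce the lost updates with a small demonstration. The read-modify-write is split explicitly to mirror the bytecode steps and widen the race window; how many increments get lost varies from run to run:&lt;/p&gt;

```python
import threading


class UnsafeCounter:
    def __init__(self):
        self.value = 0

    def increment(self):
        # Explicit read-modify-write, mirroring LOAD_ATTR / INPLACE_ADD /
        # STORE_ATTR: another thread can store between our read and our write.
        current = self.value
        self.value = current + 1


counter = UnsafeCounter()


def work():
    for _ in range(100_000):
        counter.increment()


threads = [threading.Thread(target=work) for _ in range(4)]
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()

# Should be 400000, but lost updates usually leave it lower.
print(counter.value)
```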
&lt;h2&gt;The Thread-Safe Implementation&lt;/h2&gt;
&lt;p&gt;To make this thread-safe, a &lt;em&gt;lock&lt;/em&gt; is necessary. We need a lock each time we want to increment the value, so we are sure the increments are done serially.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import threading

class FastReadCounter(object):
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()
        
    def increment(self):
        with self._lock:
            self.value += 1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This implementation is thread-safe. There is no way for multiple threads to increment the value at the same time, so there&apos;s no way that an increment is lost.&lt;/p&gt;
&lt;p&gt;The only downside of this counter implementation is that you need to lock the counter each time you need to increment. There might be much contention around this lock if you have many threads updating the counter.&lt;/p&gt;
&lt;p&gt;On the other hand, if it&apos;s barely updated and often read, this is an excellent implementation of a thread-safe counter.&lt;/p&gt;
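&lt;p&gt;A quick sanity check that the lock does its job: hammer the counter from several threads and verify that no increment is lost (this is a correctness test, not a benchmark):&lt;/p&gt;

```python
import threading


class FastReadCounter(object):
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:
            self.value += 1


counter = FastReadCounter()


def worker():
    for _ in range(10000):
        counter.increment()


threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter.value)  # 40000: every increment accounted for
```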
&lt;h2&gt;A Fast Write Implementation&lt;/h2&gt;
&lt;p&gt;There&apos;s a way to implement a thread-safe counter in Python that does not need to be locked on write. It&apos;s a trick that should only work on CPython because of the &lt;em&gt;Global Interpreter Lock&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;While everybody is unhappy with it, this time, the GIL is going to help us. When a C function is executed and does not do any I/O, it cannot be interrupted by any other thread. It turns out there&apos;s a counter-like class implemented in C in the standard library: &lt;a href=&quot;https://docs.python.org/3/library/itertools.html#itertools.count&quot;&gt;itertools.count&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;We can use this &lt;code&gt;count&lt;/code&gt; class to our advantage, avoiding the need for a lock when incrementing the counter.&lt;/p&gt;
&lt;p&gt;If you read the documentation for &lt;code&gt;itertools.count&lt;/code&gt;, you&apos;ll notice that there&apos;s no way to read the current value of the counter. This is tricky, and this is where we&apos;ll need to use a lock to bypass this limitation. Here&apos;s the code:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import itertools
import threading

class FastWriteCounter(object):
    def __init__(self):
        self._number_of_read = 0
        self._counter = itertools.count()
        self._read_lock = threading.Lock()

    def increment(self):
        next(self._counter)

    def value(self):
        with self._read_lock:
            value = next(self._counter) - self._number_of_read
            self._number_of_read += 1
        return value
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;increment&lt;/code&gt; code is quite simple in this case: the counter is just incremented without any lock. The GIL protects concurrent access to the internal data structure in C, so there&apos;s no need for us to lock anything.&lt;/p&gt;
&lt;p&gt;On the other hand, Python does not provide any way to read the value of an &lt;code&gt;itertools.count&lt;/code&gt; object. We need to use a small trick to get the current value. The &lt;code&gt;value&lt;/code&gt; method increments the counter and then gets the value while subtracting the number of times the counter has been read (and therefore incremented for nothing).&lt;/p&gt;
&lt;p&gt;This counter is, therefore, lock-free for writing, but not for reading: the opposite of our previous implementation.&lt;/p&gt;
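&lt;p&gt;A small walkthrough of the read trick: each call to &lt;code&gt;value&lt;/code&gt; burns one extra increment, so the counter records how many reads happened and subtracts them (the class below reproduces the implementation above):&lt;/p&gt;

```python
import itertools
import threading


class FastWriteCounter(object):
    def __init__(self):
        self._number_of_read = 0
        self._counter = itertools.count()
        self._read_lock = threading.Lock()

    def increment(self):
        next(self._counter)

    def value(self):
        with self._read_lock:
            # next() burns one increment, so subtract the number
            # of previous reads to compensate.
            value = next(self._counter) - self._number_of_read
            self._number_of_read += 1
        return value


counter = FastWriteCounter()
for _ in range(3):
    counter.increment()

print(counter.value())  # 3
print(counter.value())  # still 3: each read compensates for itself
counter.increment()
print(counter.value())  # 4
```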
&lt;h2&gt;Measuring Performance&lt;/h2&gt;
&lt;p&gt;After writing all of this code, I wanted to measure how the different implementations impact speed. Using the &lt;a href=&quot;https://docs.python.org/3/library/timeit.html&quot;&gt;timeit&lt;/a&gt; module and my fancy laptop, I&apos;ve measured the performance of reading and writing to this counter.&lt;/p&gt;
&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Operation&lt;/th&gt;&lt;th&gt;SingleThreadCounter&lt;/th&gt;&lt;th&gt;FastReadCounter&lt;/th&gt;&lt;th&gt;FastWriteCounter&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;&lt;code&gt;increment&lt;/code&gt;&lt;/td&gt;&lt;td&gt;176 ns&lt;/td&gt;&lt;td&gt;390 ns&lt;/td&gt;&lt;td&gt;169 ns&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;code&gt;value&lt;/code&gt;&lt;/td&gt;&lt;td&gt;26 ns&lt;/td&gt;&lt;td&gt;26 ns&lt;/td&gt;&lt;td&gt;529 ns&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;
&lt;p&gt;I&apos;m glad that the performance measurements in practice match the theory 😅. Both &lt;code&gt;SingleThreadCounter&lt;/code&gt; and &lt;code&gt;FastReadCounter&lt;/code&gt; have the same performance for reading: since both do a simple attribute read, that makes absolute sense.&lt;/p&gt;
&lt;p&gt;The same goes for &lt;code&gt;SingleThreadCounter&lt;/code&gt; and &lt;code&gt;FastWriteCounter&lt;/code&gt;, which have the same performance for incrementing the counter. Again, they both use lock-free code to add 1 to an integer, which keeps the operation fast.&lt;/p&gt;
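&lt;p&gt;The exact numbers depend on your machine and Python version, but a measurement along these lines is easy to reproduce with &lt;code&gt;timeit&lt;/code&gt; (this is a sketch of such a setup, not the original benchmark script):&lt;/p&gt;

```python
import timeit

# Time one increment of each flavor; the large 'number' of runs
# amortizes timing overhead.
n = 1_000_000

plain = timeit.timeit(
    "c.value += 1",
    setup="class C:\n value = 0\nc = C()",
    number=n)
counted = timeit.timeit(
    "next(c)",
    setup="import itertools; c = itertools.count()",
    number=n)

print(f"plain += 1        : {plain / n * 1e9:.0f} ns")
print(f"itertools.count() : {counted / n * 1e9:.0f} ns")
```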
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;It&apos;s pretty obvious, but if you&apos;re writing a single-threaded application and do not have to care about concurrent access, you should stick to a simple incremented integer.&lt;/p&gt;
&lt;p&gt;For fun, I&apos;ve published a Python package named &lt;a href=&quot;https://pypi.org/project/fastcounter/&quot;&gt;fastcounter&lt;/a&gt; that provides those classes. The &lt;a href=&quot;https://github.com/jd/fastcounter&quot;&gt;sources are available on GitHub&lt;/a&gt;. Enjoy!&lt;/p&gt;
</content:encoded><category>python</category><category>coding</category></item></channel></rss>