Welcome to the third post in our “we-built-something-and-killed-it” series.
In the first chapter, we shared how we started building CI Optimizer—our ambitious attempt to help teams cut down on CI/CD costs.
In the second, we explained why that effort never made it past the runway: while the problem existed, no one really wanted the solution.
But that wasn’t the end of the story.
Because just as we were winding down CI Optimizer, something else started to take shape—almost accidentally.
From CI Cost to CI Chaos
As we were working on CI Optimizer, we had to dig deep into CI platforms such as GitHub Actions and CircleCI. We needed to understand the structure, failures, and performance of CI pipelines to measure their cost.
And the more we explored, the more something else stood out: teams weren’t just struggling with CI/CD costs—they were struggling with CI reliability.
❗Flaky tests.
❗Unreliable runners.
❗Timeouts.
❗Random infra failures.
And as users of our Merge Queue product kept telling us:
“Our workflow is fine—until CI starts acting up.”
So we asked ourselves a new question:
💡 What if we stopped focusing on how much CI costs, and started looking at how much CI hurts?
That’s how the idea for our next product—CI Issues—was born.
The Pivot: CI Issues
CI Issues was meant to do one thing really well:
Track, identify, and alert on CI problems before they silently torpedo developer productivity.
We wanted to give teams insight and visibility into:
How often their tests flaked
Whether their CI infrastructure was unreliable
Which PRs were impacted
Which workflows deserved attention
The goal wasn’t just dashboards. It was detection and action. You’d be able to see patterns, set alerts, and flag recurring issues before developers noticed them.
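To give a flavor of the kind of detection we were after: one classic signal is a test that both passes and fails on the same commit, which almost always means flakiness rather than a real regression. The sketch below is purely illustrative (the record shape and function name are hypothetical, not our actual implementation):

```python
from collections import defaultdict

def find_flaky_tests(runs):
    """Flag tests with mixed pass/fail outcomes on the same commit.

    runs: iterable of (test_name, commit_sha, passed) tuples.
    A test that both passed and failed on one SHA is likely flaky,
    since the code under test did not change between runs.
    """
    outcomes = defaultdict(set)
    for test, sha, passed in runs:
        outcomes[(test, sha)].add(passed)
    # Mixed outcomes (both True and False seen) on any single commit
    return {test for (test, _sha), seen in outcomes.items() if len(seen) == 2}

# Example: test_login flips on commit abc123, test_cart is stable
runs = [
    ("test_login", "abc123", True),
    ("test_login", "abc123", False),
    ("test_cart", "abc123", True),
    ("test_cart", "def456", True),
]
print(find_flaky_tests(runs))  # → {'test_login'}
```

In practice the hard part isn't this core check; it's the surrounding product: ingesting run data reliably, suppressing noise, and surfacing only the issues worth acting on.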
And as we started to pitch the concept to engineers, the excitement was real:
💬 “We have this exact pain.”
💬 “We’ve built half of this internally.”
💬 “Please let us know when it’s ready.”
We felt like we were onto something.
The R&D Rabbit Hole
So we jumped in headfirst. We already had code collecting and analyzing CI data, so we started adapting it for CI Issues.
We ran the system internally, refined metrics, tested detection logic, and built a first UI. And then we iterated. And iterated. And iterated again.
But something was off.
Every time we looked at what we had, the same thought came back:
“This is good… but it’s not a product.”
It was barely working for us internally. Even we had trouble using it.
It was noisy. It was complex. It was fragile. It wasn’t obvious how to deploy or operate it at scale.
We had built tech.
But we hadn’t designed a product.
The Realization That Stopped Us
After almost a year of work, we paused and took a step back. And it hit us:
We had made the same mistake again—but in a different way.
With CI Optimizer, we had no market.
With CI Issues, we had no design.
This time, it wasn’t the problem that was flawed—it was our approach.
We had focused on research, experimentation, pipelines, metrics, code—but we hadn’t put the same energy into figuring out how the product should be used.
How would teams onboard?
How would they configure it?
How would they act on the data?
What does success look like for them?
The longer we waited to answer those questions, the more we realized:
💣 “If we ship this now, we’ll be building another tool that’s hard to use, hard to maintain, and ultimately, unadopted.”
So we made the call—again.
We stopped.
What We Learned (This Time)
This second failure didn’t sting the same way as the first.
In fact, it felt like a necessary part of the journey.
Here’s what we learned:
Validation isn’t enough—you need design.
Even if users want a solution, they won’t use a product that’s hard to operate or understand.
Great tech doesn’t mean great UX.
CI Issues worked, technically—but without thoughtful design, it was dead in the water.
You need both clarity and empathy.
Clarity on what you’re solving, and empathy for how your users will experience it.
What’s Next?
The story doesn’t end here.
CI Issues gave us a powerful insight into how fragile and painful the CI experience can be—and how underserved engineers still are when things go wrong.
So we took everything we learned from CI Optimizer and CI Issues, and went back to the drawing board—with a new vision, new design principles, and a better understanding of how to build the right thing the right way.
Stay tuned for the final post in the series: what we built next, and how it’s going to change how developers deal with CI failures.