"You are wrong, Pawel. You can vibe code a successful product without any technical skills. Here's one example." I liked the challenge, especially since it referenced a source. What I thought would be a short comment evolved into a series of articles. This post is the last one (or at least I believe so at the time of writing), and I will focus on the product management side. Well, just one aspect of it. The perception that the pace of shipping features (or building in general) is the bottleneck of product development is a misconception. Ultimately, that's what vibe coding tools offer: we can build it for you with no engineering team whatsoever. In fact, the original challenge was worded along the same lines: "I'm working with people and I've seen others, who only used AI to create a valid tech business, scaling it up beyond a million dollars, before they hired any software engineer." Let’s unpack it, then. Prototyping versus Building I'm a fan of vibe coding when it comes to prototyping. It is a fabulous tool to learn whether what we ideate is desirable. The first thing about prototypes, though? They are disposable. Even if we validate that we were right and our idea works (which happens maybe once every ten attempts), the prototype is still disposable. The whole idea behind prototyping is that we trade quality for a quick and cheap outcome. It can break. It can be buggy. Sometimes it may even look ugly. The point is: it conveys the idea. Conversely, a product has to deliver promised value sustainably over time. Awful UX? I'll move to an alternative. Bugs too annoying? I'll stop using it altogether. It breaks entirely? Why would I use it, let alone pay for it? The quality has to be there. Otherwise, customers will go as fast as they come, and that's not a viable product strategy. Sure, we'd still love to build our product quickly and cheaply, but at some level, quality is non-negotiable. Ultimately, we need the thing to work in the long run. 
The Road to a Successful Product

How do most successful digital products take their shape? Consider any example of your choice and try to reverse-engineer its path. Do you see a clear way, going from one milestone to another, each step an inevitable consequence of all the previous ones?

Like Amazon figuring out online bookselling as the hit of the internet era, and then, inevitably, taking a shot at music, video, and other industries, while concurrently launching in non-US markets? Expanding to include third-party sellers must have been a logical next step, right? And building the biggest cloud infrastructure, their own reading device, and a video streaming business... well, by this point, we're retrofitting the connections and we know it.

Amazon tried a lot of things to land with its key cash cows of today. Heck, even with their foundational idea—the marketplace—they famously run thousands of experiments all the time. In fact, their whole development culture is designed around rapid experimentation.

In other words, we never know upfront what will work in a product. We try stuff, see what works, stick with what does, drop what does not.

Gmail: A Case in Point

Paul Buchheit is famous for building the first Gmail prototype in just a few hours. And then repeating the trick with AdSense. All in times when all code had to be written manually, like, you know, by hand. Yes, we're talking about two of Google's product slam dunks.

Yet, if you read the story carefully, it was anything but the execution of a well-laid plan.

"We did a lot of things wrong during the 2.5 years of pre-launch Gmail development."

"We re-wrote the frontend about six times and the backend three times by launch."

"I would just write the code, release the feature, and watch the response. Usually, everyone (including me) would end up hating whatever it was (especially my ideas), but we always learned something from the experience, and we were able to quickly move on to other ideas."
Paul Buchheit

In other words, there were numerous dead ends that they explored, invalidated, and moved on from. There's no knowing up front.

It Works for Product Features, It Works for Entire Products

The same pattern applies to entire products, not just product features. As an example, we can stick with Google. It is known for a swath of products in line with its mission to organize the world's information. Search, Gmail, Google Workspace, and what have you. However, aside from search, many of these products originated as experiments. Gmail and AdSense, mentioned above, are two notable examples.

However, for each product idea that survived the test of time, there are a dozen that did not. And for each of the latter, there are probably an order of magnitude more that weren't even released to the public. I personally used and loved Google Reader, Google Talk, and Picasa, and the retirement of each broke my heart. There are a few more I didn't cry over, despite being an active user till their sad end. Probably everyone remembers Google's biggest lost bets: Buzz, Google+, and Google Wave. By the end of 2024, the full list of things Google had retired had almost 300 entries.

It's not a track record peppered with home runs, is it? And that's for a company that, from the vantage point of an aspiring startup, has unlimited capabilities. With all the engineering power Google has, the pace of development could have been set arbitrarily high for any of these products. While no official information is available, it's been rumored that Google had a few hundred engineers working on Google+ alone.

If the pace of development were all that counted, it would always be the incumbents who would win the product race in any niche. After all, they can pump in as much engineering firepower as they want, leveraging their existing revenue streams, customer bases, and whatnot. And we know it doesn't work this way.
Validation as a Bottleneck

Product development, in essence, is continuous discovery. First, we aim to validate the idea; then we switch to validating whether any given change brings us closer to our goals—growth, revenue, retention, or whatever that may be.

The problem is that validation takes time. Leah Tharin, who shares her story of working on products with tens of millions of users, says the following.

"The bottleneck of the team was waiting for statistical significance for most of our experiments, despite all the traffic we had. (...) If we changed a copy of our main website or most prominent tools, the experiments were statistically significant within hours. A more complex down the funnel change for higher value customers? Weeks, sometimes months. Ugh."

Leah Tharin

Weeks. Sometimes months. That's before they could have learned whether the change was for the better, for the worse, or had no effect at all.

Let's run a thought experiment and assume they could reduce the cost and time of development by a factor of 10. Would they grow faster? Save for the simplest tweaks on the landing page, they would still need to wait to learn the outcomes.

And it's not like they could increase the volume of experiments tenfold either. Sure, it's technically feasible. Except it would make a mess out of the metrics. "So we're running these 27 concurrent experiments to improve retention, and the data says it's been better for a week and then got back to what we had before. What does it say about those experiments again?"

If you look at what actually matters (growth, revenue, retention), knowing the right thing to build is the most common bottleneck. And we can't reliably know what will work from the outset.

Communication as a Bottleneck

But what if we do know exactly what to build? Ultimately, it's the basic pattern of project work. We define the scope, we agree on the payment, and off we go!
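As an aside, the "hours versus months" gap Leah Tharin describes falls straight out of standard A/B-test arithmetic. Here is a back-of-envelope sketch: the two-proportion z-test approximation it uses is textbook material, but the function name, traffic figures, and conversion rates are my own illustrative assumptions, not her numbers.

```python
from math import ceil
from statistics import NormalDist

def days_to_significance(base_rate, relative_lift, daily_users,
                         alpha=0.05, power=0.8):
    """Rough number of days a two-arm A/B test needs to detect a
    relative lift in a conversion rate (two-proportion z-test
    approximation for the per-arm sample size)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    delta = base_rate * relative_lift              # absolute effect size
    # Approximate sample size per arm for comparing two proportions.
    n_per_arm = (2 * (z_alpha + z_beta) ** 2
                 * base_rate * (1 - base_rate) / delta ** 2)
    return ceil(2 * n_per_arm / daily_users)

# A copy change on a high-traffic page with a big expected effect:
# significance in about a day.
print(days_to_significance(base_rate=0.20, relative_lift=0.10,
                           daily_users=500_000))

# A subtle down-funnel change for a small, high-value segment:
# months of waiting, no matter how fast the code shipped.
print(days_to_significance(base_rate=0.05, relative_lift=0.05,
                           daily_users=2_000))
```

Shipping the second change ten times faster changes nothing about the months of waiting; the calendar, not the codebase, is the constraint.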
In such a setup, we can conveniently ignore that we might be building the wrong thing entirely. Or, with a bit of luck, it's one of the rather rare cases where we either run a direct replacement project or automate an existing business process, and we have a much better initial understanding of the desirability, viability, and feasibility of the idea. Still, it's not the development pace that makes or breaks such endeavors.

At Lunar Logic, when estimating work for a client we've never worked with before, we always go with a wide range. Not as wide as we'd like, though; to be brutally honest, we'd need to go with something like "It can take less than a month, or more than a quarter." Still, the bracket is uncomfortably wide for many of our potential customers. And that's for work with limited technical uncertainty.

Why are we all over the place? Can't we just use the 20 years of experience we so often brag about and say in plain English how much it will cost? No, we can't. We don't yet know what the collaboration will look like, and that factor alone will sway the actual costs more than anything directly related to the scope.

Poor communication creates rework. It's not unusual for the quality of communication, or rather the lack thereof, to add as much as an additional 100% to the effort. That compounds with the development of unnecessary features. If you look at such a gig in hindsight, the value-adding work may amount to just several percent of the whole effort.

Adding development speed would only exacerbate the problem. The cost of rework doesn't pile up linearly. Once you rework the rework, it's like a compound interest rate, except in reverse. Go figure what it does to the costs.

Coding Pace Was Never a Bottleneck

One of the many excellent observations Daniel Kahneman explains in his seminal book Thinking, Fast and Slow is the following.
"This is the essence of intuitive heuristics: when faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution." Daniel Kahneman We subconsciously avoid solving difficult problems by finding a similar one that's much easier to address. Then, we pretend the answer to the latter works as the answer to the former. In that manner, we respond to the question about successful product development. We have little clue about what makes products successful. However, we certainly see that the most significant part of the effort is development. It takes a great deal of time and money to turn an idea into a product. So we focus on the speed of development. And suddenly, we have an easy answer. We can make it faster. How? That's simple. Use AI. Sorry to break it to you. Code does not equal product. What follows is that more code does not equal a better product. Often, it's the opposite. Coding speed was never the bottleneck. Not even when we didn't have an AI shortcut. Vibe Coding in Product Development If we stick to the context of product development, vibe coding promises us two things. We'll get the code fast. We don't need to hire expensive technical expertise. Both parts miss the "coding speed was never the bottleneck" observation. Both respond to the simple question instead of the difficult one. To make matters worse, the price we pay for removing ourselves from understanding the code is more rework. Yes, I know, we outsource that rework to an AI agent too, but we still need to drive it. And then all that rework stacks up. Remember compound interest rate in reverse? Ultimately, using vibe coding as the main tactic to build a successful product is like solving a minor issue only to make the main problem a much bigger challenge. The previous two parts of this informal vibe coding series: