
The Builder AI problem: How did investors get it wrong?

Image: Builder.ai

How did they get it wrong? Maybe by using synthetic data for market research?


The London-based unicorn that promised a magic solution – you decide what you want in an app and let AI do the rest – went belly up. Since 2016, the no-code platform raised $450 million from investors, including Jungle Ventures and Lakestar (Series A, 2018, with participation from SoftBank’s DeepCore), Insight Partners (Series C, 2022), Microsoft (Series D, 2023), Qatar’s sovereign wealth fund (Series D, 2023) and ICONIQ Capital. It had a valuation of $1.5 billion (seriously, who does these valuations? – see this Reddit post, itself written with AI and human prompting) and a list of impressive global customers. But not only were its sales revenue forecasts not credible ($50 million actual versus $220 million projected for 2024), its fundamental premise was incorrect – the apps were built not by AI bots but by the human talent behind the AI bots: approximately 700 Indian engineers.

 

How did we reach here? Could good old market research – a skill also used in audit reports and investor due diligence reports – not have been used here? The type where you do company site visits, talk to people (customers, employees, distributors, the supply chain, etc.), monitor content in posts, and let observation help? The time-consuming, more accurate type of research?

 

This article caught my eye – Andreessen Horowitz is now pitching AI for market research as a new investment area. The blog is better than their X post (something lost in translation with AI?), but the premise is the same: use simulations to predict how humans think or behave, as a proxy for market research (based on one paper). The authors of the post (zcohen@a16z.com and samble@a16z.com) say that 70% of the accuracy of a traditional consulting firm is acceptable to the many CMOs they spoke to (how many is that?), because it is about cheaper and faster data.

 

Hold on a second – isn’t market research about talking to people (not AI bots)? What am I missing in this logic? When people (investors) value companies, create these unicorns and then put more money into them – hoping they will not fail – they are betting on a future. They are researching trends (hopefully by talking to people in the present, not just outsourcing this task to AI, or to other people who may in turn be outsourcing it to AI). Market research is messy – often the problems are not easy to define. Anyone who works in the knowledge industry – consultancy, advertising, AI development – will tell you there is a gap between what people want, how they express it and what is feasible. Who you talk to matters. How you talk to them matters. The skill to extract insights matters. I am not saying simulations are not useful, even when fed imperfect data like social media feeds (honestly, how much of that is the real you?), but we need to know how to use them and when to use them.

 

Coming back to the Builder.ai issue. As one Redditor commented, “A well paid developer is usually one who has the talent to bridge the gap between the client's babbling and a technical specification document.” Another commented: “first, the whole pitch of "build apps with AI, no code, super fast" sounds great - but the reality of software development is messy. It’s hard to fully automate something that usually requires deep context, custom logic, and ongoing iteration. A lot of users were saying their apps didn’t turn out the way they expected or that delivery timelines kept slipping. And yeah, having Microsoft onboard sounds impressive, but that’s not the same as product-market fit. A big partnership can be more PR than actual validation. Feels like a combo of overpromising, underdelivering, and scaling too fast before the core product was solid.”

 

Are the algorithms – and the people behind them who promote this in the media – also to blame for the AI hype? Yes.

Is the AI literacy gap a problem? Yes.

Is the over-valuation of AI an issue? Yes. Sometimes the technology is not there yet; it takes time to build and should be scaled responsibly.

 

Anyway – this is not the first high-profile AI case, nor is it likely to be the last.

 


