Even though the impact of LLMs is unprecedented, the patterns surrounding them feel familiar from earlier technology shifts.
For context: I wasn’t the “PhD scientist” working on models. I was the guy who productionized their proof-of-concept code and turned it into something people could actually use, in industries ranging from software/hardware automated testing at Motorola to small startups working on accessibility and education.
So here is what I've learned:
AI as a product isn’t viable: It’s either a tool or a feature
Companies in this AI hype cycle are missing the mark by building ChatGPT-like bots and “✨” buttons that amount to single OpenAI API calls.
For example, Notion, Slack, and Airtable now lead with “AI” in their page titles instead of the core value they provide. Slack calls itself “AI Work Management & Productivity Tools,” but has anyone chosen Slack for its AI features?
Most of these companies seem lost on how to implement AI. A simple semantic search over Slack messages, backed by vector embeddings, would outperform what they’ve shipped as “AI” so far.
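To make that concrete, here is a minimal sketch of semantic search over messages. A real system would use learned embeddings from an embedding model and an approximate-nearest-neighbor index; this toy version substitutes bag-of-words cosine similarity for the embedding step, and the message corpus is invented for illustration.

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector. A production system
    # would call an embedding model and store vectors in an ANN index.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
        math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def search(query: str, messages: list[str], k: int = 3) -> list[str]:
    # Rank messages by similarity to the query, return the top k.
    q = embed(query)
    ranked = sorted(messages, key=lambda m: cosine(q, embed(m)), reverse=True)
    return ranked[:k]


# Hypothetical message corpus for illustration.
messages = [
    "deploy is blocked on the staging database migration",
    "lunch at noon?",
    "the staging deploy failed again, checking the migration logs",
    "new design mockups are in Figma",
]
print(search("why is the staging deploy failing", messages, k=2))
```

Even this crude word-overlap version surfaces the two staging-deploy messages first; swapping `embed` for a real embedding model is what makes it *semantic*, matching “failed” to “failing” and beyond.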
People don’t choose these products for their “✨” AI features. The best AI applications work beneath the surface to empower users. Jeff Bezos made this point back in 2016:
You don’t see AI as a chatbot on the Amazon homepage. You see it in “demand forecasting, product search ranking, product and deals recommendations, merchandising placements, fraud detection, translations.”
That’s where AI comes in, not as “the thing” but as “the tool that gets you to the thing.”