
Why I don't ride the AI Hype Train


Ever since ChatGPT came out, the tech world has jumped on a new hype train—just like it did before with crypto, NFTs, and the metaverse. This time, I think the hype spread even faster because it was so easy to try—just open a website and start typing. ChatGPT quickly became one of the fastest-growing products ever, reaching 100 million users in 2 months. Like past trends, it also brought a lot of debate and strong opinions. I’ve used ChatGPT and other large language models (LLMs), and I’ve even added them to products at work. But even with all that, I’m still not on the AI hype train. In this post, I’ll explain why.

Creation of the Models

The models behind ChatGPT and others are trained using text from across the internet. The problem is, these companies didn’t ask for permission from the people or websites that created the content. They just crawled the web and copied whatever they could find. While doing that, they also caused a lot of traffic to some websites, which led to higher bandwidth costs for those sites. (How OpenAI’s bot crushed this seven-person company’s website ‘like a DDoS attack’) Big publishers like news websites have already sued OpenAI for this (The New York Times is suing OpenAI and Microsoft for copyright infringement), and some have made deals to license their content (A Content and Product Partnership with Vox Media). But smaller websites and creators didn’t get a choice at all.

Some companies, like Meta, went even further. They didn’t just use web content—they also used pirated books to train their models. (The Unbelievable Scale of AI’s Pirated-Books Problem) If a regular person did this, they’d probably get into serious trouble. But when billion-dollar companies do it, they usually get away with it. And if they do get fined, it’s often such a small amount that it doesn’t even affect their yearly profits.

The way these models are trained is already a big issue. But a bigger problem is that they use all kinds of content from the internet. They do this because they need huge amounts of data to make the models better. That’s why Meta turned to pirated books—they had already used most of what was available online. And this is where things get risky. While most people only visit a handful of websites, the internet is full of harmful content. These models are trained on all of it, and that means they can end up repeating or using that harmful content in their answers. (The risks of using ChatGPT to obtain common safety-related information and advice) Since the models often give answers with a lot of confidence, it can be hard—especially for younger people—to tell whether something is right or wrong.

Once the companies collect the data, they start training the models. This isn’t like running a normal website. It requires special data centers with powerful graphics cards built for AI training. These cards use a lot of electricity, which means more energy needs to be generated. That’s why companies like Microsoft have made deals to reopen old power plants. (Three Mile Island nuclear plant will reopen to power Microsoft data centers) Some companies are even building new power plants that run on fossil fuels. (AI could keep us dependent on natural gas for decades to come)

But it’s not just about needing more electricity. These data centers also cause other problems. For example, they can affect the electricity grid and cause small but important changes to the quality of the power. (AI Needs So Much Power, It’s Making Yours Worse) That can mess with electronic devices in homes near the data centers. Because they use so much electricity, they also produce a lot of heat. To cool everything down, they need large amounts of water, which can create issues for local water supplies. (AI is draining water from areas that need it most) The situation has gotten so bad that companies like Microsoft, which once pledged to become carbon negative by 2030, might not be able to meet those goals. (Microsoft’s AI obsession is jeopardizing its climate ambitions)

Usage of the Models

After using huge amounts of data and electricity to train these models, you’d expect something truly useful. But even though companies promote these models as if they could solve the world’s biggest problems, the way people actually use them can be disappointing. A lot of students use these tools to do their homework, both in school and university. (26% of students ages 13-17 are using ChatGPT to help with homework, study finds) When teachers notice that homework quality has gone up, they start using these tools too—to grade the homework. (Teachers are using AI to grade essays) Lawyers have also been caught using AI tools to help with their legal cases. (Law firm restricts AI after ‘significant’ staff use) Many people now use AI for advice on money, relationships, or even health. (AI models miss disease in Black and female patients) And if you look at the leaked chats with Meta AI, you can see how everyday people are really using these tools. (Meta Invents New Way to Humiliate Users With Feed of People’s Chats With AI)

These use cases are often ignored, especially by software developers who push AI tools heavily. As a software developer myself, I can say that many in our field don’t really think about the long-term impact of the tech they build. They’re often focused on interesting problems and high salaries. For developers, it’s easy to test the output of these models by running the code or writing automated tests. If something breaks, you can just undo it. But for others—like students or lawyers—it’s not that simple. A student might get a bad grade (School did nothing wrong when it punished student for using AI, court rules), and a lawyer might embarrass themselves in front of a judge. (Mike Lindell’s lawyers used AI to write brief—judge finds nearly 30 mistakes)
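To make that contrast concrete, here is a minimal sketch (my own illustration, not anything from a specific tool): a hypothetical AI-suggested helper function together with the kind of automated check a developer can run before trusting it. The function and test names are made up for the example.

```python
# Hypothetical example: a small helper an AI assistant might suggest,
# plus a quick automated check a developer can run before merging it.

def word_count(text: str) -> int:
    """Count whitespace-separated words (imagine this came from an AI assistant)."""
    return len(text.split())

def test_word_count():
    # If the generated code is wrong, these assertions fail right away,
    # and the change can simply be reverted in version control.
    assert word_count("") == 0
    assert word_count("one") == 1
    assert word_count("two  words") == 2

if __name__ == "__main__":
    test_word_count()
    print("all checks passed")
```

An essay or a legal brief has no pass/fail check like this; the mistake only shows up when a teacher or a judge notices it.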
