
Beliefs that are true for regular software but false when applied to AI



When it comes to understanding the dangers of AI systems, the general public has the worst kind of knowledge: what you know for sure that just ain’t so.

After 40 years of persistent badgering, the software industry has convinced the public that bugs can have disastrous consequences. This is great! It is good that people understand that software can result in real-world harm. Not only does the general public mostly understand the dangers, but they mostly understand that bugs can be fixed. It might be expensive, it might be difficult, but it can be done.

The problem is that this understanding, when applied to AIs like ChatGPT, is completely wrong. The software that runs AI acts very differently to the software that runs most of your computer or your phone. Good, sensible assumptions about bugs in regular software actually end up being harmful and misleading when you try to apply them to AI.

Attempting to apply regular-software assumptions to AI systems leads to confusion, and remarks such as:

“If something goes wrong with ChatGPT, can’t some boffin just think hard for a bit, find the missing semi-colon or whatever, and then fix the bug?”

or

“Even if it’s hard for one person to understand everything the AI does, surely there are still smart people who individually understand small parts of what the AI does?”

or

“Just because current systems don’t work perfectly, that’s not a problem, right? Because eventually we’ll iron out all the bugs, so the AIs will get more reliable over time, like old software is more reliable than new software.”
