Key Takeaways

- AI tools differ fundamentally from traditional software: data shared with public AI platforms can be absorbed into their training data, where it becomes effectively permanent.
- Leaders must implement clear usage policies, deploy enterprise-grade solutions with data controls and foster ongoing security awareness to prevent costly data breaches.
Within months of its launch in November 2022, ChatGPT had made its mark as a formidable tool for writing and optimizing code. Inevitably, some engineers at Samsung decided it was a good idea to use it on a piece of proprietary code they had been struggling with for a while. However, they overlooked the nature of the beast: public AI tools do not simply forget. Data shared in prompts can be retained and used to train future versions of the model, quietly becoming part of its knowledge base.
When the exposure of proprietary code was discovered, Samsung immediately rolled out a memo explicitly banning the use of generative AI tools. And it had solid reasons to do so: losses from this kind of data exposure can run into millions of dollars, along with the erosion of hard-won competitive advantage.
Understanding the hidden risk
How AI tools differ from traditional software
Most of us are accustomed to working with traditional software: we share whatever data we want with it, and the results stay private to us. Understandably, corporate employees pay scant attention to the type of data they share, trusting standard access controls to contain any security threats.
In sharp contrast, public AI systems can absorb the data we share with them. Every code snippet, every document and even our prompts may be used to improve the system's results. This creates the permanence problem: once information is absorbed into a model's training data, it can potentially surface in responses to outsiders, especially on a publicly accessible AI platform.
Moreover, unlike traditional software, where you can simply delete your data, AI offers no real Delete button. What a model learns cannot be surgically removed; it becomes part of the knowledge corpus and is inseparable from the model itself.
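Because there is no Delete button after the fact, the only reliable control is keeping sensitive data out of prompts in the first place. As a purely illustrative sketch (the patterns and function names below are hypothetical, not any vendor's API), a company could route outbound prompts through a simple redaction filter before they ever reach a public AI tool:

```python
import re

# Hypothetical patterns; a real deployment would tune these to the
# organization's own secret formats and naming conventions.
SENSITIVE_PATTERNS = [
    # Credential assignments like "api_key = ..." or "token: ..."
    (re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*[^\s,]+"), "[REDACTED CREDENTIAL]"),
    # Email addresses
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED EMAIL]"),
    # U.S. Social Security numbers
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED SSN]"),
]

def scrub(text: str) -> str:
    """Redact obvious secrets before text leaves the company network."""
    for pattern, replacement in SENSITIVE_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

prompt = "Optimize this: api_key = sk_live_abc123, notify dev@example.com"
print(scrub(prompt))
# Optimize this: [REDACTED CREDENTIAL], notify [REDACTED EMAIL]
```

A filter like this catches only the most obvious leaks, which is why it should complement, not replace, the usage policies and enterprise-grade data controls described above.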