
US bank discloses security lapse after sharing customer data with AI app

Why This Matters

This incident highlights the growing cybersecurity risks of integrating AI tools into banking operations and the importance of robust data protection measures. For consumers, it underscores the need for vigilance about how their personal information is handled in digital interactions. For the tech industry, it is a reminder to prioritize security protocols when deploying AI applications that handle sensitive data.

In Brief

Community Bank, which operates in Pennsylvania, Ohio, and West Virginia, disclosed a cybersecurity incident that exposed customers’ names, dates of birth, and Social Security numbers.

In an 8-K filing dated May 7 with the U.S. Securities and Exchange Commission, the bank said it detected an exposure of customers’ personal data due to the use of “an unauthorized artificial intelligence-based software application.”

The bank said it disclosed the incident “due to the volume and sensitive nature of the non-public information at issue.”

It’s unclear exactly what happened, but based on the language in the filing, it appears someone working for Community Bank may have uploaded customer data to an online AI chatbot, potentially exposing that information to the chatbot maker.

While Community Bank did not disclose how many customers were affected by the incident, nor what AI application was involved, the company said it is “evaluating the customer data that was affected” and is sending notifications in accordance with relevant laws.

Community Bank’s chief executive John Montgomery did not immediately respond to TechCrunch’s request for comment.

The Register first reported the security lapse.