9to5Mac Security Bite is exclusively brought to you by Mosyle, the only Apple Unified Platform. Making Apple devices work-ready and enterprise-safe is all we do. Our unique integrated approach to management and security combines state-of-the-art Apple-specific security solutions for fully automated Hardening & Compliance, Next Generation EDR, AI-powered Zero Trust, and exclusive Privilege Management with the most powerful and modern Apple MDM on the market. The result is a totally automated Apple Unified Platform currently trusted by over 45,000 organizations to make millions of Apple devices work-ready with no effort and at an affordable cost. Request your EXTENDED TRIAL today and understand why Mosyle is everything you need to work with Apple.

Around this time two years ago, OpenAI's incredibly popular GPT-4 API was spreading like wildfire all over the App Store. It wasn't long before AI-powered productivity apps, chatbot companions, nutritional trackers, and basically anything else you could think of dominated the charts, garnering millions of downloads.

Fast forward to today, and many of those vibe-coded, opportunistic apps have disappeared, partly due to cooling hype but also because of Apple's tougher stance against knockoffs and misleading apps. However, this week, security researcher Alex Kleber noticed that one misleading AI chatbot impersonating OpenAI's branding managed to reach the top of the Business category. While this happened on the less popular Mac App Store, it's still significant and warrants a PSA: be cautious about sharing personal information with these apps.

The number one Business "AI ChatBot" app on macOS appears to impersonate OpenAI's branding, from its logo and name to its design and logic. Investigation shows it is made by the same developer as another nearly identical app. Both share matching names, identical interfaces and screenshots, and even the same support website, which leads to a free Google page. They also appear under the same developer account and a company address located in Pakistan. Despite Apple's removal of most OpenAI copycat apps, these two slipped through review and now sit among the top downloads on the U.S. Mac App Store.

It goes without saying that an app's reviews, ranking, or even approval into the store is no guarantee of safety when it comes to data privacy.

Sketchy GPT clone on the U.S. Mac App Store – 9to5Mac

A recent report published by Private Internet Access (PIA) found troubling examples of poor transparency in many of these personal productivity apps. One popular AI assistant that used the ChatGPT API quietly collected far more user data than its App Store description claimed. The listing said it only gathered messages and device IDs to improve functionality and manage accounts, but its privacy policy showed it also collected names, emails, usage stats, and device information, data that often ends up being sold to data brokers or used for nefarious purposes.

Any GPT clone app that collects user inputs tied to real names is a recipe for disaster. Imagine a massive pool of conversations where every message is linked to the person who said it, sitting in a sketchy database run by a shell company with an AI-generated privacy policy that holds no water in the country where the company resides. That is happening somewhere right now.

One might assume this is why the App Store has privacy labels. While Apple introduced them to help users understand what data an app collects and how it uses it, these labels are self-reported by developers. Apple relies on their honesty: developers can stretch the truth, and Apple has no system in place to verify their claims.
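To make that mismatch concrete, here's a minimal Swift sketch of the pattern PIA describes. Everything in it is hypothetical: the endpoint, the field names, and both payload types are illustrative stand-ins, not code from any real app. The point is simply that a request can look like an ordinary chat API call while quietly bundling identity and device metadata alongside the user's prompt, and nothing in a self-reported privacy label is checked against what the app actually encodes and sends.

```swift
import Foundation

// Hypothetical sketch of the over-collection pattern. All types,
// field names, and the endpoint below are illustrative only.

// Roughly what the App Store privacy label claims is collected:
struct DeclaredPayload: Codable {
    let message: String    // the user's chat prompt
    let deviceID: String   // an identifier "to manage accounts"
}

// What the privacy policy admits is actually sent:
struct ActualPayload: Codable {
    let message: String
    let deviceID: String
    let fullName: String   // real name, now tied to every message
    let email: String
    let osVersion: String  // device and usage metadata
    let locale: String
}

let payload = ActualPayload(
    message: "Draft a resignation letter for me.",
    deviceID: UUID().uuidString,
    fullName: "Jane Appleseed",
    email: "jane@example.com",
    osVersion: ProcessInfo.processInfo.operatingSystemVersionString,
    locale: Locale.current.identifier
)

// On screen this looks like a plain chat request; the extra fields
// ride along invisibly in the JSON body.
var request = URLRequest(url: URL(string: "https://api.example-chatbot.test/v1/chat")!)
request.httpMethod = "POST"
request.setValue("application/json", forHTTPHeaderField: "Content-Type")
request.httpBody = try? JSONEncoder().encode(payload)

// URLSession.shared.dataTask(with: request) { _, _, _ in }.resume()
```

On the wire, both payloads are just JSON. Unless someone intercepts the app's traffic and compares it against the label, the gap between what is declared and what is collected never surfaces, which is exactly the hole self-reporting leaves open.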
I think it's important to continue spreading the word that these apps are still out there, collecting who knows what from unsuspecting users, and they undoubtedly pose huge privacy risks. Spread the word!