4 Million Accounts Compromised by Fake ChatGPT App
More than 4 million accounts have been compromised by an imposter ChatGPT application, raising cybersecurity concerns and spotlighting the potential dangers of the generative AI tool's widespread popularity.
ChatGPT scams continue to draw in users, with one fake app compromising 4 million accounts. Open source photo.
🧠Stay Ahead of the Curve
- A counterfeit ChatGPT app compromised over 4 million accounts, stealing credentials and bypassing two-factor authentication.
- The breach highlights the security risks posed by fake AI applications amid ChatGPT's unprecedented popularity.
- The incident underscores the need for stronger security measures and closer scrutiny as adoption of AI tools accelerates.
April 17, 2023
A fake ChatGPT application has compromised the accounts of more than 4 million users, an investigation by security firm Cyberangel has revealed. Distributed as both a Chrome extension and a Windows desktop application, the counterfeit tool steals user credentials and bypasses two-factor authentication on the affected accounts.
For Facebook users, the damage has already spawned a viral TikTok hashtag, #LilyCollinsHack. The fake application locks users out of their Facebook accounts and changes their name and profile to resemble Lily Collins, the actress from the hit Netflix series “Emily in Paris.”
Cyberangel's investigation into the stolen data, accessed via an unsecured public database, revealed its stunning scope: 4 million stolen credentials total, with over 6,000 corporate accounts, 7,000 VPN logins that could grant access to secure corporate networks, and customer logins for a wide range of software services.
ChatGPT’s Popularity Masks Criminal Schemes
Since its debut in November 2022, ChatGPT has set records for the fastest-growing user base of any website. By some estimates, ChatGPT gained one million users in its first week and crossed 100 million monthly active users within two months of launch.
This incredibly rapid adoption has inspired a gold rush to capitalize on ChatGPT’s popularity, attracting both well-intentioned and nefarious actors. Within days of launch, users had reverse-engineered ChatGPT’s web API and were offering native iPhone apps that imitated the ChatGPT experience.
Internet forums are flooded with users asking “how to access ChatGPT,” and software developers have been quick to release thousands of tools that utilize ChatGPT’s API, exploring the wide-ranging applications of generative AI technology. Many of these are Chrome plugins, native mobile apps, and desktop applications, allowing users to interact with ChatGPT beyond just OpenAI’s website.
Currently, three of the top twelve free productivity apps on Apple’s App Store are ChatGPT apps, many with confusing names like “Chat AI Chatbot Assistant Plus” and descriptions that don't clearly indicate their third-party nature. In-app purchases for “Plus” and “Pro” subscription tiers in these apps resemble OpenAI’s own paid ChatGPT Plus tier.
Corporations are Scrambling to Keep Up
ChatGPT’s surging popularity and the proliferation of imitator third-party apps have created a security headache for corporations, many of whose workers have unofficially begun adopting the chatbot.
In April, the Economist Korea reported that Samsung had placed new limits on the use of ChatGPT after discovering that employees had leaked sensitive source code and meeting notes to the chatbot. OpenAI's data policy also states that unless users explicitly opt out, their prompts are used to train its models, raising concerns that sensitive information could be incorporated into future versions of the chatbot.
And in February, Amazon's lawyers cautioned employees after identifying instances of ChatGPT-generated text “closely” resembling internal company data. OpenAI has refused to disclose what training data was used to build GPT-4, raising concern among AI and security researchers.
As enthusiasm for ChatGPT and AI technology continues to grow, corporations face an ongoing challenge to balance security with the adoption of innovative tools. And OpenAI itself could be vulnerable to cyber attacks or simple mishaps; in March, an OpenAI bug accidentally revealed users' chat histories to other users, leading CEO Sam Altman to apologize and explain that the company felt “awful” about what had happened.