EU's AI Act: Stricter Rules for Chatbots on the Horizon
The European Union is preparing new AI regulations that could impact the development and deployment of AI platforms, requiring companies like OpenAI to disclose their use of copyrighted material. As the AI Act evolves, chatbots may face increased scrutiny and transparency requirements.
The EU's headquarters in Brussels. Photo credit: Open source photo.
🧠 Stay Ahead of the Curve
- The EU is developing new AI regulations, potentially requiring companies like OpenAI to disclose their use of copyrighted material.
- This development highlights the growing concern surrounding AI safety, transparency, and responsible deployment in the EU.
- Stricter regulations could shape the future of AI governance, affecting innovation and the way AI platforms operate globally.
April 14, 2023
The European Union is preparing new regulations that could significantly impact the development and deployment of artificial intelligence (AI) platforms, according to the Financial Times. As discussions continue in Brussels regarding the proposals in the comprehensive Artificial Intelligence Act, sources indicate that the forthcoming regulation may require companies like OpenAI to disclose their use of copyrighted material in training their AI.
"High Risk" Chatbots in Focus of the AI Act
Central to the EU's AI Act is a four-tiered classification system that measures the risk AI technology could pose to an individual's health, safety, or fundamental rights. The risk levels are unacceptable, high, limited, and minimal, each of which triggers different regulatory requirements.
The rapid rise of generative AI technology has caught the attention of lawmakers due to its powerful capabilities and widespread adoption, prompting individual EU member countries to take action. Italy recently banned ChatGPT, citing alleged privacy violations, while Germany's commissioner for data protection is considering a similar ban. Following Italy's announcement, data protection authorities in France and Ireland consulted with the Italian data regulator to discuss their stance on ChatGPT.
Although the AI Act has not yet passed, preliminary comments from European parliament members suggest they may view generative AI art platforms, such as Stable Diffusion, and chatbots like OpenAI’s ChatGPT, as potentially hazardous innovations. In February, lead lawmakers on the AI Act proposed classifying AI platforms that use Large Language Models (LLMs) to generate text outputs without human supervision as high-risk.
Specific Proposals to Regulate Chatbots under the AI Act
Lawmakers are integrating new proposals into the AI Act to directly address the rise of sophisticated chatbots and LLMs.
A key proposal would compel developers of AI platforms like ChatGPT to disclose if they used copyrighted material to train their AI models. As previously reported, OpenAI has declined to share details on the training of GPT-4, much to the disappointment of AI researchers advocating for greater transparency.
Another proposal under consideration would require AI chatbots to inform human users that they are not conversing with another human. With instances of people forming attachments to chatbots and some even believing they are sentient, lawmakers argue that such disclosure is a fundamental first step.
Regulatory Changes on the Horizon, but Not Imminent
The AI Act was introduced in 2021, and the recent debate over additional chatbot regulations suggests that the process to finalize the law will not conclude until at least 2024. In the interim, individual EU member states continue crafting their own policies, creating a complex web of governance criteria for companies like OpenAI to navigate.
Dragoș Tudorache, an EU parliament member leading negotiations on the AI Act, underscored the importance of regulations for ensuring safe deployment. "It is a pioneering technology, and we need to harness it, which means putting rules in place," he said. "Self due diligence by companies is not enough."