
With AI Act, EU Moves to Protect People From Tech Risks

The preliminary legislation is one of the first to tackle the potential risks associated with AI technology.

Kourtnee Jackson, Senior Editor
Carrie Mihalcik, Former Managing Editor / News

AI regulations will soon become law in the EU.

Jonathan Kitchen/Getty Images

The European Parliament voted on Wednesday to move forward with a draft law to govern how artificial intelligence is used in the European Union. It's one of the first pieces of sweeping legislation focused on establishing guardrails to oversee the technology.

Called the AI Act, the draft legislation aims to protect people's privacy, voting rights and copyrighted material. The law includes bans on using AI for discrimination and on invasive practices such as biometric identification in public spaces and "predictive policing systems" that could be used to illegally profile citizens.

Lawmakers also established a categorization system that classifies AI risk as "minimal," "limited," "high" or "unacceptable." High-risk systems include technology that affects voters during election campaigns, human health and security, or the environment. Additionally, tech companies will be required to follow transparency rules, such as disclosing when AI is used and implementing measures to prevent the creation of illegal content.

The law, once finalized, could affect how companies like Google, Meta, Microsoft and OpenAI develop new AI tools and products. Though artificial intelligence technologies have been around for years, the field has advanced rapidly and has begun to seep into everyday life.

OpenAI's chatbot ChatGPT went viral after its November 2022 release and amassed 100 million active users by January. The generative AI tool can respond to questions, draft poetry and dish out advice on anything from fitness regimens to event planning. This spurred other companies to follow suit, ushering in a flood of new generative AI tools and products.  

Microsoft launched Bing Chat in February using OpenAI's GPT-4 tech, while Google rolled out its Bard chatbot and introduced an experimental AI-powered search engine called Search Generative Experience. In April, Amazon announced its Bedrock tool, which is used to build AI apps for Amazon Web Services.

Increased calls for AI regulations 

EU member countries will begin negotiations on the AI Act and a finalized law is expected early next year. The law could influence how policymakers in the US and other countries create their own regulatory systems. 

"While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose," Italian lawmaker Brando Benifei, who helped lead work on the AI Act, said on Wednesday.

Last month, OpenAI CEO Sam Altman testified during a Senate hearing on artificial intelligence and agreed that some sort of government regulation is needed to mitigate the risks of AI, a sentiment echoed by many other technology and AI experts.


Senate leaders have reportedly said bipartisan efforts to craft a comprehensive AI framework are still months away, though some lawmakers have begun to tackle parts of the technology. On Wednesday, Sens. Josh Hawley and Richard Blumenthal introduced a bill stipulating that Section 230, a law that shields internet companies from liability for content posted by users, doesn't protect AI-generated content, according to Axios. The Senate Human Rights Subcommittee also held a hearing this week on the impact AI could have on human rights.

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.