Since the AI boom of 2022, lawmakers and policymakers have been scrambling to find a way to regulate this technology and the companies developing it. After years of work, the EU has officially established the first comprehensive set of laws governing AI.
This is called the EU AI Act, and it has been in development since 2020. It's a landmark piece of legislation, as it could set the tone for how the rest of the world governs AI companies.
The EU AI Act is now officially in effect
It should come as no surprise that the EU would be the first to establish actual AI laws. A few months ago, we got word that the Act would soon take effect, which served as a warning to companies to prepare for the incoming regulations. Now that these laws are in force, companies that violate them could face hefty fines, at the very least.
What companies will be affected?
Naturally, companies based in the EU will be affected. Firms that develop or train AI models will come under heavy scrutiny from regulators. But these laws won't target only the tech companies building AI models; they will also apply to businesses that merely deploy or use them. If AI has a significant impact on a business or its product, the EU will be watching.
These laws' scope isn't limited to companies operating in the EU, either. The fact of the matter is that the companies with the most control over the AI industry are based in the U.S.: Google, OpenAI, Microsoft, Amazon, Meta, and Apple. The Act takes a special interest in these companies, as they stand to have the biggest impact on the AI industry.
“…the AI Act will likely apply to you no matter where you’re located,” said Charlie Thompson, senior vice president of EMEA and LATAM for enterprise software firm Appian. If a company stands to have a major effect on the world, or if it has operations in the EU, then it will be under scrutiny.
How will AI companies be governed?
The EU will evaluate companies on a risk-based system: they will be judged on how much risk their products pose to society. A small company making a simple AI tool that organizes your email inbox will be judged differently from a major company providing AI tools used for tracking hurricanes.
AI applications deemed “high-risk” will fall under much stricter regulations. High-risk applications include AI tools used in medicine, self-driving cars, loan-decisioning systems, remote biometric identification, and educational systems, according to CNBC. If an AI tool fails in any of those areas, the ramifications could be devastating for countless people.
Some applications carry so much risk that the Act deems them “unacceptable.” Examples include social scoring, predictive policing, and emotion recognition technology, as stated by CNBC.
Applications like these are banned outright, and any company caught violating the ban will face consequences.
Companies developing high-risk AI applications will also have more obligations to the EU. These include continuously logging activity so that regulators aren't left in the dark, using higher-quality datasets to reduce the risk of bias, and sharing detailed documentation about their AI models with the appropriate authorities.
Even companies not based in the EU are still required to obey EU copyright law and implement proper cybersecurity measures. They'll also need to report to regulators how their AI models are trained.
The exception for open-source models
Companies like Google and OpenAI that train closed-source models will be put under tight regulations. However, companies like Meta, Stability AI, and Mistral that develop open-source models won't face such tight restrictions.
Companies can qualify for exceptions from some of the rules, but to do so, their models need to be truly open and transparent. That means information about the models, such as their architecture, weights, and usage, must be made available to the public. We're not sure whether the top open-source models on the market currently meet these requirements.
Consequences for violating the EU AI Act
If a company violates these laws, it can expect some heavy fines. The amount will vary depending on the size of the company and the severity of the violation: fines range between €7.5 million and €35 million, or between 1.5% and 7% of the company's global revenue. Those are pretty hefty penalties. Smaller companies will be fined toward the lower end, while the largest companies could see the full fine thrown at them.
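To put those numbers in perspective, here is a minimal sketch of how that fine math scales with company size. It assumes a simple two-tier structure and that the larger of the fixed cap or the percentage cap applies; the Act's actual tiers are more granular than this, and the revenue figures below are hypothetical.

```python
# Illustrative sketch of the EU AI Act's fine ranges, not legal guidance.
# Assumption: the larger of the fixed cap or the percentage-of-revenue
# cap applies, so small firms face the fixed amount while large firms
# are exposed to the percentage-based ceiling.

def estimate_max_fine(global_revenue_eur: float, severe: bool) -> float:
    """Estimate the maximum fine for a violation.

    severe=True models the top tier (up to EUR 35M or 7% of global
    revenue); severe=False models the lower tier (EUR 7.5M or 1.5%).
    """
    fixed_cap = 35_000_000 if severe else 7_500_000
    pct_cap = (0.07 if severe else 0.015) * global_revenue_eur
    return max(fixed_cap, pct_cap)

# A small startup vs. a tech giant (hypothetical revenue figures):
print(f"€{estimate_max_fine(20_000_000, severe=True):,.0f}")       # €35,000,000
print(f"€{estimate_max_fine(300_000_000_000, severe=True):,.0f}")  # €21,000,000,000
```

For a company with €20 million in revenue, 7% is only €1.4 million, so the fixed €35 million cap dominates; for a company with €300 billion in revenue, the 7% cap balloons to €21 billion. That's why the biggest players have the most to lose here.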
While these laws were put in place to ensure AI is managed and regulated safely, we're pretty sure some of these companies will bump up against them. We'll just have to see whether they play nicely with the law so that we can all have a safer AI future.