Introducing the New AI Regulations: What They Mean For Your Business



Last year, the EU released plans for new AI regulations. We sat down with Kevin Butler, founder of Centigy, to get his thoughts.

Here’s a quick summary of what these regulations are, from Kevin Butler:

“AI has the potential to solve many societal and environmental problems. However, failures of AI technologies can negatively affect some people, and (the new AI regulations) will mean that we can actually build a solid foundation for the future of these technologies … AI regulation should result in protecting people’s rights and increasing the trust and adoption of AI technologies. That’s what we can expect to happen.

The new regulation is the first of its kind and, much like GDPR, it was created in the EU and is expected to be adopted globally. While GDPR addresses privacy, AIA addresses ethics. It’s currently in its final proposal phase and is planned to come into effect in the second half of 2023 … It will be in effect for a transitional period of two years, after which all suppliers of AI systems that are deemed high risk – and even limited risk – must conform to the new standards or face severe penalties.”

AIA, created by the EU, is the first regulation of its kind. The goal? To protect people’s rights and help address many societal and environmental problems. It’s currently in its final proposal phase and is planned to come into effect in 2023. Any suppliers of AI systems deemed high risk – or even limited risk – must conform to the new standards.

GDPR started out as an EU-only regulation, but its principles have since been adopted in many places around the world. We asked Kevin if he saw the same thing happening with the new AI regulations:

“Firstly, AIA has extraterritorial effects … which means it will apply to non-EU providers and users of AI systems if the content, predictions, and recommendations they generate, or the decisions they influence, are used within the EU. It has an extraterritorial effect, which is similar to GDPR.

Secondly, the EU is an influential player on the world stage and has adopted some of the highest regulatory standards in the world. They influence the environment … You might be using software that was developed in Silicon Valley, and you might be in the UK, right? So you’re using American software in the UK, but that software is probably developed to be sold globally. So if the providers of that software want to sell it into the EU, they will need to adapt it … You will probably see the changes in that software as a result. So, absolutely, yes. While AIA is a European initiative, it is extraterritorial, and you will see it adopted around the world, or it will influence the ethical standards around the world.”

What are the different levels of risk within the new AI guidelines?

“AI systems will be categorized into four risk profiles according to AIA. So you’ve got unacceptable risk – anything that falls into this range will be prohibited. One example of what will be prohibited is social scoring systems, among many others … anything, really, that is considered a clear threat to the safety and livelihoods of people.

For high risk, I’ll be speaking from an HR perspective because that’s the area in which I specialize. For HR, high-risk systems are those that can be used to grade one person over another, (e.g.) in employment matters – it may be a candidate selection tool, it could be a sourcing tool … proactive sourcing tools, you may have heard of them, where you feed the AI your criteria for the type of people you want to find, and it will search various websites and present you with the best candidates.

There’s also emotion detection, which will fall into the high-risk category, (e.g.) facial recognition, CV sorters, skills analysis, high-potential predictors, sentiment analysis, and many more … anything that’s going to grade one person over another, or potentially one group of people over another, is high risk … There are certain requirements on the suppliers of these systems – and on the users too, on the organizations that use them.

Then you’ve got limited risk. Limited risk is really, (in the HR world), anything that does not process personal information … (e.g.) chatbots would fall into the limited risk category … The only obligation on providers of these systems, and even on the organizations that use them, is that they will need to label them as such. So when an individual is interacting with a chatbot, they must be told that they are interacting with AI, and they are then given a choice to either continue or stop engaging and request another communication method. It must be transparent that you are conversing with AI.

Then the final category is minimal or no risk. These are the spam filters that happily work in the background and don’t cause us any harm whatsoever … (e.g.) AI used in video games.

Those are the four different risk profiles … what we’ll be more concerned about now is taking action (on whether) the AI HR software that you’re using falls into the high-risk or unacceptable categories … In those cases, (e.g.) if it’s unacceptable, it’ll be switched off. If it continues to run, the supplier will face severe penalties.

If it’s high risk, the suppliers will have a choice. They can either double down and ensure that the systems are compliant with the AI regulations, and invest some time to identify areas of the tools that will probably never pass the regulation … You can expect to see some changes in these software offerings.”
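To make the limited-risk transparency obligation Kevin describes more concrete, here is a minimal sketch of what a chatbot disclosure flow could look like in practice. Everything in it – the function name, the wording, the opt-out keyword – is our own hypothetical illustration, not language from the AIA itself or any real compliance API.

```python
# Hypothetical sketch: disclose the AI before a chat session starts and
# offer the user another communication method, per the limited-risk
# transparency obligation described above. All names are illustrative.

def start_chat_session(ask_user) -> bool:
    """Return True if the user consents to continue with the AI chatbot."""
    answer = ask_user(
        "You are chatting with an automated AI assistant. "
        "Press Enter to continue, or type 'human' for another contact method: "
    )
    if answer.strip().lower() == "human":
        # Hand off to an alternative channel (e.g. email or a human agent).
        print("No problem - we'll arrange another way to reach us.")
        return False
    return True

if __name__ == "__main__":
    if start_chat_session(input):
        print("AI assistant: How can I help you today?")
```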

To recap, AIA will categorize AI systems into four risk profiles:

Unacceptable risk – anything considered a clear threat to the safety and livelihoods of people; systems in this range will be prohibited.

High risk – systems that can be used to grade one person over another, including emotion detection.

Limited risk – anything that doesn’t process personal data, e.g. chatbots; these must be labelled as AI.

Minimal or no risk – AI that works in the background and doesn’t cause us any harm, e.g. spam filters and AI in video games.
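For teams auditing their own software stack, these tiers map naturally onto a simple lookup. The sketch below models the four tiers and their headline obligations in Python; the HR tool-to-tier assignments are hypothetical illustrations drawn from Kevin’s examples, not an official AIA classification.

```python
# A minimal sketch of the four AIA risk tiers summarized above, with the
# headline obligation attached to each. The HR tool-to-tier mapping is a
# hypothetical illustration based on the interview, not an official list.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "suppliers and user organizations must meet AIA requirements"
    LIMITED = "must be labelled so users know they are interacting with AI"
    MINIMAL = "no obligations"

# Illustrative examples from the interview, not an official classification.
HR_TOOL_TIERS = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "proactive candidate sourcing tool": RiskTier.HIGH,
    "CV sorter / skills analysis": RiskTier.HIGH,
    "emotion or sentiment detection": RiskTier.HIGH,
    "chatbot (no personal data)": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

if __name__ == "__main__":
    for tool, tier in HR_TOOL_TIERS.items():
        print(f"{tool}: {tier.name} - {tier.value}")
```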

Check out what else Kevin has to say on AI regulations here, in part 2 of our interview. 


Want to hear more AI news? Don’t forget to follow our Twine AI page to get the latest AI and Machine Learning news.

Need help building your Machine Learning model? Twine AI specializes in providing audio and video datasets for machine learning and AI via our community of freelancers.

Read more about AI in our AI category on the Twine Blog.

Twine AI

Harness Twine’s established global community of over 400,000 freelancers from 190+ countries to scale your dataset collection quickly. We have systems to record, annotate and verify custom video datasets at an order of magnitude lower cost than existing methods.