Published 14. Sep. 2023

AI Governance: Balancing Competitiveness with Compliance


The AI landscape is innovating at full speed. From the recent release of Google’s Bard and OpenAI’s ChatGPT Enterprise to growing implementation of AI tools for business processes, the struggle to regulate AI continues.

In Europe, policymakers are scrambling to agree on rules to govern AI – the first regional bloc to attempt a significant step towards regulating this technology. However, the challenge is enormous considering the wide range of systems that artificial intelligence encapsulates and its rapidly evolving nature.

While regulators attempt to ensure that the development of this technology improves lives without threatening rights or safety, businesses are scrambling to maintain competitiveness and compliance in the same breath.

We recently spoke to two experts on AI governance, Gregor Strojin and Aleksandr Tiulkanov, about the latest developments in AI regulation, Europe’s role in leading this charge, and how business leaders can manage AI compliance and risks within their organizations.

 
Gregor Strojin is the Vice Chair of the Committee on Artificial Intelligence at the Council of Europe and former chair of the Ad Hoc Committee on AI. He is a policy expert who has held various roles, including senior adviser to the President of the Supreme Court of Slovenia and State Secretary at the Ministry of Justice.
Aleksandr Tiulkanov is an AI, data, and digital policy counsel with 18 years of experience in business and law. He has advised organizations on matters relating to privacy and compliance for digital products, including in the field of AI.
 

Europe Trailblazing AI Governance

 

Why does AI need to be regulated?

Aleksandr: Artificial intelligence is a technology that we see almost everywhere nowadays. It is comparable to electricity in the past, but more influential. Crucially, it’s not always neutral in how it affects society. There are instances where technologies based on artificial intelligence affect decisions which, in turn, affect people’s lives. In cases where there is a high risk of impact, we should take care and ensure that no significant harm arises.

Gregor: Regulations are part of how we manage societies in general. When it comes to technology that is as transformative as AI, we are already faced with consequences both positive and negative. When there is a negative impact, there is a responsibility either by designers, producers, or by the state to mitigate and minimize those negative effects on society or individuals. We’ve seen the same being done with other technologies in the past.

 

Former President Barack Obama said that the AI revolution goes further and has more impact than social media has. Do you agree?

Gregor: Definitely. Even social media has employed certain AI tools and algorithms that grab our attention and direct our behavior as consumers, voters, schoolmates – that has completely changed the psychology of individuals and the masses. And AI is an umbrella term that encompasses over a thousand other types of uses.

AI will change not only our psychology but also logistics and how we approach problem solving in different domains.

Aleksandr: The change is gradual. As Gregor said, we already see it in social media – for example, in content moderation. Those systems are largely based on language models and machine learning. AI is driving what we see on the platform as well as what we can write and even share. To some extent, it means that some private actors are influencing freedom of speech.

 

Let’s talk about the role of Europe in AI compliance regulations. Can you explain why Europe is a trailblazer here?

Gregor: Europe has a special position geopolitically due to its history. It’s not one country. It’s a combination of countries joined by international and supranational organizations, such as the European Union and the Council of Europe, to which individual countries have ceded parts of their sovereignty. This is a huge difference compared to the United States or China, which are completely sovereign in their dealings.

When it comes to the European Union in particular, many types of behavior are regulated by harmonizing instruments of the EU to have a uniform single market and provide some level of quality in terms of safety and security to all citizens – so we don’t have different rules in Slovenia, Germany, France, or Spain. Instead, this is one market of over 500 million people.

 

Gregor, can you give us a brief overview of the latest developments in AI regulation and compliance in the EU?

Gregor: There are two binding legal instruments that are in the final phases of development. The most crucial one is from the European Union, the AI Act. It is directed at the market itself and is concerned with how AI is designed, developed, and applied by developers and users. The AI Act addresses a large part of the ecosystem, but it does not address the people who are affected by AI. Here is where the second instrument comes in, the Convention on AI that is being developed by the Council of Europe.

Another thing to mention is that the EU’s AI Act only applies to EU members and is being negotiated by the 27 member states. The Council of Europe’s instrument is being negotiated by 47 member states as well as observer states and non-member states such as the United States, Canada, Japan, Mexico, and Israel. The latter has a more global scope.

In this way, I see the EU’s AI Act as a possible mode of implementation of the rules set by the conventions of the Council of Europe. This is still partially theoretical, but it’s likely we’ll see both instruments finalized in the first half of next year. Of course, there will be a transitory period before they come into effect. This is already a good indication of how businesses must orient themselves to ensure compliance in due time.

 

Should what the EU is doing be a blueprint for the rest of the world?

Gregor: Yes, if they choose to. I think many in Europe will acknowledge that we have different ways of approaching problems and freedom of will, but if you want to do business in Europe, you have to play by Europe’s rules. This is an element in the proposed AI Act as well as the General Data Protection Regulation (GDPR) legislation from the past decade which employs the Brussels effect – meaning that the rules applied by Europe for Europe also apply to companies outside of Europe that do business here even if they do not have a physical presence here. So, if producers of AI from China or the United States wish to sell their technology in Europe, they have to comply with European standards.

 

What are the business implications of the European approach?

Aleksandr: The European approach harmonizes the rules for a single market. It’s beneficial for businesses, as they won’t have to adapt to each country’s local market. I’d say it’s a win-win for businesses approaching the European market. We’ve already seen this happening with the GDPR: as long as companies have a European presence, they tend to adopt the European policy globally. This could happen with AI regulations as well.

If you look at the regulatory landscape, we can see some regulatory ideas coming up in North America and other continents. In China, there are some regulatory propositions. But I would say that the European approach is the most comprehensive. Chances are it will be taken as a basis by many companies.

 

Balancing Innovation and Compliance

 

What do you say to concerns that this is just another set of regulations to comply with in a landscape that is constantly innovating at speed?

Gregor: I’ve been working with technology for more than 20 years. I also have experience with analog technology that is regulated, like building construction.

What we’re dealing with here is not just regulation for regulation’s sake; it benefits corporations in the long run because it disperses risk and clarifies the consequences of their liabilities. It creates a more predictable environment.

There are many elements of regulation that have been proposed for AI that have been agreed to by different stakeholders in the process. We must consider that the industry was involved in preparing both these regulatory instruments I’ve mentioned.

Some issues, like data governance, are already regulated. There are, of course, disagreements on elements like transparency, because there may be business advantages that are affected by regulation. On the other hand, the technology does not allow for everything. There are still open questions on what needs to be done to ensure a higher quality in the development process and mitigate risk.

 

So there needs to be a balance between regulation, competitiveness, and the speed of innovation. How can we be assured that AI regulation does not harm competitiveness in business?

Gregor: The regulation proposed by the European Commission is just one element in the basket of proposals of the so-called Digital Agenda. There are, of course, some other proposals on content moderation that came into existence just recently that are binding. But there are also several instruments which address the promotion and development of AI systems, both in terms of subsidies for companies and individuals to develop digital skills and to create a comprehensive and stable environment for IT technology in Europe. There are billions being thrown into subsidies for companies and innovators. There is a big carrot, and the stick is in preparation, but it is not here yet.

Aleksandr: I must also underline that there are things in place that facilitate the upcoming EU regulation, such as regulatory sandboxes. You may have seen an example of this in Spain. Businesses will be able to test out their hypotheses on how they want to operate AI systems that could potentially be harmful.

It’s important to understand that the scope of the regulation is not overly extensive. I would say it largely covers only really high-risk systems, and some lower-risk systems but only where it’s important. For example, there are transparency obligations for lower-risk systems when it comes to deepfakes. Then there are meaningful rules for high-risk systems that affect people’s lives – like government aid or the use of AI in law enforcement or hiring.

It’s important to have proper data governance and risk management in place for systems that affect people on a massive scale.

Also, if you look at mature organizations with this technology already in the market, they are making sure that the data used to train their AI systems is good enough. They are doing it themselves as they don’t want to get in trouble with their clients. Regulations are not so unusual.

 

In that case, will innovation be faster than the regulations can keep up with?

Gregor: That’s a pertinent question when it comes to technology. From a policymaker’s position, it is imprudent to try to regulate future developments, as that would impede innovation.

I don’t think there’s any impediment to innovation happening at this moment. Perhaps you could categorize making subsidies conditional on compliance with ethical recommendations as such, but it’s not really an impediment.

In the future, there will be limitations on AI innovation to the same degree as in biotechnology, for example, where there are clear limits on what is allowed and under what conditions to prevent harm. That is narrowly defined. The general purpose, of course, is to increase the quality of these products and create a safe environment and as predictable a playing field as possible for customers in the market.

 

Business Focus: AI-Risk Management

 

What’s coming up next on AI governance that business leaders should consider?

Gregor: At this point, what’s coming up next in policy development is the pushback from those who do not want such legislation. It’s something we’ve already seen this year. Many think we had an AI revolution only this year. No. It’s a technology that has been around for years, and there have been calls for regulation of AI on the basis of existential threats.

If we take those calls seriously, we must completely backtrack and change the direction of what is already being developed.

But I do think that if we follow through with what has been proposed to ensure the safety and security of this technology, we will also solve the problem of a so-called superintelligence taking over humanity. First, we need to ensure correct application of existing rules to human players.

 

With all this in mind, what advice do you have for business leaders when it comes to regulations and compliance in the field of AI? What can they start with tomorrow?

Aleksandr: Technical standards will be the main thing. I would advise all those developing this technology to take part in technical committees in their national standard-setting bodies, which can then translate into work on European-level standards.

Take into account your practical concerns and considerations so that these technical standards can address business concerns in terms of product development. It is important to follow and participate in this work on regulation development for the AI ecosystem.

Another thing is to consider risk management frameworks that address AI-specific risks. The NIST AI Risk Management Framework and the ForHumanity framework are practical tools for organizations to control how they operate and deploy AI systems in a safe and efficient manner. Business leaders can also begin to appoint people who will be responsible for setting up these processes.

There will be a transitional period, as there was with the GDPR. If companies can demonstrate that they are compliant with the harmonized European standards that are still under development, they will be presumed compliant with the EU AI Act. But this is ongoing work.

Start considering broader risk management frameworks as a first step to get the ball rolling in organizations.

Gregor: Technical development skills alone are not sufficient to build a competitive and scalable organization, especially as not only Europe but other regions are preparing to introduce regulatory measures. My advice is similar to Aleksandr’s: build on your capacities for risk and compliance management. I think it will pay off quite soon.

Sign up as a member of our Executive Business Network Aurora Live to connect with leading tech leaders across Europe all year round.