
#265 What You Need to Know About the EU AI Act with Dan Nechita, EU Director at Transatlantic Policy Network

2024/11/28

DataFramed

People
Dan Nechita
Topics
Dan Nechita: The EU AI Act is the first law to set hard rules for artificial intelligence, aiming to make the use of AI in the EU more human-centric and safer. It mainly targets systems that could affect individuals' rights, such as systems that make decisions about people. The Act classifies AI systems into four categories: unacceptable risk, high risk, limited risk, and no risk, with different regulatory measures for each risk level. For high-risk systems, such as biometric systems, systems in critical infrastructure, and systems used in education, employment, and law enforcement, the Act imposes strict rules, requiring providers and deployers to fulfill a series of obligations, including establishing a risk management system, ensuring data quality and data governance, maintaining logs and documentation, and ensuring an adequate level of cybersecurity. For limited-risk systems, such as chatbots and deepfakes, the Act imposes transparency requirements: end users must be informed that the system is an AI system rather than a real person. No-risk or minimal-risk AI systems are largely unregulated. For powerful AI models, the Act takes a more collaborative and creative regulatory approach, establishing an AI Office to monitor risks dynamically and report incidents. The Act also accounts for the intersection of AI with other regulated products (such as medical devices), seeking to avoid duplicate regulation, and stresses that organizations need AI literacy to understand the potential risks of using AI systems. Implementation will be phased: the prohibitions take effect within six months, and the remaining provisions enter into force over the following years.

Adel: Adel mainly asks the questions in the interview, guiding Dan Nechita through the various aspects of the EU AI Act, including its significance, the risk classification framework, organizational compliance strategies, the intersection with existing regulations, AI literacy requirements, and future legislation.

Deep Dive

Key Insights

What is the main goal of the EU AI Act?

The main goal is to protect health, safety, and fundamental rights in the European Union when AI systems are used, making AI more human-centric and safer to ensure trust and broader adoption.

What are the four broad categories of risk classification in the EU AI Act?

The four categories are: unacceptable risk (prohibited uses like mass surveillance), high-risk systems (e.g., biometrics, law enforcement), limited risk systems (e.g., chatbots, deepfakes with transparency requirements), and no risk systems (e.g., AI in agriculture).

What are the implications for organizations deploying limited risk AI systems under the EU AI Act?

For limited risk systems, providers must ensure content is labeled (e.g., chatbots must disclose they are AI). Deployers have a best-effort obligation to ensure transparency, but the regulatory burden is lighter compared to high-risk systems.

How does the EU AI Act handle general-purpose AI models like GPT-4?

General-purpose models above 1 billion parameters are subject to obligations, including maintaining documentation, complying with copyright laws, and reporting incidents. Models with systemic risk require dynamic supervision by the European Commission's AI Office.

What are the key steps for organizations to prepare for compliance with the EU AI Act?

Organizations should: 1) check if their AI systems fall under the Act's definition, 2) conduct an inventory of AI systems and their intended use, 3) identify if any systems are high-risk, and 4) follow the specific obligations based on risk classification.
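These steps can be read as a simple decision tree. Below is a minimal, purely illustrative sketch in Python (not legal guidance) of that flow, assuming heavily simplified inputs: the risk categories and example use cases come from the episode, while the helper name, the data structures, and the idea of tagging each system with a single intended-use label are hypothetical simplifications.

```python
# Illustrative sketch only -- not legal advice. The categories and ordering mirror
# the episode's description of the EU AI Act; the names and structures are hypothetical.

HIGH_RISK_AREAS = {
    "biometrics", "critical_infrastructure", "education",
    "employment", "law_enforcement", "migration", "justice",
}
PROHIBITED_PRACTICES = {"social_scoring", "mass_surveillance"}
LIMITED_RISK_USES = {"chatbot", "deepfake", "generated_content"}


def classify_system(in_scope: bool, intended_use: str) -> str:
    """Return a rough risk bucket for one AI system, following the episode's framing."""
    if not in_scope:                    # step 1: does the Act's AI definition apply at all?
        return "out_of_scope"
    if intended_use in PROHIBITED_PRACTICES:
        return "unacceptable_risk"      # outright prohibited
    if intended_use in HIGH_RISK_AREAS:
        return "high_risk"              # conformity assessment and related obligations
    if intended_use in LIMITED_RISK_USES:
        return "limited_risk"           # transparency obligations
    return "minimal_risk"               # largely unregulated


# Step 2: inventory the systems and their intended use, then classify each (steps 3-4).
inventory = {"customer_churn_model": "marketing", "cv_screening_tool": "employment"}
for name, use in inventory.items():
    print(name, "->", classify_system(in_scope=True, intended_use=use))
```

In practice the high-risk determination hinges on the Act's detailed list of use cases and its exemptions, so a real assessment would not reduce to a lookup like this; the sketch only shows the order in which the questions are asked.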

What is the role of AI literacy under the EU AI Act?

AI literacy is a soft obligation requiring organizations to ensure operators understand the potential risks of AI, such as bias, discrimination, and over-reliance on machine outputs. It aims to prevent misuse and ensure responsible deployment.

How does the EU AI Act evolve over time?

The Commission can amend the list of high-risk use cases based on new risks. Guidance will be provided to clarify definitions and gray areas. Member states must set up national supervisory authorities, and the AI Office will oversee powerful models with systemic risk.

What were the main challenges in creating the EU AI Act?

Challenges included negotiating sensitive topics like biometric surveillance, aligning the AI definition with international standards, and addressing copyright issues. The emergence of generative AI added complexity, requiring flexible rules for future-proofing the regulation.

Chapters
The EU AI Act is the world's first comprehensive AI regulation aiming to make AI in the EU more human-centric and safer. Its main goal is to protect health, safety, and fundamental rights when AI systems are used. This increased safety and trust will lead to more adoption and, subsequently, economic growth.
  • First comprehensive AI regulation in the world
  • Aims to make AI more human-centric and safer
  • Protects health, safety, and fundamental rights
  • Increased trust leads to more adoption and economic growth

Shownotes Transcript

We eventually ended up with a very good piece of legislation. You can see that from the fact that it's started to be copied all over the world now, and I'm sure it's going to have a tremendous impact around the world as more and more states with different regulatory traditions will look into, okay, what are the rules for AI? ♪

Hello everyone, this is Adel, data evangelist and educator at DataCamp. And if you're new here, DataFramed is a podcast in which we explore how individuals and organizations can succeed with data and AI. With the EU AI Act coming into effect on August 1st, 2024, I think it's safe to say the industry is facing a pivotal moment.

This regulation is a landmark step for AI governance and challenges data and AI teams to rethink their approach to AI development and deployment. How will this legislation influence the way AI systems are built and used? What are the key compliance requirements your organization needs to be aware of? How can companies balance regulatory obligations with the drive for innovation and growth?

Enter Dan Nechita. Dan Nechita is the EU Director at Transatlantic Policy Network. He led the technical negotiations for the EU AI Act on behalf of the European Parliament. Throughout his mandate, besides focusing on AI, he focused on digital regulation, security and defense, and the Transatlantic Partnership as head of cabinet for MEP Dragoș Tudorache.

Previously, he was a State Counselor for the Romanian Prime Minister with a mandate on e-governance, digitalization, and cybersecurity. In the episode, we spoke about the EU AI Act's significance, different risk classification frameworks within the Act, organizational compliance strategies, the intersection with existing regulations, AI literacy requirements for the EU AI Act, and the future of AI legislation, and much more.

If you enjoyed this episode, make sure to rate it and subscribe to the show. It really helps us. And now on to today's episode. Dan Nechita, it's great to have you on the show. Thanks, Adel. Good to be here. Happy to be on. Yeah, very excited to be chatting with you. So you are the lead technical negotiator for the EU AI Act. You also served as the head of cabinet for European Parliament member Dragoș Tudorache. I really hope I pronounced that name correctly.

So maybe to set the stage with the EU AI Act officially coming into effect on August 1st, 2024, could you start by giving us a bit of an overview? What exactly is the EU AI Act? Why is it such a significant piece of legislation in the world of AI? Well, I hope you have quite a lot of time to give you an overview of it. But in brief, I think, you know, it's of course very important because it's the first

really hard law that sets rules for artificial intelligence and it does that throughout the European Union, so the entire European market. It's a very significant market. And I think to give you a, you know, the gist of it, it's a 150 page document after the negotiations, but trying to capture that

I would say it aims to make artificial intelligence used in the European Union more human-centric and safer to use. That's the gist of it. That is, of course, there is a rational explanation for this. Once you have safer artificial intelligence systems, then you have more trust in the technology. More trust means more adoption. More adoption then means...

you know, digital transformation, economic growth and so on. But in one sentence, I would say that its main goal is to protect health, safety and fundamental rights in the European Union when artificial intelligence systems are used.

Okay, that's really great. And you know, we do have some time, and you mentioned that it's a 150-page document. Walk us through some of the guiding principles or key foundations that shape the AI Act. Like, what are the EU's principles essentially towards regulating AI?

Well, this is a product safety type of regulation, meaning that before placing on the market a product, let's assume, you know, a rubber ducky or an elevator, that product needs to be safe enough for it to be used. And it follows the same logic.

Now, for artificial intelligence, this is, and this is one of the first misconceptions that I have to go against when discussing the AI Act. For artificial intelligence, this is only for those systems that pose risks to health, safety, and fundamental rights. So basically, it's systems that could impact the lives and the rights of natural persons.

And most of the rules, and we'll get to that a little bit later, I think, but most of the rules there apply to those systems. If you're thinking about an AI system used in agriculture to decide the quantity of water for tomatoes, the AI act doesn't really impact that, doesn't have any obligations. It's really when you get into the systems that decide things for people that could impact their fundamental rights when the rules come into play.

Okay, and that I think segues really well into my next question, because how you classify risk in the EU AI Act is really important to the type of regulation and protection put into place for a given use case. So maybe walk us through the risk classification framework set out by the EU AI Act.

How are AI systems categorized and what are the implications here for both developers and users of these types of technologies, depending on the use case? Okay, so here I'll get a little bit more concrete. So from the broad picture to a more specific picture.

The AI Act addresses, you know, mainly high-risk AI systems, and we can call these, you know, high-risk because that's a denomination. We could also call them class one or class two, or it's a special category of systems.

But it does address risk on a gradient from unacceptable risk to no risk. And there are four, I would say, broad categories in which AI systems are being classified. The first one is unacceptable risk. For this, we use the most blunt instrument possible, which is prohibitions. And those, you know, unacceptable risk,

from using AI systems is really something that we don't want in the European Union. It contravenes our values. It really has no use in the European Union. For example, mass surveillance or social scoring like we see in authoritarian states that discriminates against everybody based on a social score. These types of systems are outright prohibited.

There's a limited list, and it's not really the systems, it's more the use in these scenarios that's prohibited. These are very few, of course, because prohibitions are very, very raw and blunt instruments. Then imagine, of course, the famous pyramid of risk. If this is the top, then the next layer would be the high-risk AI systems.

And these are really systems that could impact health, safety, or fundamental rights in a significant way. So we're talking about systems dealing with biometrics. We're talking about systems used in critical infrastructure, in education, in employment to make decisions about employment relationships, in law enforcement, in migration context, for example, in asylum seeking situations.

in the administration of justice, in democratic processes. And these are rather discrete categories of use cases that were studied before and were identified as having potential risks to health, safety, and fundamental rights. Then as you move down, you have limited risk systems

or use cases that are not directly necessarily posing a significant risk, but they could have an impact. For example, and these are fairly standard to explain, for example, chatbots and deepfakes, where, of course, it really depends how these are used. It's not that they have an inherent risk.

But for those limited risk use cases, we found it appropriate to have some transparency requirements. If you're chatting with a chatbot as an end user, you need to be informed that this is an AI system, not a person on the other end of your conversation.

And then, of course, the bulk of AI systems out there are no risk or minimal risk. And like the example that I gave before in agriculture, but you can think of many other industrial applications of AI where there is no risk involved to health, safety, or fundamental rights. And those are pretty much unregulated with a small exception that probably we'll talk about a little bit later. So that is the broad classification. But now you

Your question had multiple dimensions, so I'll go to the next part, which is, of course, AI has evolved and we as regulators have tried to keep up with the evolution of artificial intelligence, making sure that the regulation is also future-proof and applies also into the future. So when dealing with very powerful artificial intelligence, we looked rather at the models powering them. And here we're talking about

the GPT-4s and the Claudes and Llama models that are by now, you know, at the frontier of artificial intelligence. And for those, there is a possibility that a certain type of risk, a systemic risk could materialize without it being in any of the other categories before. And a systemic risk

really comes about when those models are then used and deployed in different systems downstream all across an economy or across the European Union. For those, there is a different approach because of course we're talking about the frontier of AI.

which is a more co-regulatory, co-creative approach to regulation. That is, we've established within the European Commission a body, an artificial intelligence office it's called, that is meant to interact with those who build those very powerful systems

and basically supervise them in a dynamic way in terms of risk monitoring, in terms of reporting incidents when these incidents happen, and so on. And, you know, we can get into this a little bit more later on. And then I do recall that you were asking about providers versus deployers.

The providers are those who are building the systems. It's your, let's say, Microsoft or it could be an SME who's building an AI system. And many of the regulations to make those systems safe rest with them because you're there when you build it, so you have some obligations to make the system safe. Then on the other hand, the user is the entity deploying that system on the market. And here,

You really get very concrete in what I was saying earlier, you know, law enforcement, migration, and so on. These would be public authorities. For education, it would be schools. It could be institutions. It could be basically anybody who uses those systems. They have less obligations, but still because they are the ones using it in certain scenarios, they have a set of limited obligations to make sure on their end, once the system is in their hands, that the system is safe.

Okay, that's really great. And there's a lot to unpack here. So first, let's think about the risk levels, right? So you mentioned unacceptable risk, high risk systems, limited risk systems, and essentially no risk systems like the ones that you find in agriculture, etc. So maybe when you think about the different levels of risk here, what are the implications from a regulatory perspective, if you're an organization like rolling out use cases that could fit these different

risk criteria. I know no risk, there's no regulation here. Walk me through what are the implications from a regulatory perspective if I'm an organization looking to maybe look at a limited risk system, for example, which I think is where maybe the majority of use cases sit if you're an average organization.

Recall, I was saying this is a very long and complex regulation, right? So I think it is case by case, but I'll try to generalize in terms of the implications. And I'll walk through all of them. I think that in prohibitions, it's fairly clear, right? Like if you engage in one of those prohibited practices, well, you don't want to do that for sure. The fines and the consequences for that are really, really, really high and are meant to be very, very dissuasive.

If you're building a high-risk case system, now we're talking about those building, then you have a certain set of obligations that are comprised in a conformity assessment that you have to fulfill before placing that system on the market. And that deals with

having a risk management system in place, having good data quality, good data governance, maintaining logs, maintaining documentation, ensuring that, you know, you have an adequate level of cybersecurity for the system and so on. So those are obligations if you build it. If you use it,

You have a number of obligations depending on the type of entity that we're talking about. So public authorities and those who are providing essential services, so of course that would be, you know, have the highest impact on fundamental rights. They have the obligation to conduct a fundamental rights impact assessment before putting the system into use.

And that fundamental rights impact assessment is in a sense a parallel to the data protection impact assessment that comes from the GDPR. But it is really focused on making sure that within a specific context of use, the AI system, when you use it, really performs in accordance with what it was supposed to do. So say, for example, you use it in a neighborhood that is, you know, mostly immigrants.

Is the system really going to perform the same as if you were to use it in your average neighborhood? And if not, what are the corrective measures that you're taking to ensure that it doesn't lead to discrimination?

Then the limited risk, in a sense, that's very simple on the provider side. So those who build those chatbots, those systems that can generate deepfakes and generative AI that generates content need to make sure that the content is labeled. And that's on the provider side. On the deployer side, of course, there is also a best effort obligation that this happens.

In the AI value chain, there is a lot more actors here. So there's different targeted obligations for different actors. So say, for example, you take an off-the-shelf system and then you customize it to fit your company's needs.

And whether, you know, by intent or by just the way you deploy it, that becomes a high-risk AI system. Then you take on the responsibilities from the previous provider because, of course, once you've modified it, you've changed the intended purpose of the system. I think that for, to also go broader than the technical explanation, I think that the general implication for those who deploy AI is

would be to check their systems against the use cases in the regulation that are considered to be high risk. If it is, then of course you need to follow all the rules in there applicable. But also I would say if it's in the gray area, in a gray area that might or might not be

Look, the intention behind the AI Act is to make those systems safer. Safe means more trust. It means a competitive edge on the market. None of these are considered bad systems. It's just that

they're made safer. So in the gray areas around those use cases, I think that the implication would be to try to get as close to conformity as possible. But that's, you know, a whole nother story how to do that. We're definitely going to expand into that as well. I'm curious to see your point of view on what type of use cases classify under what type of risk. So I'll give you an example. If you're an organization that built with its data science team, an internal data science model looking at customer churn, for example. So you're looking at customer data, it

It's meant to understand internally who is most likely going to churn from your service and who should we target with ads, for example. Because here you are using demographic data and usage data, but the application of it, like the end stream is just targeted emails, for example. So how would you think about it from a level of risk or how would this fit from a level of risk in the EUAI Act framework?

It's a good example, and I'm sure that there is multiple examples that are in this category. I would say at first sight is that

By way of the AI Act, this is not a high-risk AI system. Now it really depends. It really depends on the application. Where exactly are you doing that? And what kind of decisions are you making with that? And the easiest way is to check what is exactly, because there is a discrete list of use cases that are considered high-risk, what is exactly the use of that particular AI system? So say, for example,

Your customers are students getting admitted into universities, right? And you let the AI system decide which student goes to which university and that's your customer, right? It's the same AI system that you presented, only now you're using it in making decisions about assignment to different educational institutions. Now, in that case, it's a high-risk AI system.

However, if you use it, for example, to target your shoe customers in your shoe store and you're selling running shoes and you're targeting different customers based on the data using AI, in this particular case, it's not a high-risk AI system.

So the algorithm itself is not necessarily what determines risk, it's more the impact that the predictions have at the end, just to clarify how to conceptually think about it. Yes, exactly. So the whole regulation is based on the concept of intended use. This is not a perfect basis because some systems are off the shelf and you use them as they are. But most of the times in the real world, you'll have a provider working with

a deployer to customize an AI system to a specific use. So if, you know, again, I'm going into the world of public authorities, I'm sure that they don't buy off-the-shelf solutions. Most of the time they work with the provider for customized solutions. So then they know the intended purpose. And that's when the logic of, okay, this is going to be used in a high-risk environment

use case comes into play, activates the obligations for those who build it and those who use it as well. Maybe let's switch gears here discussing, you also talked about general models. I found that very, very interesting. I think you really well defined here that there's a systemic risk with these general models, things like the Claudes, the GPT-4os of the world, etc., because they're inherently very dynamic, they're not mechanistic, they can create output that goes against the use case that they're trying to create, for example.

And you mentioned that there's a kind of co-regulation approach there. But I'm curious as well, if a general model is used on a high risk use case, would that fall under the high risk AI system, for example, or does that go through a different kind of regulatory route as well?

Well, we struggled with that quite a bit because it's a hard question, right? So say, for example, now all of a sudden you decide as a police officer to ask ChatGPT whether or not to arrest somebody or not, and you make decisions based on that. It would be really, really hard to put the fault on the makers of your chatbot because you've decided to misuse it.

So the whole logic is addressed in the value chain, as I was explaining earlier. And we tried to cover every single possible scenario by thinking of substantial modifications, including in the intended use.

So, for example, if one takes a general purpose model and customizes it for a specific use case that is high risk, then the responsibility rests with the person or with the entity that customized that for the high risk use case. Nevertheless, these were kind of hand in hand because what if you are fulfilling all of your obligations on one hand,

But then your original model is significantly biased towards women, for example. You cannot control that in your, you know, fulfilling your responsibilities.

So there are obligations for those. Now, there are basically two providers to also cooperate in making sure that they fulfill the obligations based on whoever can do what in the set of obligations. When we get into discussions about models and especially the general purpose models, there are a set of

obligations that apply to all of them, which are fairly light touch, but that are meant to help this division of responsibilities along the value chain. So, for example, even if they're not a model with systemic risk, general purpose AI models above a certain size,

need to maintain good documentation and to pass it down to those who then further integrate them into their systems. They also need to comply with current copyright provisions, for example, when building their models. And then when you get to the really, really powerful models, the ones with systemic risk, and so this is before anybody takes them and uses them, they need to do a

model evaluations, including adversarial testing. They need to have a system for incident reporting in place and they need to work with the commission to report these serious incidents. And I think cybersecurity makes sense in a way, right? Especially if you're talking about very powerful model deployed in the whole economy. So we really tried hard to disentangle whose responsibility is what.

So depending on where exactly one is on the value chain, all the way to the final police officer who can make a decision one way or another,

There is a series of responsibilities covering, I think, most potential configurations. And, you know, I think we can definitely create an episode just focused on that, the AI value chain. Exactly. How does the EU AI Act think about it? But I want to switch gears, maybe discuss a bit the organizational requirements for the EU AI Act. I think a lot of organizations now, okay, the EU AI Act came into force in August.

We need to think about how to become compliant. We need to audit our own AI systems, our own machine learning systems, and think about our vendors, what we need to do. So from a high level, I know it's 150 page document, but what are the main requirements that organizations need to be aware of today? And maybe how do they begin preparing to meet these obligations?

they would look through it as a, you know, a decision tree almost. Because it does, while it is a stuffy document, it does have a very sound logic into who and what kind of obligations they have. So the first one is to enter into the scope of the regulation by looking at the definition of artificial intelligence that basically defines what is in scope and what is not in scope of the regulation. So that

definition, in a sense, excludes first generation, you know, AI that you might encounter, for example, in video games, which is not really AI. So looking first at that definition and the scope of the regulation and understanding whether or not they are in scope.

That's very broad. You know, you can be no risk, but still in scope because of the definition. Then the second point, the second step is to basically do an inventory of the AI systems because

It's very easy to think, okay, well, one company, one AI system. Usually AI companies have multiple systems interacting with one another, relying on one another, or basically doing different tasks to achieve, you know, one objective. So an inventory of the AI system in use and their intended purpose would be the next step.

Then the third step would be to look at really the discrete list of high risk, because that's where the most obligations are, and to check whether or not any of their AI systems falls into one of those use cases.

And these are, while I say discrete, they are not discrete to the level of explaining every single potential use case, but they're rather discrete categories with discrete use cases, but that could encompass a wide range of potential individual applications. So looking at that list. Then

This is a risk-based regulation, so we tried to narrow down even those categories to make sure that we don't inadvertently burden companies with regulatory obligations if they do not pose a risk to health, safety, and fundamental rights. So there are a number of exemptions, even for those who fall in that category. So, for example, if the AI system is...

used in a preparatory or literally a small task in that particular category, but it doesn't necessarily, it's not the main deciding factor. And there might be an exemption for that.

And then finally, after determining, okay, am I in the high risk category? I'm assuming nobody is, as of now, building privately a mass surveillance AI to discriminate against the entire population of the union. So I, you know, I'm not going to spend much time on the prohibitions.

But, you know, based on where that system falls, whether it's high risk or it's a, you know, limited risk. And that's very, very clear. You know, it's deepfakes, it's chatbots and it's AI generated content. So depending on that, then that's fairly simple to go to the very concrete obligation. So, you know, it's a decision tree getting to exactly where each system falls.

Then, of course, the same kind of logic also applies for those who deploy those systems going through that logic and understanding is this AI system that I'm deploying on the market in the high risk category, then as a deployer, what are my obligations here?

As the market moves, more and more companies, especially in the startup world and so on, are building models that are general purpose. And for those, it's also a decision tree. First one is, is my model above 1 billion parameters? That's like an entry point to approximate what would qualify as a general purpose AI model. If not, then no obligations here probably apply.

Though, of course, there is always some interpretation, especially in the gray areas around that particular number.

Then if so, moving on, is my model potentially a model with systemic risk? And that probably applies to the future models that the top five, 10 companies building AI will develop. And I think it applies to a lot of those. And maybe you could have a small model that also poses systemic risk if it's really good enough from an intelligence perspective.

Indeed, indeed. And I will go back to what I was saying about the complexity of this. We have foreseen that as well. So recall that the AI office interacts with the market to supervise those models who could pose systemic risks. And indeed, the AI office has a certain discretion based on data and on a number of parameters or conditions that we put in the regulation to designate a model as posing a systemic risk.

Because even the different outputs of the models can present a risk for smaller models as well. Or maybe, you know, I'm theorizing here, but smaller models that might not be as good and as polished could pose even bigger systemic risk. But then on the other hand, are they going to be all across the European market?

So going through that logic and understanding where your model fits, you know, first checkpoint is the number of parameters, which kind of hints at whether or not this is a general purpose model. And then at the very end, at the frontier, when we're talking about systemic risk, but that's of concern to only a few.

Maybe there's quite a few different points across the decision tree that you mentioned, right? Let's switch seats here, Dan. You're leading AI or IT or information security at an organization today that does operate in the EU. What would be the first thing that you would look at?

So I'm leading it. I'm building something. I'm building an AI. Let's say you're a large pharmaceutical company or something along those lines, using AI within your operations, right? Like what's the first thing that you would look at? I think it's the same logic as before, but I will take the opportunity to go into, you know, medical devices and other products that are regulated separately by European law, but also intersect with AI. So I would follow the same logic, see if I'm using AI by the definition, see if it's high risk, see if it's not.

See if it's really a key component in those use cases and then I would follow the rules there. Or if I'm deploying it, again, seeing what the obligations are for deployers. I will take the opportunity to talk a little bit about the intersection with other products that have AI embedded in them because we also took that into account.

And there you have products such as medical devices, for example, that have AI in them, for which you have two different distinct universes. One, the AI as a standalone system that is embedded somehow into another product. Then the rules of standalone systems apply everything that we've discussed so far. On the other hand, if the AI comes as part of that product as a safety component,

That is also classified as high risk. So then the rules here also apply, but, and there is a big but because it's trying to not overburden the market with regulation. Most of those products that are already regulated, they're regulated because they could pose threats to health or safety.

And therefore, they have rules of their own to prevent that. They have conformity assessments of their own. Sometimes these conformity assessments are even more stringent than the ones in the AI Act. So we have tried throughout

to make sure that these work hand in hand, they're complementary, not duplicating the type of responsibilities and obligations one has. So let's take medical devices, the AI Act will apply insofar as the obligations herein are not already covered in the current rules that apply to medical devices.

What that means is it could be that most of them are already covered because they were foreseen. What I think the AI Act brings a little bit more outside the product safety regulation is this component of fundamental rights.

So while health and safety has been a priority in terms of European regulation, technology and fundamental rights, until we dealt with AI, was, you know, an emergent question, because before, with a linear algorithm, you know exactly what it's going to output. You can't expect it to discriminate. You can look at the code and see exactly what it does.

So I do expect that even those who are already regulated will have an additional set of obligations derived from the AI Act, and we had foreseen that, but we have tried to simplify the process as much as possible so that it's not duplicating the obligations, but rather making sure that they're comprehensive.

Yeah, I love how there's complementarity between different regulations depending on the area. Because you mentioned, for example, with limited risk systems, something like deepfakes could fit in there. But if you're using deepfakes to defame someone or libel someone, libel law will kick in here. Yeah, totally.

Perfect. So maybe one last question when it comes to requirements, and this is still a bit loosely defined. If you can explain the AI literacy requirements that organizations have as part of the EU AI Act. To give a bit of background here, the EU AI Act emphasizes that organizations need to have AI literacy. I'd love if you can expand on what that means and what does that look like maybe in practical terms?

Definitely. So that provision is, I think that one of the provisions that are broader, you know, goes a little bit outside of the risk-based approach that we've been discussing so far. And I think that it is a soft obligation. It's not a hard obligation. It's a best effort type of obligation for organizations using AI to make sure that the people operating that AI and deploying it

are fairly well prepared to understand what the impact of using that AI system is.

So it is less about understanding, let's say, machine learning, but rather having the literacy to understand that it could go wrong, that it could make biased decisions, that it could discriminate, and that there is a natural tendency to be over-reliant on machine outputs. So that kind of literacy. Of course, it's not really a pyramid.

It's a gradient that goes from zero to fully prohibited, right? But there's a whole continuum of applications. And I would say the closer you get to a gray area where...

Your lawyers guide you through the regulation and say, this does not apply to you. However, you feel that you're very close, the more important this requirement then becomes: to say, okay, well, look, at least prepare for any risks that me deploying this AI will have.

I think in terms of very concrete, it will be something that the commission together with the AI board will provide guidance on in the near future. And there's many parts where the commission still needs to provide guidance to make it a lot more specific.

But look, if I were to put it in one sentence is make sure that you're not posing threats to health, safety, fundamental rights. And how do you do that is by training the people who operate the system to be aware of what could go wrong.

Even if you technically, and by the letter of the law, you don't fall into one of those categories that we have there. Yeah, and you mentioned something that the commission is still providing guidance on a few elements of the EU AI Act to make it a bit more concrete. And I think this segues into my next question, which is what does the future of the EU AI Act look like, right? We've covered the present implications of the EU AI Act.

It's also important to look at what's next. So how do you see the EU AI Act evolving? What are the next steps in terms of regulation? What timelines? I'd love to kind of get from you how you see the regulation evolving.

I'll split this into two. One is the evolution of the regulation, which I'll leave for later. And the second is what's actually in the regulation, that is the entry into application. So the regulation as of now has a very phased and I think logical entry into application to give the market time to prepare and the member states to prepare their national competencies and so on.

But the first term is within the next six months, so starting August 1st, six months after the entry into force, where the prohibitions apply. And that again, it's very simple. You know, you can't do this in the European Union and you should not do it. There is no question about why this had to come first.

Then within a year after, you know, take August 1st as the starting point, within a year, the rules on general purpose models apply and on general purpose models with systemic risk. And also the artificial intelligence office needs to be fully functional because, as I said, you know, it's a dynamic interaction between those who build general purpose models and the supervisory tasks of the office.

Within two years, all of the rules and basically the bulk of the regulation applies. So everything that comes in compliance for high-risk AI systems

apply within two years. And then within three years, recall we were discussing overlaps, you have these safety components and products that are already regulated and disentangling that and adapting the compliance requirements to also fit with the AI Act will take a little bit more time. So those come into effect within three years. So that's really what's in there. Now, in terms of your other part of the question on

I would call it the future-proofness of how will the regulation evolve. There are mechanisms through which the Commission can amend through a simplified procedure, comitology, you don't want to get into that, but through which the Commission is empowered to change certain parts of the text depending on new evolution. So we have empowered the Commission

to modify the list of high-risk use cases based on very specific criteria. So they can just wake up all of a sudden and say, you know, Dan's tomato growing AI is now high risk. They have to follow a risk check and to see whether exactly this new category belongs in that list, but they can add certain categories there.

I think then there's over the next year, the commission needs to issue a lot of very concrete guidance on going from the law to the very specifics in margin cases with what is considered high risk and what isn't to provide clarity on the definition, to provide clarity on a lot of other parts of the text.

Member states also need to set up institutions that apply the AI Act and that enforce the rules. So there is a distinction. Most of the rules will be enforced by member states. So everything dealing with high risk systems will be enforced at the member state level. So member states need to have a national supervisory authority in charge of this.

And then, of course, as we were discussing governing more powerful AI will be done at the European level. So I think that there's still a few moving parts, but I think that there is enough guidance in the regulation itself to understand where this is going. So the difference will be, you know, between two companies, one falling under, one not falling under at the margin, but

But even so, you know, being compliant with many of the requirements here is first good practice. And second, you know, it gives everybody a competitive advantage saying, you know, even globally saying, OK, my product is compliant with this regulation. You know, I appreciate the education here. I feel much more empowered to talk EU law with my EU bubble friends in Brussels. But maybe a couple of last questions while we still have you here, Dan.

I want to understand maybe the process of creating the EU AI Act. This is quite an interesting experience, to say the least. It's been years in the making and you're leading the effort. You're negotiating with member states, with organizations, technology providers, regulatory bodies. Can you share some insights maybe on the challenges and successes you've encountered along the way? What's the biggest lesson you've learned, maybe building one of the biggest pieces of legislation in technology history? So yeah, I'd love to understand here.

Well, what a long answer I have, but maybe I'll spend a minute explaining how this comes into being because I think it is useful that maybe I can share some of the challenges as well. So at the European level, the European Commission, sort of imagine it like a government of the European Union. Don't directly quote me on that because there's a distinction.

But the European Commission is the one that proposes regulation. However, to sign on to the regulation, it is really the Parliament and the Council who are basically the co-legislators. The European Commission proposes a regulation. It goes to Parliament and to Council. Then Parliament and Council, Parliament is member states' representatives elected democratically, directly, and sent to the European Parliament.

and Council is where the Member States are directly represented as Member States. Parliament, separately from Council, negotiates internally the changes it wants to bring to a certain piece of regulation. Council does the same. So now you have three versions: one that was initially proposed, one of the Parliament, one of the Council.

When this process is done, the three institutions meet together and the co-legislators, that is Parliament and Council, try to come to an agreement on the final text. Of course, the Commission acts as an honest broker, of course, a very vested honest broker since they have proposed the regulation in the first place.

Because of the political weight and visibility and the impact, this was a very, very challenging negotiation, I would say, very politically charged.

And very important as well, in Parliament, there were over, I think, 30 members of Parliament working on this negotiation. And in Council, you had a longer time. I'm not going to get into rotating presidency, but you had a longer time that the Council position also changed a few times before it became final. Then the whole negotiation process was...

I would say, complicated, but in a good way by the advent of generative AI, which was not foreseen in the original proposal and the need to come up with some, as I was saying, flexible rules

for those as well. And Parliament here, you know, very proud that we took the lead and we actually crafted rules that make sense and they're future-proof. We put them into the mix and negotiated those as well. So that's the process. I think in terms of the challenges, one, it was the politics dealing with very sensitive topics like remote biometric identification in public spaces where you have very strong political ideologies on the left and on the right

on how exactly that technology should or should not be used and deployed. On the definition where, you know, it really determines the scope of the regulation and also the compatibility of the regulation internationally. So we've done a lot of work

with organizations like the OECD, who had already a definition of AI, to align both our definition but also theirs. So in the end, you know, we ended up very close in terms of the definition. Also working with our partners, with the US, for example, where they have a fairly similar definition that they're using at NIST, the National Institute of Standards and Technology.

So that was a big part of defining politically what is it that we're talking about. Then the whole discussion on copyright, because copyright is treated in a different part of EU law. But on the other hand, there was something that we needed to do in this text as well. And we landed on a compromise solution, which I think is fair to add some transparency requirements that is,

transparency is an obligation. However, it's not something that alters your costs, but it alters your incentives in breaking copyright law or not. Because if you're more transparent, then you can also be challenged on the way you've used copyrighted material. So I think

Negotiating these, you know, from different viewpoints. One in the parliament where you have very ideological viewpoints. In the council where you have member states really thinking about, you know, the competitiveness of their own member state in a sense. And the commission trying to keep up and to sort of also accept an update of their initial proposal was quite a challenge. But it was fun. It was a very good experience.

Very good experience, very challenging, but I think we eventually ended up with a very good piece of legislation. You can see that from the fact that it's started to be copied all over the world now, and I'm sure it's going to have a tremendous impact around the world as more and more states with different regulatory traditions will look into, okay, what are the rules for AI? And we have a fairly good set of starting rules that I think make sense. There is only so many ways in which you can prevent risks from AI. And there's only so many ways in which you can categorize AI. Is this risky or not? I mean, eventually, you know, you don't want to over-regulate and regulate my tomato AI. But you don't want to leave outside, you know, those...

It will have, and it does already have a big impact around the globe. I think that, like, the juice was definitely worth the squeeze in this context. Exactly. Yeah, yeah. Dan Nechita, it was great to have you on DataFramed. Really appreciate you joining us. Thank you.