In April 2021, the European Commission proposed the first EU regulatory framework for AI, intended to ensure the safe, secure and ethical development of AI systems. It stated that systems that can be used in different applications and processes would be analyzed and classified according to the risk they pose to users. The act became law on 1 August 2024, with four risk levels: unacceptable, high, limited and minimal risk. Organizations are now required to disclose what data their AI is trained on, ensure that it is safe to use and, if their systems are deemed high risk, go through risk assessments.
The law has global reach across many industries, and its impact has been likened to that of GDPR, which came into force in May 2018. There are similarities, including fines for those who do not adhere to the rules. The key difference, however, is that the EU AI Act is product safety legislation, whereas GDPR focuses on the rights of individuals.
The ambiguity of the new law
A widely held criticism of this new legislation is its complexity and lack of clarity. Critics argue that the definition of 'AI systems' is particularly vague, leading to uncertainty about the act's scope and application. This lack of clarity could impede effective implementation and compliance across industries; indeed, most articles written about the law tend to quote it directly to avoid misrepresenting it. It is clear, however, what direction the regulation is taking us in and why we all need to adhere to its guidelines. As with GDPR, the EU is prioritizing fundamental rights.
The four risk categories
The four risk categories are as follows:
UNACCEPTABLE RISK: A very limited set of particularly harmful uses of AI that contravene EU values because they violate fundamental rights, and which will therefore be banned. Examples include exploiting the vulnerabilities of persons and manipulation through subliminal techniques.
HIGH RISK: A number of AI systems that could potentially have an adverse impact on people's health, safety or fundamental rights. This is a broader category and includes AI deployed in medical devices.
LIMITED RISK: This category includes deepfakes and chatbots. Compliance obligations here focus on transparency: users need to be informed that they are dealing with an AI system unless it is obvious from the context.
MINIMAL RISK: AI systems not falling into the three categories above will not be subject to compliance obligations.
The impact on pharmacovigilance
For the PV industry, the act's impact will be far-reaching. As mentioned, medical devices are classified as high risk, so they will need to comply with new requirements, including a conformity assessment to demonstrate that the AI system meets the specified requirements, among them a risk management system and continuous tracking measures in place across the whole lifecycle.
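As one illustration of what "continuous tracking measures across the whole lifecycle" could mean in practice, the sketch below wraps a prediction function so that every inference is recorded in an audit trail. This is a minimal, hypothetical example, not the act's prescribed mechanism: the model, the case format and the in-memory log are all assumptions, and a real system would write to durable, tamper-evident storage.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditedModel:
    """Wraps a prediction function and records every call to an audit log,
    illustrating the kind of lifecycle tracking the act envisages."""

    def __init__(self, predict_fn, model_version):
        self.predict_fn = predict_fn
        self.model_version = model_version
        self.audit_log = []  # in practice this would be durable, tamper-evident storage

    def predict(self, case):
        result = self.predict_fn(case)
        # Hash the input so the record is traceable without storing raw case data.
        case_hash = hashlib.sha256(
            json.dumps(case, sort_keys=True).encode()
        ).hexdigest()
        self.audit_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": self.model_version,
            "input_sha256": case_hash,
            "output": result,
        })
        return result

# Hypothetical triage model: flags a case as serious if it mentions hospitalization.
model = AuditedModel(lambda case: "hospitalization" in case["narrative"], "v1.0")
model.predict({"narrative": "patient reported hospitalization after dose"})
model.predict({"narrative": "mild headache, resolved"})
print(len(model.audit_log))  # one audit record per call
```

Keeping a hash of the input rather than the raw narrative is one way such a log could avoid duplicating sensitive data while still allowing records to be matched back to source cases.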
An increasing number of AI systems and models are used across the whole medicines lifecycle, and most are unlikely to be deemed medical devices. For these, the act requires only transparency and training so that operators can use the systems and interpret their results correctly.
What we do know is that the act will affect data governance and data representativeness: companies will need to apply automated controls to ensure that only the right people gain access to data, preventing misuse or risk exposure. Such systems are unlikely to be in place today, so companies face considerable work to make sure their data is handled correctly. The act will also mean that most sizeable companies will need to employ an expert in this legislation, whether a lawyer or an AI specialist.
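To make the idea of automated access controls concrete, here is a minimal role-based check in Python. The roles, actions and permission map are hypothetical; production systems would integrate with an identity provider rather than hard-code a table, but the deny-by-default principle shown here is the essential point.

```python
# Hypothetical role-to-permission map for a PV data platform.
PERMISSIONS = {
    "pv_scientist": {"read_cases", "annotate_cases"},
    "data_engineer": {"read_cases", "export_anonymized"},
    "auditor": {"read_audit_log"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True only if the role explicitly grants the action (deny by default)."""
    return action in PERMISSIONS.get(role, set())

print(is_allowed("pv_scientist", "read_cases"))       # True
print(is_allowed("data_engineer", "read_audit_log"))  # False
```

Denying by default means an unknown role, or an action nobody anticipated, is refused rather than silently permitted, which is the behavior a regulator would expect to see evidenced.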
The positives
The law has been brought into effect to protect and prioritize fundamental rights at a time when AI technology, and its use, is developing at an exponential rate. For the PV industry, there will be much preparation and conformity work to schedule. There is a long timeline ahead before the regulations are fully in force, and it could be another year or more before we know the full impact on PV processes. But anyone who uses AI in their systems has to be prepared for the act taking effect in the EU, and in other countries across the globe that are likely to follow suit. The end result, however, has to be seen as very positive, as it will lead to higher-quality data and, in turn, improved patient safety.