Introduction
Businesses are increasingly moving towards Artificial Intelligence (“AI”) enabled service and product delivery. This shift underscores the importance of regulating the field of AI development to protect the public interest while promoting innovation within the space.
Governments across the world are taking, and in some cases have already taken, a proactive role in regulating the use and deployment of AI. In May 2023, India issued a joint statement with the European Union (“EU”) stating that India and the EU would coordinate within the Global Partnership on Artificial Intelligence and explore bilateral cooperation on trustworthy and responsible AI, including in research and innovation.
The European Parliament has been deliberating the AI Act since it was proposed in 2021. After considerable discussion and amendment, on June 14, 2023, the European Parliament adopted the European Union Artificial Intelligence Act (“AI Act”). This means that, once it receives approval from the European Council, the AI Act will become the first comprehensive regulation of Artificial Intelligence in the world.
Scope
The AI Act has a wide scope of application covering anyone who is a part of the AI ecosystem. Article 2 of the AI Act outlines the specific entities it regulates:
(a) Providers: a natural or legal person, public authority, agency or other body that develops an AI system, or has one developed, with a view to placing it on the EU market. The AI Act covers any provider, whether within the EU or from a third country, that places or puts into service any AI system in the EU.
(b) Users: of AI systems located within the Union.
However, the AI Act does not extend to: AI systems developed or used exclusively for military purposes; public authorities situated in third countries; or international organisations using AI systems under international agreements for law enforcement and judicial cooperation.
Risk Based Approach
The AI Act categorises AI systems based on their risk level:
- Unacceptable risk under Title II,
- High risk under Title III, and
- Low/minimal risk under Title IV.
The AI Act prohibits certain AI practices under Article 5 that run contrary to the values of the EU, such as practices that violate fundamental rights. The specifically prohibited practices include subliminal techniques used by AI systems to manipulate and materially distort a person’s behaviour, and the exploitation of the vulnerabilities of any group or person due to their physical or mental disability. Article 5 also prohibits the use of social scoring AI systems by public authorities and, subject to certain exceptions, the use of real-time remote biometric identification AI systems by law enforcement.
High-risk AI systems include systems intended to be used as a safety component of products that are subject to third-party ex-ante conformity assessment, and AI systems adversely affecting the fundamental rights specifically set out in Annex III. Because the adverse impact of AI systems on fundamental rights is evolving, the European Union has built in flexibility by allowing Annex III to be amended, subject to the specific conditions outlined in Article 7.
Compliances For High Risk AI
Articles 51 and 60 of the AI Act state that providers of high-risk AI systems are required to register their systems in an EU-wide database managed by the European Commission before placing them on the market or putting them into service.
When placing such AI systems on the market, the AI Act mandates that every provider of a high-risk AI system:
- maintain a risk management system that runs throughout the life cycle of the high-risk AI system;
- adhere to data-related best practices as per the quality standards mandated by the AI Act;
- store and record technical documentation;
- ensure transparency is built into the system;
- enable appropriate human oversight by introducing human-machine interface tools within the high-risk AI system; and
- ensure that the system maintains accuracy in light of its intended purpose.
The providers of AI systems are primarily responsible for carrying out the aforesaid obligations. The AI Act also requires providers to maintain a quality management system, carry out a conformity assessment before placing AI systems on the EU market, and take corrective measures where necessary. Additionally, Article 22 of the AI Act puts the onus on providers to inform the national authorities of the member state where the AI system is placed of any risk related to their system.
Limited Risk AIs
Article 52 of the AI Act lays down the criteria for certain AI systems that face lighter regulatory requirements because they expose humans to only limited risk. The provider of such an AI system must satisfy certain transparency conditions. These limited-risk AI systems, along with their attendant transparency conditions, are:
- AI systems that interact with natural persons have to be designed and developed in such a way that natural persons are informed or it is obvious that they are interacting with an AI system;
- Emotion recognition systems or biometric categorization systems must inform the natural persons exposed to them of the operation of the system;
- AI systems that generate or manipulate image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events, and would falsely appear to a person to be authentic or truthful (‘deep fakes’), have to disclose that the content has been artificially generated or manipulated.
The conditions and categorization outlined within the AI Act encompass prominent generative AI systems such as ChatGPT and photo editing applications like FaceApp, which involve direct human interaction. To align with the regulatory requirements set forth by the AI Act, these AI systems must adhere to the aforementioned transparency obligations.
Penalty
The AI Act imposes separate penalties for different contraventions, ranging from EUR 10,000,000 (or, for companies, 2% of global turnover) to EUR 30,000,000 (or, for companies, 6% of global turnover). Penalties for companies can therefore be very severe where there is any contravention of the requirements laid out in the Act.
Indian Perspective
The AI Act is bound to affect any Indian AI system developer who plans to place their AI systems in any EU member state. As discussed earlier, Article 2 expands the scope of the AI Act to any provider entering the EU market: any provider, including a service provider from India, that develops an AI system with a view to placing it on the EU market will need to comply with the Act. Additionally, under Article 25, any such Indian company developing a high-risk AI system would have to appoint a representative established in the EU in the absence of an importer.
This representative can be a person or legal entity responsible for carrying out the obligations surrounding the high-risk AI system. Additionally, Indian providers of AI systems that fall within the definition of limited-risk AI under Article 52 would have to comply with the transparency obligations stipulated for such systems.
AI regulation in India is in its nascent stages, and there are no specific Indian laws regulating AI. However, India demonstrated its commitment towards regulating AI when NITI Aayog published its discussion paper titled ‘Responsible AI for All’, which laid down certain guidelines and discussed problems that may arise while developing AI in India. It remains to be seen how policymaking vis-à-vis AI will evolve in India. India can certainly take inspiration from the EU to formulate its own code on AI, as it did with the EU antitrust laws. Industry bodies have also tried to create norms for AI usage; for example, NASSCOM came out with guidelines to build a common consensus amongst stakeholders for the use and development of generative AI.
Recently, the Telecom Regulatory Authority of India released recommendations to regulate AI in India. These include general recommendations that transcend the telecom sector, such as establishing a national authority to regulate AI. The recommendations adopt a regulatory framework similar to the AI Act, including risk categorization and the allocation of requirements according to risk. The AI Act should have a positive effect on the regulation of AI in India, as it will provide a relevant direction and enable Indian companies to prepare for possible future compliance obligations.
It is to be noted that Indian developers have not built their AI systems with the AI Act’s requirements in mind. Therefore, if an Indian provider’s high-risk or limited-risk AI system is placed on, or is already in, the EU market, that provider would have to update its systems to comply with the AI Act or face entry barriers or penalties. Under Article 39 of the Act, partner organisations established under the laws of third countries may emerge; these organisations would be responsible for certifying that high-risk AI systems have undergone the conformity assessment required by the AI Act before being approved for deployment within the European Union. Considering that India is strengthening ties with the EU on AI development, we may see such a partner authority under the AI Act, which could ease the process of third-party conformity assessment for Indian AI developers.
Since the AI Act is the first initiative towards a comprehensive AI framework, its enactment is likely to trigger a ripple effect of new legislation cropping up in different jurisdictions to regulate AI. When the EU enacted the GDPR, countries took inspiration from it to create their own data protection regulations. It therefore becomes pertinent for AI companies to devise strategies for developing AI systems in accordance with the AI Act, so as to avoid penalties, entry barriers, and significant modifications to their AI systems in the future.