TOWARDS A LEGAL FRAMEWORK FOR ARTIFICIAL INTELLIGENCE
- The legal framework for AI is gradually taking shape at the international level. While all eyes were on the European Union, where the draft regulation on AI is in the process of being adopted, other initiatives have recently taken centre stage, whether in the USA, China, or within the G7 and OECD. The next edition of the AI Safety Summit, which has just ended in the UK, will be held in France in 2024.
- But what does AI have to say about that? When asked about the need for a legal framework for AI, OpenAI’s ChatGPT told us that this is “a complex topic” but that “many experts and policymakers recognize the need for some form of AI regulation to ensure its ethical and responsible use”.
- AI is everywhere and impacts every sector. What are the consequences for the legal and medical worlds? What legislative measures are already in place or planned? Are there any bodies specifically in charge of monitoring AI? What are the risks of generative AI in terms of intellectual property and data protection law? Have the courts already had to rule on this? How do you carry out an AI impact assessment? Lexing network members answer all these questions.
The Lexing® network members provide a snapshot of the current state of play worldwide. The following countries have contributed to this issue: China, Estonia, Greece, Hong Kong, Hungary, India, Spain, United Kingdom, United States.
FREDERIC FORSTER
VP of Lexing® network and Head of the Industries & IT, Telecoms and Banking Services division of Lexing Alain Bensoussan-Avocat
- The “Interim Measures for the Administration of Generative AI Services” (“GAI Measures”) took effect on August 15, 2023. This latest administrative regulation on AI was released further to the “Provisions on the Administration of Deep Synthesis of Internet-based Information Services” (“DSS Provisions”) effective as of January 10, 2023 and the “Provisions on the Administration of Algorithm-generated Recommendations for Internet Information Services” (“AGR Provisions”) effective as of March 1, 2022.
- The term “generative AI technology” is defined in the GAI Measures as “models and related technologies with the ability to generate text, images, audio, video and other content”. “Code” is not covered by this definition, an exclusion that appears consistent with article 2 of the GAI Measures, which excludes research and development on generative AI technologies from its scope of application. The same regulation also provides that the application of generative AI in the press and publishing, film and television production, and literary and artistic creation areas shall be governed by other specific regulations released by the State.
- The authorities may take the necessary technical measures where generative AI services that are available in China but provided from overseas jurisdictions fail to comply with applicable Chinese laws and regulations.
- The provider of services based on generative AI must take appropriate measures where it becomes aware of illegal content or illegal activities committed by users of the services. In particular, the provider must promptly stop generating and transmitting the illegal content, eliminate it, and report the incident to the competent authorities. Where users are found to be engaging in illegal activities through generative AI, the provider concerned, once alerted, must take measures such as warning the user concerned, restricting functions, suspending or terminating the service, keeping relevant records and reporting the matter to the competent authorities.
- Regarding the processing of training data, the GAI Measures emphasize that data and underlying models must come from legitimate sources, that the processing must not infringe the intellectual property rights of others, and that, where personal information is involved, the consent of the individual concerned must be obtained where applicable.
The service provider must label generated images and videos in accordance with the DSS Provisions. Where services based on generative AI have “public opinion attributes” or “social mobilization capabilities”, a security assessment must be conducted and a filing made, as required under the AGR Provisions.
Jun Yang
- The Council of Ministers has adopted a Royal Decree approving the statute of the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), the result of the joint work of the Ministry of Finance and Public Function and the Ministry of Economic Affairs and Digital Transformation.
- The global advance of technology is unquestionable. In the specific case of Spain, digital transformation is a priority in the Government’s line of action, as reflected in the 2026 Digital Agenda. This Strategy includes different strategic plans, including the National Strategy for Artificial Intelligence (ENIA), which aims to provide a reference framework for the development of an “inclusive, sustainable and citizen-centric” Artificial Intelligence.
- The AESIA is attached to the Ministry of Economic Affairs and Digital Transformation through the Secretary of State for Digitalization and Artificial Intelligence and is based in La Coruña.
- Spain thus becomes the first European country to have a body of this kind, anticipating the entry into force of the European Regulation on Artificial Intelligence.
- The establishment of the Agency is based on the obligation, laid down in the Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules in the field of Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, for Member States to designate a “national supervisory authority” to oversee the implementation and enforcement of that Regulation, coordinate the activities entrusted to the Member State, act as the single point of contact for the Commission, and represent the Member State concerned before the European Committee on Artificial Intelligence.
- The above-mentioned proposal for an Artificial Intelligence Regulation contains a number of obligations to be assumed by the designated national supervisory authority. To this end, the creation of a State Agency is proposed, anticipating and preparing for the assumption of the obligations and responsibilities imposed by the Regulation. For this reason, this royal decree has been processed through urgent administrative channels by Agreement of the Council of Ministers dated June 13, 2023.
In addition, the Agency will be responsible for all matters and competences that fall to Spain, as a Member State of the European Union (EU), in the field of Artificial Intelligence, especially those related to supervision, in order to comply with the obligations established in European and national regulations.
Marc Gallardo
- Since the topic of artificial intelligence (AI) skyrocketed in early 2023 and took hold among the general public, the average programmer has probably been bombarded with articles and social media posts about using AI as a tool to write code. Supposedly, using AI to generate code significantly expedites software development processes and enhances efficiency.
- However, certain caveats come with the advent of tools that can generate code. AI can autonomously produce code based on vast datasets and preexisting codebases, blurring the lines between human and machine creativity. This raises a fundamental question: who can be considered the creator of AI-generated code and, thus, the holder of copyright?
- Software developers have long relied on copyright protection to safeguard their creations. The source code of a program is protected under the Copyright Act (CA) of Estonia as a work of authorship, akin to literary works. Copyright grants exclusive rights to the creators, including the right to reproduce, distribute, and display the work, as well as the right to create derivative works. Copyright enables the author of a computer program to prevent unauthorized use, replication, or distribution of its source code by other persons.
Copyright of AI-generated code
- If a person writes a line of code, they channel their creative and intellectual freedom to form coherent and usable source code. Under existing copyright laws in most jurisdictions, copyright protection extends to human authors, not to machines or AI systems. This rests on the premise that only humans can be creative, while AI operates by algorithm and cannot exercise creativity. As a result, AI-generated code does not qualify for copyright protection in the way human-created code does. The lack of human involvement in the creative process makes attributing authorship to AI-generated code challenging.
Authorship and using AI tools to generate code
- This is especially relevant with tools like OpenAI’s GPT and GitHub’s Copilot, where the user enters a prompt and receives a piece of source code. The user only enters a general description or idea of the expected result as the prompt. Ideas, unlike works, are abstract and not copyrightable under the CA, so the prompt the user enters is generally regarded as not copyrightable. Consequently, the resulting source code is not copyrightable either, as the user does not have control over it – the user only expresses their idea, while the result is governed by how the AI model is trained. Neither the user nor OpenAI or GitHub has direct creative control over the code that the AI generates, so the code has no author and is not copyrightable.
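To make the idea-versus-expression distinction concrete, here is a minimal hypothetical sketch contrasting a user’s prompt, which conveys only an abstract idea, with the kind of concrete source code an assistant such as Copilot or GPT might return. The prompt wording, function name and implementation below are invented for illustration and are not taken from any real tool’s output.

```python
# Hypothetical illustration: the prompt expresses only an idea,
# while the generated function is a concrete expression of it.

# The user's entire creative contribution (an abstract idea, not a work):
prompt = "Write a function that checks whether a credit card number is valid."

# A plausible response a code-generating AI might produce. The structure,
# naming and wording are chosen by the model, not by the user (this example
# implements the well-known Luhn checksum).
def is_valid_card_number(number: str) -> bool:
    digits = [int(d) for d in number if d.isdigit()]
    if len(digits) < 13:
        return False
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

print(is_valid_card_number("4539 1488 0343 6467"))  # prints: True
```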
Including AI-generated code in existing projects
- If the AI-generated code is integrated into a larger codebase where the creator of the project exercises creative freedom, then the source code is protected under the CA as a whole. Users of AI code generators have to tread carefully, though, because using the code snippets does not entirely exclude them from the possibility of committing copyright infringement. In some cases, the code given by the AI generator is already protected by copyright.
Can AI-generated code match already existing code?
- Copyright gives the author an exclusive right over their creation, and they can prevent other people from using the work. A person who unlawfully uses works protected by copyright infringes the author’s rights. As AI models are trained on vast datasets, including copyrighted materials, there is a risk that AI-generated code may reproduce or resemble copyrighted code without explicit authorisation. This opens up the possibility of copyright infringement claims by the original creators against users of AI-generated code.
Using AI-generated code and licensing issues
- In some cases, source code authors have uploaded their code to GitHub and attached a GPLv3 licence to it. Some AI models are trained on GitHub code repositories, and the AI generator may output code identical to that existing code. Although the code is open source and can be copied, the GPLv3 licence states that the user must also publish their program entirely under the GPLv3 licence. The user therefore risks receiving a copyright claim if they do not attach the GPLv3 licence to the program.
How to mitigate the risk of infringing copyrights
- To mitigate potential copyright issues, developers must be vigilant about the data used to train AI models. They should ensure that the datasets are carefully curated, avoiding copyrighted materials without proper permissions or licences. Unfortunately, AI generator service providers have not implemented robust mechanisms to identify and exclude copyrighted content during code generation. Some tools, like Copilot, offer a setting that checks the output against code found in public repositories; when it is enabled, Copilot does not offer code that matches code from an existing repository. Users should switch on this setting to minimise the risk of infringement.
Privacy concerns about using AI-generated code
- However, even with diligent efforts, transmitting personal or confidential data to AI generator service providers may introduce privacy and security concerns. When developers utilise third-party AI services, they often share substantial amounts of sensitive information, such as proprietary algorithms or user data. This practice poses the risk of data breaches or unauthorised access, potentially compromising the security of the entire software development process.
- To address these concerns, developers should carefully evaluate the AI service providers they engage with, ensuring that adequate data protection measures are in place. Contracts and agreements with AI service providers should clearly outline data usage and security protocols to safeguard confidential information.
Conclusion
The rise of AI-generated code presents intriguing copyright implications for software developers. As AI increasingly becomes an integral part of the development process, clarifying the copyright status of AI-generated code becomes vital. While AI-generated code lacks copyright protection, developers must navigate the legal landscape carefully and proactively to avoid potential infringement issues. Moreover, the responsible use and transmission of personal or confidential data to AI service providers must be managed with the utmost care to protect both the developers’ interests and the security of the data itself.
Toomas Seppel
- Because it excelled at pattern recognition in big data, IBM’s Watson entered the medical arena in 2011 by digesting existing databases of electronic health records (EHR) and troves of medical literature. Despite receiving high grades for marketing claims extolling the virtues of AI in health care, by 2017 Watson had been expelled from elite U.S. medical centers such as M.D. Anderson Cancer Center and Sloan Kettering Medical Center. Reasons for this failure included the incompatibility of the Watson supercomputer with many existing hospital EHR systems; the fragmented nature of records; the lack of sufficient open-source data; the inability to constantly update patient data in static EHR systems; and the failure of EHR systems to fully integrate all patient EHR data with smartphone app data, genetic test results and other diagnostic images.
- The pandemic took its toll on global health care systems and forced 93% of health care organizations to respond by accelerating their digitalization plans. Coupled with advances in technology, some of these earlier technical AI problems have been addressed, allowing AI systems to have the potential to be universally compatible. Fusion technology is now capable of continuously updating and collating such disparate data. EHR are regulated in the United States primarily by the Health Insurance Portability and Accountability Act (“HIPAA”) as well as the Health Information Technology for Economic and Clinical Health Act (“HITECH Act”). However, neither of these two pieces of legislation encouraged, much less required, that EHR be standardized, making it difficult for an AI system to process big health data across multiple providers. All of that is hopefully about to change. The 21st Century Cures Act (“Cures Act”) requires interoperability, allowing for a standardized exchange of health data, by the end of 2022. Health care providers that do not comply with this statute may become ineligible to participate in Medicare (the U.S. government’s health insurance program for people aged 65 and over), and the loss of this revenue would be devastating to most providers and hospitals. While the deadline for EHR standardization has passed, no aggressive enforcement has yet been initiated.
- While enforcement of EHR standardization may be lacking, AI research and development is robustly taking place across the board, from stroke prevention to early cancer detection. Nary a month goes by without groundbreaking strides being reported in the advancement of medical AI. MIT researchers and physicians at Massachusetts General Hospital have developed an AI model that can reliably predict an individual’s future lung cancer risk. Multiple algorithms now predict the risk of stroke. With the help of voice recognition software, AI is diagnosing medical conditions including Parkinson’s Disease and strokes from a patient’s speech pattern. With enhanced targeting capabilities, AI is revolutionizing clinical trials, improving MRI and CT scans, creating new mRNA vaccines and using genetic sequencing to develop innovative therapies for common as well as rare disorders.
- AI can assist pharmaceutical companies in getting medicines to market faster. In addition to gene-sequencing work, AI is being trained to predict drug efficacy and side effects, and to manage the vast amounts of documents and data that support any pharmaceutical product. Additionally, AI can assist in creating new therapeutic antibodies via computer simulation (“in silico”), which can potentially reduce the time it takes to get new drug candidates into the clinic by more than half, while also increasing their probability of success in the clinic. Creating antibodies in silico with generative AI represents a major industry breakthrough on the path to fully biosynthesized antibody designs and the goal of delivering breakthrough therapeutics for untold numbers of patients, all at the click of a button.
- One challenge now is moving these amazing medical AI accomplishments forward from the realm of research to mainstream medicine so that patients can routinely benefit from this technology. While the technology is quickly advancing, implementation at the individual patient level will not be possible without help from regulators.
- We are still in the early stages of assessing risks associated with medical AI. AI’s unique attributes promise groundbreaking advances in health care, but also present considerable challenges for regulators. There is concern that AI in health care will undermine patient-doctor relationships, exacerbate existing societal, financial, and racial biases and undermine individual privacy rights.
- Additionally, despite the progress in natural language mimicry that provides an alluring assumption that AI “understands” issues given its conversational responses, AI is not sentient and does not comprehend the import of words. As a result, clinical judgment is not well represented in AI. For this reason, rather than replacing doctors, AI is better used as a second-opinion tool that doctors can implement at their discretion, and as a way to help health professionals by fully automating repetitive tasks. Also, because humans often cannot visualize the ongoing deep AI learning process to fully understand how the algorithm arrives at its conclusions, it can be difficult for humans to detect errors within the AI decision-making process.
- While the EU AI Act creates new rules and standards for all algorithms across various market sectors based on a perceived risk level, the United States is taking a different approach. In large part because of Congressional gridlock that makes it nearly impossible to enact any broad, general policies, U.S. President Joseph Biden issued a White House directive called the Blueprint for an AI Bill of Rights (“Bill of Rights”). One of many potential problems with this U.S. AI Bill of Rights is that it is not binding federal legislation. The Bill of Rights provides only voluntary, nonbinding guidelines in five key areas: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback. The goal of the Bill of Rights is to create guidelines which the various U.S. federal government agencies should follow when promulgating regulations in their respective areas. In large part, these guidelines leave it to the many federal agencies to choose how to promulgate regulations consistent with them. This may have the unintended consequence of creating overlapping, inconsistent regulations across government agencies. Additionally, if there is a change in administration after the 2024 presidential election, these guidelines, and the regulatory agencies’ rules, may be scrapped entirely or dramatically changed.
- US medical devices require much more than what the U.S. AI Bill of Rights has to offer. From a regulatory perspective, it is difficult for traditional medical device laws to adequately address potential problems with such AI-infused medical devices. Why? Because AI algorithms are capable of “learning” from experience and improving performance over time, the medical device itself adapts and changes, potentially requiring more oversight long after the device is initially approved. While the FDA traditionally reviews medical devices through an “appropriate premarket pathway,” the agency recognizes that this regulatory framework is not designed for adaptive technologies such as AI. Regulators noted that these technologies may require premarket review under the existing agency approach to software modifications. Acknowledging the need for greater oversight of AI-driven medical devices, in September 2022 the FDA released new guidance providing that some AI tools should be regulated as medical devices as part of the agency’s oversight of clinical decision support (CDS) software. AI tools that will now be regulated as medical devices include devices to predict sepsis, identify patient deterioration, forecast heart failure hospitalizations, and flag patients who may be addicted to opioids, among others.
- At the risk of being overly optimistic, there is hope that groundbreaking changes are afoot in the U.S. regulation of all AI. In September 2023, US titans of high tech met with the majority of US senators to discuss needed US AI regulation. While no decisions were made, key topics included setting up a new independent AI oversight body and establishing a licensing protocol for AI development. Liability rules that would allow individuals to sue companies for damages for harm caused by AI errors were also discussed. While difficult to predict because of the divisiveness in American politics, a draft AI bill may be introduced in early 2024.
Janice F. Mulligan
- This is a high-level analysis of the legal implications of the use of generative Artificial Intelligence applications. We aim to identify the basic elements of the generative AI applications currently in use, and potential legal issues and controversies.
- Generative AI applications have a wide range of uses across various industries and fields. These applications leverage AI models, such as Generative Adversarial Networks (GANs) and Recurrent Neural Networks (RNNs), to create content that is often indistinguishable from human-generated content. Generative AI is used, amongst other things, for text generation, such as content creation or coding; for image generation and manipulation, including creating art, deepfakes or image-to-image translation; for audio generation, such as music composition and voice synthesis; for video generation; for game design; and even for medical and scientific research, or for the simulation and training of autonomous vehicles, robots and AI systems.
- Diverse legal frameworks come into play to regulate such generative AI applications, depending on the industry and field of their deployment. One core legal implication derives from the necessity to process large datasets, often including personal information, in order to generate content. The processing of such data may infringe privacy and data protection legislation. Further, determining the originality and ownership of AI-generated content is not straightforward and may challenge long-standing views from an intellectual property law perspective, resulting in legal disputes.
- The use of generative AI to generate fake news, deepfakes and defamatory material is already leading to legal actions for libel or slander, whilst posing real threats to democratic societies. This is the case not only when AI apps are deployed with malicious intent, but also when biases are inextricably embedded in training datasets, leading to discriminatory or unethical content as a by-product, potentially infringing anti-discrimination and equal opportunity laws.
Finally, the above-described AI applications may be subject to novel legislative actions, such as the much-awaited AI Act and the AI Liability Directive. When in place, these legislative initiatives will impose new compliance requirements and potential legal consequences for non-compliance, and will also create a new legal framework to adequately assign responsibility to AI developers and users.
George A. Ballas
&
Nikolaos Papadopoulos
- As AI becomes broadly integrated into enterprise applications and adopted by businesses, impact assessments before the adoption of AI-driven technology will be essential – whether as a requirement of law or as prudent risk management and awareness. In this article, we highlight key principles that apply in respect of AI impact assessments.
- Why? The key reasons for conducting an AI impact assessment are risk assessment and design planning.
- An AI impact assessment provides a framework for organisations to identify the potential risk and impact of adopting AI technologies. The objective is not risk elimination, as this may be impractical. Rather, the objective is investigation and consideration of the risks and benefits of the intended technology, measured against the corporate and business objectives in introducing the technology. The outcome will frequently be the adoption of the intended (or similar) technology, but with adaptation, customisation or understanding of the associated risks.
- There is no one-size-fits-all policy for AI implementation. An AI impact assessment will inform specific requirements for the implementation and rollout of the technology, and the specific features of the policy and guidelines for use and operation of the technology.
- When? An AI impact assessment should be undertaken as soon as a credible business proposal is advanced to introduce AI technologies as a core or material feature of:
- general business operations;
- a new business operation;
- a new office or jurisdiction;
- and, thereafter, as part of systematic reviews after adoption.
- The first impact assessment must be conducted before the final decision to introduce the technology is taken, and before policies and guidelines are adopted. The purpose of the assessment is to inform the decision and policy. Much of the substance and benefit of the assessment is lost if the assessment is conducted as a fait accompli after the event. Businesses should conduct impact assessment reviews at different stages of using the AI application.
- Key elements. An AI impact assessment is an investigative framework to gather information, identify and quantify benefits and risks, and deliver recommendations to minimise risks while maintaining benefits.
- Investigation: The investigative phase will involve completing a questionnaire or inquiry form that seeks to gather documents and information that will form the factual basis of the assessment. The areas of enquiry will require responses from legal, compliance, technical and social responsibility teams within the business.
- Risk-benefit analysis: This stage of the process has two elements. First, the risks and benefits of the proposed AI technology are identified and described as objectively as possible. Risks, in particular, should not be limited to purely internal business matters. Then, risks and benefits should be given a weighting and ranking to provide a frame of reference for materiality and seriousness. There is inherently a subjective element to this assessment. However, it provides a benchmark for making recommendations; a minimal illustrative scoring sketch follows this list.
- Recommendations: The report will conclude with recommendations. The recommendations should be practical. For instance, the recommendation may be that issues can be eliminated or mitigated by technical design or customisation. Nonetheless, if an issue has core materiality and poses a serious to severe risk that cannot be mitigated or eliminated, then the recommendation must be to reject adoption of the AI technology.
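To illustrate the weighting-and-ranking step described in the risk-benefit item above, the sketch below scores a handful of hypothetical risks and benefits on invented scales. The categories, weights, scores and decision rule are assumptions made purely for illustration; they are not prescribed by any law or by this article.

```python
# Minimal, purely illustrative sketch of weighting and ranking in an AI
# impact assessment. Weights reflect assumed importance to the business;
# severity/value scores run from 1 (low) to 5 (high). All figures are invented.

risks = [
    {"item": "Personal data exposure in prompts", "weight": 0.40, "severity": 4},
    {"item": "Biased or inaccurate outputs",      "weight": 0.35, "severity": 3},
    {"item": "Third-party IP infringement",       "weight": 0.25, "severity": 3},
]
benefits = [
    {"item": "Faster document review", "weight": 0.6, "value": 4},
    {"item": "Lower operating costs",  "weight": 0.4, "value": 3},
]

risk_score = sum(r["weight"] * r["severity"] for r in risks)
benefit_score = sum(b["weight"] * b["value"] for b in benefits)

# Rank risks so the report addresses the most material ones first.
for r in sorted(risks, key=lambda r: r["weight"] * r["severity"], reverse=True):
    print(f'{r["item"]}: weighted score {r["weight"] * r["severity"]:.2f}')

print(f"Overall risk {risk_score:.2f} vs benefit {benefit_score:.2f}")
# Assumed decision rule: proceed with mitigations only if benefits outweigh risks.
print("Proceed with mitigations" if benefit_score > risk_score else "Escalate or reject")
```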
- Who? Many businesses are introducing cross-functional AI review teams. In these circumstances, the leader of that team will be the correct executive to lead an AI impact assessment. Even if no AI review team has been established, the executive entrusted with the task of leading the assessment must have both seniority in the management structure and experience in conducting assessments. Experience in conducting privacy impact assessments may provide some grounding in the process involved. However, the subject matter and requirements are different for AI impact assessments.
- The key element is that the person conducting the assessment has been granted authority from senior management – ideally, the board of directors – to conduct the impact assessment under a mandate that empowers the assessor to require co-operation across the business operations, and to conduct an assessment and deliver a report without fetter or interference.
- Assessment criteria. The key elements of an AI impact assessment are:
- Purpose. Businesses should identify their specific needs and objectives before adopting AI applications. It is also important to set measurable goals for the success of the AI project. Will the proposed technology enhance operational efficiency or reduce business costs? Are the expected benefits sufficiently significant to outweigh potential risks? The relevant AI strategies should also specify the purposes for which AI may be used and provide guidance on how it should be used.
- Safety and reliability. The impact assessment should assess whether the AI application will perform the designated functions consistently without causing harm to users, organisations or the environment. It should review the processes to monitor and manage the integrity and quality of the data being used to develop AI applications. There must also be an assessment of the human oversight of AI decision-making to ensure that use of the AI application achieves desirable outcomes. Finally, the impact assessment should consider proposals for ongoing assessment and review of the AI application, and whether they meet the requirement that use of the AI application is regularly monitored and periodically reviewed for safety, security and accomplishment of intended outcomes.
- Accountability and transparency. The impact assessment should critically review the internal governance structure overseeing the AI application. This includes checking that there is a clear designation of roles and responsibilities for the persons accountable for the business’ compliance with the relevant AI regulations and requirements. The assessment will review and report on whether and how the business will provide information to customers, regulatory bodies and other third parties about the business’ use of AI. This will include reviewing the business’ approach to reporting on the intended purposes and usage of the AI application, the types of data sets being used, and how the AI system has been developed or applied within the business.
- Privacy. The assessment will look at how the AI application has collected and used personal data in the training and deployment phases of the AI. In particular, the assessment will review and report on whether appropriate data minimisation techniques have been deployed to eliminate, encrypt, pseudonymise or minimise the use of personal data. The assessment will also consider other requirements under applicable laws and regulatory frameworks governing personal data privacy or data governance generally.
- Legal compliance. Particular concerns have been expressed by regulatory authorities in a number of jurisdictions in respect of data scraping and similar automated collection of public data sets, which are often conducted without regard for intellectual property or other rights. The assessment will consider the measures that have been taken by the AI provider to ensure that third party rights have been protected in the training and deployment phases of the AI. Many countries are adopting specific legislation to govern AI development and use, and there are also many industry guidelines and regulations that can also apply. The assessment will consider whether the practices proposed by the business will comply with those requirements.
- Ethics and society. The assessment should also consider how the adoption and use of the AI application will affect a broad range of stakeholders in the business, and the community and society at large. The automation of business processes affects employees, displaces roles and functions in society, and may even have unintended adverse environmental effects. These should all be subject matters for the assessment.
What happens next?
- An AI impact assessment report is not a report for the metaphorical top shelf. It is an actionable management tool. The delivery of an impact assessment report is not the end of the process. If recommendations are made, then those recommendations must be formally assessed and acted upon by the business. It may mean revisiting, clarifying and revising certain recommendations. However, the recommendations of an AI impact assessment should generally be adopted to the extent practicable. There should be a formal record of the adoption of those recommendations so that functional business units are mandated to follow the approved recommendations.
- The future of AI is inextricably linked to our ability, at a granular business and operational level, to promote responsible and ethical use of the technology. Conducting an AI impact assessment is an important first step that businesses should take before they adopt and deploy AI technologies. An AI impact assessment has a complex blend of legal, technical and ethical features. It is an essential tool that provides a framework to gain the insights necessary for businesses to prudently navigate AI adoption.
Pádraig Walsh
&
Stephanie Sy
- In the legal profession, attorneys across the globe grapple with shared challenges: the unceasing evolution of laws, exacting clients, and unforeseen outcomes. Despite these hurdles, the profession often offers rewards that are just as substantial, spanning monetary gains, intellectual stimulation, and the gratification of aiding clients.
- However, the legal domain is on the cusp of a transformative change. This shift doesn’t stem from challenger law firms or the influx of next-generation lawyers, but from the rise of Artificial Intelligence (AI), such as ChatGPT by OpenAI. The prevailing concern isn’t merely about machines usurping routine tasks; it’s the possibility of AI encroaching upon sectors once thought to be the exclusive domain of human expertise, like the legal profession.
- To illustrate ChatGPT’s capabilities, I once tasked it with crafting a 19th-century style poem on AI’s potential repercussions on employment, particularly within the legal sphere. In mere moments, it delivered a superb composition, encapsulating both the current predicament and the prospective horizon. The inference was unambiguous: the legal field can’t be immune from the AI revolution.
- Yet, it’s crucial to note that ChatGPT, while impressive, isn’t without flaws. It excels as a “linguistic interface”, but occasionally falters in factual accuracy, stemming both from its training data’s limitations and its misunderstanding of human prompts. While it has a vast reservoir of knowledge, outperforming many in academic evaluations (predominantly in English), it often parrots mainstream viewpoints, and its information can be outdated.
- Beyond just linguistic tasks, AI’s infiltration into the legal world extends to more nuanced roles. In the criminal justice system, for instance, algorithms are increasingly being employed to assist in sentencing decisions or parole hearings. Such AI systems can simultaneously process and weigh a multitude of factors far beyond human capacity. This capability offers the promise of more consistent and informed decisions. However, it also brings challenges. The axiom “garbage in, garbage out” rings true here. If the data fed into these algorithms carries biases or inaccuracies, the resulting decisions can replicate these flaws, leading to what’s termed as “algorithmic bias”.
- People’s main worries about AI aren’t just about what it can do now, but what it might do in the future. Today, many AI tools help with legal tasks. But new AI systems are being designed with important features for law, like understanding feelings, creative thinking, and convincing arguments. AI can turn information into strong arguments, much like a good lawyer can. Plus, when looking at complex cases with lots of details, AI might be able to do better than humans.
- AI is advancing quickly, suggesting that future AI systems will be specially made for legal jobs. But big changes like this take time. There might be resistance from organizations, lots of rules to navigate, and old ways of doing things that slow down how quickly AI is used in law. However, as AI becomes more common, lawyers will need to step up their game to be better than the machines in speed, knowledge, and how they relate to clients.
- In short, the main topic isn’t about AI replacing lawyers. It’s about how the job of a lawyer might change because of AI. Lawyers who use AI well in their work will have an advantage over others, showing the importance of being flexible and evolving with new technology.
Miklos Orban
- While Artificial Intelligence (“AI”) has been a high priority of the Government of India, having been allocated substantial funding, at present India lacks specific legal frameworks, statutory provisions or official directives to govern the functioning of AI.
- The Ministry of Commerce and Industry, Government of India, established the Artificial Intelligence Task Force in 2017 to “Embed AI in our Economic, Political and Legal thought processes so that there is systemic capability to support the goal of India becoming one of the leaders of AI-rich economies”. The report of the Task Force, published in 2018, identified ten (10) sectors of relevance, namely, manufacturing, FinTech, healthcare, agriculture, education, retail, aid for differently abled, environment, national security and public utility services, and set out its recommendations in this regard. Further, in 2018, the Ministry of Electronics and Information Technology, Government of India, constituted four committees to promote AI initiatives and develop a policy framework. It unveiled a strategy that aims to incorporate AI into mainstream applications. The Minister of State for Electronics and Information Technology, Government of India, recently announced India’s commitment to regulate AI for the purpose of safeguarding users, stating that “our approach towards AI regulation is simple. We will regulate AI as we will regulate Web3 or any emerging technologies to ensure that they do not harm digital citizens”.
- India has witnessed a surge in investments, startups and AI implementations across various sectors, with a projected addition of nearly USD 957 billion to the Indian economy by 2035. In this light, there is a sense of urgency to establish a comprehensive legal framework for AI in India. Set out below is an overview of India’s evolving legal framework and expected rules, regulations and Government guidelines:
- Digital India Act. The Government is working on the Digital India Act, which is intended to supersede the Information Technology Act of 2000. This forthcoming law, with a draft release anticipated in the near future, aims to propel the digital economy to USD 1 trillion by 2030. It seeks to streamline and regulate activities in the technology sector including AI, while addressing concerns related to online safety, harmful content for children, copyright violations, misleading content and the use of AI.
- National Association of Software and Service Companies (NASSCOM). The not-for-profit industry association, which is the apex body for the technology industry in India, has introduced a set of instructions directed towards developers and researchers to facilitate both commercial and non-commercial AI model and tool development. NASSCOM has highlighted the significance of proactively considering and assessing potential challenges, ensuring openness and responsibility through public disclosures, and giving priority to reliability, security, inclusivity and the betterment of humanity. These instructions have emphasised the need to tackle biases, comply with privacy standards, conduct safety testing, openly share research findings and concentrate on generative AI applications that empower human agency and well-being while also promoting the safety of AI technology. It has developed a code of ethics, which sets out the principles and guidelines for the responsible use of AI.
- National Strategy on Artificial Intelligence (NSAI). The Planning Commission of India devised the NSAI and contemplated the establishment of a panel comprising the Ministry of Corporate Affairs and the Department of Industrial Policy and Promotion to oversee regulation of AI. This included the creation of an intellectual property framework for AI advancements and the introduction of legal frameworks for the purpose of data protection, security and privacy.
- Ministry of Electronics & Information Technology. As set out above, the Ministry of Electronics & Information Technology, Government of India, has formed four committees to examine various ethical concerns related to AI. With a view to developing a policy framework for AI in India, these committees were constituted with specific focus areas, namely: platforms and data for AI; leveraging AI for identifying national missions in key sectors; mapping technological capabilities, key policy enablers required across sectors, skilling and re-skilling, and research and development; and cyber security, safety, legal and ethical issues.
- National Institution for Transforming India (NITI Aayog). NITI Aayog, a Government think tank, has stated an aim to produce a national policy to direct the Government’s approach to the use of AI in various sectors. It has issued draft documents outlining the establishment of a supervisory body and the enforcement of responsible AI principles, encompassing safety and reliability, equality, inclusivity, non-discrimination, privacy and security, transparency, accountability, and the protection and reinforcement of human values. These principles were designed for inspecting ethical standards, forming necessary legal and technical frameworks, innovating new AI techniques and tools, and representing India on the global stage.
Conclusion.
Although the use of AI has witnessed remarkable growth, it is evident that India’s legal framework is in a nascent stage, and is yet to address potential issues related to safety, inclusivity and protection. Nevertheless, several legislative measures have been proposed, aimed at establishing a comprehensive legal framework for AI, with appropriate safeguards. The challenge lies in harnessing the potential of AI while upholding ethical principles. While draft proposals have been released, the specific details of this framework are still to be defined.
Siddhartha George
&
Dharani V. Polavaram
&
Bilal Lateefi
- There are many legal implications for publishers whose online content is used to train Artificial Intelligence (“AI”) chatbots. Below are set out the key issues that have been identified to date relating to intellectual property misuse, unjust enrichment and competition law.
Background
- In November 2022, OpenAI released ChatGPT, a generative AI chatbot. ChatGPT is a natural language processing tool that interprets human queries or requests and answers them in real time. Users can have conversations with the chatbot, as well as ask it to complete tasks like composing emails, essays and code, among other things. Since ChatGPT was released, big tech platforms have raced to release their own AI products and integrations. These software solutions are referred to as Large Language Models (“LLMs”). To generate natural language responses to user prompts, LLMs must be trained on vast amounts of data, including copyrighted data published online. Publishers have already been vocal about the issue, raising concerns of “IP theft” and fair payment for content.
Discussion of law and facts
- Copyright Issues. Copyright is infringed when copyright material is copied. Chatbots are understood to copy the content of pages into their LLMs by scraping information from underlying datasets for training purposes, adding to the breach of copyright that may be taking place when search engines copy internet sites’ pages for use in their search indexes.
- The answer engine then captures data from use of the AI that helps its owner make money via targeted advertising. Where copyrighted material is misused, this may provide the owner of the copyright with a claim that its intellectual property has been infringed and that it is owed compensation. A claim for an account of profits in the hands of the infringer may be brought where the claim is for misuse of another’s intellectual property.
UK Position on Copyright
- The UK is readying a code of practice for AI firms to provide guidance on accessing copyrighted material. If adherence to the code of practice is established, it could enable AI firms to be granted a “reasonable licence” from the rights holder in return.
- There is already authority in the UK that reproducing copyrighted content on websites requires a licence. In the 2010 case Newspaper Licensing Agency & Ors v Meltwater Holding BV & Ors, the Court found that companies had infringed newspapers’ copyright by using a third-party media monitoring service which provided copies of headlines and extracts from news articles without a web end-user licence.
- In relation to AI, litigation around this exact issue has already begun to trickle in. Getty Images has initiated legal proceedings against Stability AI in the United Kingdom relating to Stability AI’s alleged copying of millions of its images, according to a recent press release, with a similar suit brought by artists in California.
Competition Issues
- In the past, one tactical issue with taking copyright law actions for mass infringement of multiple headlines has been the cost and time involved, and the lack of a clear class action jurisdiction. Where an antitrust case can be brought by an authority to address the same issue, it may be more cost-effective for that route to be followed. Where a group or class of businesses is affected, such as online Publishers, a competition law class action can now be brought in the UK before the Competition Appeal Tribunal.
- There is already a growing concern that the answer engine model anti-competitively walls off users from content creators (“Publishers”). Publishing businesses whose content is used to produce the training data that makes generative AI more accurate lose traffic and will be unable to compete for attractive and targeted advertising revenue. Where data scraping of publishers is performed by a large platform to enhance its position over rival publishers in relation to advertising, this amounts to the exclusion of rivals.
- Further, where the answer engine is integrated in the search engine, and that search engine has already been found to be dominant, there is already a starting point for assessing dominance and identification of a new example of abuse.
- Currently, the free version of ChatGPT does not provide references for answers, though users can request sources. However, even where generative AI references its sources, it is not clear what impact that will have on the data being gathered and on targeted advertising or publisher revenue, since this depends on the user clicking on the source and being directed to the publisher’s website.
How do owners of generative AI respond?
- Firms have so far been vague when faced with these concerns, often sidestepping the copyright issues and pointing instead to the alleged benefits they provide to publishers and users. Microsoft and Google have attempted to portray a symbiotic relationship between Publishers and generative AI products, namely that their products present a value exchange.
- Firms like OpenAI also seek to elude claims for copyright infringement by justifying their conduct through ‘fair use’ – an exception to copyright law which allows limited use of copyright works without the permission of the copyright owner. They argue that their use of the content is transformative, as AI chatbots like ChatGPT do not simply reproduce text, but generate new text, images or videos based on the patterns learned from combinations of training data. However, the ‘fair use’ argument becomes difficult to run when the final product is commercialised and monetised and those who provided the valuable intellectual property needed to train the AI have not been paid for their contribution, suggesting that the AI owner may be unjustly enriched by misusing others’ work and intellectual property.
Finally, objective justifications can be advanced in defence of the defendant’s position in competition cases. Answer engines may claim that integrated results are more useful to the user. In the Google Shopping case this objective justification was considered: while the Commission did not dispute the higher quality and usefulness of the Google Search format, it still found the practice to be anticompetitive because of the way Google promoted and displayed its own results over those of rivals.
Daniel Preiskel