Generative artificial intelligence (led by ChatGPT and DeepSeek) is advancing at a dizzying pace, transforming our daily practices and raising major legal questions. In response to this revolution, regulatory approaches vary significantly across regions. While the European Union is taking the lead with its AI Act, which lays down cross-sector, values-based rules and a risk-based approach, the United States favors flexible guidelines that rely on innovation and self-regulation, while China follows a targeted, sector-specific approach.
In this ever-evolving landscape, understanding legislative developments is crucial for businesses and professionals operating in the digital industry. We provide you with an up-to-date overview of existing regulations and emerging trends, helping you anticipate the challenges and opportunities of generative AI on a global scale.
The Lexing® network members provide a snapshot of the current state of play worldwide.
The following countries have contributed to this Lexing Insights #42 – Legal framework for Generative AI: Australia, Belgium, China, Finland, Greece, Hong Kong, Kenya, Mexico, South Africa, Sweden.
FREDERIC FORSTER
VP of Lexing® network and Head of Telecommunications and Digital Communications at Lexing
AI Ethics Principles
Australia has not yet enacted any specific statutes or regulations that directly regulate AI. However, progress is being made towards a structured framework for the use of artificial intelligence, including the protection of privacy and the security of personal data in connection with AI technologies, similar to the recently enacted European AI Act, the first-ever comprehensive legal framework on AI. (1)
Latest developments
On 5 September 2024 the Australian Government issued two important papers:
- The “Proposed Guardrails for the Mandatory Use of AI in High-Risk Settings” (2) (Mandatory Guardrails Paper). This paper recommends ten mandatory guardrails with which developers and users must comply for AI activities in “high-risk settings”. The Mandatory Guardrails Paper was released for comment and received over 300 public submissions, which are available online (3); and
- “Voluntary AI Safety Standards” (4) (Safety Standards), which outlines ten guardrails designed to provide practical guidance to AI developers and AI deployers on safe and responsible development and deployment of AI systems in Australia.
- Select Committee on Adopting Artificial Intelligence. In March 2024, the Australian Senate established a Select Committee on Adopting Artificial Intelligence. (5) The Committee's reporting date was extended to allow consideration of the impacts of generative AI on the federal election in the United States; its final report, tabled in Parliament in November 2024 (6), contained 13 recommendations, including a recommendation to introduce dedicated legislation to regulate high-risk uses of AI in Australia.
Developments in New South Wales and Victoria
- New South Wales has embarked on a comprehensive program to establish policies for the use of artificial intelligence and has produced several resources on its use in government. Key documents include the NSW Artificial Intelligence Assurance Approach (7) and the NSW AI Assessment Framework. The latter has been embedded into the NSW Digital Assurance Framework (8), which provides independent oversight of the NSW Government’s digital projects to ensure they are delivered efficiently and effectively within the specified timeframes.
- In Victoria the Office of the Victorian Information Commissioner issued a Public Statement: Use of personal information with ChatGPT. (9) The document relates to the use of the ChatGPT platform by Victorian public sector organizations. The document outlines that Victorian Public Sector Organizations must ensure that staff and contracted service providers do not use any personal information with ChatGPT. Any such disclosure would result in a breach of the Victorian Information Privacy Principles and the Public Records Act 1973 (Vic), as ChatGPT indefinitely retains personal information input into the platform.
Final comments
It remains to be seen whether and to what extent Australia will adopt a similar approach to the EU. Currently, Australia is focused on pushing through a series of reforms to the Privacy Act 1988, which are scheduled to commence in 2026 and beyond. These will no doubt take into account some elements of AI, at least from a privacy and automated decision-making perspective. We nonetheless expect to see stand-alone regulation on the appropriate and effective use of AI in the not-too-distant future. Clearly, with the release of the Mandatory Guardrails Paper, the Safety Standards and the recent Select Committee Senate report, the Government is signalling a strong intent to finally and properly legislate for the safe and responsible adoption of AI in Australia.
*****
(1) AI Act (Regulation (EU) 2024/1689), available at: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
(2) https://www.industry.gov.au/news/mandatory-guardrails-safe-and-responsible-ai-have-your-say
(3) https://consult.industry.gov.au/ai-mandatory-guardrails/submission/list
(4) https://www.industry.gov.au/publications/voluntary-ai-safety-standard
(5) https://www.aph.gov.au/Parliamentary_Business/Committees/Senate/Adopting_Artificial_Intelligence_AI
(7) See https://www.digital.nsw.gov.au/sites/default/files/2022-12/ict-digital-assurance-framework_1.pdf
(8) See https://www.digital.nsw.gov.au/sites/default/files/2022-12/ict-digital-assurance-framework_1.pdf
(9) See, e.g., “Public Statement: Use of personal information with ChatGPT” available at: https://ovic.vic.gov.au/privacy/resources-for-organisations/public-statement-use-of-personal-information-with-chatgpt/
DUDLEY KNELLER
What is your legislation applicable to artificial intelligence?
At national level, there are currently no rules, laws or guidelines specifically applicable to AI in Belgium. As a member of the European Union, Belgium must comply with the AI Act, which entered into force on 1 August 2024 and lays down harmonized rules on artificial intelligence. (1)
What is the definition of Generative AIs in the AI Act?
The AI Act does not define generative AI but mentions it as an example of a general-purpose AI model, which is defined as follows:
- “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market.”
Which rules apply to defective artificial intelligence systems?
The AI Act addresses the risks raised by defective AI systems by imposing transparency obligations such as: (2)
- Drawing-up technical documentation of the model, covering its training, testing, and evaluation processes;
- Supplying information and documentation to AI system providers who seek to use the GPAI model in their products, helping them understand the model’s capabilities and limitations so as to meet their legal obligations under the AI Act;
- Providing a detailed summary of the training content and data to enhance transparency;
- Adopting a policy to comply with EU copyright law.
GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents. General rules governing defective products or services may be relevant to AI systems:
- Product liability; (3)
- Consumer protection and legal warranty; (4)
- The failure or malfunctioning of AI systems intended to be used as safety components in the management and operation of critical digital infrastructure; (5)
- Privacy and Data Protection. (6)
Which civil and criminal liability rules apply in cases of damage caused by an artificial intelligence system?
Under Belgian law, there are no specific civil or criminal liability rules governing AI systems. However, general law provisions apply, such as: manufacturer’s liability (7), the seller’s liability and warranty regime (8), the user’s liability in tort (9) and the user’s liability in contract. (10)
At the EU level, the proposed AI Liability Directive (11) has been withdrawn amid the tensions between the EU and the US.
*****
(1) Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)
(2) Articles 53 and 55 of the Artificial intelligence Act
(3) Law of 25 February 1991
(4) Belgian law has transposed EU Directive 2019/770 on certain aspects concerning contracts for the supply of digital content and digital services and EU Directive 2019/771 on certain aspects concerning contracts for the sale of goods within Article 1649bis to 1649octies and Book III, Title VIbis of the Belgian Civil Code
(5) Directive (EU) 2022/2557 on the resilience of critical entities.
(6) GDPR and the Law of 30 July 2018 on the protection of natural persons with regard to the processing of personal data
(7) Under Article 1582 et seq. of the former Civil Code, the seller is expected to deliver goods that conform with the agreement.
(8) Article 1649bis of the former Civil Code
(9) Article 1384 of the former Civil Code, soon replaced by Article 6.16 of the Civil Code and Article 1382 of the former Civil Code, replaced by Article 6.5 et seq. of the Civil Code (entered into force on January 1, 2025)
(10) Article 5.230 of the Civil Code
(11) Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive)
ANNE-VALENTINE RENSONNET
On March 16, 2023, Baidu, the Chinese AI giant, introduced Ernie Bot, described as a “new generation large language model and generative AI product”; it was fully opened to the general public on August 31, 2023. Ernie Bot, along with other chatbots developed by Chinese AI companies, has prompted Chinese regulators to enhance the regulation of generative AI services (GAI), with the aim of ensuring, among other things, the legitimacy and security of training data. In this context, a number of regulatory texts on generative AI services have been released. The present note provides an overview of the main texts recently issued in this area:
1. GAI Measures (1).
Regarding training data processing, the GAI Measures emphasize that processing shall use data and foundation models from legitimate sources, shall not infringe others’ intellectual property rights and, where personal information is involved, shall secure the consent of the data subjects concerned, if applicable. The service provider shall label generated images and videos in accordance with applicable texts. The GAI Measures also set out five-fold criteria for training data, covering in particular the lawful provenance of data and measures to enhance the veracity, accuracy, objectivity and diversity of training data.
2. GAI Standard (2).
The GAI Standard provides the security requirements for training data in its section 5.
Regarding the security management of data provenance, the GAI Standard requires the following:
- (a) The GAI service provider shall take measures to ensure the traceability of the training data;
- (b) The GAI service provider shall conduct a security assessment of data from a specific source and shall reject that source should its “unqualified rate” exceed 5 percent;
- (c) The GAI service provider shall take measures to increase the diversity of the training data, including its linguistic diversity.
The GAI Standard also provides detailed requirements for content security management of the training data; concretely, these requirements cover the following aspects:
- (a) The GAI service provider shall filter out “illegal and unhealthy elements” to ensure the quality of the training data;
- (b) Organizational and technical measures to prevent or mitigate IPR infringement risks;
- (c) Consent, or separate individual consent, required for training data containing personal data or sensitive personal data.
The GAI Standard also sets out data labelling requirements.
3. NDSR (3)
This latest important administrative regulation, released by the central government, requires that a data processor (4) providing GAI services enhance security management of training data and data processing and take effective measures to prevent and address data security risks. In particular, the NDSR provides that a data processor shall conduct an impact assessment should it intend to access or collect data through networks using automated tools, so as not to illegally intrude upon or disrupt the networks of others. The data processor shall promptly delete or anonymize the relevant personal data should the collected training data still contain such data despite the precautionary measures taken.
*****
(1) “Interim Measures for the Administration of Generative AI Services”, jointly released by 7 Chinese ministerial departments on July 13, 2023
(2) “Basic Security Requirements for Generative Artificial Intelligence Service” issued by the National Network Security Standardization Technology Commission on February 29, 2024
(3) “Administrative Regulation on Network Data Security” released by the State Council on August 30, 2024
(4) When navigating the terminology of the PIPL, we may come across a number of “false friends”, the most notorious of which is the “data processor” (personal information processor), which corresponds to the definition of “data controller” in the GDPR. (See Lexing Insights #39 and #34.)
YUNG YANG
Introduction
As artificial intelligence (AI) technologies like ChatGPT, Ernie Bot and DeepSeek continue to advance, countries worldwide are adapting their legal frameworks to address the challenges and opportunities presented by these powerful tools. In Finland, the regulation of generative AI is heavily shaped by European Union (EU) legislation but also certain national laws, and an overall strong commitment to ethical AI development. This article explores the legal framework governing generative AI in Finland, highlighting the local specificities that apply to these technologies.
European Union’s influence on Finnish AI legislation
Finland, as a member of the European Union, aligns its AI regulations closely with EU legislation.
The most significant piece of legislation in this context is the EU Artificial Intelligence Act (AI Act), which entered into force on 1 August 2024 (1). While the Act is now in force, its provisions will become fully applicable only after a two-year transition period, from August 2, 2026. The AI Act regulates AI systems based on their level of risk, with more stringent requirements imposed on higher-risk applications, such as those used in healthcare, law enforcement, and critical infrastructure. Generative AI systems like ChatGPT could fall into these higher-risk categories depending on their use cases, necessitating compliance with rigorous standards for safety, transparency, and accountability. Generative AI is prohibited, for example, if used for an explicitly prohibited practice such as harmful manipulation. In addition, a generative AI system used for a high-risk purpose such as recruitment would be high-risk and permitted only subject to compliance with the AI requirements and an ex-ante conformity assessment. Moreover, people should be informed that they are interacting with chatbots or AI-generated deep fakes, in accordance with the transparency obligations of the AI Act.
The European Commission has recently initiated a consultation process for a Code of Practice aimed at providers of general-purpose Artificial Intelligence (GPAI) models. This Code, which is a key component of the AI Act, will focus on essential issues such as transparency, copyright compliance, and risk management. Organizations operating GPAI models within the EU, including businesses, civil society groups, rights holders, and academic experts, are encouraged to share their insights and perspectives. Their contributions will help shape the Commission’s upcoming draft of the Code of Practice for GPAI models. The rules governing GPAI are set to take effect in 12 months, with the Commission aiming to finalize the Code of Practice by April 2025. Additionally, the feedback gathered during this consultation will support the AI Office in overseeing the implementation and enforcement of GPAI-related provisions under the AI Act.
The updated Product Liability Directive took effect on December 8, 2024. This revised legislation, which must be implemented into national laws, strengthens the legal framework for individuals seeking compensation for harm caused by defective products. It also enhances legal clarity for businesses. The new rules cover a wide range of products, from everyday household goods to digital technologies and advanced innovations such as artificial intelligence (AI) systems.
Additionally, the General Data Protection Regulation (GDPR) (2) plays a crucial role in shaping the legal landscape for generative AI in Finland. The GDPR mandates strict requirements for the collection, processing, and storage of personal data, emphasizing the protection of individual privacy. Any AI system operating in Finland, including ChatGPT, Ernie Bot and DeepSeek, must adhere to GDPR standards, ensuring that users’ data is handled with the utmost care and transparency.
National legislation and ethical AI development in Finland
Beyond EU regulations, Finland has developed its own national policies and guidelines to govern the use of AI technologies. The Finnish Artificial Intelligence Program (3) outlines the country’s approach to AI development, focusing on innovation, competitiveness, and the responsible use of AI. This program emphasizes the importance of ethical considerations in AI, advocating for systems that are fair, transparent, and accountable. In 2019, Finland introduced its Ethical AI guidelines in public administration (4), which set forth principles for the ethical use of AI systems. These guidelines align with the EU’s ethical framework but also reflect Finland’s specific cultural and societal values, such as a strong commitment to equality.
National AI legislation is still comparatively thin, despite the Finnish government’s active participation in EU AI efforts and its programs to assist businesses with the twin transition and AI adoption. In addition to the AI Act, the GDPR is the main source of current rules on AI use, though it focuses on the use of personal data in AI technology rather than on AI itself. Finland, for instance, has no specific laws governing AI-related liability, limitations on AI use, or the use of advanced AI technologies to process personal or other data.
AI-related laws in all EU member states are greatly affected by the AI Act, the proposed Directive on adapting non-contractual civil liability rules to AI (COM/2022/496, “AI Liability Directive”) (5), and the amendments to the Directive on liability for defective products (COM/2022/495, “Product Liability Directive”) (6). As a result, the Finnish government has chosen to participate actively at the EU level while delaying the adoption of new national legislation until the AI Act takes effect and case law develops. National legislation will need to incorporate these directives, but it is unclear whether entirely new laws will be enacted or existing ones amended.
On the other hand, Finland has passed certain laws concerning automated decision-making by government agencies. In 2023, a new general law on automated decision-making in public administration entered into force. It in fact comprised two separate legislative packages: general legislation enabling automated decision-making in the administration on a broad scale (a new Chapter 8b of the Administrative Code on the grounds for automated decision-making, plus a number of provisions added to the Data Management Act and certain other Acts), and specific legislation on automated decision-making in tax, customs and certain other matters (amendments to several Acts, most importantly the Tax Procedure Act). Previously, individual decisions by different authorities were regulated separately. The new laws permit automated decisions in general, provided the authority complies with the applicable legal requirements. The legislation also tackles liability concerns by providing that, in accordance with the Act on Information Management in Public Administration, the authority employing automated decision-making bears responsibility for it.
The total lack of case law and court decisions concerning AI is another noteworthy aspect of Finland’s legal system. It is, however, quite understandable, given the newly adopted EU legislation and the narrow scope of national laws. This also explains why there has been no litigation or conflict that would eventually become case law. Finally, alternative dispute resolution might also play a part here.
Compliance and oversight mechanisms
In Finland, the enforcement of AI-related regulations is overseen by various national agencies. The Data Protection Ombudsman is responsible for ensuring compliance with the GDPR and safeguarding individuals’ data privacy. Any AI system that processes personal data in Finland, including generative AI models like ChatGPT, must operate within the bounds of GDPR, and violations can result in significant penalties. The Finnish Transport and Communications Agency (Traficom) also plays a role in overseeing AI applications, particularly in areas related to communication and digital infrastructure. Traficom ensures that AI systems are secure, reliable, and accessible, contributing to the broader goal of digital trust in Finnish society.
Conclusion
The regulation of generative AI in Finland is shaped by a combination of EU laws, national legislation, and ethical guidelines. This legal framework emphasizes privacy, transparency, and accountability, reflecting Finland’s commitment to responsible AI development. As AI technologies like ChatGPT, DeepSeek and Ernie Bot continue to evolve, understanding and adhering to these local and EU-level specificities will be crucial for developers and businesses looking to operate in Finland. While the technology may be global, the legal rules governing its use are distinctly local, or at least regional, reflecting each country’s, and in this case the EU’s, values, priorities, and societal goals.
(1) EU Artificial Intelligence Act, available at: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
(2) General Data Protection Regulation, available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32016R0679
(3) Finnish AI Programme, see here: https://digital-skills-jobs.europa.eu/en/actions/national-initiatives/national-strategies/finland-artificial-intelligence-programme
(4) Ethical AI Guidelines in Public Administration, see: https://vm.fi/documents/10623/162999475/Tekoäly-huoneentaulu-en.pdf/9ef4bf55-7c06-4861-2680-aa39c7b8357a/Tekoäly-huoneentaulu-en.pdf?t=1685604530697
(5) AI Liability Directive, see progress: https://commission.europa.eu/business-economy-euro/doing-business-eu/contract-rules/digital-contracts/liability-rules-artificial-intelligence_en
(6) Product Liability Directive: https://eur-lex.europa.eu/eli/dir/2024/2853/oj/eng
JAN LINBERG
Greek Law 4961/2022 (1), which came into effect on July 27, 2022, with AI-related provisions taking effect on January 1, 2023, introduces a national regulatory framework for developing, deploying, and using artificial intelligence (AI) technologies in Greece. This framework is aligned with the EU Artificial Intelligence (AI) Act (2), adopting a “risk-based approach” to ensure the responsible and transparent use of AI in both public and private sectors.
For public bodies (3), the law mandates that AI systems can only be used when authorized by statute, excluding the Ministries of National Defense and Citizen Protection. Public sector organizations must conduct algorithmic impact assessments, as well as the data protection impact assessments required by the EU General Data Protection Regulation (GDPR), before deploying AI systems. These assessments are intended to evaluate risks to the rights and freedoms of individuals affected by AI systems. The law also imposes transparency requirements, including the obligation for public bodies to publicly disclose information on the operation of AI systems and any decisions made or supported by them (4). Affected individuals can file complaints, which the National Transparency Authority oversees. Additionally, public bodies must maintain a register of the AI systems they use (5).
For private entities, the law regulates the use of AI in employment, requiring companies to inform employees about AI systems that affect their job conditions, recruitment, or evaluation (6). This also applies to digital platforms using AI in employment-related decisions. Private entities, particularly medium- and large-sized companies, must adopt policies on the ethical use of data when employing AI systems (7). These companies must also keep a record of all deployed AI systems (8).
In the context of public procurement, contractors developing AI systems for the public sector must provide the necessary information for transparency, ensure the AI system’s compliance with applicable laws, and allow the contracting authority to study and improve the system (9). The law emphasizes compliance with fundamental rights, such as privacy, non-discrimination, and human dignity, when using AI.
To oversee AI regulation and strategy in Greece, the law establishes a Coordinating Committee for AI, which drafts the National Strategy for AI, and a supervisory committee to ensure its implementation (10). The National AI Observatory provides these committees with data and insights to monitor AI developments.
*****
(1) Gov. Gaz. Α’ 146/27.07.2022
(2) Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (AI Act)
(3) Art. 4 of Law 4961/2022
(4) Art. 6 of Law 4961/2022
(5) Art. 8 of Law 4961/2022
(6) Art. 9 of Law 4961/2022
(7) Art. 10 (2) of Law 4961/2022
(8) Art. 10 (1) of Law 4961/2022
(9) Art. 7 of Law 4961/2022
(10) Art. 11 of Law 4961/2022
GEORGE BALLAS
&
NIKOLAOS PAPADOPOULOS
Introduction
Artificial intelligence (“AI”) is now a useful tool in many aspects of our lives. The public release of AI systems using generative pre-trained transformers in large language models has sparked widespread and popular adoption of GenAI tools. This, in turn, has attracted the attention of legislators and regulators. The goal for society is to maximise the benefits brought by AI, while mitigating risks. The mission for legislators and regulators is to find the right framework to achieve that goal.
Nevertheless, the regulatory approach to AI can differ from one jurisdiction to another. In August 2024, the European Union (“EU”) enacted its Artificial Intelligence Act (1), which was the first comprehensive risk-based AI regulatory framework that applies to all member states across the EU. By contrast, the United States does not have a unified AI regulation but adopts a case-by-case strategy with numerous guidelines to regulate AI on a federal level. In China, regulators have adopted more targeted and sector-specific rules on AI.
Hong Kong has adopted a context-based approach to regulation of AI, and has avoided the general legislative framework approach adopted by the EU. Under this approach, Hong Kong regulators have published regulations and guidelines within existing frameworks to provide guidance on AI in the context in which it is used. These sector and context specific regulations share common principles, but adopt an approach that can be more flexible to the needs of the sector. The requirements in the communications sector, for instance, will be different from the energy sector, which is in turn different to the financial services sector. Even within context-based regulation, there is a special role for the Privacy Commissioner for Personal Data (PCPD). This subject domain regulator has taken the lead in providing cross-sector guidance on the protection of personal data in the development, supply and use of AI systems in Hong Kong.
Hong Kong is a special administrative region of China. This means that although Hong Kong is a part of China, it has its own system of law and regulation, including those in relation to the governance of AI systems. In this article, we will focus on the AI legislative developments in Mainland China and Hong Kong.
What is Ernie Bot? What is next?
On 16 March 2023, Chinese tech giant Baidu officially launched its chatbot, Ernie Bot. It is one of the first generative AI chatbots to obtain regulatory approval in Mainland China. Similar to its main rival, ChatGPT, users can interact with Ernie Bot via text to create content. It primarily operates in Mandarin Chinese but can respond to English queries to a lesser extent. Although Ernie Bot is available for download globally, it requires a Chinese phone number for user registration. As such, the primary users of Ernie Bot are the Chinese population.
In April 2024, Baidu announced that Ernie Bot had over 200 million users. Nevertheless, the advent of another Chinese AI application, DeepSeek, demonstrates that AI chatbots in Mainland China will face rapidly increasing competition.
The rising popularity of Ernie Bot and other AI chatbots has prompted consideration of a more comprehensive legislative framework on AI, to ensure that AI development, supply and use benefit China’s national interest.
AI regulation in Mainland China
In recent years, Mainland China has implemented the following major legislation and measures on the regulation of AI:
- Generative AI Measures: The Interim Measures for the Management of Generative Artificial Intelligence Services (“Measures”) (2) were jointly issued by the Cyberspace Administration of China (“CAC”) and the National Development and Reform Commission, along with five other key government ministries, on 13 July 2023, and took effect on 15 August 2023. The Measures are the first administrative regulation on the management of generative AI services in Mainland China, and apply to anyone who uses generative AI technology to provide services to the public in Mainland China.
- Regulation on algorithmic recommendations: Management of Algorithmic Recommendations in Internet Information Services Provisions (3) is Mainland China’s first legislation regulating the use of algorithm recommendation technologies to provide online services in Mainland China. The provisions came into effect on 1 March 2022. The purpose is to prohibit algorithmic recommendation service providers from generating fake news or disseminating information from unauthorised sources.
- Regulations on deep synthesis: Provisions on the Administration of Deep Synthesis Internet Information Services (4) came into force on 10 January 2023, following their adoption by the Cyberspace Administration of China, the Ministry of Industry and Information Technology, and the Ministry of Public Security on 25 November 2022. The regulations aim to address the risks associated with the AI-based “deep synthesis” (deepfake) technology which can generate or edit image, audio, or video content that appears to be authentic but is synthesised.
- Ethical principles and frameworks: The Measures for Review of Scientific and Technological Ethics (for Trial Implementation) came into effect on 1 December 2023. Under the legislation, enterprises engaged in AI and certain other technology activities are required to undergo scientific and technological ethics reviews.
The legislative developments above show that the Chinese authorities have adopted targeted, sector-specific rules to regulate evolving AI technologies. This targeted approach is intended to protect the development of AI technologies in Mainland China from the risk of overregulation. In his 2025 work report delivered at China’s annual “two sessions”, Premier Li Qiang recognised AI as one of the future industries key to China achieving its economic goals, and committed to advancing China’s “AI Plus” initiatives.
These regulations and measures do not apply in Hong Kong, as Hong Kong is a separate legal jurisdiction (although it is part of China).
AI regulation in Hong Kong
There is no overarching legislation regulating the use of AI in Hong Kong. The Hong Kong authorities have relied on existing legislation with sector-specific guidelines from regulators to address the risks and challenges posed by AI.
The Office of the Privacy Commissioner for Personal Data (PCPD) first published general guidance on the topic of AI in its Guidance on the Ethical Development and Use of Artificial Intelligence (“Ethical Guidelines”) in 2021 (5). This was a general paper on AI governance frameworks, adopting a broader outlook than personal data protection alone. It positioned the PCPD as the natural thought leader among Hong Kong regulators on this topic.
On 11 June 2024, the PCPD issued the Artificial Intelligence: Model Personal Data Protection Framework (“Model Framework”) (6). The Model Framework elaborates the Ethical Guidelines on the specific topic of personal data protection. Consistent with the Personal Data (Privacy) Ordinance (Chapter 486 of the Laws of Hong Kong) (7), the Model Framework sets out recommended best practices for organisations procuring, implementing, and using AI systems that involve the use of personal data.
Under the Model Framework, organisations are encouraged to formulate appropriate policies, practices and procedures when they procure, implement and use AI technologies:
- AI Strategy and Governance: organisations are advised to have an internal AI governance strategy, which generally comprises (i) an AI strategy, (ii) governance considerations for procuring AI solutions, and (iii) an AI governance committee (or similar body) to steer the process.
- Risk Assessment and Human Oversight: organisations are advised to adopt a risk-based approach in the procurement, use and management of AI systems. For instance, organisations should conduct comprehensive risk assessments to systematically identify, analyse and evaluate the risks involved in the use of AI technologies.
- Customisation of AI Models and Implementation and Management of AI Systems: the PCPD suggested that organisations should adopt rigorous testing and validation of the AI models in proportion to the level of risks involved, to ensure that the AI technologies will perform as intended.
- Communication and Engagement with Stakeholders: organisations are encouraged to communicate and engage effectively and regularly with stakeholders on the usage of AI.
In addition to this general guidance framework, different industry sector regulators have published guidance on different aspects of AI governance. For instance, the Hong Kong Monetary Authority (HKMA), the regulator for banks in Hong Kong, has published guidelines on AI generally (8), GenAI (9), and the use of AI in transaction monitoring (10), and has opened a sandbox for trialling AI systems and applications under the sponsorship and input of the HKMA. Even professional bodies such as the Law Society of Hong Kong have provided guidance to their members (11).
What is next
The approaches adopted in Mainland China and Hong Kong are a good example of the different approaches adopted around the world. Neither has adopted the general, unified regulatory model favoured by the EU. However, Mainland China is more likely to adopt additional legislative measures. The population of Mainland China is substantial, and the potential risks to social stability and welfare are correspondingly larger. Hong Kong has adopted, and is likely to maintain, a context-based regulatory model that relies on existing legislation and regulatory frameworks.
This difference in approach between Mainland China and Hong Kong is also an excellent example of the continuing relevance and application of the one country, two systems policy of Mainland China that allows Hong Kong to operate a different legal system according to common law principles.
Ernie Bot, like DeepSeek, Gemini, Copilot and many other GenAI systems, is available in Hong Kong. OpenAI’s ChatGPT is not (though workarounds are commonly applied). Nonetheless, legislators and regulators in Mainland China and Hong Kong are alive to the benefits and challenges that GenAI technologies present to society.
*****
(1) https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689
(2) https://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm
(3) https://www.gov.cn/zhengce/zhengceku/2022-01/04/content_5666429.htm
(4) https://www.cac.gov.cn/2022-12/11/c_1672221949354811.htm
(5) https://www.pcpd.org.hk/english/resources_centre/publications/files/guidance_ethical_e.pdf
(6) https://www.pcpd.org.hk/english/resources_centre/publications/files/ai_protection_framework.pdf
(7) https://www.elegislation.gov.hk/hk/cap486
(8) https://www.hkma.gov.hk/media/eng/doc/key-information/guidelines-and-circular/2019/20191101e1.pdf
(10) https://www.hkma.gov.hk/media/eng/doc/key-information/guidelines-and-circular/2024/20240909e1.pdf
PÁDRAIG WALSH
&
STEPHANIE SY
Introduction
The rapid advancement of generative artificial intelligence (AI) technologies has sparked a global conversation about the need for robust legal frameworks to regulate their development and use. In Kenya, the introduction and proliferation of generative AIs in sectors such as healthcare, finance, law, education, and business are presenting new opportunities and challenges. The existing legal frameworks, which focus primarily on data protection, intellectual property rights, and cybersecurity, may not adequately address the complexities that arise from AI-generated content, its use, and potential misuse. As a result, there is an increasing demand for laws and regulations that account for the ethical, legal, and societal implications of generative AIs, particularly in relation to accountability, transparency, and privacy.
As Kenya positions itself as a hub for digital innovation, policymakers and legal experts are exploring how to balance the benefits of generative AI with the need to safeguard public interests.
The current legal framework governing the development and use of generative AIs in Kenya consists of the following Acts of Parliament.
The Data Protection Act of 2019, Chapter 24 Laws of Kenya
Kenya’s Data Protection Act of 2019 (1) establishes clear guidelines on the handling of personal data, which is often a core element in the development and deployment of generative AIs. Under the Act, AI companies, developers, and users are required to ensure that any personal data used in these systems is collected and used in a lawful, fair, and transparent manner, and with the informed consent of the data subjects. This protects individuals from unauthorized or unethical data harvesting practices, a key concern given AI’s capacity to analyze and generate insights from personal data. Furthermore, organizations utilizing generative AI must ensure that they have a clear legal basis for processing personal data, such as obtaining explicit consent from individuals or relying on legitimate interests that do not override the privacy rights of data subjects.
Additionally, the Act defines who can be data controllers and processors, their roles and responsibilities, and mandates that they must uphold principles of data minimization, accuracy, and purpose limitation. This is designed to prevent excessive data collection that could lead to breaches of privacy. The Act requires that personal data be kept accurate and up to date, which is a challenge for generative AI models that may produce outputs based on outdated or irrelevant data. Furthermore, the Act imposes strict requirements for securing personal data against unauthorized access and use. Non-compliance with these provisions could result in significant penalties and sanctions under the Act, providing a strong incentive for AI practitioners in Kenya to prioritize data privacy and security.
The Act also introduces the office of the Data Protection Commissioner, whose mandate includes overseeing compliance with the law and investigating potential breaches. In the context of generative AI, this oversight ensures that AI applications adhere to set legal standards and that individuals can exercise their rights to access, correct, or delete their personal data. The Commissioner has the authority to audit AI systems and issue enforcement notices if violations are detected, thereby holding AI developers and operators accountable.
The Computer Misuse and Cybercrimes Act of 2018, Chapter 5 Laws of Kenya
This Act (2) regulates the use of computer systems, facilitates the prevention of cybercrimes, provides sanctions against those who commit cybercrimes, and protects people’s right to privacy, freedom of speech, and access to information.
The Act defines different types of cybercrimes and the sanctions for each. This is relevant to the development and use of generative AI systems in Kenya: developers must guard against their systems being used to commit cybercrimes and must build in safeguards to prevent such misuse, while users of generative AIs must ensure they operate within the bounds of the law to avoid criminal sanctions.
The Industrial Property Act of 2018, Chapter 509 Laws of Kenya
This Act (3) regulates patents, utility models, technological innovations, and industrial designs. Software inventions may be patented where they meet the criteria of patentable inventions under this Act, that is, if they are new, involve an inventive step, and have an industrial application. The Act defines the rights and obligations of the owner of an invention, the process of applying for a patent, and the terms of a patent.
Furthermore, the Act defines technovations as innovations that solve a problem specifically in the field of technology. The Act provides the criteria for registering a technovation, the process of applying for a technovation certificate, the use of technovations, remuneration of a technovator, and how to solve disputes relating to technovations.
Through this Act, companies and innovators of generative AI systems find protection for their innovations under the law.
The Copyright Act, Chapter 130 Laws of Kenya
The Copyright Act (4) provides another way for companies and innovators of generative AI systems to secure their intellectual property rights, by regulating copyrights and their registration.
The Act establishes the Kenya Copyright Board (KECOBO), whose functions include overseeing the implementation of international copyright laws and treaties to which Kenya is a party and issuing copyright licenses and certificates, among other roles. Through this board, foreign companies and innovators can adequately protect their AI systems while in use in Kenya.
*****
(1) The Data Protection Act of 2019, Chapter 24 Laws of Kenya
(2) The Computer Misuse and Cybercrimes Act of 2018, Chapter 5 Laws of Kenya
(3) The Industrial Property Act of 2018, Chapter 509 Laws of Kenya
(4) The Copyright Act, Chapter 130 Laws of Kenya
NELSON MATENDO NKARI
&
RUTH KAVITHE MUASYA
In Mexico, 57% of internet users report having used an artificial intelligence (AI) application, 30% admit they have not, and 13% are unsure if they have ever done so. Furthermore, 55% consider AI a useful tool for decision-making, 23% are uncertain about its utility, and 22% are concerned about this technology.
These figures are derived from the “19th Study on Internet User Habits in Mexico 2023” (1), conducted by the Internet Association MX.
The increasing influence of AI in various aspects of daily life has generated the need to establish solid regulatory frameworks worldwide. Mexico is no exception, and in recent years, some legislative initiatives have been presented with the aim of regulating the development and use of this technology.
Why is it important to regulate AI?
Regulating AI is crucial for several reasons:
- Protection of human rights: AI can have significant implications for rights such as privacy, non-discrimination, and freedom of expression.
- Security: It is necessary to ensure that AI systems are safe and reliable, avoiding risks such as information manipulation or biased decision-making.
- Accountability: A clear framework of responsibilities must be established in case of damages caused by AI systems.
- Innovation: Adequate regulation can foster responsible innovation and fair competition in the AI sector.
Legislative Initiatives in Mexico
In Mexico, several bills have been proposed to regulate AI. Some of the most prominent are:
- Federal Law Regulating Artificial Intelligence Bill: Proposed by Senator Ricardo Monreal, this text seeks to emulate the proposals of the EU and Chile, establishing a regulatory framework based on authorizations and fines.
- Bill to Issue the Law on Ethical Regulation on Artificial Intelligence and Robotics: Proposed by Representative Ignacio Loyola Vera, which focuses on regulating the use of AI for government, economic, commercial, administrative, communication, and financial purposes, in accordance with ethics and law.
Key points addressed by these initiatives
- Definition of artificial intelligence: A clear and understandable definition of what is considered artificial intelligence is sought.
- Ethical principles: Ethical principles such as transparency, accountability, and non-discrimination are established as pillars of regulation.
- Authorizations: Prior authorization is required for the development and commercialization of certain AI systems.
- Supervision: A regulatory body is established to oversee compliance with the law and impose sanctions in case of non-compliance.
- Accountability: The responsibilities of developers, providers, and users of AI systems are defined.
Challenges and Opportunities
The regulation of AI presents significant challenges, such as the rapid evolution of technology, technical complexity, and the need to find a balance between innovation and protection. However, it also represents an opportunity for Mexico to position itself as a leader in the development of ethical and responsible AI.
Aspects to consider in the regulation of AI in Mexico
- Adaptability: Regulation must be flexible and adaptable to be able to respond to rapid technological changes.
- Collaboration: Collaboration between the government, academia, the private sector, and civil society is essential to develop a consensual regulatory framework.
- Focus on risks: Regulation should focus on mitigating the risks associated with AI, without hindering innovation.
- Protection of personal data: The regulation of AI must be closely linked to the protection of personal data.
Conclusion
The regulation of artificial intelligence remains a highly relevant pending matter in Mexico. The bills proposed so far are a first step towards creating a solid and adequate regulatory framework. However, the Senate cancelled its analysis of these bills in the prior legislature, whose debate period ended on August 31, 2024.
The new legislative session commencing in September 2024 has introduced significant measures highlighting the growing importance of AI in regulation. A key development was the Senate’s establishment of the Innovation and Artificial Intelligence Commission on October 8, 2024. Additionally, a proposal presented on October 29, 2024, aims to create a regulatory framework for AI’s use in digital activities, including the creation of a National AI Center to coordinate the implementation of a national AI policy. The initiative emphasizes the regulation of AI to address the challenges of digital transformation and follows the EU AI Act’s risk-based model. It incorporates guiding principles such as ethical adherence, legality, transparency, open governance, and preventing AI’s use for social manipulation or ideological bias. As of December 13, 2024, the proposal remained under review by legislative commissions.
*****
ENRIQUE OCHOA DE GONZÁLEZ ARGUELLES
&
MITZY FERNANDA PÉREZ GALICIA
&
OMAR ALEJANDRI RODRÍGUEZ
Generative AI technologies like ChatGPT, Ernie Bot and DeepSeek have changed how we engage with media, enabling the creation of conversations, art, music, and even legal documents. While these tools bring innovation, they also raise legal and ethical concerns. Governments around the world are responding in different ways, with some moving quickly to regulate AI and others still exploring the best approach.
In South Africa, regulation is still developing, with no specific laws addressing generative AI directly. However, the country is making strides toward AI governance through a broader National AI Policy.
South Africa’s National AI Policy Framework
On 14 August 2024, the South African Department of Communications and Digital Technologies (DCDT) published The National Artificial Intelligence (AI) Policy Framework. (1) This framework aims to guide the responsible development of AI in line with global trends. Twelve strategic pillars focus on key areas like ethical AI, data protection, and infrastructure. South Africa’s socio-economic disparities are also a central concern, with the policy advocating for fairness, inclusivity, and innovation.
While this policy covers AI more broadly, it serves as the foundation for future generative AI regulations. By fostering a human-centred, transparent AI environment, South Africa aims to become a competitive global player while addressing local challenges like inequality.
Existing legal frameworks that impact AI
Although South Africa does not yet regulate generative AI explicitly, several existing laws influence its development:
- Copyright Act (2): AI-generated works may qualify for copyright protection if they meet originality requirements. However, the current law does not fully address the complexities of AI creation, leading to a need for updates to protect creators and ensure AI-driven innovation is legally recognised.
- Data privacy (3): The Protection of Personal Information Act (POPIA) requires AI systems to manage personal information transparently and lawfully, emphasising consent, security, and data minimisation. As AI technologies evolve, updates to POPIA may be necessary to better address privacy concerns. The Information Regulator has indicated that it plans to take a more active role in regulating AI.
- Constitution: South Africa’s Constitution, particularly the Bill of Rights (4), guides AI regulation through its emphasis on human dignity, equality, and protection from bias. This framework is critical in ensuring AI technologies respect individual rights while promoting social benefit.
*****
(2) https://www.michalsons.com/blog/the-situation-with-copyright-and-generative-ai/65303
(3) The Protection of Personal Information Act, 2013.
(4) Constitution of the Republic of South Africa, 1996 – Chapter 2: Bill of Rights
JOHN GILES
I. Background and Development
Generative AI models such as OpenAI’s ChatGPT, Gemini, Microsoft Copilot, Meta AI, Claude, and the disruptive newcomer DeepSeek are reshaping the landscape.
Sweden, a nation committed to innovation and social well-being, is addressing AI’s transformative potential. While the technology offers opportunities to streamline processes and enhance customer experiences, Sweden’s legal landscape, ethical guidelines, and worker rights necessitate a calibrated approach. This article explores the opportunities, challenges, and frameworks necessary for responsible AI adoption in Sweden.
II. Applications and Opportunities in Sweden
In Sweden, Generative AIs have garnered significant interest in both the private and public sectors. The technology offers opportunities to streamline processes and improve services in areas such as customer service, education, and healthcare. Swedish companies are exploring how Generative AIs can be integrated into their operations to increase productivity and enhance customer experience.
Legal and Ethical Considerations. On February 5, 2025, the Swedish Authority for Privacy Protection (IMY) published GDPR compliance guidelines for generative AI. These guidelines emphasize upholding data protection principles, clearly defining the roles of data controllers and processors, securing a legal basis for AI usage, evaluating automated decision-making and data transfers, safeguarding individual rights, and conducting thorough risk assessments.
In addition, Sweden has been proactive in developing ethical guidelines for the use of AI. Vinnova, Sweden’s innovation agency, has published recommendations for ethical and responsible AI development. These guidelines emphasize the importance of transparency, fairness, and respect for human rights when implementing AI systems like Generative AI.
Challenges and Limitations.
- Linguistic Adaptation: Generative AIs require fine-tuning for accurate use of Swedish technical terms and cultural nuances.
- Labor Market Impact: Discussions are ongoing between Swedish unions and employers on balancing AI innovation with worker rights and necessary skills development.
III. Recent Regulatory Developments in AI
H&M Employee Surveillance Case (2020) (1)
- Facts: H&M secretly recorded employees’ private information using AI surveillance systems.
- Issues: Violated GDPR transparency and data minimization rules by analyzing employee behavior without consent.
- Judgment: H&M was fined €35.3 million by the Hamburg data protection authority for violating the GDPR.
Swedish National Police – Facial Recognition and AI (2020) (2)
- Facts: The Swedish Police used facial recognition AI in public spaces without a legal basis.
- Issues: Violated GDPR’s requirement for a clear and legitimate purpose for processing personal data.
- Judgment: Usage deemed unlawful due to lack of legal basis.
AI in Recruitment – Swedish Labour Court (2021) (3)
- Facts: A company used AI to evaluate job applications.
- Issues: The company did not inform applicants about the AI’s use in automated decision-making, violating GDPR transparency.
- Judgment: Violated GDPR’s transparency principles.
Facebook (Meta) GDPR Fine (2021) (4)
- Facts: An EU-wide case resulted in a €225 million fine for Facebook (Meta) due to GDPR transparency violations related to AI-driven personal data processing for targeted advertising.
- AI Role: Facebook used AI to analyze user behavior and personalize ads without sufficient transparency about data processing.
- Judgment: The case set a precedent for AI transparency across the EU, including Sweden, highlighting the need for clear information on how AI systems use personal data.
IV. AI Use Cases in Sweden
AI Ethics Board for Healthcare. In response to the increasing use of AI in healthcare, Sweden established a national AI Ethics Board for Healthcare in late 2024. This board is currently developing guidelines for the use of language models like Generative AIs in patient care, focusing on issues of privacy, accuracy, and the doctor-patient relationship.
AI Transparency Initiatives in Municipalities. Several Swedish municipalities, including Stockholm and Gothenburg, have implemented AI transparency initiatives. These require all AI-powered public services, including those using Generative AIs, to be clearly labelled and provide explanations of their decision-making processes.
AI in Social Insurance (Försäkringskassan). In 2024, an investigation revealed that Sweden’s Social Insurance Agency (Försäkringskassan) employed AI systems that disproportionately flagged marginalized groups, including women and individuals with foreign backgrounds, for benefits fraud inspections. Amnesty International called for the immediate discontinuation of these AI systems, citing violations of rights to social security, equality, non-discrimination, and privacy.
V. Ongoing Discussions and Controversies in AI
AI in Journalism. The Swedish Union of Journalists is discussing ethical uses of Generative AI in news, focusing on journalistic integrity and transparency.
AI and Employment Law. Unions and employers are negotiating the impact of AI on employment, including retraining, job displacement, and human review of AI decisions.
VI. Generative AI in the Broader EU Context
Sweden’s approach to Generative AI aligns with the evolving EU regulatory landscape. Key EU initiatives shaping AI development include:
- The AI Act. Establishes a harmonized legal framework categorizing AI systems by risk, imposing stringent requirements on high-risk applications like Generative AI in sensitive sectors.
- EU AI Strategy. Promotes AI innovation while ensuring ethical and trustworthy AI, supporting investment and skills development.
- European Data Strategy. Aims to create a single market for data, crucial for Generative AI’s training and operation, while ensuring data protection.
- Ethics Guidelines for Trustworthy AI. Provides a framework emphasizing human oversight, safety, privacy, transparency, non-discrimination, and societal well-being.
- Coordinated Plan on AI. Outlines a coordinated approach among EU member states, aligning national strategies and promoting collaboration, including regulating AI applications based on risk.
VII. Conclusion
In conclusion, Sweden faces a pivotal moment in the AI revolution. While Generative AI offers significant potential, its integration requires careful consideration of Sweden’s legal and ethical landscape. By adhering to GDPR, promoting AI Ethics Boards, and ensuring transparency, Sweden can balance innovation with responsible AI deployment. Alignment with the EU’s AI strategy is also crucial for Sweden to contribute to beneficial AI technologies while safeguarding fundamental rights and societal values.
*****
(1) Hamburg Commissioner for Data Protection and Freedom of Information, 2020
(2) Swedish Data Protection Authority, 2020
(3) Swedish Labour Court, Case No. 2021-08-23
(4) European Data Protection Board (EDPB), Decision 2021
References:
https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.pdf
https://www.europarl.europa.eu/doceo/document/TA-9-2023-0069_EN.pdf
https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679
https://www.ai.se/en/news/update-ai-swedens-legal-expert-group
https://www.ai.se/en/decision-makers/ai-strategy-sweden
https://iclg.com/practice-areas/digital-business-laws-and-regulations/sweden
https://news.bloomberglaw.com/us-law-week/eu-ai-act-provides-gcs-innovation-guideposts-not-barriers
https://dig.watch/updates/overview-of-ai-policy-in-15-jurisdictions
https://automatingsociety.algorithmwatch.org/report2020/sweden/
https://justai.in/ai-regulation-in-sweden/
https://aaronhall.com/legal-challenges-of-ai-generated-content-ownership/
https://ai-watch.ec.europa.eu/countries/sweden/sweden-ai-strategy-report_en
KATARINA BOHM HALLKVIST