Global Perspectives on AI Regulations: An In-Depth Exploration of the OECD Principles

SSRI

Artificial intelligence (AI) refers to the emulation of human intelligence processes through the use of machines, particularly computer systems. This advanced technology encompasses a diverse range of applications, including expert systems, natural language processing, speech recognition, and machine vision. Through sophisticated algorithms and computational models, AI enables machines to perform tasks that traditionally require human intelligence, revolutionizing various fields and industries.

Over the past half-decade, the field of Artificial Intelligence (AI) has witnessed significant advancements across its various domains, marking notable progress in vision, speech recognition and generation, natural language processing (comprising both understanding and generation), image and video generation, multi-agent systems, planning, decision-making, and the seamless integration of vision with motor control for robotics. Breakthrough applications have notably emerged in diverse sectors such as gaming, medical diagnosis, logistics systems, autonomous driving, language translation, and interactive personal assistance, underscoring the transformative impact of AI.

Contemporary society is increasingly leveraging AI for diverse applications, ranging from voice dictation on mobile devices and personalized shopping recommendations to news and entertainment suggestions. AI is also contributing to enhancing virtual backgrounds during conference calls and facilitating numerous other functionalities. At the heart of these advancements lies machine learning, with a particular emphasis on deep learning, including the innovative use of generative adversarial networks (GANs) and reinforcement learning, empowered by substantial data sets and robust computing resources.

GANs, in particular, represent a significant breakthrough, imbuing deep networks with the capability to generate artificial content, including realistic images that convincingly mimic authentic visuals. Comprising a dual structure of a generator, responsible for crafting lifelike content, and a discriminator, tasked with distinguishing between generated and naturally occurring content, GANs evolve and improve through mutual learning. A notable application is observed in GAN-based medical-image augmentation, where artificial images are automatically generated to augment training data sets for diagnostic purposes.
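
To make the generator-discriminator interplay concrete, the following is a minimal, illustrative sketch of a GAN training loop in PyTorch. It uses a toy one-dimensional "dataset", small fully connected networks, and hypothetical hyperparameters rather than the convolutional image models used in practice, so it should be read as a sketch of the training dynamic, not a production pipeline.

```python
# Minimal GAN sketch (illustrative only): a generator learns to mimic samples
# from a simple one-dimensional "real data" distribution while a discriminator
# learns to tell real samples from generated ones.
import torch
import torch.nn as nn

NOISE_DIM = 8   # size of the generator's random input (hypothetical choice)
DATA_DIM = 1    # toy one-dimensional "data" instead of images
BATCH = 64

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 32), nn.ReLU(),
    nn.Linear(32, DATA_DIM),
)
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),  # estimated probability that input is "real"
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # "Real" samples: a Gaussian centred at 3.0 stands in for authentic data.
    real = torch.randn(BATCH, DATA_DIM) + 3.0
    fake = generator(torch.randn(BATCH, NOISE_DIM))

    # Discriminator update: label real samples 1 and generated samples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

In GAN-based medical-image augmentation, the same two-player training dynamic is applied to image data with convolutional generator and discriminator networks, and the trained generator's outputs are then added to the diagnostic training set.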

The ascendancy of deep learning, especially in the last decade, has garnered widespread acknowledgment of its formidable capabilities. Ongoing research endeavors seek to elucidate the underlying principles governing the effectiveness of deep learning, unraveling the conditions that optimize its performance. The migration of machine-learning technologies from academic realms to real-world applications over the past decade has engendered a landscape rich in promise and, concurrently, raised pertinent concerns, shaping the trajectory of AI’s impact on various facets of our daily lives.

Technological opportunities and threats tend to advance in tandem, and artificial intelligence is no exception: the technology and the apprehensions surrounding its implications have developed rapidly and in parallel. While concerns about the societal impact of innovative technologies have historically accompanied many inventions, AI is distinctive in that some of the most pronounced warnings have been articulated by eminent figures intimately familiar with the field. Prominent voices such as Elon Musk, Bill Gates, and Stephen Hawking have expressed noteworthy concerns, underscoring the unique challenges and responsibilities associated with the advancement of AI technology.

The maturation of Artificial Intelligence (AI) and its pervasive integration into various sectors have prompted heightened scrutiny and investment from governments and public agencies worldwide. This intensified focus has been particularly pronounced in the last half-decade, a period marked by the ubiquitous adoption of AI in consumer products and the escalating prominence of private and government applications, notably in areas such as facial recognition, which has garnered considerable public attention.

The landscape of AI governance has evolved significantly in response to these developments. Over the past five years, more than 60 countries have launched national AI initiatives, reflecting a concerted global effort to harness the potential of AI and address its associated challenges. Simultaneously, substantial multilateral endeavors have emerged to foster effective international collaboration on AI-related matters.

The growing attention from governments reflects a nuanced understanding that AI is a multifaceted domain with far-reaching implications. It intersects with diverse policy priorities, encompassing privacy, equity, human rights, safety, economic considerations, and both national and international security. As AI technologies continue to evolve, policymakers are confronted with the imperative to strike a delicate balance between fostering innovation and ensuring ethical, equitable, and secure deployment.

In a notable incident, Elaine Herzberg (August 2, 1968 – March 18, 2018) became the first documented pedestrian fatality involving a self-driving car. The incident occurred on the evening of March 18, 2018, in Tempe, Arizona, United States. Herzberg, who was pushing a bicycle across a four-lane road, was struck by an Uber test vehicle operating in self-driving mode with a human safety driver in the driver’s seat. She was taken to a local hospital, where she succumbed to her injuries.

In a more recent incident, on November 8, 2023, in South Korea, a man was crushed to death by an industrial robot that failed to distinguish him from the boxes of food it was handling.

In light of such incidents, there is a compelling need for the establishment of robust legal frameworks to effectively regulate the potential threats posed by artificial intelligence (AI).

Numerous international forums are actively engaged in discussions on the collaborative governance of artificial intelligence (AI), reflecting the growing recognition of the need for coordinated efforts in this realm. Although several countries have yet to enact specific regulations for AI, with most existing rules primarily focused on data usage, the global landscape is witnessing the emergence of various initiatives and endeavors aimed at formulating responsible policy frameworks for the development and deployment of AI.

One notable example is the Organisation for Economic Co-operation and Development (OECD), consisting of 38 member countries, which has articulated the AI Principles. These principles serve as guidelines for nations to navigate the ethical and regulatory considerations associated with AI implementation.

In another noteworthy development, negotiators from the European Parliament and the Council reached a provisional agreement on the proposed Artificial Intelligence Act, a set of harmonized rules governing artificial intelligence (AI). The draft regulation is designed to guarantee the safety of AI systems placed on the European market and used within the European Union, to uphold fundamental rights and EU values, and to foster an environment that encourages investment and innovation in AI within Europe. The agreement represents a significant stride toward a comprehensive and standardized approach to regulating AI, ensuring its responsible use and alignment with the principles integral to the European Union’s values and rights.

 

The OECD Principles on Artificial Intelligence

The OECD AI Principles advocate for the use of AI that is both innovative and trustworthy, while respecting human rights and democratic values. Adopted in May 2019, these principles establish standards for AI that are practical and flexible enough to endure over time. The OECD has introduced two sets of principles: one comprising values-based principles and the other consisting of recommendations for policymakers.

Values-based principles

  • Inclusive growth, sustainable development and well-being (Principle 1.1)

‘Stakeholders should proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet, such as augmenting human capabilities and enhancing creativity, advancing inclusion of underrepresented populations, reducing economic, social, gender and other inequalities, and protecting natural environments, thus invigorating inclusive growth, sustainable development and well-being.’

This fundamental principle acknowledges the imperative of guiding the evolution and application of AI toward fostering prosperity and positive outcomes for both people and the planet. The concept of Trustworthy AI assumes a pivotal role in advancing inclusive growth, sustainable development, and overall well-being, aligning with global development objectives. It is evident that AI, when approached responsibly, holds the potential to significantly contribute to the achievement of Sustainable Development Goals (SDGs) across various domains such as education, health, transportation, agriculture, environmental sustainability, and the development of resilient cities.

The stewardship role inherent in this principle is tasked with addressing concerns related to inequality and the potential widening of technology access gaps, particularly between developed and developing nations. Leveraging the OECD Framework for Policy Action on Inclusive Growth as a valuable reference point, this stewardship aims to guide policy actions that ensure an inclusive path toward a more robust and confident future for all.

Furthermore, the principle acknowledges the inherent risk that AI systems may perpetuate existing biases, posing a disparate impact on vulnerable and underrepresented populations, including ethnic minorities, women, children, the elderly, and those with lower education or skill levels. To counteract this risk, the principle emphasizes the dual role of AI in empowering all segments of society and actively working to mitigate biases.

Responsible stewardship also entails a recognition that, throughout the entire lifecycle of AI systems, stakeholders have the ability and responsibility to encourage the development and deployment of AI for beneficial outcomes while implementing appropriate safeguards. The definition of these positive outcomes and the optimal means to achieve them necessitate collaborative efforts involving multiple disciplines and stakeholders, facilitated through social dialogue. Additionally, fostering a meaningful, well-informed, and iterative public dialogue that is inclusive of all stakeholders is identified as a critical element in enhancing public trust and understanding of AI.

  • Human-centered values and fairness (Principle 1.2)

‘AI actors should respect the rule of law, human rights and democratic values, throughout the AI system lifecycle. These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognised labour rights.

To this end, AI actors should implement mechanisms and safeguards, such as capacity for human determination, that are appropriate to the context and consistent with the state of art.’

The development of AI should unwaveringly adhere to human-centric values encompassing fundamental freedoms, equality, fairness, the rule of law, social justice, data protection, privacy, consumer rights, and commercial fairness. Certain applications or uses of AI systems carry implications for human rights, posing potential risks of deliberate or accidental infringement on human rights as outlined in the Universal Declaration of Human Rights. Therefore, it becomes imperative to advocate for “values-alignment” in AI systems, ensuring their design incorporates appropriate safeguards, including provisions for human intervention and oversight tailored to specific contexts.

This alignment is crucial in ensuring that the behaviors of AI systems not only safeguard but actively promote human rights and align with human-centric values throughout their operational lifespan. Maintaining fidelity to shared democratic values is essential for building public trust in AI and endorsing its use in safeguarding human rights while mitigating discrimination and preventing unfair or unequal outcomes.

This principle also recognizes the significance of measures such as human rights impact assessments (HRIAs) and human rights due diligence, the incorporation of human determination (commonly known as a “human in the loop”), adherence to codes of ethical conduct, as well as the implementation of quality labels and certifications aimed at fostering human-centric values and ensuring fairness in AI applications.

  • Transparency and Explainability (Principle 1.3)

‘AI actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information, appropriate to the context, and consistent with the state of art:

to foster a general understanding of AI systems,

to make stakeholders aware of their interactions with AI systems, including in the workplace,

to enable those affected by an AI system to understand the outcome, and,

to enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision.’

The term “transparency” encompasses various dimensions, with this Principle primarily focusing on disclosure when AI is utilized, such as in predictions, recommendations, decisions, or when users directly engage with AI-powered agents like chatbots. The level of disclosure should be proportional to the significance of the interaction, recognizing that the increasing prevalence of AI applications may influence the feasibility and desirability of disclosure in certain situations.

In addition, transparency involves empowering individuals to comprehend the development, training, operation, and deployment of an AI system in a specific application domain. This understanding allows consumers to make more informed choices, emphasizing the provision of meaningful information about what is conveyed and why, without necessarily disclosing proprietary code or datasets due to technical complexity or intellectual property considerations.

Another aspect of transparency pertains to fostering public discourse and establishing dedicated entities, as needed, to enhance general awareness and understanding of AI systems, ultimately bolstering acceptance and trust.

As for “explainability,” it entails ensuring that those impacted by an AI system’s outcomes can understand the processes leading to those outcomes. This involves presenting easily understandable information to affected individuals, allowing them, where practicable, to challenge the outcomes and comprehend the factors and logic influencing them. However, achieving explainability varies based on context, with considerations for potential trade-offs, such as accuracy, performance, privacy, and security. In certain instances, demanding explainability may compromise the effectiveness of the system or disproportionately disadvantage smaller AI actors.

When providing explanations, AI actors should aim for clarity and simplicity, offering, as appropriate to the context, insights into key decision factors, determinants, data, logic, or algorithms influencing specific outcomes. This should facilitate individuals’ comprehension and ability to contest outcomes while respecting obligations related to personal data protection, when applicable.
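
As an illustration of what "plain and easy-to-understand information on the factors" might look like in practice, the sketch below fits a small interpretable model to synthetic data and lists the features that most influenced one decision. The dataset, feature names, and use of a scikit-learn logistic regression are assumptions made purely for the example; appropriate explanation methods will vary with the context and the system.

```python
# Illustrative sketch: surface the main factors behind one model decision.
# The synthetic data and feature names are assumptions for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "late_payments"]

# Synthetic "applicants": approval is likelier with higher income,
# a lower debt ratio, and fewer late payments.
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

applicant = X[0]
decision = model.predict(applicant.reshape(1, -1))[0]

# Per-feature contribution to the decision score (coefficient x value):
# a simple, plain-language basis for explaining and contesting the outcome.
contributions = model.coef_[0] * applicant
for name, value in sorted(zip(feature_names, contributions),
                          key=lambda item: -abs(item[1])):
    print(f"{name}: {value:+.2f}")
print("decision:", "approved" if decision == 1 else "declined")
```

A ranked list of decision factors of this kind is one simple way to give affected individuals something concrete to understand and, where practicable, challenge, without disclosing proprietary code or the underlying training data.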

  • Robustness, security and safety (Principle 1.4)

‘AI systems should be robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety risk.

To this end, AI actors should ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system’s outcomes and responses to inquiry, appropriate to the context and consistent with the state of art.

AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis to address risks related to AI systems, including privacy, digital security, safety and bias.’

Addressing the safety and security challenges associated with intricate AI systems is paramount for instilling confidence in AI. In this context, robustness is defined as the capacity to withstand adverse conditions, including digital security risks. The guiding principle asserts that AI systems should not pose unreasonable safety risks, encompassing both physical security and potential hazards arising from normal or foreseeable use or misuse throughout their lifecycle. Existing legal frameworks, particularly in consumer protection, delineate what qualifies as unreasonable safety risks. It is the responsibility of governments, in consultation with stakeholders, to ascertain the extent to which these regulations apply to AI systems.

AI actors can adopt a risk management approach to identify and safeguard against foreseeable misuse and risks associated with using AI systems for purposes beyond their original design. Robustness, security, and safety in AI are interconnected; for instance, digital security can impact the safety of interconnected products like automobiles and home appliances if risks are not adequately managed.

The recommendation emphasizes two key strategies for ensuring robust, safe, and secure AI systems:

  1. Traceability and subsequent analysis and inquiry

Similar to explainability, traceability aids in the analysis and inquiry into the outcomes of an AI system, serving as a means to promote accountability. While distinguishable from explainability, traceability focuses on maintaining records of data characteristics, such as metadata, data sources, and data cleaning, rather than the data itself. This approach facilitates understanding outcomes, preventing future errors, and enhancing the overall trustworthiness of the AI system. A minimal sketch of this kind of provenance record follows this list.

  2. Risk management approach

The recommendation acknowledges the potential risks that AI systems pose to various aspects, including human rights, bodily integrity, privacy, fairness, equality, and robustness. It also recognizes the associated costs of safeguarding against these risks by incorporating transparency, accountability, safety, and security into AI systems. Recognizing that different uses of AI present varying levels of risk, a risk management approach applied across the AI system lifecycle can help identify, assess, prioritize, and mitigate potential risks that may adversely impact a system’s behavior and outcomes. Leveraging established OECD standards on risk management, particularly in the context of digital security risk management and risk-based due diligence, can provide valuable guidance. Documenting risk management decisions at each lifecycle phase contributes to the implementation of other principles, such as transparency (1.3) and accountability (1.5).
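
As promised under traceability above, here is a minimal provenance-record sketch: it logs dataset characteristics, sources, and lifecycle decisions as structured metadata rather than storing the data itself, so that outcomes can later be analysed and audited. The schema, field names, and example entries are hypothetical illustrations, not an OECD-prescribed format.

```python
# Illustrative traceability record: capture metadata about data sources and
# processing decisions (not the data itself) so outcomes can be audited later.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class LifecycleEvent:
    phase: str          # e.g. "data collection", "cleaning", "training"
    description: str    # what was done and why
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

@dataclass
class ProvenanceRecord:
    system_name: str
    data_sources: list[str]
    events: list[LifecycleEvent] = field(default_factory=list)

    def log(self, phase: str, description: str) -> None:
        self.events.append(LifecycleEvent(phase, description))

# Hypothetical usage for a diagnostic-imaging model.
record = ProvenanceRecord(
    system_name="chest-xray-classifier",
    data_sources=["hospital_archive_2022", "public_benchmark_v3"],
)
record.log("cleaning", "removed duplicate studies; de-identified metadata")
record.log("training", "augmented minority classes with GAN-generated images")

print(json.dumps(asdict(record), indent=2))
```

Documented entries of this kind, kept at each lifecycle phase, also provide the evidence base that the risk management approach described above relies on when risks are identified, prioritized, and revisited.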

  • Accountability (Principle 1.5)

‘AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and consistent with the state of art.’

The concepts of accountability, responsibility, and liability, while closely related, possess nuanced distinctions that can vary across cultures and languages. Broadly, “accountability” implies an ethical or moral expectation, often outlined in management practices or codes of conduct, guiding individuals or organizations in their actions. It entails the ability to elucidate the reasons behind decisions and actions. In the event of an unfavorable outcome, accountability also involves taking corrective measures to ensure improved outcomes in the future. On the other hand, “liability” predominantly denotes adverse legal consequences resulting from an individual’s or organization’s actions or inaction. “Responsibility” encompasses ethical or moral expectations and can be invoked in both legal and non-legal contexts to signify a causal connection between an actor and an outcome.

Given these nuanced meanings, the term “accountability” most accurately encapsulates the essence of this principle. In this context, “accountability” denotes the expectation that organizations or individuals will uphold the proper functioning of AI systems throughout their lifecycle. This involves activities such as design, development, operation, or deployment, aligning with their roles and relevant regulatory frameworks. Demonstrating accountability entails showcasing responsible actions and decision-making processes, which may include providing documentation on key decisions made throughout the AI system lifecycle or facilitating auditing when justified.

Recommendations for policymakers

  • Investing in AI research and development (Principle 2.1)

‘- Governments should consider long-term public investment, and encourage private investment, in research and development, including inter-disciplinary efforts, to spur innovation in trustworthy AI that focuses on challenging technical issues and on AI-related social, legal and ethical implications and policy issues.

- Governments should also consider public investment and encourage private investment in open datasets that are representative and respect privacy and data protection to support an environment for AI research and development that is free of inappropriate bias and to improve interoperability and use of standards.’

 

AI-driven scientific breakthroughs have the potential to address societal challenges and foster new industries. Recent private-sector investment has concentrated on applied AI R&D, which underscores the importance of basic research, a long-term perspective in research policy, and continued public investment in long-term research. Governments, possibly with support from foundations dedicated to the public good, play a crucial role in driving trustworthy AI innovation, particularly in areas underserved by market-driven investments, to ensure widespread benefits.

Publicly funded research is pivotal in addressing complex technological issues that impact a diverse range of AI actors and stakeholders. Covering AI applications, techniques for teaching AI systems, optimization for data reduction, and research on societal considerations, publicly funded research contributes to transparency, explainability, and the protection of data integrity.

Given AI’s extensive impact on various aspects of life, the recommendation calls for interdisciplinary research on the social, legal, and ethical implications of AI relevant to public policy.

Recognizing the crucial role of data in the AI system lifecycle, the availability of open, accessible, and representative datasets, while respecting privacy and data protection, intellectual property rights, and other rights, is vital for advancing AI R&D. While achieving a completely “bias-free” environment may be challenging, governments can contribute to mitigating bias risks in AI systems by providing representative datasets that are publicly available, fostering a more inclusive and fair AI landscape.

  • Fostering a digital ecosystem for AI (Principle 2.2)

‘Governments should foster the development of, and access to, a digital ecosystem for trustworthy AI. Such an ecosystem includes in particular digital technologies and infrastructure, and mechanisms for sharing AI knowledge, as appropriate. In this regard, governments should consider promoting mechanisms, such as data trusts, to support the safe, fair, legal and ethical sharing of data.’

The establishment of trustworthy AI relies on nurturing a supportive ecosystem. This recommendation urges governments, in collaboration with the private sector where applicable, to actively contribute to or encourage the development of the necessary infrastructure and digital technologies for AI. This collaborative effort should consider national frameworks to ensure coherence.

Critical components of the required digital technologies and infrastructure include access to cost-effective high-speed broadband networks and services, computational power, data storage, and supporting technologies like the Internet of Things (IoT). Notably, recent advancements in AI owe part of their success to a substantial increase in computational speed, particularly with the utilization of graphics processing unit resources. Simultaneously, the establishment of effective mechanisms for sharing AI knowledge, covering data, code, algorithms, models, research, and know-how, is vital for understanding and actively participating in the AI system lifecycle. These mechanisms must uphold principles of privacy, intellectual property, and other rights. The pivotal role of open-source tools and high-quality training datasets in managing and utilizing AI cannot be overstated, facilitating the diffusion of AI technology and enabling collaborative problem-solving.

In the pursuit of data-sharing frameworks like data trusts or trusted third parties, governments should be attentive to associated risks. These risks may include confidentiality and privacy breaches, threats to intellectual property rights, data protection concerns, competition and commercial interests, as well as potential national security and digital security risks. Regarding the datasets themselves, and in alignment with recommendation 2.1, governments are encouraged to advocate for and utilize datasets that are as inclusive, diverse, and representative as possible.

Special emphasis in the recommendations for policymakers is placed on policies tailored for small and medium-sized enterprises (SMEs). The aim is to streamline SME access to data, AI technologies, and relevant infrastructure, such as connectivity, computing capacities, and cloud platforms. This approach seeks to cultivate digital entrepreneurship, encourage competition, and foster innovation through the adoption of AI.

  • Shaping an enabling policy environment for AI (Principle 2.3)

‘Governments should promote a policy environment that supports an agile transition from the research and development stage to the deployment and operation stage for trustworthy AI systems. To this effect, they should consider using experimentation to provide a controlled environment in which AI systems can be tested, and scaled-up, as appropriate.

Governments should review and adapt, as appropriate, their policy and regulatory frameworks and assessment mechanisms as they apply to AI systems to encourage innovation and competition for trustworthy AI.’

This recommendation explores the policy environment essential for fostering AI innovation, encompassing institutional, policy, and legal frameworks. It complements recommendation 2.2, which focuses on the requisite physical and technological infrastructure, and, like that recommendation, underscores the significance of paying special attention to SMEs.

Addressing the rapid pace of AI developments, establishing a policy environment that is both flexible enough to keep pace with advancements and conducive to innovation while ensuring safety and legal certainty is a formidable challenge. To tackle this challenge, the recommendation aims to enhance the adaptability, reactivity, versatility, and enforcement of policy instruments. The objective is to responsibly expedite the transition from development to deployment and, where applicable, commercialization, all while adopting a human-centered approach in fostering AI use.

Central to the recommendation is the promotion of experimentation as a mechanism for creating controlled and transparent environments for testing AI systems and nurturing AI-based business models that can offer solutions to global challenges. Operating in “start-up mode,” policy experiments involve deploying, evaluating, modifying, and scaling up or down—or even abandoning—experiments based on test outcomes.

  • Building human capacity and preparing for labour market transformation (Principle 2.4)

‘Governments should collaborate closely with stakeholders to prepare for the transformation of the world of work and society. They should empower people to effectively use and interact with AI systems across a range of applications, including by equipping them with the necessary skills.

To ensure a fair transition for workers as AI is deployed, governments should take steps, including through social dialogue, such as implementing training programs throughout the working life, providing support for those affected by displacement, and facilitating access to new opportunities in the labor market.

Governments should work closely with stakeholders to promote the responsible use of AI at work, aiming to enhance the safety of workers and the quality of jobs. Efforts should also focus on fostering entrepreneurship and productivity, with the ultimate goal of ensuring that the benefits from AI are distributed broadly and fairly.’

The widespread integration of AI is anticipated to reshape various aspects of life, particularly in the realms of labor, employment, and the workplace. AI is expected to complement human efforts in certain tasks, replace them in others, and give rise to novel job roles and organizational structures. If not properly managed, these transformations in the labor market could entail substantial economic and social consequences. To navigate these shifts equitably, policymakers, collaborating with stakeholders such as social partners, employer organizations, and trade unions, must address critical considerations related to social protection, educational programs, skills development, labor market regulations, public employment services, industrial policies, taxation, and the financing of transitions.

Ensuring fair transitions necessitates policies that promote lifelong learning, skills development, and training. These policies should empower individuals, especially workers in various contractual contexts, to engage with AI systems, adapt to AI-induced changes, and seize new opportunities in the labor market. This encompasses the development of skills essential for AI practitioners (currently in short supply) and those required for other professionals (like doctors or lawyers) to effectively utilize AI within their domains, thus enhancing human capabilities. Simultaneously, skills development policies should emphasize distinctly human attributes like judgment, creative and critical thinking, and interpersonal communication to complement AI systems.

The evolution of the labor market due to AI may warrant adjustments or the establishment of labor standards and agreements between management and workers. These adaptations should reflect the changes brought about by AI, address potential challenges related to equality, diversity, and fairness (for instance, arising from data collection and processing), and promote reliable, safe, and productive workplaces. Achieving this involves a combination of regulatory measures, social dialogue, and collective bargaining, aiming to strike a balance between workplace flexibility and the preservation of workers’ autonomy and job quality.

  • International co-operation for trustworthy AI (Principle 2.5)

‘Governments, including developing countries and with stakeholders, should actively cooperate to advance these principles and to progress on responsible stewardship of trustworthy AI.

Governments should work together in the OECD and other global and regional fora to foster the sharing of AI knowledge, as appropriate. They should encourage international, cross-sectoral and open multi-stakeholder initiatives to garner long-term expertise on AI.

Governments should promote the development of multi-stakeholder, consensus-driven global technical standards for interoperable and trustworthy AI.

Governments should also encourage the development, and their own use, of internationally comparable metrics to measure AI research, development and deployment, and gather the evidence base to assess progress in the implementation of these principles.’

This recommendation underscores the importance of international collaboration involving governments and stakeholders to address the global opportunities and challenges presented by AI. The collaborative efforts extend to promoting the implementation and dissemination of these principles and policies not only within the OECD and partner countries but also in developing and least developed nations, fostering inclusivity and engagement with various stakeholders.

International cooperation serves as a platform to utilize international and regional forums for sharing AI knowledge, thereby cultivating long-term expertise. It also aims to establish technical standards for interoperable and trustworthy AI, along with the development, dissemination, and utilization of metrics to assess AI system performance. These metrics encompass accuracy, efficiency, alignment with societal goals, fairness, and robustness. Furthermore, international cooperation facilitates the seamless cross-border flow of data while upholding trust, safeguarding security, privacy, intellectual property, human rights, and democratic values—critical elements for fostering AI innovation on a global scale.


 

References

https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

https://ai100.stanford.edu/gathering-strength-gathering-storms-one-hundred-year-study-artificial-intelligence-ai100-2021-1/sq2#underlyingtech

https://www.cliffordchance.com/content/dam/cliffordchance/briefings/2023/12/the-eus-ai-act-what-do-we-know-about-the-critical-political-deal.pdf

https://www.oecd.org/digital/artificialintelligence/#:~:text=Artificial%20Intelligence%3A%20OECD%20Principles&text=The%20OECD%20Principles%20on%20Artificial,human%20rights%20and%20democratic%20values.

https://ai100.stanford.edu/gathering-strength-gathering-storms-one-hundred-year-study-artificial-intelligence-ai100-2021-1/sq7#_2021SQ7ref5

https://futureoflife.org/national-international-ai-strategies/

https://www.oecd-ilibrary.org/science-and-technology/state-of-implementation-of-the-oecdai-principles_1cd40c44-en

 

Photo Credit – https://www.jonesday.com/en/insights/2023/05/senate-hearings-signal-bipartisan-drive-for-ai-regulation
