Forty-two countries consider AI regulation important enough to have adopted the OECD's AI principles, which emphasize that AI should be governed by human-centred values.
The Organisation for Economic Co-operation and Development's (OECD) recommendation, an intergovernmental standard for Artificial Intelligence (AI) policy, pushes for transparency and honesty in AI while urging governments and private players to address bias and data privacy.
The OECD unveiled the first intergovernmental standard for AI policies, which 36 of the organisation's member countries have signed, along with six non-member countries: Argentina, Brazil, Colombia, Costa Rica, Peru, and Romania. This is a much-needed step towards building an ethical AI industry.
The document, the Recommendation of the Council on Artificial Intelligence, aims to aid the formation of a global policy ecosystem that leverages AI's benefits while keeping an eye on its ethical side.
The Recommendation identifies five complementary values-based principles to responsibly administer AI that can be trusted.
It also envisages that those involved in AI promote and implement the principles.
The key takeaways of the principles include inclusive growth, sustainable development and well-being; human-centred values and fairness; transparency and explainability; robustness, security and safety; and accountability.
The Recommendation also urges policy-makers to invest in AI research and development; foster a digital ecosystem for AI; shape an enabling policy environment for AI; build human capacity and prepare for labour market transformation; and co-operate internationally for trustworthy AI.
For several years, the OECD has been shaping this policy debate. In 2016 it held a Foresight Forum on AI, and in 2017 an international conference, AI: Intelligent Machines, Smart Policies.
The organisation also conducted analytical and measurement work that, besides giving an overview of the AI technical landscape, highlights the economic and social impacts of AI applications. This work also identifies policy considerations and describes AI initiatives from governments and other stakeholders at national and international levels.
The document urges stakeholders to responsibly pursue AI's benefits for people and the planet, such as augmenting human capabilities and enhancing creativity. It highlights the need to include underrepresented populations, reduce societal inequalities, and protect natural environments.
Driving the need for inclusive growth, sustainable development, and well-being, the document echoes problems AI has faced in recent years, such as systems trained on socially biased data or made available only to those willing to pay well for them.
The recommendation strongly demands that AI players “respect the rule of law, human rights and democratic values, throughout the AI system lifecycle,” including “freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognised labour rights.”
To achieve this, it urges the implementation of appropriate mechanisms and safeguards.
Warning of prevailing problems around privacy, digital security, safety, and bias, the document urges AI actors to continuously apply a systematic risk-management approach to each phase of the AI system lifecycle.
Robustness against faulty operation is also recommended, so that AI systems do not pose unreasonable safety risks in cases of misuse or adverse conditions. To ensure this, AI actors are urged to maintain traceability of the datasets, processes, and decisions made throughout the AI system lifecycle.
The document also urges governments to consider long-term public investment, as well as private investment, in R&D to spur innovation that solves AI-related social, legal, and ethical issues. It likewise calls for respecting privacy and data protection, so that AI R&D remains free of inappropriate bias.
Governments are also urged to consider promoting mechanisms like data trusts to support the safe, fair, legal and ethical sharing of data.
The document says that government policy on AI should support an agile transition from R&D to deployment and operation, which requires experimentation in controlled environments.
The inevitable transformation that AI is poised to bring to workplaces and society means that governments must prepare by working closely with stakeholders.
“They should empower people to effectively use and interact with AI systems across the breadth of applications, including by equipping them with the necessary skills,” the document says.
To ease the transition, governments are recommended to foster social dialogue, run training programmes, support displaced workers, and create new opportunities in the labour market.
The OECD also urges AI actors to be transparent about the technology, practising ‘responsible disclosure’ and thus providing meaningful contextual information that keeps stakeholders aware of their interactions with AI systems.
Given growing fears of AI displacing jobs, the document says that workers who might be adversely affected by an AI system especially deserve clear explanations. A regulated AI industry has the potential to develop in a way that is acceptable to all stakeholders.
The OECD is an intergovernmental economic organisation that was founded in 1961 to stimulate economic progress and world trade. Its 36 member countries include the US, Australia, Belgium, Chile, Germany, Israel, Latvia, and Slovenia.