An AI academy for developers in Barcelona, Spain asks that all of its students sign an ethical contract similar to the Hippocratic Oath that physicians take.
I will apply AI towards the benefit of humanity at all costs.
I will respect every human’s privacy as if it were my own.
I will do everything in my power to acquire knowledge and share it with others.
I will set positive models for others to emulate.
I will consider the impact of my models and disobey unjust requests.
I will train my models again and again until I succeed.
I will consider the impact of historical and new bias in my work.
I will preserve human concerns over technological ones.
I will work to create a new set of conditions that reduce inequalities.
My AI models will be designed to prevent harm at all costs.
I will keep my word.
The AI Hero Pledge from The Academy.ai
Do we expect more from algorithms than we do from ourselves? How do we approach this technology that seems to possess too much power for its own good? Can we trust AI when it spews back our own biases?
To learn more about the ethics of AI and human bias implicit in the programming, The Sociable spoke with Jan Carbonell, Co-Founder and CEO of The Academy.ai ahead of the Horasis Global Meeting that took place in Cascais, Portugal, from April 6-9, where he was a participant.
The Academy.ai is a startup based in Barcelona, Spain, that has set its sights on teaching people about AI with a grounding in ethics.
“Our academy teaches people about AI coding but also with an ethical background or an understanding of the other implications of AI and how they can make a positive impact in society,” he explains.
Carbonell views ethics as a core part of AI in general, which is why it is also a core part of the curriculum at the academy. The Academy.ai team has found an interesting way of ensuring that their students pay attention to ethics.
“We have an ethical contract with our students before we accept them into the academy. We ask them to sign this contract, akin to doctors taking the Hippocratic oath,” the 24-year-old CEO says.
They are also working on ways to hold students to the ethical contract in the future.
“Not only do students abide to fulfill this contract now, but we have a way of tracking that you haven’t broken this contract in the future as well,” he says.
Read More: Terence McKenna’s cyberdelic evolution of consciousness as it relates to AI
Wanting future AI professionals to stick to their own ethical code instead of having to abide by somebody else’s, the academy also includes a clause that allows programmers to disobey unfair requests from their bosses.
Read More: Amazon proposes ethical guidelines on facial recognition software use
“I have worked in consulting and am aware of how far ethical codes can be pushed. The idea is that even if we’re working with technology, we keep humans in mind,” he says.
Last year, Amazon had to scrap an AI recruiting tool after it taught itself to prefer male candidates over female ones. Google, too, came under fire when its Photos app labeled images of Black people as gorillas.
AI has been criticized for displaying social bias against particular groups. How serious is this flaw, and can we do anything to overcome it? It is often said that the flaw lies in the algorithms themselves, because algorithms are, in a way, set up to replicate our biases, learning from our prejudices in order to predict patterns.
However, according to Carbonell, the bias is not in the algorithms but in the models and the training data sets that feed them.
“I’m sure that Amazon has had a larger number of male candidates than women candidates. So, it’s likely that the algorithm has learned to push those women down, because they had a smaller chance of success in the past when Amazon hired,” he says.
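Carbonell’s point can be made concrete with a toy sketch. The records and the scoring rule below are invented for illustration, not anything Amazon or The Academy.ai uses: a naive model that scores candidates by their group’s historical hire rate reproduces the historical imbalance exactly, even though the code itself contains no bias.

```python
# Hypothetical, deliberately imbalanced hiring history (values invented):
# far more successful male candidates than female ones.
historical_hires = [
    ("male", True), ("male", True), ("male", True), ("male", False),
    ("female", True), ("female", False), ("female", False), ("female", False),
]

def hire_rate(records, gender):
    """Fraction of past candidates of this gender who were hired."""
    outcomes = [hired for g, hired in records if g == gender]
    return sum(outcomes) / len(outcomes)

# The "model" scores each group purely by its historical success rate:
# it learns nothing about individual merit, only the past imbalance.
score = {g: hire_rate(historical_hires, g) for g in ("male", "female")}
print(score)  # {'male': 0.75, 'female': 0.25}
```

The code treats both groups identically; the skewed output comes entirely from the skewed history it was fed, which is exactly where Carbonell locates the bias.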
“I have worked in consulting and am aware of how far ethical codes can be pushed. The idea is that even if we’re working with technology, we keep humans in mind”
To overcome these problems, he insists, we should first make models more explainable. “The models should explain why they are behaving the way they are behaving, a field of AI we are not focusing on yet. We support it, but we’re not working on it,” he says.
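One way to picture what an explainable model looks like: instead of reporting only a final score, it reports each input’s contribution to that score. The linear résumé-scoring model below is a toy with invented feature names and weights, used only to show the idea.

```python
# Toy linear resume-scoring model; feature names and weights are invented.
weights = {"years_experience": 5, "referrals": 3, "gap_in_cv": -4}
candidate = {"years_experience": 4, "referrals": 1, "gap_in_cv": 1}

# Explainability for a linear model: per-feature contributions show
# exactly *why* the final score is what it is.
contributions = {f: weights[f] * candidate[f] for f in weights}
total_score = sum(contributions.values())
print(contributions)  # {'years_experience': 20, 'referrals': 3, 'gap_in_cv': -4}
print(total_score)    # 19
```

With a breakdown like this, a rejected candidate could at least be told which factors drove the decision, something a human interviewer, as Carbonell notes later, rarely offers.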
Secondly, he says, the data sets should be balanced, and AI itself should be distributed more widely. Certain populations might be tempted to leapfrog, skipping stages of technological progress in order to be on par with more developed nations. However, this kind of hurry can make us lose sight of the human element.
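Balancing a data set can be as simple as resampling under-represented groups until every group carries equal weight. Here is a naive random-oversampling sketch (the group labels and records are invented), one of the standard techniques for the imbalance problem described above:

```python
import random

def oversample_balance(records, key):
    """Duplicate records from under-represented groups until every group
    is as large as the largest one (naive random oversampling)."""
    groups = {}
    for record in records:
        groups.setdefault(key(record), []).append(record)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Top up smaller groups with random duplicates of their own members.
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

# Hypothetical skewed training set: six "male" records, two "female" records.
data = [("male", i) for i in range(6)] + [("female", i) for i in range(2)]
balanced = oversample_balance(data, key=lambda r: r[0])
counts = {g: sum(1 for r in balanced if r[0] == g) for g in ("male", "female")}
print(counts)  # {'male': 6, 'female': 6}
```

Oversampling duplicates existing records rather than inventing new ones, so it equalizes group influence on training without fabricating data, though real pipelines also use undersampling or reweighting.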
Companies like Microsoft are putting in efforts to monitor their AI. Last year, IBM also launched a tool designed to detect bias in AI. The Academy.ai is trying to solve this too, but with a human approach: getting more people from diverse backgrounds into AI, so that everyone brings a different perspective while addressing their own community’s particular problems.
Read More: Technology is inheriting our implicit biases
“We want to ensure that it’s not just for white dudes in Stanford. Our non-profit course caters to doctors, teachers, and business people. Any person who wants to learn AI and has the motivation to go through 15 Saturdays in a row can come to us,” Carbonell explains.
Is it wise for us to rely on machines for decision-making in issues like recruitment, financial payments, or identifying candidates for criminal reformation?
Carbonell says that although human monitoring can catch flaws by tracking how these systems behave, he is critical of relying on the algorithms completely.
“I think that we have higher expectations from the algorithms than we have from ourselves. For example, if we look at the way we recruit, I may or may not like an interview, but I will never tell the candidate why,” he says.
As humans, we hide our biases in our day-to-day decisions because expressing them would be impolite. Each of us comes from a particular background, and we like to think that we treat everyone fairly, but the reality is that we each have our own perceptions of people, faces, shapes, and behaviors that we don’t tolerate, yet never express.
Algorithms have the potential to apply the same bias to everyone, which, ironically, is deeply unfair; at the same time, it means machines at least treat everyone with the same unfairness.
Carbonell believes that if we can identify the source of this unfairness, we can steadily fix it. In the current state, however, it is not wise to simply let algorithms make decisions for us, because we have seen that doing so makes the system worse.
As artificial intelligence advances into every aspect of our lives, helping us deal with massive amounts of data at many times the speed we could manage ourselves, we are realizing that our biases are creeping into the algorithms we create.
Read More: Medicine or poison? On the ethics of AI implants in humans
We may not like the state AI is in today, but it makes more sense to join in and try to change it from within, armed with our own set of ethical standards, than to panic about its ill effects.