Business

Amazon proposes ethical guidelines on facial recognition software use

After months of public outcry over potential ethical abuses of Amazon’s facial recognition software Rekognition, Amazon proposes a set of guidelines to policymakers.

Outside groups testing Rekognition “have refused to make their training data and testing parameters publicly available”

In response to claims that Amazon’s Rekognition software could be used to violate civil and human rights, Amazon has proposed a set of five guidelines for policymakers to consider as potential legislation and rules in the US and other countries.

  1. Facial recognition should always be used in accordance with the law, including laws that protect civil rights.
  2. When facial recognition technology is used in law enforcement, human review is a necessary component to ensure that the use of a prediction to make a decision does not violate civil rights.
  3. When facial recognition technology is used by law enforcement for identification, or in a way that could threaten civil liberties, a 99% confidence score threshold is recommended.
  4. Law enforcement agencies should be transparent in how they use facial recognition technology.
  5. There should be notice when video surveillance and facial recognition technology are used together in public or commercial settings.
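The third guideline turns on the confidence (similarity) score a match must reach before it is acted on. A minimal sketch of how a caller might enforce that 99% cutoff on match results (the dictionaries and field names below are hypothetical, loosely modeled on the `Similarity` field in Rekognition's face-comparison responses):

```python
# Filter face-match candidates by a confidence threshold, per Amazon's
# recommendation that law-enforcement identification use a 99% cutoff.
# Candidate match records here are illustrative, not real API output.

LAW_ENFORCEMENT_THRESHOLD = 99.0  # percent


def filter_matches(matches, threshold=LAW_ENFORCEMENT_THRESHOLD):
    """Keep only matches at or above the confidence threshold."""
    return [m for m in matches if m["Similarity"] >= threshold]


candidates = [
    {"Name": "candidate_a", "Similarity": 99.4},
    {"Name": "candidate_b", "Similarity": 87.2},  # below cutoff: discarded
    {"Name": "candidate_c", "Similarity": 99.0},  # exactly at cutoff: kept
]

high_confidence = filter_matches(candidates)
print([m["Name"] for m in high_confidence])  # -> ['candidate_a', 'candidate_c']
```

Note that raising the threshold trades recall for precision: a 99% cutoff discards many candidate matches that a default setting would surface, which is exactly the point of Amazon's recommendation for high-stakes uses.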

Last month Amazon shareholders filed a resolution demanding that Amazon stop selling facial recognition software Rekognition to government and law enforcement, citing concerns of potential civil and human rights abuses.

Read More: Shareholders tell Amazon to stop selling Rekognition facial recognition tech to govt

On Thursday Michael Punke, VP of Global Public Policy at AWS, responded to the claims stating, “In recent months, concerns have been raised about how facial recognition could be used to discriminate and violate civil rights. You may have read about some of the tests of Amazon Rekognition by outside groups attempting to show how the service could be used to discriminate.”

“Facial recognition is actually a very valuable tool for improving accuracy and removing bias”

Among the “tests of Amazon Rekognition by outside groups” are those by the American Civil Liberties Union (ACLU), which we have documented many times on The Sociable.

Read More: ACLU files FOIA request demanding DHS, ICE reveal how they use Amazon Rekognition

A study by the ACLU found that Rekognition incorrectly matched 28 members of Congress against a mugshot database, identifying them as people who had been arrested for a crime.

The members of Congress who were falsely matched with the mugshot database used in the test included Republicans and Democrats, men and women, and legislators of all ages, from all across the country.

Nearly 40% of Rekognition’s false matches in the test were of people of color, even though they made up only 20% of Congress.
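The disproportion in those figures can be made concrete with a quick calculation (the exact shares are approximations of the "nearly 40%" and 20% reported above):

```python
# Illustrative arithmetic on the ACLU test figures cited in the article:
# people of color were ~39% of Rekognition's false matches but only
# ~20% of Congress, roughly a 2x overrepresentation.
false_match_share = 0.39  # approx. share of false matches who were people of color
congress_share = 0.20     # approx. share of Congress who are people of color

overrepresentation = false_match_share / congress_share
print(f"Overrepresentation factor: {overrepresentation:.2f}x")  # -> 1.95x
```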

Read More: ‘450 Amazon employees tell Bezos to kick Palantir off AWS’

“In each case we’ve demonstrated that the service was not used properly”

Claiming inaccuracies with tests by outside groups such as the ACLU and MIT, Amazon explained on Thursday, “In each case we’ve demonstrated that the service was not used properly; and when we’ve re-created their tests using the service correctly, we’ve shown that facial recognition is actually a very valuable tool for improving accuracy and removing bias when compared to manual, human processes.”

Additionally, Amazon accused groups like the ACLU of withholding their research, claiming, “These groups have refused to make their training data and testing parameters publicly available, but we stand ready to collaborate on accurate testing and improvements to our algorithms, which the team continues to enhance every month.”

“New technology should not be banned or condemned because of its potential misuse. Instead, there should be open, honest, and earnest dialogue among all parties involved to ensure that the technology is applied appropriately and is continuously enhanced,” wrote Punke.

Read More: Big tech employees voicing ethical concerns echo warnings from history: Op-ed

“AWS dedicates significant resources to ensuring our technology is highly accurate and reduces bias, including using training data sets that reflect gender, race, ethnic, cultural, and religious diversity,” he added.

Amazon developed the proposed guidelines after months of talking to “customers, researchers, academics, policymakers, and others to understand how to best balance the benefits of facial recognition with the potential risks.”

Read More: Amazon, Palantir are aiding mass deportations of govt ‘undesirables’: report

“It’s critical that any legislation protect civil rights while also allowing for continued innovation and practical application of the technology.”

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
