With the hype surrounding the new iPhone X, many are eagerly anticipating the feature that has everyone talking — unlocking their shiny new phones with nothing more than their face.
Apple’s next-generation smartphone features facial recognition technology, thanks to infrared and 3D sensors incorporated into its front-facing camera, so you can now unlock your phone and start taking selfies with more speed and simplicity than ever before.
While many may be excited to try out this novel experience, we are quickly embracing a technology that could develop into something far more treacherous than we currently anticipate. According to a study from Stanford University, artificial intelligence can now accurately guess whether people are gay or straight based on photos of their faces. A computer algorithm correctly distinguished between gay and straight men 81% of the time, and between gay and straight women 74% of the time.
This surprisingly accurate ability to distinguish between the two orientations not only poses questions about the biological origins of sexual orientation, but also brings to light the very worrying ethics of facial recognition technology. It raises the possibility of this type of software being used to violate privacy, or being abused for anti-LGBT purposes.
Glancing over the headlines, it appears that our fears of technology generally revolve around the dangers of AI taking over the world, or at least our jobs, in the not-too-distant future. Those fears are not without merit, but it is reasonable, and perhaps more likely, that this technology falling into the wrong hands poses a greater threat to mankind than the technology operating on its own.
This is a topic which has been gaining attention within certain groups and at events such as SXSW, where a panel explored the dangers of facial recognition software and the threats it may present. The panel featured the privacy activist Cory Doctorow, the FBI’s Christopher Piehota and Brian Brackeen, founder of the facial recognition company Kairos. The panel’s title posed its central question: Are Biometrics the New Face of Surveillance?
As biometric technology becomes more advanced, governments and private companies are creating tools for identifying people using their faces, voices, eyes and other unique signatures. But there is little oversight of these systems, leaving them open to misuse.
The disturbing truth is that we encounter this technology more than we realise and it is becoming increasingly common around the globe. In China, police use face recognition to identify and publicly shame people for the crime of jaywalking. In Russia, face recognition has been used to identify anti-corruption protesters, exposing them to intimidation or worse.
In the UK, face recognition was implemented at an annual West Indian cultural festival to identify revellers in real time. In the United States, more than half of all American adults are in a face recognition database that can be used for criminal investigations, purely because they have a driver’s license.
The LGBT community is not the only group which has cause for concern as facial recognition technology can also detect ethnicity. According to Brian Brackeen from Kairos, their facial recognition system now uses a genealogy tool, which enables it to pinpoint a person’s race. “It’s coming back with the percentages of race the person is,” he said, mentioning someone who came up 12% Asian despite being Jamaican. “Oh, I have a Chinese grandmother,” she said, according to Brackeen.
Though this may not sound as impressive, given that a person’s ethnicity is often outwardly apparent, the technology can identify race to a remarkably fine-grained level, with potential dangers for certain groups. It is more worrying still when you consider the possible threats to members of racial or ethnic groups whose identity is less outwardly visible. For example, had this type of technology been available during World War II, it could have seriously aided and accelerated Hitler’s ambitions to wipe out Europe’s Jewish population.
Understandably, these fears may be speculative, and facial recognition technology may provide a plethora of positive uses, such as identifying and stopping criminals and terrorists. Furthermore, it may even lend support to the idea that homosexuality is biologically determined, thus encouraging more rights and acceptance globally.
However, the reality is that it could go either way. We live in times of political uncertainty, potentially now more than ever, with no clear idea of who will be affected by our governments’ choices. Trump’s recent ban on transgender citizens serving in the military was evidence of this. Despite popular opinion and his support for the LGBT community during the campaign, Trump announced the ban with no warning. It may be a large jump to assume the next step would be to systematically target individuals within this group, but it is fair to say the government now has the technology and could do so if it wishes.
We can see this ban as the thin end of the wedge: while it may seem trivial at first, affecting only a small number of individuals, it could easily be the start of something larger. Moreover, given the unpredictable nature of Trump’s government, and his indifference towards groups that harm minorities, it is a threat that could materialize one day with no warning.
Across the pond in the UK, there is also cause for concern regarding the implementation of this technology. For example, the Home Office recently invested further funding in facial recognition technology for English police forces in order to conduct a live trial during London’s Notting Hill Carnival. The trial went ahead despite civil liberties groups, such as Open Rights Group, expressing deep concerns about its use, claiming the technology presents ‘unique threats’ to human rights.
For the time being this technology may not appear to be a huge threat in light of other potential dangers surrounding AI and our privacy. However, it is an issue we are unlikely to consider dangerous until it is too late.
Your face and information might be sitting in a database right now without your knowledge, which is probably not something you consider alarming if you are a fair, law-abiding citizen. But that may change if the keys to this technology are handed to an individual or government that considers you a problem because of your race or sexual orientation.
The sad reality is that if we ever do reach that stage, it will probably be too late to consider this technology a threat worth challenging.