Governments that are transparent about deploying facial recognition software could win more acceptance from citizens than those that use it behind their backs and later claim it was for the greater good.
As criticism of facial recognition mounts, the US Department of Homeland Security is already putting facial recognition software to use for biometric exits at US airports. Within the next four years, it plans to apply the technology to 97% of passengers departing from US airports.
The system, which captures images of passengers prior to boarding, has been in use since 2017 and, by the end of 2018, was operational in 15 US airports.
The system aims to give authorities a clearer picture of who is leaving and entering the country, and to help them identify anyone overstaying a visa. According to Quartz, US authorities currently rely on departing airline flight manifests for this information.
Opponents of facial recognition argue that building a database containing photographs of millions of people invites civil liberties violations.
Read More: Shareholders tell Amazon to stop selling Rekognition facial recognition tech to govt
Sharing such a database with other parties, such as law enforcement agencies or private organizations, could spell uncomfortable times for citizens in the future. Looking over your shoulder could become the norm.
Facial recognition software is also widely deemed too flawed and biased to reliably pick out actual culprits or suspicious characters, as The Sociable has pointed out before.
Having said that, airport security systems are undoubtedly in dire need of an upgrade; relying on human eyes alone is no longer wise. The technology could revolutionize travel safety.
Given these benefits, refusing to use the technology might be unwise, provided its deployment is transparent. For example, before installing facial recognition software for biometric exits at airports, the Department of Homeland Security Science and Technology Directorate (S&T) conducted a Biometric Technology Rally in March last year at S&T’s Maryland Test Facility (MdTF).
Read More: Amazon proposes ethical guidelines on facial recognition software use
The rally selected innovative systems from 11 industry participants representing 10 countries. S&T challenged the industry to meet specific, objective performance goals along the evaluation metrics of effectiveness, efficiency, and satisfaction.
The initiative gauged the performance of the biometric systems in areas such as failures to acquire, process, and match images, and measured the average time volunteers spent using the systems. S&T challenged industry systems to identify 99% of all volunteers in less than 20 seconds, and to design systems intuitive enough that volunteers could understand and use them in just five seconds.
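To make those evaluation criteria concrete, here is a minimal sketch of how such rally-style metrics might be tallied. The `Transaction` record, its field names, and the sample values are hypothetical illustrations, not S&T’s actual tooling or data.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    """One volunteer's pass through a hypothetical biometric station."""
    acquired: bool   # did the system capture a usable image?
    matched: bool    # did it match the image to the correct identity?
    seconds: float   # total time the volunteer spent at the station

def rally_metrics(transactions: list[Transaction]) -> dict:
    """Compute effectiveness and efficiency figures like those described above."""
    total = len(transactions)
    acquired = [t for t in transactions if t.acquired]
    matched = [t for t in acquired if t.matched]
    return {
        "failure_to_acquire_rate": 1 - len(acquired) / total,
        "identification_rate": len(matched) / total,  # rally goal: >= 0.99
        "avg_seconds": sum(t.seconds for t in transactions) / total,
        "within_20s_rate": sum(t.seconds < 20 for t in transactions) / total,
    }

# Example with three simulated volunteers
logs = [
    Transaction(acquired=True, matched=True, seconds=4.2),
    Transaction(acquired=True, matched=True, seconds=6.1),
    Transaction(acquired=False, matched=False, seconds=19.5),
]
print(rally_metrics(logs))
```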
Admittedly, much remains to be worked out. For example, the database thus gathered could be shared with other agencies. As Alvaro Bedoya, who studies facial recognition at Georgetown Law’s Center on Privacy & Technology, told The Verge in 2017:
“Right now, other than the no-fly list, you do not have law enforcement checks on who can fly. But once you take that high-quality photograph, why not run it against the FBI database? Why not run it against state databases of people with outstanding warrants? Suddenly you’re moving from this world in which you’re just verifying identity to another world where the act of flying is cause for a law enforcement search.”
Not to mention that as the database grows, it becomes an ever more tempting target for a data breach. Still, the rally process was transparent: the volunteers involved participated with full awareness of what they were doing.
Read More: ACLU files FOIA request demanding DHS, ICE reveal how they use Amazon Rekognition
While our trust in any government stems from the security it pledges to provide, the knowledge that it might be getting under our skin to do so is creepy. For example, according to the FT, a database called IARPA Janus Benchmark-C (IJB-C), built by IARPA, has assembled a collection of images of 3,500 subjects with the aim of training facial recognition algorithms.
The catch is that none of the subjects whose photographs appear in the database are aware that their images are being used for the initiative.
IARPA is a US government organization that backs research aimed at giving the US intelligence community a competitive advantage. The database contains 21,294 images of faces (images of other body parts are also included), with around six pictures and three videos per person. Researchers can apply for access to the data.
Read More: Every move you make IARPA will be watching you
The FT says the database contains photographs of three EFF board members, an Al-Jazeera journalist, a technology futurist and writer, and three Middle Eastern political activists, including an Egyptian scientist and 36-year-old American activist Jillian York.
York was appalled when Adam Harvey, the researcher who first found her face in the database, asked her if she was aware that her image was available for facial recognition software training.
“What struck me immediately was the range of times they cover. The first images were from 2008, all the way through to 2015,” she told FT. “They were taken at closed meetings. They were definitely private in the sense that it was me goofing around with friends, rather than me on stage,” she added.
The images in the database were obtained without the subjects’ explicit consent. Instead, they were collected under the terms of Creative Commons licences, public copyright licences under which anybody can copy and reuse images, including for academic and commercial purposes. While that may not make the practice illegal, its ethics are certainly arguable.
Meanwhile, almost three years have passed since the Government Accountability Office, a congressional watchdog, raised several concerns about the FBI’s use of facial recognition technology. According to Nextgov, the bureau has still not evaluated whether its systems comply with privacy and accuracy standards.
As long as a complicated technology remains opaque, we are likely to keep resisting it, especially while it continues to harbor flaws. But if government agencies are transparent about deploying it, citizens may come to accept it more readily.
Not that this gives facial recognition a complete green light. As the technology becomes more prevalent in society, there is no dearth of uses that agencies, or even individuals, could put it to. Government agencies, especially law enforcement organizations, could become oppressive.
Read More: Big tech employees voicing ethical concerns echo warnings from history: Op-ed
On an individual level, spouses could soon use it in court against cheating partners, and assassins could pick out targets even when they are disguised. We might not be able to hide from anyone.
At the same time, the technology could prove extremely useful in spotting perpetrators and securing crowded areas like airports, all while putting less strain on human operators.
How can we ensure that this technology, which does have its merits, is used only to aid us, without turning states into authoritarian regimes? Could transparency be the answer?