Gatwick Airport has become the first airport in the United Kingdom to deploy facial recognition technology on a permanent basis, allowing passengers to board aircraft without manual ID checks at the gate.
According to the Sunday Telegraph, which first reported the story, Gatwick has already conducted trials of the technology in a joint venture with easyJet.
The use of the technology will worry privacy campaigners, already alarmed at its use in public spaces such as city streets, and even shopping centres.
According to the report, the facial recognition cameras will be installed on a permanent basis for ID checks before passengers board planes.
The cameras will be installed on eight further departure gates after the airport's Pier 6 extension opens in 2022.
The idea is that the technology should reduce queuing times, but passengers would still need to carry passports or other forms of ID.
Heathrow airport is also reportedly expected to announce similar plans before the end of this year, after a £50m trial this summer.
“More than 90 percent of those interviewed said they found the technology extremely easy to use and the trial demonstrated faster boarding of the aircraft for the airline and a significant reduction in queue time for passengers,” a Gatwick airport spokesperson told the BBC.
“Gatwick [is now planning] a second trial in the next six months and then rolling out auto-boarding technology on eight departure gates in the North Terminal when it opens a new extension to its Pier 6 departure facility in 2022,” the spokesperson added.
Passengers will still be subjected to conventional security checks as they pass through the bag-check security zone, where they will need to display a boarding pass.
Passengers will also apparently need to scan their passport at the departure gate for the system to be able to match the photo inside to their actual face.
But privacy groups are concerned that travellers might not realise they can opt out of the scheme.
“Our main concern… would be the issue of proper consent,” Ioannis Kouvakas, from Privacy International told the BBC.
“Placing general or vague signs that merely let individuals know that this technology is being deployed, once individuals are already inside the check-in area, is inadequate, in our view, to satisfy the strict transparency and consent requirements imposed by data-protection laws,” Kouvakas reportedly said.
“If this would apply to child travellers… it raises even more concerns, considering the special protection afforded to children’s privacy and the risks associated with having their biometrics taken by the airport private entities,” said Kouvakas.
There is also ongoing concern about how inaccurate facial recognition systems can be.
In August a campaign group in the US ran a picture of every California state legislator through a facial-recognition program that matched facial pictures against a database of 25,000 criminal mugshots. The results were not encouraging: the program falsely flagged 26 legislators as criminals.
And to make matters worse, more than half of the falsely matched lawmakers were people of colour, according to the ACLU.
Facial recognition systems have previously been criticised in the US after research by the Government Accountability Office found that FBI algorithms were inaccurate 14 percent of the time, as well as being more likely to misidentify black people.