Facial recognition software: Is there a darker side to surveillance technology?

Globally, as a society, we have come to accept surveillance cameras as a part of our everyday lives. In fact, CCTV cameras have become so commonplace that it is estimated that there are over 600,000 surveillance cameras in London alone - that's one for every 14 people.

Clearly, we have become so used to the constant presence of CCTV that most of us wouldn't think twice about there being a camera in public spaces; they've become part of the furniture, very often going unnoticed.

Of course, the widespread presence of surveillance cameras brings with it benefits for society, the main one being to act as a crime deterrent, helping to maintain public order and reassure the public of their safety.

But not everyone is happy with the constant video surveillance we are under - or rather, with the extent of the information these surveillance cameras are now capable of gathering about us.

Developments in artificial intelligence (AI) have changed the face of surveillance technology in recent years. Gone are the days when CCTV cameras played a purely passive role, simply capturing a scene whose recording was viewed by a human operator only when necessary.

Combined with AI, cameras are increasingly capable of identifying and analysing surveillance footage in real time, meaning that software can recognise individuals and establish their identity among the mass of people walking down the street.
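To make that idea concrete, the matching step at the heart of such systems can be sketched in a few lines. The example below is purely illustrative - it uses the open-source Python face_recognition library rather than any software actually deployed by police forces or venues, and the image filenames are hypothetical placeholders.

```python
# Illustrative sketch only: the face-matching step behind live facial
# recognition, using the open-source "face_recognition" library.
# The image files are hypothetical placeholders, not real data.
import face_recognition

# Encode a known face (e.g. a watchlist photo) as a 128-number vector.
known_image = face_recognition.load_image_file("watchlist_photo.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Encode every face detected in one frame of street footage.
frame = face_recognition.load_image_file("street_frame.jpg")
frame_encodings = face_recognition.face_encodings(frame)

# Compare each detected face against the known face; matches within
# the library's default distance tolerance are reported as True.
for encoding in frame_encodings:
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    print("Possible match" if match else "No match")
```

Real deployments wrap this step in live video decoding, watchlists of thousands of faces and carefully tuned match thresholds, but the principle - reduce each face to a numerical signature and compare it against a database - is the same.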

This technology - facial recognition software - has hit the headlines in recent months, and is causing controversy worldwide.

Take a look at Hong Kong, for example, where anti-government protests have been escalating for nearly six months. The recent decision by Hong Kong's Chief Executive, Carrie Lam, to prohibit the wearing of face masks in public in a bid to tackle the violent clashes proved to be very controversial.

Many pro-democracy protesters saw this as an attempt by the state to control its citizens. They believe the anonymity a mask provides to be essential in avoiding the perceived threat from mainland China, where facial recognition is used almost everywhere and surveillance is a fundamental tool for maintaining authoritarian rule.

The notion that facial recognition software is an invasion of our privacy has raised its head much closer to home too. In 2019, the Metropolitan Police carried out ten trials of live facial recognition (LFR) around the capital to determine whether the technology could help the force tackle crime in future - a decision that has yet to be made.

It's not only state use of LFR that is causing controversy; private companies have been using it too. In the past year, various stories have emerged of live music venues and bars trying to use facial recognition software for good, to identify 'suspicious' individuals at events or even to see who was first in the queue at the bar.

While superficially these seem to be viable uses of the technology, their necessity has been called into question by many, including the campaign group Big Brother Watch.

We must also remember that while developments in AI bring some huge benefits to society, the technology is still not free from bias. Studies have repeatedly found, for instance, that facial recognition systems misidentify women and people with darker skin tones at higher rates. Arguably, then, AI's use could put individuals at unnecessary risk of being stereotyped based on their appearance.

A final case that made national news in September was the revelation that the developers of King's Cross Central had placed CCTV cameras using facial recognition software on the site. While this is an individual case, and the recognition technology was apparently last used in March 2018, it represents a bigger issue, demonstrating the potential for facial recognition technology to be used without the knowledge of the authorities, or the consent of the general public.

Such private-sector use is arguably more of a cause for concern than government use. It's harder to regulate who is using LFR technology, how the footage is being used, and who has access to the data. And this raises questions about how necessary LFR really is, and how much privacy and freedom we truly have nowadays.

There are certainly valid arguments both for and against the use of facial recognition software in public spaces.

When used for good, and in the right hands (who that might be is a whole other story), these developments could help keep our public spaces far safer than ever before, taking criminals off the streets before we even really know they're there.

But at what cost? Being able to trace our every movement? Diminishing our civil liberties and freedoms?

It seems, in an age of constant technological development, there's a fine line between keeping people safe and protecting human rights. In the case of facial recognition software, it's a blurred line that is yet to be properly drawn.
