Research Paper Rough Draft
In recent years, AI facial recognition technology (FRT) has grown rapidly in popularity, both with governmental bodies and police forces and with the general public. This increase in the use of AI FRT raises real concerns for me about how far it will go and the risks it poses to our privacy and security. When it comes to its use by police departments, what happens if the technology misidentifies someone? What does it mean for any country to have a searchable database of every citizen’s face that can be accessed without our knowledge at any time? And when it comes to the AI face apps available on smartphones, what happens to the data those apps collect, and is it truly protected? I’ll go further into answering these questions as this essay goes along and hopefully show why they are a genuine cause for concern.
Let’s begin with the dangers certain AI facial recognition apps may pose. It has become very popular recently to upload photos of yourself into AI face apps that morph the pictures into basically whatever you want, following a written prompt or a pre-determined template. ABC News writer Haley Yamada recently wrote an article about one such app, Lensa, discussing the privacy concerns it raises through how it handles data collection. Throughout her article she uses quotes from cybersecurity expert Andrew Couts to illustrate those concerns; Couts is quoted as saying that “it's almost impossible to know what happens to a user's photos after they are uploaded onto the app.” Face data like this is extremely sensitive, and with apps like Lensa and others similar to it, not truly knowing where that data goes creates a very large security risk for users. However, Couts’ main concern, according to the article, is the “behavioral analytics” the app collects. To quickly summarize, behavioral analytics cover things like which demographics you fall into, your search and purchase history across different websites and apps, and so on.
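To give a rough picture of what that kind of data might look like in practice, here is a minimal sketch in Python. Every field name here is a hypothetical illustration of the categories just described, not Lensa’s (or any real app’s) actual data schema:

```python
# A hypothetical behavioral-analytics record. All field names are
# illustrative assumptions about the categories described above,
# not any real app's schema.
user_record = {
    "face_id": "u-48151623",              # links back to uploaded photos
    "demographics": {"age_range": "25-34", "region": "US-Midwest"},
    "search_history": ["running shoes", "vacation deals"],
    "purchase_history": ["wireless earbuds", "yoga mat"],
    "apps_and_sites_visited": ["photo editor", "shopping", "news"],
}
```

Individually these fields look mundane; the danger the rest of this essay traces is what happens when a record like this is tied to your face.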
Something not discussed in that article that I’d like to address is what happens when behavioral analytics and facial recognition technology merge into something very unsettling. There is a practice called “smart retail,” in which businesses use FRT and other smart technologies to better sell products to consumers. The unsettling part comes when facial recognition is used as a tool to measure people’s emotions while they shop, and that is then combined with behavioral analytics provided by other sources, such as your phone apps or the websites you frequent. At that point, corporations and retail stores have a perfectly built profile of you and of how to get you to spend as much money with them as possible. It was already bad enough when retailers were tracking your shopping habits online through their websites, but with FRT there is a very real possibility they will be able to track your shopping habits in the physical world as well, and use that new data to push more and more products onto you. Some may not view a personalized shopping page on your Amazon account or targeted ads on every website you visit as a problem, but you should remember that the use of FRT in these circumstances is not for your convenience; it’s so that companies can siphon as much money out of you as possible.
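To make this concrete, here is a rough Python sketch of how such a merged profile might be built. The function, data fields, and scoring weights are all hypothetical assumptions used for illustration, not a description of any real retailer’s system:

```python
# A hypothetical sketch of "smart retail" profile-building: in-store
# emotion readings from FRT are merged with online purchase history.
# All names, fields, and weights are illustrative assumptions.

def build_shopper_profile(face_id, emotion_events, purchase_history):
    """Combine in-store emotion readings with online behavioral data."""
    profile = {"face_id": face_id, "interests": {}}
    # Score each aisle by the emotions the cameras recorded there.
    for event in emotion_events:  # e.g. {"aisle": "electronics", "emotion": "excited"}
        weight = 2 if event["emotion"] == "excited" else 1
        aisle = event["aisle"]
        profile["interests"][aisle] = profile["interests"].get(aisle, 0) + weight
    # Reinforce categories the shopper has already bought from online.
    for purchase in purchase_history:  # e.g. {"category": "electronics"}
        category = purchase["category"]
        profile["interests"][category] = profile["interests"].get(category, 0) + 3
    return profile

profile = build_shopper_profile(
    "u-48151623",
    [{"aisle": "electronics", "emotion": "excited"}],
    [{"category": "electronics"}],
)
# profile["interests"] -> {"electronics": 5}: push electronics ads hardest.
```

The point of the sketch is how little machinery it takes: once a face links the in-store signal to the online record, the two datasets reinforce each other automatically.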
However, targeted ads and intrusive behavioral analytics don’t even begin to scratch the surface of the possible dangers FRT can cause.
In an article written for Forbes titled “The Dangers of Facial Recognition,” Göran Wågström discusses the concerns facial recognition technology raises through its use by law enforcement. He cited the Hong Kong protests of 2019 and 2020 as one major breach of privacy and safety for many of the protestors involved, because of the Hong Kong police force’s use of FRT. During the protests, police used surveillance cameras, with the help of AI, to find and detain protestors after the protests were over. The Chinese government’s use of facial recognition technology is probably the most egregious in the world today, with facial recognition towers on nearly every corner of the country. However, it is not much of a stretch to say this could become the norm in many other countries as well. The most dangerous part of normalizing this kind of FRT use around the world, beyond creating a surveillance state, is that the recognition is not 100% accurate for every person, which means there can be dire consequences for anyone who is misidentified. In his article, Wågström explains these inaccuracies, writing that “it misidentifies people of color -- and especially women of color -- at alarming rates.”
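To see why even small error rates matter at this scale, consider a back-of-the-envelope calculation. Both numbers below are assumed purely for illustration (neither comes from Wågström’s article):

```python
# Illustrative base-rate arithmetic with assumed numbers: even a
# seemingly accurate system flags thousands of innocent people when
# it scans faces at city scale.
faces_scanned_per_day = 1_000_000   # hypothetical volume for a large city
false_match_rate = 0.005            # hypothetical 0.5% false-positive rate

false_matches = faces_scanned_per_day * false_match_rate
print(f"Innocent people flagged per day: {false_matches:,.0f}")  # 5,000
```

A system that sounds “99.5% accurate” still produces thousands of wrongful flags every single day, and if the errors skew toward particular groups, those groups absorb most of the harm.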
This misidentification can cause not only individuals to be accused of crimes they didn’t commit but entire groups of people to be wrongly targeted. Overreliance on this technology can have very dangerous effects on society, because as advanced as the AI may be, it still has major gaps that can lead to many innocent people being further discriminated against, accused, and even charged with crimes they had nothing to do with. And with those gaps, even more problems can arise.
In another article, written for ISACA.org, Hafiz Sheikh Adnan Ahmed outlines aspects of FRT that can be damaging beyond police use and unwanted data collection. The major one I’d like to focus on is the “technical vulnerabilities” he walks through. Ahmed states, “With FRT, it may be possible to spoof a system (i.e., masquerade as a victim) by using pictures or three-dimensional (3D) masks created from imagery of a victim. In addition, FRT can be prone to presentation attacks or the use of physical or digital spoofs, such as masks or deepfakes, respectively.” This is a massive cause for concern, because with a growing reliance on facial recognition as a means of security, all someone may have to do to gain access to your phone, bank account, or anything else that uses FRT is manipulate the system with photos of you, or find a way to deepfake your face convincingly enough to be let in. This concern goes hand in hand with another one Ahmed raises about the encryption of faces. He states in the article, “Faces are becoming easier to capture from remote distances and cheaper to collect and store. Unlike many other forms of data, faces cannot be encrypted. Data breaches involving facial recognition data increase the potential for identity theft, stalking, and harassment because, unlike passwords and credit card information, faces cannot easily be changed.” Once your face has been effectively stolen, it is not something you can simply change. If a hacker steals your password, it can be changed; if a scammer gets your card information, the card can be cancelled; but with the growing use of and dependence on FRT, we could reach a point where your bank account, location, and other sensitive information are all easily accessible through your face. And the more widely this technology is used, the more people will be looking for ways to exploit it for their own benefit, which can lead to security breaches carried out by simply copying your face from social media, street cameras, and apps like Lensa that hold huge collections of people’s facial data.
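The contrast between credentials you can revoke and ones you cannot is easy to show in code. This is a minimal sketch assuming a standard salted-and-stretched password hash; the point is that its last step has no facial equivalent:

```python
# Sketch contrasting revocable vs. non-revocable credentials
# (illustrative only; not any real authentication system's code).
import hashlib
import os

def make_password_record(password: str) -> dict:
    """Salted, stretched password hash -- standard, and rotatable."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return {"salt": salt, "hash": digest}

record = make_password_record("old-password")

# After a breach, recovery is one line: issue a new secret and the
# stolen record becomes worthless.
record = make_password_record("new-password")

# A leaked face template has no equivalent step: the underlying
# "secret" (your face) cannot be reissued, so every system it was
# enrolled in stays exposed.
```

This is exactly Ahmed’s point in miniature: security built on a secret you can never rotate fails permanently the first time that secret leaks.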
In conclusion, AI facial recognition technology is a very dangerous tool that can lead to many different negative outcomes for everyone in society. Whether it’s something as simple as an app like Lensa that lets you make goofy avatars and profile pictures, or something more serious like police use of mass surveillance towers, the results are the same. Even in its most minor uses, FRT poses a serious threat, carrying privacy risks and overly intrusive behavioral analytics that can be exploited in any number of devious ways. Corporations can track your face’s emotions to build a shopping profile and sell you more products. Police can use facial recognition towers to charge you with crimes you didn’t commit, because FRT produces many false positives and cannot reliably recognize the faces of people of color. And hackers can use the data collected by corporations and law enforcement to build and manipulate photos and deepfakes of you in order to gain access to things like your bank account, place of employment, location, and other sensitive information. Overall, I feel that the risks posed by the continued use of AI facial recognition technology are too high, no matter the convenience a shopping profile may bring or how harmless an AI face app may seem. Our privacy and security have been breached.