Future of Privacy 👀
in the age of surveillance
April 24, 2020
(5 min read)
Our data is constantly tracked online. It’s how Amazon always seems to know what you want to buy, and how YouTube keeps people captivated for hours at a time with an endless supply of recommended videos. With the continued development of AI, data collection will only ramp up: our data will be in higher demand and used for more purposes, many of which we don’t even know about. The deeper problem is that people are unaware of how their data is being used and shared, and the massive demand for data created by AI makes it harder and harder for digital citizens to control their data online. Heavily using one platform A could cause all the data you entrust to platform A to suddenly be available to platform B. Before you even register for platform B, it already knows a lot about you.
Data collection also enables anonymous identification, which undermines people’s assumed anonymity and privacy online.
With extensive surveillance systems in cameras and across the Internet, there’s a movement toward using AI to identify people in all facets of their lives. Your Google searches can tell a lot about you as a person; your Facebook friends show what kinds of circles you’re involved in; your Instagram posts literally identify who you are.
Even outside contexts where you identify yourself online, AI can be used to figure out who you are. This includes “anonymous” actions on the Internet, like browsing from webpage to webpage or viewing different people’s profiles. AI can use your past behavior on the Internet to identify who you are and what you’re doing by correlating how you’re acting now with how you’ve acted in the past. Anonymity could become a lost concept in this future, and people must be more conscious than ever about how they act online. Even if your name is never entered on a platform, there will be a way to find out who you are.
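One way this kind of nameless identification already works is browser fingerprinting: combining traits your device reveals anyway into a stable identifier. Here’s a minimal, hypothetical sketch (the attribute names are illustrative; real trackers combine far richer signals and more sophisticated matching):

```python
import hashlib
import json

def fingerprint(attributes: dict) -> str:
    """Derive a stable pseudo-identifier from observable traits alone."""
    # Canonicalize the attributes so the same traits always hash the same way.
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Two "anonymous" sessions that never supply a name or email...
session_a = {"user_agent": "Mozilla/5.0", "screen": "1920x1080",
             "timezone": "America/Chicago", "installed_fonts": 143}
session_b = {"user_agent": "Mozilla/5.0", "screen": "1920x1080",
             "timezone": "America/Chicago", "installed_fonts": 143}

# ...still yield the same identifier, linking both visits to one person.
assert fingerprint(session_a) == fingerprint(session_b)
```

The point is that no single trait identifies you, but together they form a signature distinctive enough to follow you across sites without a login.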
This also creates the problem of discrimination and bias, as people may be judged incorrectly by AI and face unjustified consequences. AI is very powerful, but it can still make mistakes. Nuances within identification algorithms could easily misclassify people based on race, gender, etc., causing “unfair, discriminatory, or biased outcomes” (“Artificial Intelligence”). There could even be outright misidentifications, potentially correlating innocent people with criminal activity.
Courtesy of Chicago Tribune
In the past decade, the world has seen the immense expansion and advancement of surveillance technology. Applications of surveillance that were once solely depicted in science fiction are now regularly used around the globe. Every day, surveillance cameras capture the actions, habits, and even conversations of people all around the world. These cameras play a role in deterring crime, identifying criminals, and maintaining security. And now, with the development of quick and accurate facial recognition technology, surveillance cameras have become adept at identifying any person caught on camera. Although surveillance technology was initially created to enhance public safety and promote security, its exponential growth has led to concerns about privacy and other potential abuses. The modern, camera-filled world has transformed into a place where nothing goes unseen and where it’s almost impossible to be truly private. Facial recognition also brings with it a myriad of exploitations of power; people are tracked everywhere they go, so the technology can be used to manipulate them into behaving in certain ways.
China is currently developing a massive network of surveillance systems with facial recognition built in, all controlled by the government. With such a system, the government would be able to track everything every citizen does, giving it the means to bring about and operate a totalitarian state. In China today, these elements of state control range from the public “naming and shaming” of citizens who commit petty offenses like jaywalking to the mass subjugation of entire ethnic minority groups (Economist).
One major abuse of this technology is already occurring within the Chinese province of Xinjiang. Xinjiang is home to a large Uyghur population that perceives itself as a community separate from the rest of China. Because the Uyghurs differ in culture and values from the majority of China, the Chinese government seeks to maintain control over them. Surveillance cameras are essentially keeping the Uyghurs in line while severely restricting their freedom. An estimated one million Uyghurs are now being detained in internment camps.
As the future of AI brings improvements to facial recognition technology, such exploitations may occur on a larger scale. We’re already near a reality where our every action is watched by somebody; a state of hyper-surveillance is no longer science fiction or a “what if” scenario. At the end of the day, it’s everyone’s mission, including yours, to shape the usage of this technology the right way; after all, surveillance does have its productive uses too.
where do we go from here?