The Dark Side of Artificial Intelligence
Artificial intelligence, including language models like ChatGPT, is set to radically change our work and personal lives. With this new technology, however, comes the potential for unprecedented problems, such as identity theft or the mass production of fake news. Yet the biggest danger to our democracy and freedom-loving society lies elsewhere.
Following in the footsteps of India and Taiwan, the US is now considering banning the Chinese social media app TikTok. This is a drastic move for a country that protects political propaganda, hate speech, and even swastikas under the guise of free speech.
This step is also motivated by heavy lobbying from Big Tech: TikTok generated more revenue through in-app purchases this year than Facebook, Instagram, Twitter, and Snapchat combined. Banning TikTok would eliminate Silicon Valley’s biggest competitor in one stroke.
TikTok’s threat is not just hype
The political threat that TikTok poses is not unfounded. No Western company has yet managed to combine artificial intelligence (AI) with social media so cleverly. The problem is not the AI itself, but how ByteDance, TikTok’s Chinese parent company, has ingeniously deployed this technology.
TikTok* is a social media app that originated in China in 2016 under the name Douyin. The app was created by the Chinese tech company ByteDance. Douyin started as a platform for short lip-sync videos but quickly expanded to a wider range of content, such as dance challenges, comedy skits, and educational videos. ByteDance acquired the similar lip-sync app Musical.ly in 2017 and merged it into TikTok, Douyin’s version for the international market, in 2018. Since then, TikTok has become one of the most popular social media apps in the world, with over one billion active users. (*written by ChatGPT)
The real danger, however, lies in the data, which only becomes a threat once it is combined with a clever machine.
The path to the unknown is a minefield
Let me be clear: I celebrate the breakthroughs in AI that we are currently experiencing. They are the logical next step in our journey into what Angela Merkel once called “Neuland” (unknown territory). Like the printing press, the telephone, or the steam engine, AI will enrich our lives and work, making them easier and better. But the path to this future is a minefield.
Applications that can imitate speech, faces, and even people’s voices so accurately that they are barely distinguishable from the original have become possible almost overnight. This opens up entirely new dimensions of identity theft and fraud. Furthermore, AI applications could push the spread of fake news, propaganda, and conspiracy theories to entirely new heights.
But this is not even the scariest part of what lies ahead. Artificial intelligence, when combined with large datasets, creates perhaps the most terrifying weapon that humanity has ever built: the power to read thoughts.
AI as a thought police
The vast amounts of data we produce every day seem harmless: the websites we visit, the customer cards we scan at the supermarket, and even our smartphones, which carry some two dozen sensors that monitor our every move and share the data with countless companies and data brokers.
Modern smartphones are equipped with a variety of sensors*, including accelerometers, gyroscopes, GPS, cameras, microphones, and more. These sensors can be exploited to gather a vast amount of intimate information about the phone’s owner, such as their location, daily routines, physical activities, and even their conversations. For instance, GPS and accelerometer data can be used to track the user’s movement and determine their travel patterns, while the microphone can be used to record conversations and ambient sounds. Similarly, the camera can be used to capture images and videos of the user and their surroundings. Overall, the multitude of sensors in smartphones presents a significant risk to user privacy if not properly secured. (*written by ChatGPT)
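To make this concrete, here is a minimal, hypothetical Python sketch of how easily a machine can turn raw location pings into a profile of someone’s daily routine. The coordinates, timestamps, and night/day thresholds are all invented for illustration; real tracking pipelines process vastly more data, but the principle is the same.

```python
from collections import Counter
from datetime import datetime

# Invented location pings: (ISO timestamp, coarse lat/lon grid cell).
# A real phone emits thousands of these per day.
pings = [
    ("2023-05-01T02:30", (52.52, 13.40)),  # night hours -> likely home
    ("2023-05-01T03:10", (52.52, 13.40)),
    ("2023-05-01T10:15", (52.50, 13.37)),  # working hours -> likely workplace
    ("2023-05-01T14:40", (52.50, 13.37)),
    ("2023-05-02T02:45", (52.52, 13.40)),
    ("2023-05-02T11:05", (52.50, 13.37)),
]

def infer_routine(pings):
    """Guess 'home' and 'work' from where the phone rests at night vs. midday."""
    night, day = Counter(), Counter()
    for stamp, cell in pings:
        hour = datetime.fromisoformat(stamp).hour
        (night if hour < 6 or hour >= 22 else day)[cell] += 1
    return {"home": night.most_common(1)[0][0],
            "work": day.most_common(1)[0][0]}

print(infer_routine(pings))
```

A few lines of counting suffice to separate “home” from “work”; add payment records and contact lists, and the profile sharpens accordingly.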
So far, this has not been a problem, because no human being could ever search through these mountains of data. For a machine, however, this is child’s play. Nothing is hidden from AI. Every WhatsApp message from the past, however insignificant it seems today, could one day cost us our reputation, job, or marriage. Even things that are innocuous now could become problematic in ten or twenty years, as the ongoing discussion about cancel culture shows.
Who betrayed you? Your data did!
To an AI, it doesn’t matter why you do something. With the right data, it will always know where you’ve been, what you’ve spent money on, how fast you drove your car, how much alcohol or other substances were in your blood, which bar you visited, and with whom you shared a hotel room and when.
In China, the social score measures citizens’ loyalty to the state. AI acts as a kind of “character” credit score, drawing conclusions about a person’s character from seemingly harmless everyday data.
Social scoring in China* is a system that rates citizens based on their social behavior, such as financial status, online activities, and social relationships. It was introduced in 2014 and has since expanded to cover various aspects of citizens’ lives. The scoring system works by assigning points to individuals based on their actions, with positive actions such as volunteering or timely bill payments resulting in points, and negative actions such as traffic violations leading to deductions. The score can impact a person’s ability to access services such as loans, transportation, and housing. The system is highly controversial due to its potential privacy violations and the potential for abuse of power by the government. Critics argue that it is a tool for social control, allowing the government to suppress dissent and punish those who do not conform to its policies. Additionally, the system raises concerns about data protection and accuracy, as well as the potential for discrimination and social stigmatization based on one’s score. (*created with ChatGPT)
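The mechanism described above can be reduced to a few lines of code. The sketch below is purely illustrative: the event names, point values, and service threshold are invented to mimic the logic, not taken from any real system.

```python
# Invented point rules mimicking the mechanism described above.
SCORE_RULES = {
    "volunteering": +5,
    "bill_paid_on_time": +2,
    "traffic_violation": -10,
    "missed_payment": -15,
}

def social_score(events, base=1000):
    """Apply the invented point rules to a citizen's logged behavior."""
    return base + sum(SCORE_RULES.get(event, 0) for event in events)

def access_granted(score, threshold=980):
    """Below the threshold, services (loans, travel, housing) are denied."""
    return score >= threshold

citizen_log = ["bill_paid_on_time", "traffic_violation", "volunteering"]
score = social_score(citizen_log)  # 1000 + 2 - 10 + 5 = 997
print(score, access_granted(score))
```

The chilling part is not the arithmetic but the input: once everyday behavior is logged at scale, scoring it is trivial.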
And what about us in the democratic West? Fake bios will be the least of the problems politicians will stumble over in the future. Most political talents won’t even make it into office because of the digital skeletons they keep in their closets (or on their servers).
A Choice Between the Plague and Cholera
Such technology — whether motivated by a communist state doctrine or by our Western surveillance capitalism — changes things. It changes the texture of our society. The combination of AI with the incredibly granular data sets that corporations like Google, Facebook, or TikTok already possess about us (and will never voluntarily delete) could become the greatest threat to our democracy. We don’t even need to look to Moscow or Beijing to see it.