The 21st century is a great time to be alive. We drive electric cars, we’re curing diseases with CRISPR, and we’ll soon be returning to the moon in rockets more reusable than the Space Shuttle ever was.
But it’s not all silicon and sunshine — our future has a healthy dose of “Black Mirror,” thanks to a slew of technologies that are downright creepy. Some of these technologies seem promising but can be easily perverted in frightening ways. Others seem to serve no purpose other than to creep us out.
Here are 11 technologies so creepy they’re keeping tech experts and futurists awake at night.
If you’ve seen the “Black Mirror” episode “Nosedive,” you’re familiar with the idea of using social media reputation as a sort of social currency for keeping a job, getting an apartment, and using other everyday services. What you might not realize is that China is actually implementing that very thing in real life.
“It’s arguably the largest social engineering experiment in human history, enabled by artificial intelligence, especially face recognition, and big data analytics,” Lukas Kovarik, CEO of Bohemian AI, told Business Insider. “You can lose your credit for criticizing the communist party as well as for not having your dog on a leash. Low-scoring citizens are banned from traveling, work, or private schools. Your mugshot can even be shown in movie theatres before the movie.”
What could be less intimidating than a smart speaker? Products like Amazon Echo (powered by the Alexa personal assistant) and Google Home are popular household companions that respond to voice commands. But many experts are wary, citing the creepy behavior lurking just around the corner — like cases in which Amazon has already mishandled sensitive private recordings.
“I’m skeptical that these devices are not listening to our conversations,” Heather Vescent, a futurist at The Purple Tornado, said, citing reports of Amazon employees explicitly playing back Alexa recordings. Consider that when you place Alexa in your bedroom.
Deepfakes — a notorious technology that uses artificial intelligence and deep learning to seamlessly replace faces in a video — might have some superficially beneficial applications. But on the whole, the technology is not just creepy but existentially terrifying. Imagine being able to swap one person’s face onto another person’s body in full, high-resolution video.
Deepfake video editors are already available, and they’re no more complicated than traditional video editing software. The technology has already been used for revenge porn and political propaganda. What happens when it becomes impossible to tell legitimate video from video manipulated with deepfake technology?
Aaron Lawson, a scientist at SRI International’s Speech Technology and Research Laboratory, said this may lead to political instability and chaos as people lose faith in media. “It’s going to lead to confusion about basic reality and encourage a sense that truth is just your opinion, based on the assumption that everything has been faked,” Lawson said.
We were promised flying cars, but it looks increasingly like what we’re really getting is self-driving cars. Already, some cars on the road have autopilot or drive-assist modes that can handle common driving situations. Can they drive more safely on average than humans? Arguably, yes — especially over time, as carmakers learn from past mistakes and vast volumes of real-world driving data.
But what you might not think about is that self-driving cars will, by their very role of “taking the wheel,” eventually have to make life-and-death decisions. What happens when your car faces a real-world trolley problem, in which the only available options are to run into either this person or that person? Will the car prioritize the driver’s life, or the life of a pedestrian? These are decisions humans have never before left to machines.
“There will certainly be unintended consequences we have not thought through,” Vescent said.
As a society, we’re slowly acclimating to the use of 3D body scanners, especially at airport security. But retailers are tentatively experimenting with body scanners as well. Amazon, for example, is exploring them as a revolutionary way to custom-fit the apparel you buy online.
In an age when social media companies are accused of violating customers’ privacy and banks are not immune from cyberattacks, it’s no surprise that futurists like Harsha Reddy worry about ways your exact body specifications can be misused online.
“If your personal data is available online, including a scan of your body, this can be a whole other level of creepiness,” Reddy said.
Smart baby monitors are part of the emerging Internet of Things, and many parents welcome them because they let you see, hear, and talk to your baby from anywhere in the house.
But “smart” sometimes seems synonymous with “hackable,” and we’ve already seen hackers gain access to smart monitors, like one especially creepy case in which a stranger threatened to kidnap a family’s baby. And it can get worse.
“Someone speaking to your child and asking them to open a door or a window or go to a particular place is probably the worst that could happen with this type of technology,” said Donata Kalnenaite, futurist and president of Termageddon.
Evidence is building that anonymization — the practice of stripping identifying information from private data to make subjects in tests and studies anonymous — simply doesn’t work. Researchers in the UK and Belgium recently published a study showing how easy it is to re-identify a specific person in a dataset that has been used for research: they were able to correctly re-identify 99.98% of users in a given dataset, even when the dataset was incomplete.
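To make the finding concrete, here is a minimal sketch of the kind of linkage attack that drives re-identification. All of the names, columns, and records in this Python snippet are invented for illustration (they are not from the study). The idea is simply that joining an “anonymized” dataset to a public one on a few quasi-identifiers, such as ZIP code, birth year, and gender, can put names back on supposedly nameless records.

```python
# A minimal, illustrative linkage attack. Every name, column, and record
# below is invented for demonstration; none of it comes from the cited study.

# "Anonymized" medical records: names removed, quasi-identifiers kept.
anonymized = [
    {"zip": "60614", "birth_year": 1984, "gender": "F", "diagnosis": "asthma"},
    {"zip": "60614", "birth_year": 1991, "gender": "M", "diagnosis": "diabetes"},
    {"zip": "73301", "birth_year": 1984, "gender": "F", "diagnosis": "migraine"},
]

# Public records (e.g., a voter roll) that do carry names.
public = [
    {"name": "Alice Example", "zip": "60614", "birth_year": 1984, "gender": "F"},
    {"name": "Bob Example", "zip": "73301", "birth_year": 1991, "gender": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "gender")

def reidentify(anonymized_rows, public_rows):
    """Attach a name to every anonymized row that matches exactly one person."""
    for row in anonymized_rows:
        matches = [
            p for p in public_rows
            if all(p[k] == row[k] for k in QUASI_IDENTIFIERS)
        ]
        if len(matches) == 1:  # a unique match re-identifies the subject
            yield {**row, "name": matches[0]["name"]}

for hit in reidentify(anonymized, public):
    print(f"{hit['name']} -> {hit['diagnosis']}")
# Alice Example -> asthma
```

The study’s unsettling result is that, with enough attributes, nearly every record in a real dataset is unique in exactly this way.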
That might sound somewhat academic and perhaps irrelevant to your life, but Ilia Sotnikov, a cybersecurity expert at Netwrix, says it has far-reaching, creepy implications.
“Even heavily sampled anonymized datasets can’t satisfy modern privacy standards, whether it’s captured for medical research or Siri voice assistance,” Sotnikov told Business Insider.
Privacy might be dead.
Once the stuff of science fiction, sophisticated brain-computer interfaces (BCIs) are becoming a reality. And like many of the leading technologies in the news, this one is being developed by a real-world Tony Stark: Elon Musk.
Neuralink is working to develop high-bandwidth implantable computer interfaces that will allow doctors to restore sensory and motor function in people severely disabled by strokes and other neurological disorders. But of course, it won’t end there. Once BCI technology advances far enough, Musk hopes it can be used to enhance ordinary human brain function with better memory and cognitive abilities, as human brains and artificial intelligence merge.
The very concept might sound creepy to some, but there are even more distressing implications.
“It opens our bodies to an unknown amount of threats. Eventually, this technology may present an opportunity for people to be hacked into, and that control could be all-encompassing — physical, mental, and emotional,” Hypergiant CEO Ben Lamm told Business Insider.
Voiceprint recognition has moved out of the sci-fi movie realm and into commercial reality, with some banks and credit unions using voiceprints to improve customer service. Because a voiceprint uniquely identifies a customer, it can spare callers from answering security questions or remembering passcodes.
But that introduces a new creepy concern: criminals cloning your voice. And it’s not a 10-years-from-now proposition. AI startup Lyrebird has already demonstrated the ability to convincingly clone voices.
Futurist Laura Mingail is concerned that, thanks to artificial intelligence, a voice can be cloned from nothing more than a short, secretly recorded sample.
“If your credit card is stolen, that theft is easy to identify,” Mingail told Business Insider. “But if your voice is stolen and used, you can’t yet track its usage, or all the implications of its theft — whether it’s used to access personal banking information, to speak with family members, employers, or even the press. The abilities of AI to do voice cloning in mere minutes gives an entirely new meaning to losing your voice.”
We’re used to seeing pixelation mask faces, license plates, and undisclosed locations — so much so that we intuitively read a pixelated image as protecting someone or something’s identity. Pixelation works for the very reason that the “sharpen and enhance” trope on television doesn’t — if something is pixelated, there isn’t enough information left to refine it into a sharp, identifiable image.
At least, that used to be true. But in 2016, researchers at the University of Texas at Austin and Cornell Tech created software that can “see through” intentionally pixelated images to understand what’s behind the masking. It uses neural networks, naturally — in other words, artificial intelligence — and has had great success defeating YouTube’s privacy blur tool.
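Their system is far more sophisticated than anything that fits in a few lines, but a toy sketch conveys the core trick: treat de-pixelation not as reconstruction but as recognition. In the Python below, the “images” are invented 4x4 grids of numbers rather than real photos; an attacker who can assemble a pool of candidate faces simply pixelates each candidate the same way and picks the closest match to the published mosaic.

```python
# Toy demonstration: pixelation hides detail, but an attacker who can guess
# the candidate pool can still match the mosaic. All "images" here are
# invented 4x4 grayscale grids, not real data.

def pixelate(img, block=2):
    """Average each block x block tile -- a minimal mosaic filter."""
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for bi in range(0, n, block):
        for bj in range(0, n, block):
            tile = [img[i][j] for i in range(bi, bi + block)
                              for j in range(bj, bj + block)]
            avg = sum(tile) / len(tile)
            for i in range(bi, bi + block):
                for j in range(bj, bj + block):
                    out[i][j] = avg
    return out

def distance(a, b):
    """Sum of squared pixel differences between two grids."""
    return sum((x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb))

# Candidate "faces" the attacker already has (e.g., scraped photos).
candidates = {
    "alice": [[10, 20, 30, 40], [20, 30, 40, 50],
              [90, 80, 70, 60], [80, 70, 60, 50]],
    "bob":   [[200, 210, 220, 230], [210, 220, 230, 240],
              [10, 20, 30, 40], [20, 30, 40, 50]],
}

masked = pixelate(candidates["alice"])  # the published, "protected" image

# Pixelate each candidate identically and pick the closest match.
guess = min(candidates, key=lambda name: distance(pixelate(candidates[name]), masked))
print(guess)  # alice
```

Swap the brute-force comparison for a trained neural network and the same idea scales to real photos and large candidate pools, which is roughly what the researchers demonstrated.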
“This means that the most vulnerable people whose faces must be protected from publicity can’t in fact be protected at all,” Kovarik told Business Insider.
Some science fiction tropes are iconic. The robot apocalypse, for example, which has fueled six “Terminator” movies and a TV series so far, relies on robots that hunt down humans.
With that image burned into the collective consciousness, you might think robotics companies would avoid building machines that look like they’re a few iterations away from a killbot. But that’s not the way Boston Dynamics rolls.
For at least two decades, the company has been unveiling increasingly sophisticated and ever-creepier robots capable of overpowering and outrunning humans. And now Boston Dynamics has a robot that can open doors, all the better to search the rooms where we’re hiding from them.
Source: Business Insider