Recent AI Developments May Raise Privacy Issues

SNA (Tokyo) — With the 2020 Tokyo Olympics approaching, the development of artificial intelligence (AI) in Japan is reaching a turning point. At the same time, the rapid emergence of AI may raise serious concerns about the infringement of personal privacy.

The fundamental issue is that AI systems are being given access to sensitive data, such as political and religious beliefs, sexual orientation, and much more. This data could be abused, depending on who holds it and how it is distributed. If the data comes from facial recognition technology, it might even be used to infer the ethnic backgrounds of some individuals.

At the 2020 Tokyo Olympics, NEC is planning to introduce AI facial recognition technology as part of the nation’s security measures. Around 300,000 attendees will be identified by scanning their ID badges and linking them with their faces. While these systems are designed to guard and protect those who participate in and visit the Olympics, they will nevertheless give NEC access to a huge database on ordinary people.

“AI Guardman” is a surveillance system installed in many shops that recognizes suspicious behavior with the aim of reducing losses from shoplifting. However, the technology sometimes misidentifies ordinary behavior, flagging indecisive customers or workers restocking the shelves, for instance.

Furthermore, AI voice recognition devices might also infringe upon personal privacy through unintentional data accumulation. Voice-based AI applications have been embraced by the Japanese consumer market, especially Amazon’s Echo smart speaker and its Alexa voice assistant. These voice-controlled smart speakers, introduced into Japan in August 2017, collect voice data and enable customers to control devices such as home appliances. Alexa listens to people’s conversations and adjusts its voice and manner of speaking to conform with a user’s speech patterns. It updates constantly, and its functionality improves as a result.

It doesn’t seem farfetched, however, to suggest that this kind of technology could easily be hacked or modified by governments or private groups for the purpose of conducting surveillance inside people’s homes. AI can even collect data about people’s feelings, their levels of education, and much more.

As Ichiro Satoh, deputy director-general of the National Institute of Informatics, told the Nikkei last year, “If everything is recorded to keep evidence, privacy concerns will emerge.”

On the other hand, many AI products are having a positive impact on society. Hitachi has developed a facial recognition tool called the “happiness meter,” which measures people’s behavior and moods in an attempt to prevent death by overwork. Similarly, the manufacturer Daikin has promoted an air-conditioning system for office environments that observes eyelid movements; if the device detects sleepiness, it lowers the room temperature.

Recently, four scientists at Kyoto University published research on the preprint platform bioRxiv showing how AI can be used to decode human thoughts. Based on brain activity, this new technology can recreate images that a person is seeing. While this may contribute to treating mental illnesses and have other positive applications, it also suggests that AI may soon be able to detect not only our words and actions, but also our thoughts.
