Politics

Facing the Risks

Concern is mounting over the wide application of facial recognition technology and the damage irresponsible storage of personal data can cause

By Yang Zhijie | Updated Feb. 1

Despite winning his landmark court case on the use of facial recognition technology, Guo Bing, a resident of Hangzhou in East China’s Zhejiang Province, intends to appeal. Guo, a member of a zoological park, sued over the obligatory use of facial recognition at the members’ entrance. Fuyang District Court in Hangzhou ordered the defendant to pay Guo compensation of 1,038 yuan (US$159) and delete his facial information. Unsatisfied with that outcome, Guo has filed an appeal asking that the park delete all his personal information from its digital records, including his phone number and fingerprints. The first case of its kind in China, it attracted much attention as the use of facial recognition technology spreads in a country where not everyone is convinced it is being used appropriately and safely. 

Facial recognition is in widespread use as a means of ID authentication for apps, public transportation and security in public places. It is common in neighborhoods, supermarkets, entertainment venues and scenic sites. The trend toward smart neighborhoods means that gate systems based on facial recognition have become commonplace. The technology is also increasingly used by schools and educational training institutions, in both online and offline classes, to monitor student and teacher behavior.  

The facial recognition industry grew at a compound annual growth rate (CAGR) of 30.7 percent between 2010 and 2018. The market was valued at 2.51 billion yuan (US$381.8 million) in 2018 and is expected to climb to 10 billion yuan (US$1.5 billion) by 2024, according to a February analysis by Forward Intelligence, an industry data provider.  
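As a quick arithmetic sanity check (not part of the report itself), the projection of growth from 2.51 billion yuan in 2018 to 10 billion yuan by 2024 implies a compound annual growth rate of roughly 26 percent:

```python
# Sanity check of the cited market figures (illustrative, not from the article).
value_2018 = 2.51   # market value in billion yuan, 2018
value_2024 = 10.0   # projected market value in billion yuan, 2024
years = 2024 - 2018

# CAGR formula: (final / initial) ** (1 / years) - 1
implied_cagr = (value_2024 / value_2018) ** (1 / years) - 1
print(f"Implied CAGR 2018-2024: {implied_cagr:.1%}")  # roughly 26 percent a year
```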

But the pervasive technology is becoming more controversial as people grow more conscious of data protection, even if it does make life more convenient. Lao Dongyan, a law professor at Tsinghua University in Beijing, filed a complaint with the management company and neighborhood committee of her residential community in March 2020 after discovering they planned to install facial recognition access control at every gate and require every resident to upload a photo of their face and provide their ID information. Having seen many cases involving data leaks, she felt instinctively that the blanket use of facial recognition is risky because of legal loopholes and safety hazards. The community’s plan was postponed indefinitely.  

On some online trading platforms, a thousand photos of people’s faces sell for just 2 yuan (US$0.3). They can be used for fraud, criminal activities like money laundering, and identity theft, according to a report by China Central Television (CCTV) in late October.  

“You can’t underestimate the risks of facial recognition. You don’t know who is collecting the information and what data they have saved, let alone how they will use it,” Lao said. “If the face data are leaked and linked to other personal data, the consequences are disastrous.”

A teacher at an elementary school in Hefei, Anhui Province uses a facial recognition system at the campus gate, March 31, 2020

Preying on Data 
To help prevent the spread of the coronavirus, in early 2020 shopping malls, subway stations, offices and other public places started installing terminals that recognize faces, take thermal images and collect data at the same time. In all but a minority of applications, this data is collected without the permission, or even the knowledge, of users.  

An October survey by the Southern Metropolis Daily listed the scenarios in which people find data collection most unacceptable. These include shopping malls that use facial recognition to collect data about customer behavior and shopping habits, universities that record students’ micro-expressions and teachers’ gestures during class, and photo editing apps that demand photos for face swapping or virtual makeup.  

“The collection of facial information is rather invasive because data is collected from a distance without people knowing. The data keeps accumulating for a long time and at a large scale without anyone noticing,” Lao said. She is most concerned about who stores the collected data and its safety.  

The CCTV report pointed out that without unified standards in place, vast amounts of facial data are stored in the databases of app operators or technology suppliers. The outside world has no idea whether sensitive data is redacted, which data will be used for algorithm training and which will be shared with partners. 

In September, Kai-Fu Lee, CEO of venture capital firm Sinovation Ventures, caused an uproar when he said at the HICOOL Global Entrepreneur Summit in Beijing that he had helped AI company Megvii build a partnership with Alibaba’s fintech division Ant Group, through which Megvii and the photo editing app Meitu gained a massive amount of facial data. Ant Group later denied this, and Lee said he misspoke.  

Megvii started as a facial recognition company in 2011. For startups in this field, acquiring as much facial data as possible is crucial to the accuracy of their products, so these companies have a strong appetite for data. In their early development, they used public datasets provided by research institutes or universities, and many paid volunteers to collect samples, according to technicians in the field. Later it became normal practice for companies to acquire data from photos uploaded online, even though the legitimacy of this has been questioned. 

There is enormous concern about how AI companies cooperate with their customers in terms of data. Megvii states in its service agreement that it has the right to store customer data and use it for internal research to “improve the accuracy of facial recognition, updating algorithms and improving our products and services.” 

An employee of CloudWalk, a Chinese AI company founded in 2015, told NewsChina that their customers usually store the data they collect and may not be willing to share data with facial recognition companies. “It is particularly so when we cooperate with banks and public security systems. Our servers are built in their intranet on their private servers. There is no way to get the data out from outside.” 

Respondents to the Southern Metropolis Daily survey said they are most concerned about how firms that collect data will protect and ensure its safety.  

In the early years, tech firms paid only lip service to data protection. Huang Hao (pseudonym), who worked at Microsoft Research Asia (MSRA), Microsoft’s research arm in the Asia-Pacific region, said the risk is highest when a firm outsources work involving data to other companies, which may not be secure. He claimed he knew of cases where outsourced data had been exposed online, without naming the firms involved. Huang said data protection may simply cost too much for some startups.  

Even today, the storage and protection of data is a vulnerability for many companies, according to Zeng Yi, an AI specialist at the Institute of Automation of the Chinese Academy of Sciences. 

In February 2019, Victor Gevers, a security researcher at the Netherlands-based NGO GDI Foundation, revealed that SenseNets, a Shenzhen-based technology provider with a contract with a local public security system, had left its database unprotected for months, exposing the personal information of millions of people to any visitor and allowing anyone with malicious intent to sell the data on.  

Safety Hazards 
Some leaked facial data finds its way onto the black market. In September 2019, the Beijing Youth Daily reported that a merchant on an online shopping platform was selling facial data. His wares included tens of thousands of photos of over 2,000 people, each matched with a file detailing the individual’s facial features and gender. The seller said some samples were scraped using search engines and some came from the database of an overseas software company. 

“Personal biological data involving the face, voice and iris can’t be modified after it’s disclosed. If it’s leaked, it will cause irretrievable and irreversible risks and harm,” Lao said.  

The photos themselves do not always pose major risks, but if they are matched with other personal ID information, the person is exposed to far greater danger, internet security experts said.  

The data SenseNets exposed included detailed and sensitive personal information like ID numbers, gender, home address, photos and the work places of more than 2.5 million people. Such a huge leak is disastrous for the industry.  

And it is becoming much easier to match facial photos with ID information. “Mobile payment software requires facial and personal information. People swipe their ID when they enter a park or scenic site, which leaves traces too. Some finance companies store customers’ personal information,” said one industry insider who spoke on condition of anonymity.  

In some cases, apps require consumers to take a selfie while holding their ID card or passport, which internet security experts warn is the riskiest scenario.  

Using AI to change faces to pass authentication and swindle money is already an old trick. As video authentication becomes popular, tools have emerged on the underground market that can “activate” photos, media reported. They animate a static photo with motions like blinking, nodding or opening and closing the mouth. These “activated” face videos, combined with ID information, can be used to register for apps and websites and to obtain money fraudulently through identity theft.  

In January 2019, police in Sichuan Province busted a criminal gang that used software to make live photos that could trick the Alipay mobile payment app’s facial recognition system and steal money from victims’ accounts. In another case in Shenzhen, Guangdong Province, the suspects bought personal data that included names, ID numbers and facial photos on the black market and used software to “activate” the photos. 

Wang Bin (pseudonym), who used to test how well facial recognition systems could distinguish live faces at Tencent’s AI research arm, said he first saw these tricks in 2017. “The human eye can easily tell when it’s a fake person. But it was difficult for the detection technology to distinguish it then,” Wang said.  

While warning people to be more vigilant, the interviewed experts noted that a clearer line should be drawn on the use of facial recognition technology. 

Lao believes that the widespread uptake of facial recognition is a “conspiracy” between the government and tech companies. “For the government, facial recognition is a convenient tool for its security needs, and capital-driven companies are happy to expand the business as fast as possible,” Lao said.  

“But there is no law yet to regulate how to collect, store, transmit and use the data and whether the data can be sold or supplied to a third party, which makes the potential risk of rapidly expanding application scenarios grow at an exponential rate.”