Since the outbreak of COVID-19, the privacy debate that had been making global headlines was put on the back burner while the world turned its attention to more pressing matters. But as companies and governments enlist sophisticated tracking technology, from biometrics to Bluetooth, as a weapon against the coronavirus, privacy implications are coming back into focus.
Tracing apps have become a popular option for citizens and governments to monitor the spread of the virus—but they also open new avenues for unparalleled access to user data. As of May 8, 2020, 5.3 million Australians had downloaded COVIDSafe, a government-backed app that uses Bluetooth to exchange a “digital handshake” with any other user who comes within five feet, and pushes an alert if they have had contact with anyone who tests positive for COVID-19. The UK’s National Health Service is currently piloting a similar app, which has already been flagged for several security issues.
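The “digital handshake” model can be illustrated with a short sketch. The names and details below are hypothetical, not COVIDSafe’s actual protocol: each device derives a rotating anonymous ID from a secret that never leaves the phone, records the IDs of devices it encounters, and checks its own log against the IDs a diagnosed user chooses to publish.

```python
import hashlib
import secrets
from datetime import datetime, timedelta

class Device:
    """Illustrative device in a Bluetooth 'digital handshake' scheme.

    A simplified sketch, not COVIDSafe's real implementation: IDs rotate
    every 15 minutes so passive observers can't track a device long-term.
    """

    def __init__(self):
        self.secret = secrets.token_bytes(16)   # never leaves the device
        self.contact_log = []                   # (anon_id, timestamp) pairs

    def anon_id(self, when: datetime) -> str:
        # Derive a per-interval anonymous ID from the device secret.
        interval = int(when.timestamp()) // (15 * 60)
        digest = hashlib.sha256(self.secret + interval.to_bytes(8, "big"))
        return digest.hexdigest()[:16]

    def handshake(self, other: "Device", when: datetime) -> None:
        # Both devices record each other's current anonymous ID.
        self.contact_log.append((other.anon_id(when), when))
        other.contact_log.append((self.anon_id(when), when))

    def ids_for_period(self, start: datetime, days: int) -> list:
        # The IDs a diagnosed user would publish for the infectious window.
        return [self.anon_id(start + timedelta(minutes=15 * i))
                for i in range(days * 24 * 4)]

def exposed(contact_log, published_ids) -> bool:
    # Matching runs locally: intersect the phone's own contact log
    # with the published IDs of diagnosed users.
    published = set(published_ids)
    return any(anon_id in published for anon_id, _ in contact_log)
```

In this sketch, a user who tests positive publishes only their own derived IDs; everyone else’s contact log stays on the device, which is the property that makes such schemes more privacy-preserving than centralized location tracking.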
Some governments are taking more aggressive measures, turning national defense software on their citizens in an effort to contain the virus. The Israeli internal security office redeployed counter-terrorism technology to track the movement of COVID-19 patients using phone and credit card data. In Russia, authorities used facial recognition software to locate a woman who had “escaped” from quarantine. In South Korean train stations, thermal cameras monitor travelers’ body temperatures. Drones in the United Kingdom have spotted and reported people violating social distancing regulations. The Australian government is installing surveillance hardware in some homes to ensure that anyone in quarantine stays put—with the threat of fines or jail time if they don’t.
Citizens are expressing qualms about the use of tracing tools. According to April 2020 findings from Pew Research, only 37% of Americans think it’s acceptable for the government to use cellphone data to track individuals to ensure they are complying with social distancing regulations; only 45% think it’s acceptable to track those who may have had contact with someone who tested positive; and just over half (52%) think it’s acceptable to track the movements of people who have tested positive.
Perhaps adding to these reservations, many of the Big Tech companies that were major culprits in data and privacy breaches are also developing tracing technology. On May 20, Apple and Google—two companies that came under significant fire for data harvesting—released their joint Exposure Notification API, software that supports the development of contact-tracing apps. While not itself the full tracing app originally envisioned, it lays the technical foundation on which developers can build their own apps. The companies report that twenty-two countries across five continents have already requested the API to support their app development. Facebook, meanwhile, is sharing location data with COVID-19 researchers to track and predict hotspots.
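The privacy-relevant design choice in the Apple–Google approach is that matching happens on the phone. The sketch below is a heavily simplified stand-in, not the real API (which uses HKDF and AES rather than bare SHA-256): phones broadcast short-lived identifiers derived from a daily key, only diagnosed users’ daily keys are uploaded, and every phone re-derives those identifiers locally to check for overlap with what it actually observed over Bluetooth.

```python
import hashlib

def rolling_ids(daily_key: bytes, intervals: int = 144) -> list:
    # Derive the day's rotating broadcast IDs from one daily key.
    # Simplified: the real scheme uses HKDF/AES, not truncated SHA-256.
    return [hashlib.sha256(daily_key + i.to_bytes(2, "big")).hexdigest()[:16]
            for i in range(intervals)]

def check_exposure(observed_ids, diagnosed_daily_keys) -> bool:
    # On-device matching: re-derive IDs from the published daily keys of
    # diagnosed users and intersect them with IDs this phone observed.
    observed = set(observed_ids)
    return any(observed & set(rolling_ids(key))
               for key in diagnosed_daily_keys)
```

Because the server only ever sees daily keys volunteered by diagnosed users, neither Apple, Google, nor a health authority learns who met whom—the contrast with centralized approaches that drew privacy criticism.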
Less obvious, seemingly innocuous platforms are also coming under scrutiny for privacy concerns. The New York City Department of Education banned the video-conferencing app Zoom from school use after the FBI warned that it was vulnerable to hacking. Other free applications, such as Google Hangouts and Facebook Messenger, are raising similar concerns. Google subsidiary Verily requires a Google account to locate and book coronavirus testing—and says it may share users’ personal health information with outside parties like contractors and the government.
Privacy advocates argue that in newly distanced lifestyles, the use of tools like testing-site locators, videoconferencing platforms, and chat apps is no longer truly voluntary. With schools and offices closed, many people have no choice but to log onto Zoom for classes or meetings. And linking to a Google account seems a small price to pay to arrange testing. As The New York Times’ editorial board wrote in an April 7 opinion piece, “many Americans now rely on digital tools to work remotely and stay connected. They shouldn’t have to sacrifice their privacy to use them.”
Especially because, as Senator Maria Cantwell warned during an April 9 hearing on the role of Big Tech during the pandemic, “rights and data surrendered temporarily during an emergency can become very difficult to get back.” Many of the far-reaching powers granted to intelligence officials after 9/11, for example, are still in place nearly two decades later.
And yet, the very power that makes personal data a threat to individual privacy is also what makes it a major asset in managing public health. It is an incredibly powerful resource, with enormous capacity for good alongside its vulnerabilities. “In this particular case, if we have technology for minimizing harm, we have a moral obligation to use it,” Marcello Ienca, a bioethicist at the Swiss university ETH Zurich, told The New Yorker. “But we have to merge it with the best available technology in the areas of cybersecurity and privacy.”
Initiatives to regulate data in the era of COVID-19 are starting to emerge. At the end of April, U.S. senators proposed a privacy bill for COVID-19 contact tracing, while earlier in the month hundreds of academics around the world signed their names in support of privacy-friendly contact tracing apps.
Consumers and experts alike are still grappling with where to draw the line between public health and personal privacy. But one thing is clear: the stakes of the data privacy debate are higher than ever. “It’s important to consider,” Northeastern University professor of computer science and law Woodrow Hartzog said in May during a conversation about privacy and COVID-19, “that the decisions we make now will impact privacy for years to come.”
Source: Wunderman Thompson