Will Privacy Kill Innovation?

It’s a tug of war. On one side: fast-moving technologies, new business practices, new digital behaviors, the democratization of analytics, and the massive adoption of artificial intelligence (AI) and machine learning. On the other side: the fundamental right of individuals to protect their privacy.

It’s a tug of war between data-driven innovation and privacy, and the balance has not been right: privacy laws needed to be reinforced to match the changes driven by technological innovation. When it comes to privacy and personal data protection, the previous EU directive was drawn up in 1995, before the massive adoption of the internet, social networking and e-commerce. Since then, the world has changed. It has become data-driven, and personal data is the new gold.

Personal data is everywhere and highly accessible, and technology makes it easy to use. As a result, privacy has never been so exposed.

But awareness is building. Massive data breaches and revelations about the inappropriate use of personal data, or the lack of protection for it, are pushing the question of privacy into the headlines.

GDPR

With the now-active General Data Protection Regulation (GDPR), the planets are aligning for a complete revamp of what privacy means, how personal data is handled, and what it means for organizations and their ability to innovate with data.

In this tug of war, the balance has started to even out.

Leaving Privacy Out of the Game

Innovating with data brings a lot of benefits to companies, people and society in general. But it can also expose the privacy of individuals in unprecedented ways.

The documentary Pre-Crime shows how law enforcement authorities in the United States are using large amounts of data and machine learning algorithms to predict the likelihood that somebody will commit a crime, or to identify places and times where a crime is likely to be committed. Surely, preventing a crime is better than having to deal with its consequences.

But what if a person is wrongly targeted with preventive measures? Tagging someone as a potential high-risk criminal can have dramatic impacts on their life, such as reputational damage, inability to get a loan, or inability to get a job in a government agency. Who is accountable for making that decision? What logic did the program use to reach that conclusion? What data was used, and is it accurate? If not, how do we correct the data and get the person off the list?

The question of accountability is even more critical when machine learning algorithms are involved. The developer who coded the program most likely knows nothing about crime or the socio-economic and psychological factors that lead to it. He or she doesn’t even know what the program will eventually do, as it is designed to change over time, “learning” from the data.

Another example of a disruptive innovation that raises privacy concerns is the recent breakthrough in face recognition technologies. Face recognition is not new; it has been used for some time, in places such as border control, or by Facebook to tag people in our photo albums. But face recognition is now becoming mainstream, with rapidly growing adoption across many domains and industries.

In parallel, the technology itself is getting better every day. For instance, research has shown that it is possible to reconstruct a person’s face from his or her DNA. That can be very useful for identifying the victims of an accident, or for catching criminals from the DNA they left behind. But the same techniques can be abused by authoritarian regimes to monitor citizens’ whereabouts and track down political opponents.

Just as driving a car safely requires drivers to learn and respect the rules of the road defined by society, innovating with data requires organizations to learn and respect the rules and regulations designed to protect the privacy of individuals.

Read More: http://bit.ly/2NUP84J