For more than 30 years, privacy and data protection experts from more than 70 countries have gathered for the International Conference of Data Protection and Privacy Commissioners (ICDPPC) to share knowledge and provide leadership on how to balance the free flow of data with protecting consumers’ personal information.
The 39th annual ICDPPC – taking place September 25-29 in Hong Kong – is certain to focus on how dramatically observational technologies have improved, and how quickly they continue to advance. All over the world, connected devices are observing how much we sleep, how much we exercise, how fast we drive, where and when we took our pills, and so forth. Combined with our ability to develop algorithms and write software that analyzes sets of information based on rules, logic and human ethics, technologists may be able to discover new and valuable insights within that data that benefit not only certain people, but society as a whole. What happens, however, when humans are no longer writing the code?
Today, software performs analysis based on criteria typically written by human engineers and data scientists. However, recent advances in machine learning (ML) and artificial intelligence (AI) support the creation of software that can, with limited or no human intervention, modify its procedures and criteria based on data. Machine learning techniques can often help hone algorithmic analysis and improve results.
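To make the distinction concrete, here is a minimal sketch (not from the article; the scenario, names, and data are illustrative) contrasting a criterion written by a human engineer with one derived from data by a toy one-dimensional learner:

```python
# A human-written rule: the engineer picks the cutoff directly.
def human_rule(hours_of_sleep):
    return "at risk" if hours_of_sleep < 6.0 else "ok"

# A "learned" rule: the cutoff is derived from labeled examples,
# so the criterion shifts whenever the data does -- no human chooses it.
def learn_threshold(examples):
    """examples: list of (hours_of_sleep, label) pairs.
    Returns the midpoint between the highest 'at risk' value and the
    lowest 'ok' value -- a toy decision-stump learner."""
    at_risk = [h for h, label in examples if label == "at risk"]
    ok = [h for h, label in examples if label == "ok"]
    return (max(at_risk) + min(ok)) / 2.0

data = [(4.5, "at risk"), (5.0, "at risk"), (7.0, "ok"), (8.0, "ok")]
threshold = learn_threshold(data)
print(threshold)  # 6.0 for this data; different data yields a different rule
```

Even in this trivial case, the decision criterion is no longer stated anywhere in the code a reviewer can inspect; it emerges from the training data, which is the property that makes real ML systems both powerful and hard to audit.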
However, reduced human direction means that AI can do unexpected things. It also means data protection safeguards need to be crafted to ensure algorithmic decisions are lawful and ethical – a challenge when specific algorithmic criteria may be opaque or not practical to analyze. Increasingly, technologists and policymakers are grappling with hard questions about how machine learning works, how AI technologies can ethically interact with individuals, and how human biases might be reduced or amplified by algorithms that largely think for themselves.
Helpfully, the Future of Privacy Forum (FPF) recently curated leading research highlighting the privacy challenges posed by artificial intelligence, providing an excellent introduction to the complexity of these issues. Importantly, the Information Accountability Foundation (IAF) has just published a paper on Artificial Intelligence, Ethics and Enhanced Data Stewardship, clearly outlining the need for ethical foundations that reflect our human values and provide guiding principles and accountability as we embrace these technologies.
On September 25, I will be presenting at an official ICDPPC side event specifically focused on these topics and how to ensure sustainable innovation with effective data protection, co-hosted by the FPF and the IAF. Those attending will hear more on:
- How machine learning and artificial intelligence work
- Why these emerging technologies can support better outcomes for users of online services, patients with mental health conditions, and systems designed to combat bias
- The challenges and implications raised by machine learning and artificial intelligence in the context of efforts to support legal, fair, and just outcomes for individuals
- How these emerging technologies can be ethically employed, particularly in circumstances when artificial intelligence is used to interact with people or make decisions that impact individuals
This will be an excellent opportunity to learn how AI works, to understand why it matters for data protection frameworks, and to discuss the implications of algorithmic systems that interact with people, learn with little or no human intervention, and make decisions that matter to individuals. Together, the experts in this session will explore the application of “legal, fair and just” to AI and ML through examples and focused discussion, not just for the region but for the whole data protection community. Other speakers include:
- Rich Caruana, Senior Researcher, Microsoft
- Stan Crosley, IAF Senior Strategist
- Dr. Andy Chun, Associate Professor, Department of Computer Science, and Former Chief Information Officer, City University of Hong Kong
- Yeung Zee Kin, Deputy Data Protection Commissioner, Singapore
- Peter Cullen, IAF Executive Strategist
- John Verdi, FPF Vice President of Policy
The event will be held from 3:30 to 5:00 p.m. (15:30 – 17:00) in Kowloon Room II (M/F) of the conference venue in Hong Kong. Registration is not required. For more information, please contact John Verdi at firstname.lastname@example.org or Peter Cullen at email@example.com.