

Privacy-Preserving Machine Learning: Uses and Unintended Disparate Effects

September 29, 2021, 3:00pm to 4:00pm EDT

204B Maxwell Hall and Virtual (see event details)

This event has already occurred. The information may no longer be valid.

The Autonomous Systems Policy Institute (ASPI) speaker series presents Ferdinando Fioretto.

Many agencies release statistics about groups of individuals that are then used as input to critical decision processes. For example, census data is used to allocate funds and distribute resources to states and jurisdictions. Similarly, corporations are increasingly adopting machine learning systems to inform socio-technical decisions, including criminal risk assessment, lending, and hiring. The resulting decisions can have significant societal and economic impacts for the individuals involved.
In many cases, the released data contain sensitive information whose privacy is strictly regulated, and Differential Privacy has become the paradigm of choice for protecting it. However, while differential privacy provides strong privacy guarantees, it has recently become apparent that it may induce biases and fairness issues in downstream decision processes, including the allotment of federal funds, the apportionment of congressional seats, and biased classification outcomes in lending and hiring. These issues may adversely affect the health, well-being, and sense of belonging of many individuals, and they are currently poorly understood.
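
For readers unfamiliar with the mechanics, the short Python sketch below is not drawn from the talk; it is an illustrative assumption (the district names, counts, funding budget, and privacy parameter epsilon are all made up) showing the standard Laplace mechanism applied to population counts, and how the same absolute noise becomes a much larger relative error for a small group, which in turn shifts a proportional funding allocation more for that group.

    # Illustrative sketch only: how Laplace noise added for differential privacy
    # can have a disparate effect on a downstream proportional allocation.
    import numpy as np

    rng = np.random.default_rng(0)
    epsilon = 0.1                       # hypothetical privacy budget for the count release
    true_counts = {"large_district": 100_000, "small_district": 1_000}
    budget = 1_000_000                  # hypothetical funds, split proportionally to counts

    # Laplace mechanism: a count query has sensitivity 1, so noise scale is 1/epsilon.
    noisy_counts = {k: v + rng.laplace(scale=1 / epsilon) for k, v in true_counts.items()}

    def allocate(counts, budget):
        """Split the budget proportionally to the (true or noisy) counts."""
        total = sum(counts.values())
        return {k: budget * v / total for k, v in counts.items()}

    true_alloc = allocate(true_counts, budget)
    noisy_alloc = allocate(noisy_counts, budget)

    for name in true_counts:
        print(name, round(true_alloc[name]), round(noisy_alloc[name]))
    # The noise has the same scale for both districts, so the small district's
    # count (and therefore its share of the budget) is perturbed by a far larger
    # relative amount: a simple example of the disparate downstream effects the
    # talk examines.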

This talk will describe our efforts in understanding and addressing these issues at the intersection of privacy, fairness, and decision processes. I will first review the notion of Differential Privacy and discuss its applications in data release and learning tasks. I will then examine the societal impacts of privacy under a fairness lens and shed light on what aspects of the private algorithms and the data may be responsible for exacerbating unfairness. Finally, I will propose a path to partially address these fairness issues. The talk will conclude with an open discussion on the need for tools that policymakers may use to test and address unfairness in privacy-preserving decision processes.

Join Zoom Meeting


https://syracuseuniversity.zoom.us/j/94181628976?pwd=ZUZPZHBRajAwMWJrbFRHb3g3Z25Vdz09

For any questions, please contact Lynnell Cabezas at lncabeza@syr.edu.

This event was published on September 27, 2021.

