Thu 12 Nov 2020 01:03 - 01:04 at Virtual room 1 - Fairness

Machine learning software is increasingly used to make decisions that affect people's lives. But sometimes the core part of this software (the learned model) behaves in a biased manner that gives undue advantages to a specific group of people (where those groups are determined by sex, race, etc.). This "algorithmic discrimination" in AI software systems has become a matter of serious concern in the machine learning and software engineering communities. Prior work has sought to find "algorithmic bias" or "ethical bias" in software systems; once such bias is detected, mitigating it is extremely important. In this work, we (a) explain how ground-truth bias in training data affects machine learning model fairness and how to find that bias in AI software, and (b) propose a method, Fairway, which combines pre-processing and in-processing approaches to remove ethical bias from training data and trained models. Our results show that we can find and mitigate bias in a learned model without significantly damaging that model's predictive performance. We propose that (1) testing for bias and (2) bias mitigation should be routine parts of the machine learning software development life cycle. Fairway offers much support for both purposes.
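To make the "testing for bias" step concrete, here is a minimal sketch of a group-fairness check over model predictions. This is not the Fairway algorithm itself; it only computes two standard metrics (statistical parity difference and disparate impact) by splitting predictions on a binary protected attribute, with the convention (an assumption here) that group 1 is the unprivileged group and prediction 1 is the favorable outcome.

```python
# Hypothetical illustration of a bias test, not the Fairway implementation.

def group_rates(preds, protected):
    """Favorable-outcome rate (pred == 1) for each group (0 = privileged, 1 = unprivileged)."""
    rates = {}
    for g in (0, 1):
        group_preds = [p for p, a in zip(preds, protected) if a == g]
        rates[g] = sum(group_preds) / len(group_preds)
    return rates

def statistical_parity_difference(preds, protected):
    """Unprivileged favorable rate minus privileged favorable rate (0 is fair)."""
    r = group_rates(preds, protected)
    return r[1] - r[0]

def disparate_impact(preds, protected):
    """Ratio of unprivileged to privileged favorable rates (1 is fair)."""
    r = group_rates(preds, protected)
    return r[1] / r[0]

# Toy data: 8 predictions; the unprivileged group receives the
# favorable outcome far less often, so both metrics flag bias.
preds     = [1, 1, 1, 0, 1, 0, 0, 0]
protected = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_difference(preds, protected))  # 0.25 - 0.75 = -0.5
print(disparate_impact(preds, protected))               # 0.25 / 0.75 ≈ 0.333
```

In practice such a check would run on a held-out test set after training, alongside accuracy, so that both fairness and predictive performance gate the release of the model.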

Conference Day
Thu 12 Nov

Displayed time zone: (UTC) Coordinated Universal Time

01:00 - 01:30
01:00
2m
Talk
Do the Machine Learning Models on a Crowd Sourced Platform Exhibit Bias? An Empirical Study on Model Fairness
Research Papers
Sumon Biswas (Iowa State University, USA), Hridesh Rajan (Iowa State University, USA)
Link to publication DOI Pre-print Media Attached
01:03
1m
Talk
Fairway: A Way to Build Fair ML Software
Research Papers
Joymallya Chakraborty (North Carolina State University, USA), Suvodeep Majumder (North Carolina State University, USA), Zhe Yu (North Carolina State University, USA), Tim Menzies (North Carolina State University, USA)
DOI
01:05
1m
Talk
Repairing Confusion and Bias Errors for DNN-Based Image Classifiers
Student Research Competition
Yuchi Tian (Columbia University)
DOI
01:07
1m
Talk
Towards Automated Verification of Smart Contract Fairness
Research Papers
Ye Liu (Nanyang Technological University, Singapore), Yi Li (Nanyang Technological University, Singapore), Shang-Wei Lin (Nanyang Technological University, Singapore), Rong Zhao (Nanyang Technological University, Singapore)
DOI Pre-print
01:09
21m
Talk
Conversations on Fairness
Paper Presentations
Joymallya Chakraborty (North Carolina State University, USA), Sumon Biswas (Iowa State University, USA), Ye Liu (Nanyang Technological University, Singapore), Yi Li (Nanyang Technological University, Singapore), Yuchi Tian (Columbia University), M: Christian Bird (Microsoft Research)