Accenture’s new tool reportedly addresses unfair biases in algorithms by adjusting the impact of sensitive variables within a given dataset.
Interestingly, the tool surfaces the trade-off between accuracy and fairness and leaves the user to choose the balance. Rumman Chowdhury suggests the cost of increased fairness need not be high, and that if accuracy takes a big hit, it signals a deeper problem in the dataset that should be addressed.
Read here.
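Accenture hasn't published the tool's internals, but the description (muting the impact of sensitive variables, with a user-set fairness dial) matches a common family of pre-processing approaches such as feature repair. Below is a minimal, hypothetical sketch of that idea: a `lam` parameter blends a feature toward a group-independent distribution, where `lam = 0` leaves the data untouched and `lam = 1` erases the mean difference between sensitive groups. The function name and the toy data are illustrative assumptions, not Accenture's actual method.

```python
import numpy as np

def repair_feature(x, sensitive, lam):
    """Blend feature x toward its group-pooled mean.

    lam = 0 keeps the original data (full accuracy, original bias);
    lam = 1 removes the mean gap between sensitive groups (more
    fairness, potentially less accuracy). This lam is the user-chosen
    fairness/accuracy dial described above. Hypothetical sketch only.
    """
    x = np.asarray(x, dtype=float)
    repaired = x.copy()
    overall_mean = x.mean()
    for g in np.unique(sensitive):
        mask = sensitive == g
        group_mean = x[mask].mean()
        # Shift each group's values partway toward the overall mean.
        repaired[mask] = x[mask] + lam * (overall_mean - group_mean)
    return repaired

# Toy example: a feature (e.g., income) correlated with a binary
# sensitive attribute.
rng = np.random.default_rng(0)
sensitive = rng.integers(0, 2, size=1000)
income = rng.normal(50 + 10 * sensitive, 5)

for lam in (0.0, 0.5, 1.0):
    fixed = repair_feature(income, sensitive, lam)
    gap = fixed[sensitive == 1].mean() - fixed[sensitive == 0].mean()
    print(f"lam={lam:.1f}  group mean gap={gap:5.2f}")
```

As `lam` increases the between-group gap shrinks toward zero, and any model trained on the repaired feature can lean less on the sensitive attribute. In this framing, Chowdhury's point reads naturally: if a modest repair causes a large accuracy drop, the model was relying heavily on the sensitive signal, which points to a problem in the dataset itself.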
For other tools and services that Accenture has started offering relating to transparency and ethics in machine learning deployment, read here.
