Bay Area Police Try Out Controversial AI Software That Tells Them Where to Patrol

There are cultural and political reasons why certain crimes are over- or underreported in certain areas. Unless this is addressed and weighted against, training AI on that data will result in corresponding over- or under-policing of those areas.


It is worth asking: is the purpose of the system to achieve the same results more efficiently, or to improve decision-making compared to previous systems or human judgment?

In this scenario, with regard to policing certain areas, I believe we are aiming for the latter. For that reason, it is important to mitigate harm by addressing these biases. So far, this particular product has failed to win over several police departments in that battle.


Read here.


Also see here: 'Artificial Intelligence Is Now Used to Predict Crime. But Is It Biased?' (Smithsonian, March 5, 2018).

#AI #Policing #Automation #Bias

©2019 by AITHICS