
Partnership on AI: Algorithms Aren’t Ready to Automate Pretrial Bail Hearings

The Partnership on AI’s inaugural report examines algorithmic risk assessment tools as used by police and courts, particularly in decisions about pretrial detention.


The report identifies serious shortcomings in the development and use of these risk assessment tools, which revolve around three major issues: technical concerns; the human-computer interface; and transparency and accountability. In the report’s words:


“- Concerns about the validity, accuracy, and bias in the tools themselves;

- Issues with the interface between the tools and the humans who interact with them; and

- Questions of governance, transparency, and accountability.”


PAI acknowledges the desire to mitigate human error and bias in the justice system through the use of this technology, but argues it is a misunderstanding to see the technology as either objective or flawless.


The report offers two ways out of the current situation:


1) Jurisdictions cease using these tools in decisions about whether to detain individuals until the tools are shown to overcome the problems detailed in the report.


2) Attempt to improve the risk assessment tools: “That would necessitate procurement of sufficiently extensive and representative data, development and evaluation of reweighting methods, and ensuring that risk assessment tools are subject to open, independent research and scrutiny.”
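
The report does not prescribe a specific reweighting method, but a common approach in the fairness literature is the reweighing scheme of Kamiran and Calders (2012), which weights training examples so that group membership and outcome label are statistically independent in the training data. A minimal sketch in Python, with illustrative data and names:

from collections import Counter

def reweighing_weights(groups, labels):
    # Per-example weights that make the joint distribution of group and
    # label factorize: weight(s, y) = P(S=s) * P(Y=y) / P(S=s, Y=y)
    # (Kamiran & Calders, 2012). `groups` and `labels` are parallel
    # sequences of discrete values.
    n = len(labels)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    joint_counts = Counter(zip(groups, labels))
    return [
        (group_counts[s] * label_counts[y]) / (n * joint_counts[(s, y)])
        for s, y in zip(groups, labels)
    ]

# Illustrative data: group "a" is over-represented among positive labels,
# so its positive examples are down-weighted and its negatives up-weighted.
groups = ["a", "a", "a", "b", "b", "b", "b", "b"]
labels = [1, 1, 0, 0, 0, 0, 0, 1]
for g, y, w in zip(groups, labels, reweighing_weights(groups, labels)):
    print(g, y, round(w, 3))

The resulting weights can be passed to most learners (for example, as a sample-weight argument) so the model does not simply reproduce the historical correlation between group and outcome; this addresses only one of the sources of bias the report discusses.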


The report concludes that policymakers must set robust standards for risk assessment tools, in addition to the 10 requirements outlined by PAI.


Read the article here.


Read the report here.



#PAI #PartnershipOnAI #Crime #Justice #Automation #Risk #AI #Bail
