13 February 19, 09:11
![when-ai-decides-featured](https://media.kasperskydaily.com/wp-content/uploads/sites/92/2019/02/11105747/when-ai-decides-featured-1024x673.jpg)
Quote: Despite our previous coverage of some major issues with AI in its current form, people still entrust very important matters to robot assistants. Self-learning systems are already helping judges and doctors make decisions, and they can even predict crimes that have not yet been committed. Yet users of such systems are often in the dark about how the systems reach conclusions.
**All rise, the court is now booting up**
In US courts, AI is deployed in decisions relating to sentencing, preventive measures, and mitigation. After studying the relevant data, the AI system assesses whether a suspect is prone to recidivism, and its decision can turn probation into a real sentence, or lead to a refusal of bail.
For example, US citizen Eric Loomis was sentenced to six years in jail for driving a car in which a passenger fired shots at a building. The ruling was based on the COMPAS algorithm, which assesses the danger posed by individuals to society. COMPAS was fed the defendant’s profile and track record with the law, and it identified him as an “individual who is at high risk to the community.” The defense challenged the decision on the grounds that the workings of the algorithm were not disclosed, making it impossible to evaluate the fairness of its conclusions. The court rejected this argument.
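To illustrate why the opacity complaint matters, here is a minimal sketch of how a statistical recidivism risk score of this kind could be produced. COMPAS itself is proprietary and its workings are not disclosed, so the features, labels, model, and cutoff below are entirely hypothetical; the point is only that the single number a court sees reveals nothing about the model that produced it.

```python
# Hypothetical sketch of a recidivism risk score. COMPAS is proprietary;
# none of the features, labels, or thresholds here come from it.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up defendant features: [age, prior_offenses, age_at_first_offense]
X_train = np.array([
    [25, 3, 17],
    [45, 0, 45],
    [19, 5, 15],
    [38, 1, 30],
])
# Made-up labels: 1 = reoffended within two years, 0 = did not
y_train = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# Score a new defendant: the court sees only this probability,
# not the coefficients, training data, or feature choices.
defendant = np.array([[22, 2, 18]])
risk = model.predict_proba(defendant)[0, 1]
print(f"Estimated recidivism risk: {risk:.0%}")

if risk > 0.5:  # hypothetical cutoff
    print("Flagged: high risk to the community")
```

Because everything upstream of the printed score is hidden in a real deployment, the defense in a case like Loomis's has no way to audit whether the features or training data are fair, which was precisely the objection the court rejected.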
Full article: https://www.kaspersky.com/blog/when-ai-decides/25607/