Friday Factoids Catch-Up: Leaning into the Z-generation (The erosion and datedness of humanistic values)

“The way to predict future behavior is to look at past behavior.” This statement is hammered into the heads and hearts of anyone seeking to establish a long-term career in the behavioral and psychological fields. Our psychological tests and empirical research are guided and structured to line up with this concept. But in unconsciously promoting that theory, have we essentially highlighted our own dispensability to this generation?


I recently read a BBC News article about the Durham police force in England, which is piloting a program in which an app assesses the risk posed by a suspect. According to the article, the police will use the app to estimate the probability that a suspect will commit a violent act if not detained. During testing, the app, called “Hart,” was accurate 98 percent of the time when predicting low-risk offenders and 88 percent of the time when predicting high-risk offenders.
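For readers curious what a figure like “98 percent accurate when predicting low-risk offenders” actually measures, here is a minimal sketch, in Python, of how per-forecast accuracy is commonly computed. To be clear, this is not Hart’s code; its model, features, and data are not public, and the (forecast, outcome) pairs below are invented purely for illustration.

    # Hypothetical sketch of per-forecast accuracy for a risk tool.
    # NOT Hart's actual model or data; the pairs below are invented.

    def forecast_accuracy(pairs, predicted_label):
        """Share of forecasts of `predicted_label` that matched the actual outcome."""
        forecasts = [(pred, actual) for pred, actual in pairs if pred == predicted_label]
        if not forecasts:
            return 0.0
        correct = sum(1 for pred, actual in forecasts if pred == actual)
        return correct / len(forecasts)

    # Hypothetical test results: (model's forecast, what actually happened).
    results = [
        ("low", "low"), ("low", "low"), ("low", "low"), ("low", "high"),
        ("high", "high"), ("high", "high"), ("high", "low"),
    ]

    print("Accuracy of low-risk forecasts: ", forecast_accuracy(results, "low"))   # 3/4 = 0.75
    print("Accuracy of high-risk forecasts:", forecast_accuracy(results, "high"))  # 2/3 = 0.67

The point is simply that each published figure answers a narrow question: of the suspects the tool labeled low (or high) risk, how many turned out that way.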


I must admit that, as a self-proclaimed Humanist, I had a knee-jerk reaction when I first started reading this article. The thought of a mathematical equation making decisions with serious consequences for flesh-and-blood people frightened me. Images from the thriller “Minority Report,” in which an innocent man is found guilty of murder by a cold computer, came to mind. However, as I read on and understood the accuracy of the algorithm used in the app, I realized that my bias against machines might be illogical. This is natural, because human beings are at times illogical and our conclusions are often wrong. When we watch the news, we constantly see evidence of our “wrong” and illogical behavior. We see examples of the law being enforced unfairly based on gender, race, and/or class. Moreover, our biases do not start and end with the law. We also observe them in how we hire employees, how we pick our mates, whom we associate with, which political parties we support, and so on. And if we think about the Hart program more logically, we might conclude that, if pulled over by a police officer, most of us would prefer to be judged by a cold, heartless, algorithmic computer rather than a hot-blooded cop who is having a hard day.


We are living in an age where more and more is becoming automated, and I believe we can be more rational in how we judge the computers and machines that are taking on roles once performed by humans. Still lingering on the morality fence with Carl Rogers and Hippocrates? Then consider the following example: we humans are somewhat okay with the fact that 1.3 million people are killed every year in automobile accidents, accepting these deaths as part of our lives and as ordinary human error. Yet if a driverless car were to hit and kill a child who ran into the street after his ball, it is safe to guess there would be a collective public outcry to end driverless cars, ultimately holding machines more accountable than the people who made and designed them. The fear of machines comes from the emotional part of our minds rather than the logical part. Logically speaking, I am sure machines will make mistakes; but if the statistics show they make fewer mistakes than we humans do, should we not ethically yield and defer to their strengths?


Reference: http://www.bbc.com/news/technology-39857645


Dianne Rapsey-Vanburen, MA
WKPIC Doctoral Intern

