Digital Ethics

By Caroline Santoro

     Innovation in our digital world has brought with it an onslaught of ethical implications that we have scarcely begun to consider. What once existed as science fiction is now becoming reality. We must evaluate what these technologies could mean for our society before we are overwhelmed by invasive algorithms, job-stealing robots, and artificial intelligence superior to our own.

     For example, a risk assessment system called COMPAS is used in the United States justice system to predict convicts’ recidivism risk, or how likely they are to reoffend. Its scores inform bail amounts and sentence lengths across the country. However, when the news organization ProPublica analyzed COMPAS for algorithmic bias, it found that, although the system achieved similar overall accuracy for white and black defendants in the Florida county it examined, black defendants were nearly twice as likely to be erroneously labeled high risk (Spielkamp, 2017). Should the justice system use technology like this when the potentially biased results could have an immense impact on the lives of Americans?
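     The statistical point behind ProPublica’s finding is easy to miss: a model can be equally accurate for two groups overall while wrongly flagging one group far more often. The short Python sketch below, which uses hypothetical confusion-matrix counts rather than ProPublica’s actual data, illustrates how the two measures can diverge:

```python
# Hypothetical sketch (not ProPublica's data): a risk model can show the
# same overall accuracy for two groups while one group bears twice the
# false positive rate.

def rates(tp, fp, tn, fn):
    """Return (accuracy, false positive rate) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    fpr = fp / (fp + tn)  # flagged high risk, yet did not reoffend
    return accuracy, fpr

# Invented outcomes for 1,000 people per group.
groups = {
    "Group A": dict(tp=290, fp=90, tn=510, fn=110),
    "Group B": dict(tp=450, fp=150, tn=350, fn=50),
}

for name, counts in groups.items():
    acc, fpr = rates(**counts)
    print(f"{name}: accuracy = {acc:.0%}, false positive rate = {fpr:.0%}")

# Output: both groups reach 80% accuracy, but Group B's false positive
# rate (30%) is double Group A's (15%).
```

     Which of these measures should define “fairness” is exactly what ProPublica and the system’s maker disputed; when underlying reoffense rates differ between groups, a model generally cannot equalize both measures at once.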

     Two Stanford researchers used over 35,000 photos of self-identified homosexual and heterosexual faces from an online dating website to test how accurately an algorithm could predict a person’s sexual orientation from his or her face alone. Shockingly, the researchers reported that the algorithm correctly distinguished between heterosexual and homosexual men 81% of the time and between heterosexual and homosexual women 71% of the time. LGBTQ groups and advocates consider this a dangerous invasion of privacy, as homosexual people could be unfairly targeted using this software (Hawkins, 2017). The researchers responded, “We did not build a privacy-invading tool. We studied existing facial recognition technologies, already widely used by companies and governments, to see whether they can detect sexual orientation more accurately than humans.” They added, “We were terrified to find that they do. This presents serious risks to the privacy of LGBTQ people” (Kosinski & Wang, 2017). Whether or not their research is ethical, the existence of technology with these capabilities has frightening privacy implications for everyone, regardless of sexual orientation.

     Frrole, an artificial intelligence company, recently developed DeepSense, a service that allows companies to create personality profiles of job applicants (Thibodeaux, 2017). These comprehensive personality reports are quite different from anything else online because, in addition to organizing and reporting on an applicant’s online presence, the algorithm makes predictions about the person’s qualities. For example, the DeepSense personality profile created for President Trump describes him as “Easily excitable and sensitive. Passionate and impulsive, he wears his emotions on his sleeve. He is slightly emotional and judgmental. He is usually competitive and challenging as well as frank” (Frrole, 2017). Letting companies see which news platforms a person uses and which other accounts he or she interacts with opens the door to political and social discrimination. In addition, the personality predictions are drawn from a social media presence that is not necessarily an accurate depiction of who the person is or how he or she would perform at a specific job.

     Just as stem-cell research, therapeutic cloning, and gene editing gave rise to the field of bioethics, artificial intelligence and robotics will require ethical debate, regulation, and public policy to protect the privacy and livelihood of everyone affected by these world-changing technologies.

Works Cited

Frrole. “DeepSense – @realdonaldtrump.” Frrole DeepSense App, www.frrole.ai/deepsense-app/donaldtrump.

Hawkins, Derek. “Researchers Use Facial Recognition Tools to Predict Sexual Orientation. LGBT Groups Aren’t Happy.” The Washington Post, WP Company, 12 Sept. 2017, www.washingtonpost.com/news/morning-mix/wp/2017/09/12/researchers-use-facial-recognition-tools-to-predict-sexuality-lgbt-groups-arent-happy/?utm_term=.4cb3e39bb146.

Kosinski, Michal, and Yilun Wang. “Author’s Note.” Google Drive, Google, 28 Sept. 2017, docs.google.com/document/d/11oGZ1Ke3wK9E3BtOFfGfUQuuaSMR8AO2WfWH3aVke6U/edit?usp=sharing.

Spielkamp, Matthias. “Inspecting Algorithms for Bias.” MIT Technology Review, MIT Technology Review, 12 June 2017, www.technologyreview.com/s/607955/inspecting-algorithms-for-bias/.

Thibodeaux, Wanda. “This Artificial Intelligence Can Predict How You’ll Behave at Work Based on Social Media.” Inc., 3 Nov. 2017, www.inc.com/wanda-thibodeaux/this-artificial-intelligence-can-use-social-media-to-tell-hiring-managers-about-your-personality.html.