Human Morals in Machines


One of my co-authors, Laura Sawyer, shared the TED Talk below, “Machine intelligence makes human morals more important” by Zeynep Tufekci. Tufekci discusses both the precision of our machine learning algorithms and how the unconscious biases of the programmers developing those algorithms are making their way into them. She explores these issues through examples of how machine learning could affect the hiring (read: not hiring) of people who might struggle with depression or who are likely to become pregnant, how these algorithms use biases to decide who gets paroled, and how they decide who sees what content or ads.

Tufekci makes a strong case for the need to bring transparency to, and audit, our machine learning algorithms, because once those algorithms begin to learn on their own, they become a black box. We don’t know what they’re learning, how they’re making decisions, or what biases are shaping those decisions.

In our republic, we’ve set up checks and balances for decision-making with the Executive, Legislative, and Judicial branches of government. In our companies, we have checks and balances for decision-making in the structure of the executive team, board of directors, and investors. Our society’s decision-making has thrived because of thoughtful and rigorous debate. So why couldn’t we hold our machines to the same standards?

My favorite quote from Tufekci’s talk is below. Enjoy the video.

“We cannot outsource our moral responsibilities to machines. Artificial intelligence does not give us a get out of ethics free card.”
