SU Leads the Way in Exploring Ethical Dimensions of Technology

Written by Lincoln Vander Veen

April 21, 2020

Seattle University’s Initiative in Ethics and Transformative Technologies launches two new strategic efforts.

Nathan Colaner, a senior instructor of management in the Albers School of Business and Economics, is one of the driving forces behind two strategic efforts of Seattle University’s Initiative in Ethics and Transformative Technologies (IETT). One of those efforts is a free online course for the public while the other helps elected officials understand their obligations around the use of emerging technologies. 

Recently highlighted in a Seattle Times piece, the course helps elucidate the ethical quandaries of artificial intelligence.

“This course is about the awareness of the issues and inspiring people to learn more on their own,” explains Colaner. The self-paced, self-directed course, which demystifies thorny ethical issues such as facial recognition and the use of big data, has seen enrollment balloon to well over 500 people.

“The ultimate goal is to follow up with the folks who have completed the course and ask, ‘How did you benefit from what you learned? Do you have employees or colleagues who are also interested in exploring these topics further?’ because artificial intelligence is changing the world, not just the tech giants we all know,” he says.

And as the private sector utilizes ever more potentially intrusive technologies to streamline its operations, another effort of the IETT is kicking into gear—a workshop that educates public policy makers on emerging technologies and their role in protecting the common good. 

The workshop, tentatively titled the “Algorithmic Accountability Act,” is intended to educate elected officials on how governments can protect the rights of individuals, especially those most vulnerable to exploitation. 

“Lots of decisions are now automated and elected officials have an obligation to develop mechanisms that serve as guardrails for those machines,” says Colaner.

Think, for instance, of the Department of Social and Health Services. If DSHS contracts with a vendor utilizing machine learning to determine the allocation of public benefits, elected officials must insist that implicit or explicit biases do not exist in the algorithm. 

Sound complicated? It is, and Seattle U and the IETT are committed to helping lead individuals and governments to think critically and deeply about these issues.
