Presenter: Katie Shilton


Description: As the discourse on responsible and trustworthy AI intensifies, Participatory AI (PAI) presents a compelling approach to the democratic and ethical development of automated technologies. But how should we think about whether, and how, participatory methods increase the trustworthiness of AI systems? In response to the recent growth in PAI research, we conducted a systematic examination of the methods and theoretical lenses used in participatory AI projects, analyzing 95 global PAI projects spanning AI design, evaluation, and governance. This talk will focus on the role that HCI methods play in PAI projects, and on the places where HCI methods for incorporating stakeholder participation can be applied to the design of trustworthy AI systems.

Bio: Katie Shilton is a professor in the College of Information at the University of Maryland, College Park, and is currently visiting faculty in Computational Media at UCSC. Her research focuses on technology and data ethics. She is a co-PI of the NSF Institute for Trustworthy Artificial Intelligence in Law & Society (TRAILS), and a co-PI of the UMD Values-Centered Artificial Intelligence (VCAI) initiative. She was also recently the PI of the PERVADE project, a multi-campus collaboration focused on big data research ethics. Other projects include improving online content moderation with human-in-the-loop machine learning techniques and designing experiential data ethics education. Katie received a B.A. from Oberlin College, a Master of Library and Information Science from UCLA, and a Ph.D. in Information Studies from UCLA.


Hosted by: Professor Norman Su


Zoom link: https://ucsc.zoom.us/j/96830885491?pwd=eiLqJBpSvDE4jibDyUc61sWbq04baB.1

 

NOTE: There will be a remote viewing room on the UCSC campus, in room E2-399.
 
