Welcome! This website summarizes my work from a competitive graduate summer research position in 2022. Here you'll find information about me and my advisor, the project we worked on, a blog that documents notable learning moments, and a final research paper that outlines next steps. Enjoy!
About Me
In 2022 I completed my Master's in Computer Science at Northeastern University in Vancouver, BC. I received my undergraduate degree in the Liberal Arts and Sciences from Quest University Canada in Squamish, BC. I previously worked in education and am continuing my passion for inclusive education. I've worked as both a Teaching Assistant and a Student Ambassador at Northeastern. In 2021 I completed a co-op as a Software Developer with a local real estate marketing firm in BC, Canada. Find my LinkedIn here. Navigate back to sommerharris.com.
About My Advisor
John Alexis Guerra Gómez is an Information Visualization Researcher and Engineer. In his own words: "I help people extract insights from their data using interactive infovis and data science. PhD in Computer Science, Assistant Teaching Professor at Northeastern University Bay Area. I conduct research on Visual Analytics, Accessibility, Big Data, Human Computer Interaction and Web Development. Formerly at UC Berkeley, Uniandes Colombia, Yahoo Labs, Xerox PARC and DUTO."
You can access his website at this link.
About My Project
The general area of our research project is data visualization for instances of irresponsible artificial intelligence. This means that we research and aim to improve best practices for visualizing this kind of data, with domain experts as our main audience. We are working in Observable notebooks with D3 and Vega-Lite. The following interactive tool was built by John, our research advisor, to explore datasets. This tool is an example of what we would aim to build, or at least propose features for, during this project. This instance of Navio is loaded with data from our project, so you can also explore our dataset below.
The specific problem we are tackling is that AI, in its current state, is often unfair, furthers pre-existing societal discrimination, and is difficult to control. Our solution is to build an Irresponsible AI Atlas that tracks which instances of irresponsible AI actually exist, in order to foster more discussion and action toward accountability.
There are two knowledge gaps: one we aim to cover with the AI Atlas itself, and one we aim to cover in our research. The gap the AI Atlas addresses is: what instances of irresponsible AI exist? We want to track all records of them in one place. The gap our research addresses is: can an interactive visualization tool better support the analysis of data from instances of irresponsible AI?
We can explore this question by researching which features have made data visualization tools successful in this domain in the past. We will conduct a literature review on best practices. One key to answering this question will be developing usage scenarios for the specific audience that might use our tool.
Some usage scenario examples:
- Rachel, a PhD student who does research to support AI accountability organizations, wants to view the description of an incident and the company responsible by clicking on the associated scatterplot point.
- Rachel wants to look at the instances of irresponsible AI on a timeline, in order to determine whether the impact on specific marginalized populations is growing over time.
- Rachel wants to select a small section of the timeline and zoom in to look at instances of irresponsible AI in just that timeframe, for example, at specific dates associated with presidential terms.
- Rachel is an AI researcher who wants to compare the domestic versus international impact of US companies irresponsibly using AI (we are still developing use cases that are map-specific, or that at least use location data).
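The usage scenarios above can be sketched as a single Vega-Lite specification. This is only an illustrative sketch, not our actual notebook code: the data URL and field names (`date`, `company`, `description`) are hypothetical placeholders for whatever columns the incident dataset ends up using.

```javascript
// Hypothetical Vega-Lite spec sketching the scenarios above:
// points on a timeline, a brushable/zoomable x-axis, and a tooltip
// that surfaces the incident description and responsible company.
const timelineSpec = {
  $schema: "https://vega.github.io/schema/vega-lite/v5.json",
  data: { url: "incidents.json" }, // placeholder data file
  params: [
    // An interval selection bound to the scales lets the user
    // drag/scroll to zoom into a small slice of the timeline.
    { name: "zoom", select: { type: "interval", encodings: ["x"] }, bind: "scales" }
  ],
  mark: { type: "point", tooltip: true },
  encoding: {
    x: { field: "date", type: "temporal", title: "Date of incident" },
    y: { field: "company", type: "nominal", title: "Company responsible" },
    // Tooltip fields approximate the "click a point for details" scenario.
    tooltip: [
      { field: "description", type: "nominal" },
      { field: "company", type: "nominal" }
    ]
  }
};
```

In Observable, a spec like this could be rendered with the built-in `vl.render` or `vegaEmbed`; richer click-to-detail behavior (beyond tooltips) would likely need a point selection or custom D3 handling.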