Week 4

Reflections from the Cascad.AI conference

A big highlight this week was watching the sessions from the Cascad.AI conference. The conference brought together speakers from both industry and academia to speak openly and share insights on emerging best practices for building and deploying AI responsibly.

This conference is relevant to our research project because our dataset covers cases of irresponsible AI. Learning more about best practices in responsible AI, and hearing from analysts in industry and academia, helps me understand which parts of the data might matter most to an analyst and how to design a data visualization tool that supports the display of key insights.

Of everything that was shared, the following takeaways stood out to me the most. For each one, I also note how the insight might shape the design of a visualization tool:

  • It is crucial to give ‘non-technical people’ a seat at the table in human rights conversations related to AI. One form this might take is technological review boards (similar to transportation or medical review boards). For our data visualization tool, this suggests we might want to include data on existing review or accountability organizations for a given instance of AI.

  • We must acknowledge that we are all technological citizens and that technology does not happen in a vacuum. For our visualization research, this is a reminder that pre-existing structures in society are important to take into account when we visualize irresponsible AI. For example, when looking at populations impacted by AI, we might let users filter by populations that are already marginalized to see how an AI instance affects them in particular (see the sketch after this list).

  • It is important for human actors to be responsible for the AI they create rather than falsely attributing the capacity for moral or ethical reasoning to AI. For our visualization tool, this underscores the importance of highlighting who is responsible for creating and implementing each instance of AI. A tool that surfaces this information can contribute to a culture of people taking ownership of, and being accountable for, their creations.

  • It is important to ultimately give the domain expert flexibility and power over how the information they are analyzing is displayed. To implement this, we could build into the tool a range of choices for which data is displayed and how it is displayed. This supports domain experts in getting the visualizations they want, so they are empowered rather than limited by our tool; the sketch below illustrates the idea.
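
As a concrete, purely illustrative example of the last two points, here is a minimal sketch of how an incident table could let an analyst filter by affected population and pick the sort order themselves. The rows and the column names (incident, responsible_party, affected_population, year) are invented for illustration and are not drawn from our actual dataset or tool.

```python
import pandas as pd

# Toy stand-in for an irresponsible-AI incident table.
# All rows and column names here are hypothetical.
incidents = pd.DataFrame([
    {"incident": "Biased hiring model", "responsible_party": "Acme Corp",
     "affected_population": "women", "year": 2018},
    {"incident": "Facial recognition misuse", "responsible_party": "City PD",
     "affected_population": "Black residents", "year": 2020},
    {"incident": "Loan-scoring disparity", "responsible_party": "BankCo",
     "affected_population": "low-income applicants", "year": 2021},
])

def incident_view(df, population=None, sort_by="year", ascending=True):
    """Return a slice of the incident table filtered and ordered by the analyst's choices."""
    view = df if population is None else df[df["affected_population"] == population]
    return view.sort_values(sort_by, ascending=ascending)

# Example: surface incidents affecting one marginalized group, most recent first.
print(incident_view(incidents, population="women", sort_by="year", ascending=False))
```

The point is not the specific library but the design choice: the analyst selects the population, the field, and the ordering, rather than the tool deciding for them, and the responsible party stays visible alongside each incident.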

Written on June 13, 2022