
Workshop 1: Embodied AI & Gender

Registration deadline: 14 September 2021, anywhere on Earth

Workshop date: 15 September 2021, 14:00 - 18:30 (CET)

We welcome a broad audience from various disciplines and practices, including but not limited to roboticists, human-robot interaction researchers, human-computer interaction researchers, philosophers, engineers, computer scientists, sociologists, and psychologists. We welcome practitioners working in industry and non-profits, and we particularly encourage citizens and societal associations interested in the topic of gender and AI to participate. We will make the workshop inclusive, making space for a variety of voices. All expertise is valuable.

REGISTER HERE

Program

14:00 Welcome

14:30 Keynote and Q&A: Eduard Fosch-Villaronga (Leiden University)

15:00 Keynote and Q&A: Catherine D’Ignazio (MIT)

15:45 Break

16:00 Keynote and Q&A: Lisa Mandemaker (Designer and Artist)


Workshop for registered participants:

16:30 Introduction to the activities and division into groups

16:35 Our ways of working in Embodied AI: how we treat Gender now

17:00 Break

17:15 Our ways of working in Embodied AI: desirable future for Gender in AI

17:45 Plenary Discussion

State of the world. Today, embodied AI (e.g. smart objects, robots, smart personal assistants) is an expression of power: it can support human flourishing through human-agent relationships, but it can also enable surveillance, perpetuate bias, and amplify injustice. Recent years have seen growing calls to consider gender in the design and evaluation of software, websites, and other digital technology, and research has outlined how gender shapes both the design and the use of such technology. Biases, stereotypes, and gender norms are often embedded in technology, implicitly and explicitly, with massive societal impact. Designers, researchers, and societal stakeholders all share the responsibility to reflect on the values, perspectives, biases, and stereotypes they embed in embodied AI technology.

Integrating DEI in the way we develop embodied AI. We have learned, for example, that voice assistants may not recognize certain accents, that image recognition algorithms embedded in IoT devices may mislabel people based on assumed gender, and that embodied AI such as robots can be non-inclusive by design, e.g. robots with "female" voices and white bodies. These issues raise several questions about gender in the design of embodied AI: How do we design and develop while remaining mindful of the biases, stereotypes, and values we hold about gender? How do we integrate these reflections into our processes rather than confining them to an afterthought? What practical actions can we take in our daily practice to bring diversity, equity, and inclusion into the design process itself?


What is the workshop about? In this half-day workshop, we will learn more about gender in embodied AI together with experts, artists, colleagues, and societal stakeholders. We will use methods from critical design to 1) create a hands-on understanding of our current practices and narratives, and 2) compile a concrete, desirable future scenario that provides practical pointers for implementing design processes with diversity, equity, and inclusion in mind.

Outcomes. We will co-shape tangible future scenarios of DEI practices that will help us in our everyday work. Outcomes may be compiled into an academic publication and a zine.


Keynote: Catherine D’Ignazio

(Data+Feminism Lab, MIT)

Catherine D’Ignazio is a scholar, artist/designer and hacker mama who focuses on feminist technology, data literacy and civic engagement. She has run reproductive justice hackathons, designed global news recommendation systems, created talking and tweeting water quality sculptures, and led walking data visualizations to envision the future of sea level rise. With Rahul Bhargava, she built the platform Databasic.io, a suite of tools and activities to introduce newcomers to data science. Her 2020 book from MIT Press, Data Feminism, co-authored with Lauren Klein, charts a course for more ethical and empowering data science practices. Her research at the intersection of technology, design & social justice has been published in the Journal of Peer Production, the Journal of Community Informatics, and the proceedings of Human Factors in Computing Systems (ACM SIGCHI). Her art and design projects have won awards from the Tanne Foundation, Turbulence.org and the Knight Foundation and have been exhibited at the Venice Biennale and the ICA Boston. D’Ignazio is an Assistant Professor of Urban Science and Planning in the Department of Urban Studies and Planning at MIT. She is also Director of the Data + Feminism Lab, which uses data and computational methods to work towards gender and racial equity, particularly in relation to space and place.

Title: Data Feminism

As data are increasingly mobilized in the service of governments and corporations, their unequal conditions of production, their asymmetrical methods of application, and their unequal effects on both individuals and groups have become increasingly difficult for data scientists, and others who rely on data in their work, to ignore. But it is precisely this power that makes it worth asking: “Data science by whom? Data science for whom? Data science with whose interests in mind?” These are some of the questions that emerge from what we call data feminism, a way of thinking about data science and its communication that is informed by the past several decades of intersectional feminist activism and critical thought. Illustrating data feminism in action, this talk will show how challenges to the male/female binary can help to challenge other hierarchical (and empirically wrong) classification systems; how an understanding of emotion can expand our ideas about effective data visualization; how the concept of invisible labor can expose the significant human efforts required by our automated systems; and why the data never, ever “speak for themselves.” The goal of this talk, as with the project of data feminism, is to model how scholarship can be transformed into action: how feminist thinking can be operationalized to imagine more ethical and equitable data practices.

Keynote: Lisa Mandemaker

(Design for Debate)

Lisa Mandemaker is a speculative designer with a strategic, contextually aware and critical approach to research and practice. She considers design as a tool for debate and crafts (future) narratives through designed artifacts, using these as a form of storytelling to challenge assumptions, question or excite. Making impactful, topical work and creating strong interventions and conversation starters are key elements to her practice. Her work centres on the effects of emerging technology on people and their behaviour.

“If there is a new piece of technology arriving, it’s important we don’t just blindly accept it: we really need to think about what it does and means.”


Keynote: Eduard Fosch-Villaronga

(Leiden University)

Dr. Fosch-Villaronga is an Assistant Professor at the eLaw Center for Law and Digital Technologies at Leiden University (NL) where he investigates legal and regulatory aspects of robot and AI technologies, with a special focus on healthcare. Eduard recently published the book ‘Robots, Healthcare, and the Law. Regulating Automation in Personal Care’ with Routledge and is interested in human-robot interaction, responsible innovation, and the future of law.

Eduard is the co-leader of the Ethical, Legal, and Societal Aspects Working Group of the H2020 COST Action 16116 on Wearable Robots and of the Social Responsibility Working Group of the H2020 COST Action 19121 GoodBrother. He served the European Commission in the Sub-Group on Artificial Intelligence (AI), connected products and other new challenges in product safety of the Consumer Safety Network (CSN), which worked on the revision of the General Product Safety Directive.

Previously, he worked as a Marie Skłodowska-Curie Postdoctoral Researcher under the LEaDing Fellows programme at eLaw (Jan 2019 - Dec 2020). He also held postdoctoral positions at the Microsoft Cloud Computing Research Center at Queen Mary University of London (UK, 2018), investigating the legal implications of cloud robotics, and at the University of Twente (NL, 2017), exploring iterative regulatory modes for robot governance. Eduard Fosch-Villaronga holds an Erasmus Mundus Joint Doctorate (EMJD) in Law, Science, and Technology coordinated by the University of Bologna (IT, 2017), an LL.M. from the University of Toulouse (FR, 2012), an M.A. from the Autonomous University of Madrid (ES), and an LL.B. from the Autonomous University of Barcelona (CAT, 2011). Eduard is also a qualified lawyer in Spain, and his publications are available online.

Title: On the consequences of missing diversity considerations in the development of algorithms

Algorithms can infer user preferences, sensitive attributes such as race, gender, and sexual orientation, and opinions. These opaque methods can predict behavior for marketing purposes and influence behavior for profit, serving the attention economy and reinforcing existing biases such as gender stereotyping. Content moderation tools that silence drag queens or misgender users online are examples of how failing to integrate gender and sex considerations into AI can have adverse consequences for society. Although two international human rights treaties include explicit obligations relating to harmful and wrongful stereotyping, these stereotypes persist online and offline. In this short talk, we identify how inferential analytics may reinforce gender stereotyping and affect marginalized communities in these two cases, in the hope of shedding light on opportunities for addressing these concerns in the context of AI and embodied agents.