Ms. Dewey, launched in 2006, was Microsoft’s first virtual assistant: a search engine fronted by more than 600 lines of recorded dialogue, with Janina Gavankar playing a flirtatious librarian. In her 2013 doctoral dissertation, information scholar Miriam Sweeney examined the gendered and racialized implications of Dewey’s replies. Sweeney pointed to lines such as “Hey, if you can get inside your computer, you can do whatever you want to me,” and to the fact that searching for “blow jobs” triggered a clip of Dewey eating a banana. Dewey’s virtual behavior, in short, was designed for a white, straight, male user.
Later virtual assistants like Siri and Cortana were accused of perpetuating similar patterns of prejudice. When Microsoft programmed Cortana to firmly rebuff sexual queries and advances, the change provoked outrage on social media. Now the criticism has reversed: the concern around ChatGPT and similar bots is that users empathize with them too much. Where the previous generation of AI was sold as the perfect servant, the new chatbot search engines are being positioned as our confidants and therapists.
The criticism has shifted from “people are too mean to bots” to “people are too nice to them” because the political economy of AI has suddenly and dramatically changed. Tech companies are no longer selling us servants; they are selling us best friends. Yet each generation of bots has invited its own pathological response, and each response depends on our willingness to humanize the machine. The bots’ owners weaponize our best and worst impulses alike.
On a certain level, it was the very human-likeness of virtual assistants that invited their abuse. Dehumanization is not the failure to see someone as human; it is the desire to see someone as less than human and to act accordingly, which presupposes perceiving their humanity in the first place. It was therefore precisely the degree to which people mistook their virtual assistants for real human beings that encouraged them to abuse those assistants.