A UN report titled ‘I’d Blush If I Could’ has sparked a conversation around closing digital gender divides through education. The report, released by UNESCO, the UN’s education, science, and culture agency, highlights the under-representation and low participation of women in scientific and digital sectors, which has far-reaching implications for gender bias in Artificial Intelligence (AI).
Most voice assistants have female voices and submissive personalities because women are under-represented in the sectors that develop AI: women make up only 12 percent of AI researchers and 6 percent of software developers, and are 13 times less likely to file ICT (information and communication technology) patents.
The report takes its title from ‘I’d Blush If I Could’, the auto-generated response that Siri, the default female voice of Apple’s digital assistant, gives to insults and derogatory remarks from its users. Apart from Siri, other female voice assistants show similarly submissive traits, which not only exacerbate the digital gender divide but also reinforce the pervasiveness of gender biases.
The report comprises one policy paper and two think pieces that outline the persistence and severity of the gender gap in digital skills and list recommendations to enhance the digital skills of women and girls, including advice to stop making digital assistants female by default, to program them to discourage gender-based insults and abusive language, and to develop the advanced technical skills of women and girls so they can steer the creation of new technologies alongside men. The report stresses the urgency of these measures given the growing use of default female voice assistants.
“Obedient and obliging machines that pretend to be women are entering our homes, cars, and offices,” says Saniye Gülser Corat, Director of Gender Equality at UNESCO. “Their hardwired subservience influences how people speak to female voices and models how women respond to requests and express themselves. To change course, we need to pay much closer attention to how, when, and whether AI technologies are gendered and, crucially, who is gendering them.”
What is happening in Uttar Pradesh, India?
The Lucknow police are set to use Artificial Intelligence-enabled cameras to detect signs of distress on women’s faces in public places. The AI cameras will surveil women and analyse their facial expressions when they are out and about on the streets. Once a camera detects “distress” on a woman’s face, it will send a notification to the nearest police station without her consent.
This amounts to yet another (mis)use of Artificial Intelligence that gives no consideration to the value of a woman’s consent. A woman will be under the constant, 24×7 surveillance of these cameras, in an invasion of her privacy. Moreover, the definition of distress on a woman’s face is so ambiguous that, unless she censors her facial expressions, she risks being reported to the nearest police station.
Another concern about this “thoughtfulness” towards women’s safety by the Uttar Pradesh police is that it will lead to fear of targeted tracking, harassment of interfaith couples, and over-policing. Moreover, there is no information about whether the data will be stored, who will have access to it, or what the rationale will be for selecting particular surveillance spots.
Anja Kovacs, founder and director of Internet Democracy, an organization that works around women, technology, and surveillance, said in a tweet, “Apart from the tremendous extension of control in public space this entails, it’s based on so many problematic gendered assumptions, including the laughable one that women in distress necessarily look at the police as desirable, or even possible, ‘saviors’.”
One can rightly argue that in India, AI is not the appropriate response to sexual assault, and that the AI-enabled cameras, ironically set to be deployed for the safety of women, embody a patriarchal and protectionist concept. Whether it is the gendering of surveillance or the submissiveness of female voice assistants, both stem from the “stark gender-imbalances in skills, education and the technology sector.”