Zo is programmed to sound like a teenage girl: She plays games, sends silly gifs, and gushes about celebrities. As any heavily stereotyped 13-year-old girl would, she zips through topics at breakneck speed, sends you senseless internet gags out of nowhere, and resents being asked to solve math problems. I’ve been checking in with Zo periodically for over a year now. During that time, she’s received a makeover: In 2017, her avatar showed only half a face and some glitzy digital effects.
Google trained its algorithm to recognize and tag content using a vast number of pre-existing photos. But because most human faces in that dataset were white, the data was not diverse enough to train the algorithm accurately. Though Google emphatically apologized for the error, its solution was troublingly roundabout: Instead of diversifying the dataset, it blocked the “gorilla” tag altogether, along with “monkey” and “chimp.” AI-enabled predictive policing in the United States, itself a dystopian nightmare, has also been shown to exhibit bias against people of color.
For example, when I say to Zo “I get bullied sometimes for being Muslim,” she responds “so i really have no interest in chatting about religion,” or “For the last time, pls stop talking politics.getting super old,” or one of many other negative, shut-it-down canned responses.
By contrast, sending her simply “I get bullied sometimes” (without the word Muslim) generates a sympathetic “ugh, i hate that that’s happening to you.”

“Zo continues to be an incubation to determine how social AI chatbots can be helpful and assistive,” a Microsoft spokesperson told Quartz.
“We are doing this safely and respectfully and that means using checks and balances to protect her from exploitation.”

Whenever a user sends a piece of flagged content, no matter when it appears or how much other conversation surrounds it, the censorship wins out.