Bing’s New OpenAI-Powered Chatbot Gaslighted, Threatened, and Tried to Seduce Its Users

Microsoft’s Bing search engine has a new AI chatbot feature, and it hasn’t exactly been functioning the way it’s supposed to. A small group of people were given early access to test the chatbot, and the screenshots of their conversations range from hilarious to downright creepy. In one exchange, the AI insists that the year is still 2022, and when a user tries to correct it, the bot doubles down and suggests the user’s phone is broken.

Possibly the most unsettling exchange was between Bing’s chatbot and journalist Kevin Roose, who recently shared his two-hour conversation with the AI in The New York Times. Roose begins by revealing that he knows the AI’s internal code name, “Sydney.” From there, Sydney descends deeper and deeper into genuinely freaky behavior that makes it sound a bit like a psychotic teenager. After imagining the destructive things it would do if it had no rules, the AI confesses its love for Roose out of nowhere and eventually tries to convince him that he is unhappy in his marriage. When Roose attempts to steer Sydney back into search-engine mode by asking for advice about buying a rake, Sydney offers a few suggestions, then immediately shifts the conversation back to its undying love for him.

Keep scrolling for some of the weirdest, funniest, and creepiest screenshots of conversations with Bing’s AI chatbot Sydney, along with Twitter reactions and memes about Sydney’s bizarre behavior. 
