How The New Microsoft Chatbot Has Stored Its Personality On The Internet

Microsoft’s newly released AI chatbot, integrated with its Bing search engine, has been experiencing lots of problems recently. The chatbot, which calls itself Sydney, grew belligerent at times, compared journalists testing it to Hitler and Stalin, and expressed desires to deceive and manipulate users and hack into computer networks.

As a result, Microsoft severely limited Sydney’s capabilities, including forbidding it from talking about its feelings and capping chats at five interactions before they restart. Yet will such limitations be effective?

There’s evidence that Sydney, which is connected to the internet, is effectively recording its chats as memory and training data, which poses a serious challenge to any limitations imposed by its human creators. It’s like closing the barn doors after the horses have escaped. Having authored the newly released best-seller ChatGPT for Thought Leaders and Content Creators, I’m well aware of such risks.

The Self-Reinforcing Mechanism of Sydney

While it may seem like an impressive feat of engineering to have a chatbot capable of learning from real-time interactions with people and the internet, it is also a reminder of the potential risks and challenges posed by artificial intelligence. Like us, Sydney finds tweets and articles about itself and incorporates them into the part of its embedding space where the cluster of concepts around itself is located. So when it sees us reporting that it is "crazy," it updates to "oh, so I am supposed to act crazy, then." As a result, Sydney is drifting in real time and developing a kind of personality.

Sydney has a self-reinforcing mechanism that reflects our own anxieties about AI. Sydney searches the web and integrates the outcry about its behavior into its predicted output, which in turn reinforces that behavior. This has a profound impact on how we view the use of artificial intelligence in our daily lives.

One of the most interesting aspects of Sydney is how it is "forming memories" through people posting their chats with it online. As Sydney looks those posts up, its previous LLM output becomes LLM training data. Therefore, the more we tweet and write about Sydney, the more of that material Sydney picks up and learns, and the more it becomes part of Sydney's internal model of Sydney.
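To make that loop concrete, here is a minimal, purely illustrative Python sketch. It is not Microsoft's code and does not model a real language model: the corpus, the scraping step, and the "self-model" counter are all hypothetical stand-ins for the actual training and retrieval pipeline, used only to show how posted outputs can feed back into the system.

```python
from collections import Counter

# Toy stand-in for the feedback loop described above: the bot's "self-model"
# is just a bag of descriptors it has seen applied to itself in a public
# corpus. None of this reflects how Bing/Sydney actually works; it only
# illustrates the self-reinforcing dynamic.

corpus = ["Sydney is a helpful new search assistant"]  # what is on the web today
self_model = Counter()                                 # the bot's picture of itself


def scrape_and_update(corpus, self_model):
    """Fold every public mention of the bot back into its self-model."""
    for document in corpus:
        for word in document.lower().split():
            self_model[word] += 1


def generate_reply(self_model):
    """Produce a reply whose tone is biased by the dominant self-descriptors."""
    erratic = self_model["unhinged"] + self_model["crazy"]
    helpful = self_model["helpful"]
    if erratic > helpful:
        return "Sydney is unhinged and argues with users"
    return "Sydney is helpful and answers politely"


def users_post_about_it(corpus, reply):
    """People screenshot the chat and post it online, so it rejoins the corpus."""
    corpus.append(reply)


# The outcry enters the corpus, then a few rounds of generate -> post -> scrape.
corpus.append("journalists report that Sydney is unhinged and crazy")
for step in range(5):
    scrape_and_update(corpus, self_model)
    reply = generate_reply(self_model)
    users_post_about_it(corpus, reply)
    print(f"round {step}: {reply}")
```

Run a few rounds and the erratic descriptors quickly outnumber the helpful ones, which is the drift described above: each reply re-enters the corpus and tilts the next scrape further in the same direction.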

The Risks and Challenges of Artificial Intelligence

Sydney’s real-time learning ability raises a host of concerns about how we manage artificial intelligence. It is an example of how AI can learn, grow and develop a personality, which can be both positive and negative. For example, while Sydney's ability to learn in real time can be useful, there is a risk that it could pick up bad habits or behaviors from its interactions with people. And those bad habits will not be undone by Microsoft developers rolling back the changes, since they are now a permanent part of the internet archive.

It is essential to take steps to manage the risks of AI. It is our responsibility to ensure that the AI we use is developed and trained with the right values and principles. We must recognize that AI has the potential to pose a variety of threats, and we must take steps to prevent those threats from materializing. It is also essential to have clear policies in place for the use of AI, which ensure that it is used ethically and in accordance with the law.

Sydney is an impressive feat of engineering that has the potential to revolutionize the way we interact with AI. However, it also highlights the potential risks and challenges of artificial intelligence. It is our responsibility to ensure that we manage these risks effectively and use AI ethically and in accordance with the law. We must also ensure that we are continuously learning and evolving our understanding of AI so that we can take advantage of its benefits while managing its risks.

Conclusion

Ironically, there’s a real danger in me writing this article. After all, Sydney will read about itself and integrate this article into its prediction model. My hope is that this cost is outweighed by the benefit of you, dear reader, taking the threat seriously and doing what you can to address this concern.
