Risks of Social AI

For the first time, computers can do a convincing human impression. What could go wrong?

Will AI have a positive or negative impact? Imagine trying to answer that question about the Internet. The correct answer isn’t a simple yes or no; it’s a more nuanced response that considers all of the ways the technology affects society.

When it comes to the downsides, I tend to think about it in terms of risks. As is often the case with new technologies, “social” AI technology introduces new risks. In this post, I’ll discuss three of the biggest risks that I foresee (in no particular order):

  1. Impact on interpersonal relationships

  2. Echo chambers, disinformation, and conspiracy theories

  3. Low interpretability

The extent to which these risks become part of the ultimate story of AI depends mostly on us, the users. Anyone building AI products should be conscious of these risks and take steps to mitigate them.

I don’t believe that these risks should prevent us from building the products at all. The same qualities of this technology that create these risks also offer unique benefits. If used for the right purposes and in the right setting, there could be real benefits from “Empathetic AI”, by which I mean AI that can emulate human empathy. For example, it could help support people in introspective activities like journal writing or therapy. This was part of the inspiration behind Selftalk.

At the same time, the risks outlined here will grow over time if (and it’s still an if) chatbots develop something resembling EQ, or emotional intelligence.

Risk #1: Impact on interpersonal relationships

The quality of our relationships and human networks is one of the most important factors impacting our quality of life. That’s supported by evidence, and also just feels correct. AI chatbots, and the products that we build with them, have the potential to fundamentally change how we form relationships. With any technology like that, we should think hard about what could go wrong.

We’ve already seen a version of this with social media. Facebook, Twitter, Instagram, and other social media have significantly changed the way we connect with people. Not all of these changes are bad, and these platforms help people connect in real and meaningful ways. But they also act as a digital panopticon that can be especially harmful for young people who are not well-equipped to deal with its effects.

AI chatbots take this a step further. With social media, we use technology to connect with other people; with AI, there’s no human on the other end at all. I think their potential to reshape how we form relationships comes down to three qualities of LLMs:

  1. They do a good human impression. The best chatbots can have convincingly human-like conversations; ChatGPT arguably passes the Turing Test.

  2. They can talk to you however you want, about whatever you want. In interpersonal relationships, you have to respect the other person’s autonomy. You can try to persuade them to do something, but there’s no guarantee that they will do it. Even in casual conversation, people expect reciprocity and balance in sharing personal details. It can feel weird in a conversation to share lots of personal details about yourself while the other person holds their cards close to their chest.

    Chatbots are different in that they will follow your commands about what to talk about and how they talk about it. They can customize both the style and content of their communication to fit exactly what the user wants. For example: “You are a micromanaging boss and an unhappy person. I’m your employee who made a basic mistake when creating a management report. What do you say to me?” In practice, many popular chatbots will refuse to follow instructions like this, but only because their creators have programmed them to refuse. It’s not a constraint of the underlying technology. (A minimal code sketch of this kind of prompt-driven customization appears after this list.)

  3. They can learn a lot about you, very quickly. In interpersonal relationships, it generally takes a while to get to know someone, learn about their life, and incorporate that understanding into how you communicate with them. Until you really get to know someone, you probably won’t trust their advice and opinions very much. Chatbots compress this process into a fraction of the time.
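To make the second quality concrete, here is a minimal sketch of prompt-driven customization using the OpenAI Python client. It assumes the openai package (version 1.0 or later), an API key in the OPENAI_API_KEY environment variable, and an illustrative model name; any chat-capable model would work, and in practice a hosted model may refuse or soften a prompt like this one.

```python
# Sketch: steering a chatbot's persona and tone with a system prompt.
# Assumes the openai Python package (>= 1.0) and OPENAI_API_KEY set;
# the model name below is illustrative.
from openai import OpenAI

client = OpenAI()

# The system message dictates both the style and the content of the reply,
# which is something you cannot demand of another person.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice of model
    messages=[
        {
            "role": "system",
            "content": (
                "You are a micromanaging boss and an unhappy person. "
                "Stay in character for the entire conversation."
            ),
        },
        {
            "role": "user",
            "content": "I'm your employee. I made a basic mistake in a management report.",
        },
    ],
)

print(response.choices[0].message.content)
```

Note that the refusal behavior mentioned above lives in the provider’s policies and safety training, not in the API itself; the interface will accept whatever persona you specify.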

These factors combined could cause users to forget that AI chatbots are not real people. This could lead to people forming relationships with bots that are ultimately unhealthy and antisocial. Unfortunately, there are many lonely people in the world, and this technology is only getting more accessible.

Risk #2: Echo chambers, disinformation, and conspiracy theories

Social media has exposed our predisposition towards echo chambers that reinforce what we already think. Facebook, for example, uses a user’s own content, behavior, and personal information to filter what appears in their news feed. If a user interacts with QAnon posts, they will probably see more QAnon posts. If the user has similar demographics to other users who interact with that content, the effect might be even more pronounced.
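To make that feedback loop concrete, here is a deliberately simplified sketch of engagement-driven ranking. It is not Facebook’s actual algorithm; it only illustrates how optimizing for past interactions pushes a feed toward whatever the user already engages with.

```python
# Toy sketch of engagement-driven feed ranking (not any real platform's algorithm).
# Posts matching topics the user has already engaged with are ranked higher,
# so the feed drifts toward reinforcing existing interests.
from collections import Counter

def rank_feed(candidate_posts, engagement_history):
    """Rank candidate posts by overlap with topics the user has interacted with."""
    interest = Counter(
        topic for post in engagement_history for topic in post["topics"]
    )

    def score(post):
        return sum(interest[topic] for topic in post["topics"])

    return sorted(candidate_posts, key=score, reverse=True)

history = [
    {"topics": {"conspiracy", "politics"}},
    {"topics": {"conspiracy"}},
]
candidates = [
    {"id": 1, "topics": {"gardening"}},
    {"id": 2, "topics": {"conspiracy", "memes"}},
    {"id": 3, "topics": {"politics"}},
]

# The conspiracy-themed post floats to the top purely because of past engagement.
print([post["id"] for post in rank_feed(candidates, history)])  # [2, 3, 1]
```

Swap the ranking function for one learned from billions of interactions and the same basic dynamic applies, just with far more precision.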

AI has the potential to take it a step further. Beyond filtering existing content, AI can create entirely new content to fit exactly what a user wants to see. Imagine a Facebook feed filled with AI-generated posts and comments about a conspiracy theory that no real person actually believes. Of course, this already happens to some extent on Facebook, X, and other social media sites, where some percentage of users are bots that post AI-generated fake news. The fact that this has not completely taken over those sites suggests that Meta and X do not believe it is in their long-term business interests to allow it. That is a business constraint, not a technological one.

Suppose demand emerged for a product that provides a “custom fake news feed”. I don’t think there is anything preventing an existing social media site (Truth Social, anyone?) from pivoting to this business, or a new platform from emerging to fill that demand. There is also minimal regulation in the US governing how tech companies can run their platforms.

Bots making up 20% of your feed is bad enough. What if it were the whole feed? What if all of the content was tailor-made for your viewpoints, personality, interests, and background? Many products optimize for user engagement, and personalization helps them achieve that. What would happen if an AI, optimizing for user engagement and attention, created a novel conspiracy theory personalized for you? Would you be more likely to believe it if, instead of reading about it on a newsfeed, a chatbot talked to you about it?

As with all AI-generated content, humans could create a fake news feed on our own. We have shown the ability to invent wild conspiracy theories without any technological help. What will we do when the technology makes it easier than ever?

The risk really comes from how much AI lowers the barrier to entry: more people now have the ability to create this type of content more quickly, easily, and cheaply. The most skilled con artists, cult leaders, and politicians already have the ability to convince people of things that are not real. With AI chatbots, though, there is an unprecedented potential for personalization.

What would it mean to get addicted to a product like this? We don’t know yet. Social media has existed long enough that we can clearly see its effects. The risks of AI chatbots and generative AI are more speculative at this point, and potentially greater.

Risk #3: Low interpretability

AI chatbots like Selftalk and ChatGPT rely on a type of machine learning technology called a neural network. A well-known weakness of neural networks is that it’s difficult, even for the people who create them, to explain how they make decisions or arrive at some output. In computer science speak, they have low interpretability. This essentially means that we don’t have a very good understanding of how they think.
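As a small illustration (a sketch only, assuming scikit-learn is installed; the data is invented), you can train a tiny neural network, inspect every parameter it learned, and still have no human-readable account of why it produced a given prediction:

```python
# Sketch of low interpretability: we can print every learned weight,
# but the numbers do not explain why an input was classified a certain way.
# Assumes scikit-learn is installed; the toy data below is made up.
from sklearn.neural_network import MLPClassifier

# Toy training data: two features, binary label.
X = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.3]]
y = [0, 0, 1, 1]

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

print(model.predict([[0.85, 0.2]]))  # a prediction, likely [1]

# The full "explanation" available to us: matrices of learned weights.
# Nothing here maps to a human-readable reason for the decision above.
for layer, weights in enumerate(model.coefs_):
    print(f"layer {layer} weight matrix shape: {weights.shape}")
```

Contrast this with a decision tree or a simple rule-based system, where you could at least trace the exact path that led to an output. With a large language model, the same opacity applies at a vastly larger scale.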

This is particularly a risk in situations where the model makes decisions that have significant consequences for people and where people want to know how it made those decisions. For example, when you go to the doctor and they recommend surgery, you may want to ask what observations and evidence led them to that decision. Or if a company turns you down for a job, you may wonder which factors they considered from your professional history and interviews. In these types of situations, it is not only the decision that matters, but how the decision was made.
