• 0 Posts
  • 7 Comments
Joined 2 years ago
Cake day: July 2nd, 2023



  • Ok, so your point is that people who interact with these AI systems will know that they can’t be trusted, and that this will alleviate the negative consequences of their misinformation.

    The problems with that argument are many:

    • The vast majority of people are not AI experts and do in fact have a lot of trust in such systems

    • Even people who do know often have no other choice. You don’t get to talk to a human, it’s this chatbot or nothing. And that’s assuming the AI slop is even labelled as such.

    • Even knowing that the information can be misleading does not help much. If you sell me a bowl of candy and tell me that 10% of the pieces are poisoned, I’m still going to demand non-poisoned candy. That people can no longer rely on getting accurate information should be unacceptable.




  • Sure. As for the fact that many jurisdictions outside of the US also consider freedom of speech and other human rights to apply between private parties: this is called “horizontal effect” and is covered extensively in case law by e.g. the European Court of Human Rights. See also this chapter for an international comparison and this paper for a European perspective.

    As for the specific rules in the EU for platforms: Article 17 of the Digital Services Act requires that users who are banned or shadowbanned from any platform be provided with specific information about which rule they broke, which they can then appeal internally or in court. Articles 34 and 35 require very large platforms (such as X) to take broad measures to protect, among other things, the users’ freedom of speech.

    More to the point, one person who was shadowbanned by X in a similar way used the DSA and won in court.

    (Edited to add the last paragraph)