• ArbitraryValue@sh.itjust.works · 10 hours ago

    ChangeMyView seems like the sort of topic where AI posts can actually be appropriate. If the goal is to hear arguments for an opposing point of view, and the AI can in fact generate more convincing arguments, then it is contributing more than a human would.

      • Fleur_@aussie.zone · 6 hours ago

        Blaming a language model for lying is like charging a deer with jaywalking.

        • shneancy@lemmy.world · 1 hour ago

          the researchers said all AI posts were approved by a human before posting; it was their choice how many lies to include

        • tribut@infosec.pub · 3 hours ago

          Nobody is blaming the AI model. We are blaming the researchers and users of AI, which is kind of the point.

        • Ecco the dolphin@lemmy.ml · 5 hours ago

          Which, in an ideal world, is why AI generated comments should be labeled.

          I always brake when I see a deer at the side of the road.

          (Yes, people can lie on the Internet. But if you funded an army of propagandists to convince people by any means necessary, I think you would find it expensive. People generally feel bad about lying like this; it takes a mental toll. With AI, the same thing looks possible for much cheaper.)

          • Rolivers@discuss.tchncs.de · 3 hours ago

            I’m glad Google still labels the AI overview in search results so I know to scroll further for actually useful information.

      • ArbitraryValue@sh.itjust.works · edited · 8 hours ago

        That lie was definitely inappropriate, but it would still have been inappropriate if it had been told by a human. I think it’s useful to distinguish between bad things that happen to be done by an AI and things that are bad specifically because they are done by an AI. How would you feel about an AI that didn’t lie or deceive but also didn’t announce itself as an AI?

        • sinceasdf@lemmy.world · 6 hours ago

          I think when posting on a forum/message board it’s assumed you’re talking to other people, so AI should always announce itself as such. That’s probably a pipe dream though.

          If anyone specifically wants an AI perspective, they can go to an AI directly. AI might add useful context to people’s forum conversations, but actual human experiences should take priority there.