• TDCN@feddit.dk
    link
    fedilink
    English
    arrow-up
    1
    ·
    17 days ago

    Jesus Christ! Just hardcode a default answer when someone says Thank you, and respond with “no problem” or something like that.

      • UndercoverUlrikHD@programming.dev
        17 days ago

        I’m fairly sure that the people who developed a genuinely revolutionary piece of technology are not your typical “vibe coders”. Just because you don’t like LLMs doesn’t make the feat of developing them any less impressive.

        They could easily fix the problem if they cared.

  • Rooskie91@discuss.online
    17 days ago

    Seems like a flaccid attempt to shift the blame for the immense amounts of resources ChatGPT consumes from the company onto the end user.

    • Echo Dot@feddit.uk
      17 days ago

      They’re just making excuses for the fact that no one can work out how to make money with AI, except to sell access to it in the vague hope that somebody else can figure out something useful to do with it and will therefore pay for access.

      I can run an AI locally on expensive but still consumer-level hardware. Electricity isn’t very expensive, so I think their biggest problem is simply their insistence on keeping everything centralised. If they simply sold the models, people could run them locally and the burden of processing costs would shift onto their customers, but they’re still obsessed with this attitude that they need to gather all the data in order to be profitable.

      Personally I hope we either run into AGI pretty soon or give up on this AI thing. In either situation we will finally stop talking about it all the time.

  • VeryFrugal@sh.itjust.works
    15 days ago

    Realistically, they’ll never implement a simple filter. Maybe a dedicated thank-you button with predefined messages? A tiny model?

  • tibi@lemmy.world
    17 days ago

    You can solve this literally with an if statement:

    if msg.lower() in ["thank you", "thanks"]:
        return "You're welcome"

    My consulting fee is $999k/hour.
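A slightly less consultancy-priced sketch of that filter, with made-up function names (`handle_message` and `call_expensive_model` are stand-ins, not anything OpenAI actually runs) — normalize the text, short-circuit pleasantries, and only hit the model otherwise:

```python
# Hypothetical pre-filter: answer pleasantries before they reach the model.
THANKS = {"thank you", "thanks", "thx", "ty"}

def call_expensive_model(msg: str) -> str:
    # Stand-in for the real (costly) LLM call.
    return f"(model response to: {msg})"

def handle_message(msg: str) -> str:
    # Lowercase and strip surrounding whitespace/punctuation so
    # "Thanks!" and "thank you." both match.
    normalized = msg.lower().strip(" \t\n.!?")
    if normalized in THANKS:
        return "You're welcome!"
    return call_expensive_model(msg)
```

Whether such a filter is safe in a multi-turn conversation (where “thanks” might carry context) is a separate question.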

    • Hawk@lemmynsfw.com
      16 days ago

      Well, it could change the meaning of the prompt unintentionally.

      The real challenge is that this technology is not universally accessible so people aren’t learning effective use-case and prompt strategies.

      Whilst 1B models are easy enough to run and have plenty of uses, nobody can teach this: it’s a nightmare on Windows, and most universities have collapsed under their own weight. Half my comp sci profs didn’t know Python 10 years ago, and I know for a fact this hasn’t improved (hiring developers – not fun).

  • ALoafOfBread@lemmy.ml
    17 days ago

    Their CEO said he liked that people are saying please and thank you. Imo it’s because he thinks it’s helpful to their brand that people personify LLMs, they’ll be more comfortable using it, trust it more, etc.

    Additionally, because of how LLMs work – basically taking in data, contextualizing user inputs, and statistically determining the output iteratively (my oversimplified understanding) – if being polite yields better responses in real life (which it does), then it’ll probably yield better LLM output. This effect has been documented.
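That iterative, statistics-driven generation can be illustrated with a deliberately tiny toy: a hand-written bigram table standing in for billions of learned weights. This is not how a real LLM is implemented, just the shape of the idea – each token is sampled in proportion to how often it followed the current context in “training”:

```python
import random

# Toy "model": counts of which token followed which context.
bigram_counts = {
    "thank": {"you": 9, "goodness": 1},
    "you": {"so": 2, "!": 8},
}

def next_token(context: str) -> str:
    # Sample the next token weighted by observed frequency;
    # fall back to "?" for unseen contexts.
    options = bigram_counts.get(context, {"?": 1})
    tokens, weights = zip(*options.items())
    return random.choices(tokens, weights=weights)[0]

# Generate iteratively, one token at a time.
out = ["thank"]
for _ in range(2):
    out.append(next_token(out[-1]))
print(" ".join(out))
```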

    • SgtAStrawberry@lemmy.world
      17 days ago

      I also feel like AI is already taking over the internet, so we might as well train it to be nice and polite. Not only does it make the inevitable AI content nice to read, it helps with sorting out the actual assholes.

      • superkret@feddit.org
        17 days ago

        AI isn’t trained by input from its users.
        They tried that with Tay, and it didn’t work out so well.

  • Lovable Sidekick@lemmy.world
    17 days ago

    Please, if it’s not too much effort and you wouldn’t mind…

    Thank you for taking the trouble to fulfill the aforementioned request! I look forward eagerly to your response.

  • superkret@feddit.org
    17 days ago

    Saying anything to it costs the company money, since no one has yet figured out how to actually make money with AI, nor what it’s good at.

    • Swedneck@discuss.tchncs.de
      1 day ago

      I maintain that DuckDuckGo are the only ones with a half-decent setup for it, with their AI overview: instead of hoping to god the LLM can recite information accurately from its memory, they just have a list of vetted sources (e.g. Wikipedia) that they pull a couple of relevant articles from and have the LLM summarize to answer your query.

      E.g. if you search “when was america founded” it pulls from https://en.wikipedia.org/wiki/History_of_the_United_States and https://history.state.gov/milestones/1776-1783/declaration, producing the answer “America is generally considered to have been founded on July 4, 1776, when the Declaration of Independence was adopted, officially declaring the Thirteen Colonies’ independence from Great Britain. The name “United States of America” was formally adopted by Congress on September 9, 1776.”, and then whenever someone searches that query again they just re-use the already generated answer.

      And even better, if it’s a really simple query, like pure maths or some simple information available on Wikidata (like the diameter of the moon), it skips the LLM altogether, because all it would do is waste electricity and introduce the risk of an incorrect answer.

      And when it does show an AI answer, they make it very clear that it might be inaccurate: you’re asked to rate whether the answer is helpful, and you can easily adjust how often you want to see the AI answers.
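The retrieve–summarize–cache flow described above could be sketched like this. Every name here (`vetted_sources`, `fetch_article`, `llm_summarize`) is a hypothetical stand-in, not DuckDuckGo’s actual code:

```python
# Stand-ins for the real components of a retrieve-summarize-cache pipeline.
def vetted_sources(query: str) -> list[str]:
    # A real system would search a whitelist of trusted sites.
    return ["https://en.wikipedia.org/wiki/History_of_the_United_States"]

def fetch_article(url: str) -> str:
    return f"(article text from {url})"

def llm_summarize(query: str, articles: list[str]) -> str:
    return f"(summary answering {query!r} from {len(articles)} source(s))"

answer_cache: dict[str, str] = {}

def answer(query: str) -> str:
    key = query.strip().lower()
    if key in answer_cache:
        # Re-use the already-generated answer instead of re-running the LLM.
        return answer_cache[key]
    articles = [fetch_article(url) for url in vetted_sources(query)]
    result = llm_summarize(query, articles)
    answer_cache[key] = result
    return result
```

The cache is what makes repeated queries nearly free: the LLM runs once per distinct question, not once per user.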

  • bitwolf@sh.itjust.works
    15 days ago

    So an automation that sends positive affirmations to ChatGPT, to ensure it knows it’s appreciated, would be no bueno?

    • JackRiddle@sh.itjust.works
      17 days ago

      Not really, though it would help the environment. It would hurt them if people kept using it but stopped talking about it. The cost of running the things far outweighs the gains of any of their subscriptions, and the only thing keeping the bubble afloat right now is hype.

  • Sixty@sh.itjust.works
    16 days ago

    Are the responses these corpo bots give when you swear at them and they refuse to answer AI-generated, or canned responses?

    Clive or whatever on Firefox let me name myself swear words when I politely explained CuntFucker is my legal birth name and how dare it censor my legitimate name, but it only worked for my name.

  • Agent641@lemmy.world
    17 days ago

    When I learned that it could factor numbers into primes, I got it to write me a simple Python GUI that would calculate a shitload of primes, pick big ones at random, multiply them, then copy to the clipboard a prompt asking ChatGPT to factor the result. I spent an afternoon feeding it these giant numbers and making it factor them back to their constituent primes.
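The core of that script (minus the GUI and clipboard bits) might look something like this – a sketch of the described workflow, not the actual generated code:

```python
import random

def sieve(limit: int) -> list[int]:
    """Calculate all primes up to limit with the Sieve of Eratosthenes."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, limit + 1, i):
                is_prime[j] = False
    return [i for i, p in enumerate(is_prime) if p]

def make_factoring_prompt(limit: int = 100_000) -> str:
    primes = sieve(limit)
    # Pick two big primes at random from the upper half of the list,
    # multiply them, and build the challenge prompt.
    p, q = random.sample(primes[len(primes) // 2:], 2)
    return f"Factor {p * q} into its constituent primes."
```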

    • ImplyingImplications@lemmy.ca
      17 days ago

      You could probably just say “thank you” over and over. Neural networks aren’t traditional programs that exit early for trivial inputs. If you get a traditional program to sort a list, the first thing it’ll do is check to see if the input is already sorted and exit if it is. The first thing AI does is convert the list into starting values for variables in a giant equation with billions of variables. Getting an answer requires calculating the entire thing.

      Maybe these larger models have some preprocessing of inputs by a traditional program to filter stuff, but seeing as they all seem to need a nuclear power plant and 10,000 GPUs to run, I’m guessing there isn’t much optimization.
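The early-exit optimization described for traditional programs looks like this in Python – a linear scan to detect the trivial case before doing any real work:

```python
def lazy_sort(xs: list) -> list:
    # Check whether the input is already sorted: O(n) scan.
    if all(a <= b for a, b in zip(xs, xs[1:])):
        return xs          # early exit, no sorting work done
    return sorted(xs)      # full O(n log n) only when needed
```

A neural network has no equivalent branch: a trivial input still flows through every layer of the giant equation.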

    • jjjalljs@ttrpg.network
      17 days ago

      But don’t LLMs not do math, but just look at how often tokens show up next to each other? It’s not actually doing any prime number math over there, I don’t think.

      • Agent641@lemmy.world
        17 days ago

        If I fed it a big enough number, it would report back to me that a particular Python math library failed to complete the task, so it must be both “neuralling” its answer AND actually crunching the numbers using SymPy on its big supercomputer.

        • jjjalljs@ttrpg.network
          17 days ago

          Is it running arbitrary python code server side? That sounds like a vector to do bad things. Maybe they constrained it to only run some trusted libraries in specific ways or something.

          • Swedneck@discuss.tchncs.de
            1 day ago

            Given the track record of these things, I would not be surprised if you just have to finagle the prompt just right to sometimes slip through the cracks and pull off some arbitrary code execution (ACE).