anus@lemmy.world to Technology@lemmy.world · English · 4 days ago

A cheat sheet for why using ChatGPT is not bad for the environment (andymasley.substack.com) · 41 comments
Saik0@lemmy.saik0.com · English · 13 hours ago

No, not basically no.

https://mashable.com/article/openai-o3-o4-mini-hallucinate-higher-previous-models

By OpenAI's own testing, its newest reasoning models, o3 and o4-mini, hallucinate significantly more than o1.

Stop spreading misinformation. The company itself acknowledges that they hallucinate more than previous models.
anus@lemmy.world (OP) · English · 13 hours ago

I stand corrected, thank you for sharing.

I was commenting based on anecdotal experience, and I didn't know there was a test specifically for this.

I do notice that o3 is more overconfident and tends to find a source online from some forum and treat it as gospel.

Which, while not correct, I would not treat as hallucination.