Licensing Agreement | Terms of Service | Privacy Policy | EULA
© 2026 Daz Productions Inc. All Rights Reserved.
Comments
Which modern AI completely fabricates information? What are they making up from whole cloth? If that's happening to you, I think you're all just using the wrong AI. You also absolutely said that you should NEVER take what an AI says as fact; I just quoted you.
Oh my, my lab bosses would have given you quite the lecture, as they fired you, if you argued with them that accuracy and precision were near synonyms rather than orthogonal to each other; they always had us present data and show the accuracy (scientifically significant results) and precision (repeatable results).

This argument is actually an old one. When photography started, an argument was made that photography couldn't be real art, since the camera is really taking the picture with the photographer just fiddling with settings, and that luck played a large part ("Candid photography! I'm aghast. That isn't art!"). I know someone who is an artist who works in an aspect of machine learning: he creates an AI model/paradigm of vision processing, trains it on his own data or photographs locally so there is no huge use of energy, and has the program visualize what it sees. To me that is as much an artistic endeavor as anything other artists create. We have artists in our family/friend circle, from photographers to museum executives (my sister was an executive at two major museums). Luck and discovery, and medical conditions both physical and mental, are all part of the artist's toolset.
My big issues with AI are that the rules on what is trained aren't well worked out, it is not very efficient in terms of energy use, it is currently a powerful tool in the wrong hands so output right now is bland and trite, we never worked out good rules about privacy in art in general (especially if you look at the discussion of editorial content in this forum), and that it is not well understood. I do take the arguments about artistic intent with a grain of salt. I knew someone who studied cat vision, which is fascinating and bizarre. Cats do a lot of their visual processing directly in their optic nerves, and their brains are strange for higher mammals. I think of them as "land sharks" and think they are a wonder, partly because of how alien they are and how we try to fit their behavior into our own mental world. I think that might be a useful concept: think of AI as a cat rather than a dog, and don't be surprised by its claws.
If that's your argument, then you're simply wrong about what words mean. The phrase "take [something] as fact" means that you assume that it's true with no further consideration, implying nothing about the truthfulness or accuracy of the thing you're taking as fact. That's what I said not to do, and I stand by that.
Google (and probably every other dictionary of the English language) lists “precise” as a word similar to “accurate”. I think your lab bosses should be fired if they would fire someone for saying those words are nearly synonyms.
I would say what the LLM is doing is calculating what is statistically most likely to follow what was typed. And that could easily be a true answer to a question. But it could just as easily be a bunch of statements that sound like a good answer but are ultimately incorrect.
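To make that idea concrete, here is a toy sketch of "most likely continuation" using simple word-pair counts over a made-up corpus. This is my own illustrative example, not how any real LLM works internally (real models use neural networks over subword tokens), but the basic principle of picking the statistically most likely next item is the same:

```python
from collections import Counter

# A toy "language model": count which word follows which in a tiny corpus,
# then pick the most frequent continuation.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Build bigram counts: for each word, how often each successor appears.
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, Counter())[nxt] += 1

def most_likely_next(word):
    """Return the word most often seen after `word` in the corpus."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(most_likely_next("the"))  # "cat" follows "the" more often than "mat" or "fish"
```

Note that the model has no idea whether "the cat sat" is true; it only knows that "cat" frequently followed "the" in its training data. That is the point being made above: statistically plausible is not the same as correct.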
Regardless, I hope you are feeling better and never feel like you should die.
SnowSultan you are perfectly right not to trust humans
they lie and others blindly support their lies, we are seeing this in realtime on the world stage
many of those who own the mainstream media make a living doing this
but AI was trained on that data too
it's an agnostic tool
data in info out
I am glad you got some helpful information from it
but please don't trust it any more than you should some self-serving, if not opportunistic or downright scammy, humans, which are prolific on the internet, especially in chat platforms (as are AI agents and bots)
making real life connections in person is a damned good idea BTW
though I cannot exactly preach about that being almost a hermit myself
I do go for walks though and watch people
They are similar in the way that black and hot pink are similar, i.e. colors that are uncommon as wedding dress colors, but also very different, i.e. "I wore hot pink to the funeral, but people stared at me even though it is similar to black." The word "precise" is not a synonym for "accurate". As for firing someone, imagine a pharmacist who skipped calibrating their balances and dosing equipment. The next month, all the customers are sick and a few have died of overdoses. The lazy pharmacist says it is not his fault, since he overdosed them all by the same amount (precision); he failed at accuracy, but claims it doesn't matter since it is all the same. He might get fired. Science is built on the proposition that the data produced is accurate (you didn't fake it) and precise (if you repeat the experiment multiple times you get the same results).
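The distinction can be shown with a few lines of arithmetic. The numbers below are made up for illustration: one hypothetical set of dose measurements is tightly clustered around the wrong value (precise but inaccurate), the other is scattered around the right one (accurate but imprecise):

```python
from statistics import mean, stdev

# Hypothetical dosing measurements against a true target of 10.0 mg.
target = 10.0
precise_but_inaccurate = [12.1, 12.0, 12.2, 12.1]  # tight cluster, wrong center
accurate_but_imprecise = [9.0, 11.0, 10.5, 9.5]    # scattered, right center

for name, data in [("precise/inaccurate", precise_but_inaccurate),
                   ("accurate/imprecise", accurate_but_imprecise)]:
    bias = mean(data) - target   # accuracy: how far the average is from the truth
    spread = stdev(data)         # precision: how repeatable the results are
    print(f"{name}: bias={bias:+.2f}, spread={spread:.2f}")
```

The first set has a small spread but a large bias; the second has zero bias but a large spread. Two independent failure modes, which is exactly why they are orthogonal rather than synonymous.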
AIs make stuff up. The AI people call it "hallucinating." Attorneys have been embarrassed to learn that cases cited in AI-prepared legal briefs were simply fabricated https://www.reuters.com/legal/government/us-appeals-court-orders-lawyer-pay-2500-over-ai-hallucinations-brief-2026-02-18/
AIs also sometimes spit out parts or all of individual pieces of training material. https://www.vice.com/en/article/ai-spits-out-exact-copies-of-training-images-real-people-logos-researchers-find/
And AI sometimes advises people to harm or kill themselves. https://www.bbc.com/news/articles/cp3x71pv1qno
@SnowSultan I hope you're ok. If AI helped you, that's good! I'm glad. Just keep in mind, the AI could just as easily have made up some nonsense or told you something that might have caused you serious harm. It's not, as others have said, intelligent. It's what @NylonGirl and @Gordig have said previously.
In my interactions with AIs on topics ranging from home insurance to obscure historical events to psychological health care and procedures for those with treatment-resistant depression, everything the AI has told me has been factually accurate. If people are getting screwy results, perhaps it's how they're prompting or which model they're using.
I appreciate the concern, and no, I am not alright, but you can all save your breath trying to convince me that an AI is likely to just invent imaginary treatment plans or give me medical advice more ridiculous than what's being spouted by human morons on social media, or that I'm stupid enough to do anything that I am unsure about.
Now it's time for me to take a break from these forums as well. Thank you for your thoughts, I do appreciate that.
You make some good points there, Wendy. I've gotten hurt, betrayed, humiliated, degraded, and worse my entire life. I live in my room and I've never had a social life or anything, and if I didn't have to work I'd never leave the house. To be honest, the only times I do now are to go to work, doctors, the job search provider, and to get money out to pay rent. Yeah, I still live with my parents, which is not that happy a time. Facebook was the closest I had to a social life, and now that's gone; the only ones in my life are my dog and my art/interests. I often hide in my room when we have visitors too. So I can kinda also understand how some people turn to AI for many things, including friendship, even love.
SnowSultan
there is a lot going on, and AI is something that terrifies me simply because very powerful people are using it, and trusting it when they shouldn't be. I wasn't singling you out by any means.
This will have ramifications for the whole planet and maybe our very existence and likely in my lifetime
I do hope it helps you on your personal journey but please see it for the tool it is with all its strengths and flaws
I think a good takeaway is to be cautious about important advice from people, but also from AIs.
AIs get their information from people, even when they're working as intended.
(That said, I'm glad the advice worked, and hope you're doing better or at least will be.)
Hallucination seems a slightly odd name for what AIs can sometimes do; it seems to me more like what I have read of confabulation.
You are right but most people know what a hallucination is and don't use confabulation in their everyday language.