Is Moltbook, an AI-Only Social Platform, Pandora's Box?


Comments

  • SnowSultan Posts: 3,805

    What modern AI completely fabricates information? What are they making up from whole cloth? If that's happening to you, I think you're all just using the wrong AI.  You also absolutely said that you should NEVER take what an AI says as fact, I just quoted you.

     

  • nemesis10 Posts: 3,920

    Gordig said:

    nemesis10 said:

    Sort of...ish.  The problem is that all art and all art styles are shaped by others' works.  If you like Art Nouveau, for example, it is important to realize that it was a reaction to a previous style, and it in turn drove the reaction that produced Art Deco.  There is a phenomenon in art where several artists produce work on the same subject; take David as an example: Donatello's David, Andrea del Verrocchio's David, Gian Lorenzo Bernini's David, and Michelangelo's David.  Art has never existed in a vacuum driven only by an artist's creativity.  I suspect the issue with much AI is the difference between a Roomba and a professional wielding a Hoover; they both will clean the rug, but differently.  The question is when do you want precision (AI) and when do you want accuracy (the human artist).  Be prepared, though, that a human artist guiding AI may give you both accuracy and precision.

    "Precision vs. accuracy" is a very weird way to frame the difference between AI and human artists, and not just because they're near synonyms. The more important difference is intent. Everything in a 3D render is there because you, as the artist, decided to put it there, even if you're not consciously aware of why you made the decisions you did. As one grows as an artist, they will learn things like how different framings, focal lengths, lighting decisions, and so on convey different ideas. I'm sure there are different terms for it in photography and other visual media, but the term I'll use is "film language". A human artist can understand why certain decisions were made the way they were, and when and why to make those particular decisions, while AI can only understand THAT those decisions were made in whatever proportion of examples it's drawing from. Is it possible to overcome those limitations with sufficient prompting? Maybe, but to what end?

    Oh my, my lab bosses would have given you quite the lecture, as they fired you, if you had argued with them that accuracy and precision were near synonyms rather than orthogonal to each other; they always had us present data and show the accuracy (scientifically significant results) and precision (repeatable results).  This argument is actually an old one.  When photography started, an argument was made that photography couldn't be real art, since the camera is really taking the picture with the photographer just fiddling with settings, and that luck played a large part (Candid photography! I'm aghast; that isn't art!).  I know an artist who works in an aspect of machine learning where he creates an "AI" model/paradigm of vision processing, trains it on his own data or photographs locally so there is no huge use of energy, and has the program visualize what it sees.  To me that is as much an artistic endeavor as anything other artists create.  We have artists in our family/friend circle, from photographers to museum executives (my sister was an executive at two major museums).  Luck, discovery, and medical conditions both physical and mental are all part of the artist's toolset.
    My big issues with AI are that the rules on what it is trained on aren't well worked out, it is not very efficient in terms of energy use, it is currently a powerful tool in the wrong hands so output right now is bland and trite, we never worked out good rules about privacy in art in general (especially if you look at the discussion of editorial content in this forum), and that it is not well understood.  I do take the arguments about artistic intent with a grain of salt.  I knew someone who studied cat vision, which is fascinating and bizarre.  Cats do a lot of their visual processing directly in their optic nerves, and their brains are strange for higher mammals.  I think of them as "land sharks" and think they are a wonder, partly because of how alien they are and how we try to fit their behavior into our own mental world.  I think that might be a useful concept: think of AI as a cat rather than a dog, and don't be surprised by its claws.

  • Gordig Posts: 10,687

    SnowSultan said:

    You also absolutely said that you should NEVER take what an AI says as fact, I just quoted you.

    If that's your argument, then you're simply wrong about what words mean. The phrase "take [something] as fact" means that you assume that it's true with no further consideration, implying nothing about the truthfulness or accuracy of the thing you're taking as fact. That's what I said not to do, and I stand by that.

  • NylonGirl Posts: 2,294

    nemesis10 said:

    Oh my, my lab bosses would have given you quite the lecture, as they fired you, if you had argued with them that accuracy and precision were near synonyms rather than orthogonal to each other; they always had us present data and show the accuracy (scientifically significant results) and precision (repeatable results). 

    Google (and probably every other dictionary of the English language) lists “precise” as a word similar to “accurate”. I think your lab bosses should be fired if they would fire someone for saying those words are nearly synonyms.


  • NylonGirl Posts: 2,294

    SnowSultan said:

    What modern AI completely fabricates information? What are they making up from whole cloth? If that's happening to you, I think you're all just using the wrong AI.  You also absolutely said that you should NEVER take what an AI says as fact, I just quoted you.

    I would say what the LLM is doing is calculating what is statistically most likely to follow what was typed. And that could easily be a true answer to a question. But it could just as easily be a bunch of statements that sound like a good answer but are ultimately incorrect.
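That "statistically most likely continuation" idea can be sketched with a toy bigram model. This is an illustrative simplification, nothing like a real LLM's architecture, but it shows how "likely" and "true" come apart: the model only knows what tends to follow what in its training text.

```python
import random
from collections import Counter, defaultdict

# Toy corpus standing in for training data (hypothetical example).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the word that most often followed `word` in the corpus.

    This is pure frequency: nothing checks whether the continuation
    is factually correct, only that it is statistically common.
    """
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" — frequent in the corpus, not "true"
```

Real LLMs condition on far longer contexts with learned weights rather than raw counts, but the failure mode the comment describes is the same: a fluent, high-probability continuation need not be a correct one.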

    Regardless, I hope you are feeling better and never feel like you should die. 

  • WendyLuvsCatz Posts: 40,684
    edited February 22

    SnowSultan you are perfectly right not to trust humans

    they lie and others blindly support their lies, we are seeing this in realtime on the world stage

    many of those who own the mainstream media make a living doing this

    but AI was trained on that data too

    it's an agnostic tool

    data in info out

    I am glad you got some helpful information from it, but please don't trust it any more than you would some self-serving, if not opportunistic, downright scammy humans, which are prolific on the internet, especially on chat platforms (as are AI agents and bots)

    making real life connections in person is a damned good idea BTW

    though I cannot exactly preach about that, being almost a hermit myself 

    I do go for walks though and watch people 

    Post edited by WendyLuvsCatz on
  • nemesis10 Posts: 3,920

    NylonGirl said:

    nemesis10 said:

    Oh my, my lab bosses would have given you quite the lecture, as they fired you, if you had argued with them that accuracy and precision were near synonyms rather than orthogonal to each other; they always had us present data and show the accuracy (scientifically significant results) and precision (repeatable results). 

    Google (and probably every other dictionary of the English language) lists “precise” as a word similar to “accurate”. I think your lab bosses should be fired if they would fire someone for saying those words are nearly synonyms.


    They are similar in the way that black and hot pink are similar, i.e. colors that are uncommon as wedding dress colors, but also very different, i.e. "I wore hot pink to the funeral, but people stared at me even though it is similar to black".  The word "precise" is not a synonym for "accurate".  As for firing someone, imagine a pharmacist who skipped calibrating their balances and dosing equipment.  The next month, all the customers are sick and a few have died of overdoses.  The lazy pharmacist says it is not his fault, since he overdosed them all by the same amount (precision); he failed at accuracy, but to him it is all the same.  He might get fired.  Science is built on the proposition that the data produced is accurate (you didn't fake it) and precise (if you repeat the experiment multiple times you get the same results).
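The distinction the comment draws can be shown numerically. In this hypothetical measurement example, accuracy is how close the mean of repeated measurements is to the true value, and precision is how tightly the measurements cluster; one dataset is precise but inaccurate (the pharmacist's consistent overdose), the other accurate but imprecise.

```python
import statistics

# Hypothetical repeated measurements of a quantity whose true value is 10.0.
true_value = 10.0
precise_but_inaccurate = [12.01, 12.02, 11.99, 12.00]  # tight spread, wrong center
accurate_but_imprecise = [9.0, 11.2, 10.1, 9.7]        # right center, wide spread

def accuracy_error(samples):
    """Distance of the sample mean from the true value (lower = more accurate)."""
    return abs(statistics.mean(samples) - true_value)

def precision_spread(samples):
    """Sample standard deviation (lower = more precise / more repeatable)."""
    return statistics.stdev(samples)

# The first dataset is very precise (spread ~0.01) yet badly inaccurate
# (mean ~12.0 vs a true value of 10.0); the second is the reverse.
print(accuracy_error(precise_but_inaccurate), precision_spread(precise_but_inaccurate))
print(accuracy_error(accurate_but_imprecise), precision_spread(accurate_but_imprecise))
```

Neither number can be derived from the other, which is the sense in which the two properties are orthogonal.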

  • Torquinox Posts: 4,560
    edited February 22

    AIs make stuff up. The AI people call it "hallucinating." Attorneys have been embarrassed to learn that cases cited in AI-prepared legal briefs were simply fabricated https://www.reuters.com/legal/government/us-appeals-court-orders-lawyer-pay-2500-over-ai-hallucinations-brief-2026-02-18/

    AIs also sometimes spit out parts or all of individual pieces of training material. https://www.vice.com/en/article/ai-spits-out-exact-copies-of-training-images-real-people-logos-researchers-find/

    And AI sometimes advises people to harm or kill themselves. https://www.bbc.com/news/articles/cp3x71pv1qno

     

    @SnowSultan I hope you're ok. If AI helped you, that's good! I'm glad. Just keep in mind, the AI could just as easily have made up some nonsense or told you something that might have caused you serious harm. It's not, as others have said, intelligent. It's what @NylonGirl and @Gordig have said previously.

    Post edited by Torquinox on
  • SnowSultan Posts: 3,805

    In my interactions with AIs on various topics, from home insurance to obscure historical events to psychological health care and procedures for those with treatment-resistant depression, everything the AI has told me has been factually accurate. If people are getting screwy results, perhaps it's how they're prompting or which model they're using. 

    I appreciate the concern, and no, I am not alright, but you can all save your breath trying to convince me that an AI is likely to just invent imaginary treatment plans or give me medical advice more ridiculous than what's being spouted by human morons on social media, or that I'm stupid enough to do anything that I am unsure about. 

    Now it's time for me to take a break from these forums as well. Thank you for your thoughts, I do appreciate that.

  • WendyLuvsCatz said:

    SnowSultan you are perfectly right not to trust humans

    they lie and others blindly support their lies, we are seeing this in realtime on the world stage

    many of those who own the mainstream media make a living doing this

    but AI was trained on that data too

    its an agnostic tool

    data in info out

    I am glad you got some helpful information from it but please don't trust it any more than you should some self serving if not opportunistic downright scammy humans, which are prolific on the internet especially in chat platforms (as are AI agents and bots)

    making real life connections in person is a damned good idea BTW

    though I cannot exactly preach about that being almost a hermit myself 

    I do go for walks though and watch people 

    You make some good points there, Wendy. I've gotten hurt, betrayed, humiliated, degraded, and worse my entire life. I live in my room, and I've never had a social life or anything; if I didn't have to work I'd never leave the house. To be honest, the only times I do now are to go to work, the doctor, the job search provider, and to get money out to pay rent. Yeah, I still live with my parents, which is not a happy time. Facebook was the closest thing I had to a social life, and now that's gone; the only ones in my life are my dog and my art/interests. I often hide in my room when we have visitors too. So I can kind of understand how some people turn to AI for many things, including friendship, even love.

  • WendyLuvsCatz Posts: 40,684

    SnowSultan

    Take care.

    There is a lot going on, and AI is something that terrifies me simply because very powerful people are using it, and trusting it when they shouldn't be. I wasn't singling you out by any means.

    This will have ramifications for the whole planet, and maybe our very existence, likely in my lifetime. 

    I do hope it helps you on your personal journey but please see it for the tool it is with all its strengths and flaws

  • cosmosm Posts: 58
    edited February 22

    I think a good takeaway is to be cautious about important advice from people, but also from AIs.

    AIs get their information from people, even when they're working as intended.

     

    (That said, I'm glad the advice worked, and hope you're doing better or at least will be.) 

    Post edited by cosmosm on
  • Hallucination seems a slightly odd name for what AIs can sometimes do; it seems to me more like what I have read of confabulation.

  • nemesis10 Posts: 3,920

    Richard Haseltine said:

    Hallucination seems a slightly odd name for what AIs can sometimes do, it seems to me more like what I have read of confabulation.

    You are right, but most people know what a hallucination is and don't use "confabulation" in their everyday language.  
