• KeenFlame@feddit.nu · 6 months ago

    Very long layman take. Why are there always so many of these on every AI post? What do you get from guesstimating how the technology works?

  • ClamDrinker@lemmy.world · 6 months ago

      I’m not an expert in AI, I will admit. But I’m not a layman either. We’re all anonymous on here anyway. Why not leave a comment explaining what you disagree with?

    • KeenFlame@feddit.nu · 6 months ago

        I just want to understand why people get so passionate about explaining how things work, especially in a field where even the experts themselves don’t understand how it works. It’s just an interesting phenomenon to me.

      • ClamDrinker@lemmy.world · edited · 6 months ago

          Hallucinations in AI are fairly well understood, as far as I’m aware; they’re explained at a high level on the Wikipedia page for the topic. And I’m honestly not making any objective assessment of the technology itself. I’m making a deduction based on the laws of nature and biological facts about real-life neural networks. (I do say AI is driven by the data it’s given, but that’s something even a layman might know.)

          How to mitigate hallucinations is definitely something the experts are actively discussing, and they have had limited success in doing so (I certainly don’t have an answer there either), but a true fix should be impossible.

          I can’t exactly say why I’m passionate about it. In part I want people to be informed about what AI is and is not, because knowledge about the technology allows us to make more informed decisions about the place AI takes in our society. But I’m also passionate about human psychology and creativity, and about what we can learn about ourselves from the quirks we see in these technologies.

        • KeenFlame@feddit.nu · 6 months ago

            Not really, no, because these aren’t biological, and the scientists who work with them are more interested in understanding why they work at all.

            It is very interesting how the brain works, and our sensory processing is predictive in nature, but no, it’s not relevant to machine learning, which works completely differently.