• thepreciousboar@lemm.ee · 1 month ago

      Because “ai” as we colloquially know it today means language models: they train on and produce language, because that’s what they are designed for. Yes, they can also produce images and videos, but they don’t have any form of real knowledge or understanding; they only predict the next word or the next pixel based on their prompt and their vast store of example words and images. You can only talk to them because that’s what they are for.
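
      That “predict the next word from examples” idea can be shown with a deliberately tiny sketch: a bigram model that picks the word most often seen after the current one in its training text. This is nothing like a real transformer, and the corpus here is made up for illustration, but it shows prediction-by-frequency with zero understanding:

      ```python
      from collections import Counter, defaultdict

      # Made-up training text for the toy example.
      corpus = "the cat sat on the mat the cat ate the fish".split()

      # Count which word follows each word in the training data.
      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      def predict_next(word):
          """Return the word most frequently observed after `word`."""
          counts = following[word]
          return counts.most_common(1)[0][0] if counts else None

      print(predict_next("the"))  # "cat" — it follows "the" most often here
      ```

      The model will happily continue any prompt built from its vocabulary, but it has no idea what a cat or a mat is; scale that up by many orders of magnitude and you get the fluent-but-understanding-free behavior described above.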

      Feeding it research papers will make it spit out research-sounding words, which will probably contain some correct information. But at best an llm trained on that would be useful for searching through existing research; it would not be able to produce new research.

    • Alice@beehaw.org · 1 month ago

      Because that’s what it’s designed for? I’m curious what else it could be good for. A machine capable of independent, intelligent research sounds like a totally different invention entirely.

      • Melatonin@lemmy.dbzer0.com (OP) · 1 month ago

        It’s sort of like communication isn’t meant to be its sole purpose. It’s as if we invented computers but the only thing we cared about was the monitor and the keyboard.

        We want it to DO things. Stick to the truth, not just placate.

        • Alice@beehaw.org · 1 month ago

          Didn’t realize that. The only applications I’ve seen for it are conversation or generating media from text input. I thought all it did was analyze text and create a response based on patterns it had observed.

          I haven’t done much with it myself though so that’s probably a very limited POV.