• curiousaur@reddthat.com · +3/-12 · 6 hours ago

    People keep saying this, but I’m not convinced our own brains are doing anything more.

    • Allonzee@lemmy.world · +1/-8 · edited · 1 hour ago

      Let the haters hate.

      Despite the welcome growth of atheism, almost all humans at one level or another cling to the idea that our monkey brains are filled with some magic, miraculous light that couldn’t possibly be replicated. The reality is that some of us have only glimmers of sapience, and many not even that. Most humans, most of the time, are mindless zombies following a script, whether due to individual capacity or to a civilization that largely doesn’t reward metacognition or pondering the questions that matter, since that doesn’t immediately feed individual productivity or make anyone materially wealthier; that maze doesn’t lead to any yummy cheese for us.

      AI development isn’t finally progressing quickly and making people uncomfortable with its capability because it’s catching up to our supposedly transcendental superbrains (that en masse spent hundreds of thousands of years wandering around in the dirt before it finally occurred to any of them that we could grow food seasonally in one place). It’s making a lot of humans uncomfortable because it’s demonstrating that there isn’t a whole hell of a lot to catch up to, especially for an average human.

      There’s a reason pretty much everyone immediately discarded the Turing Test and called it a bullshit metric, after elevating it for decades as a major benchmark in the development of AI systems: the moment a technology that could readily pass it became available. That’s the blind hubris of man on grand display.

        • self@awful.systems · +4 · 1 hour ago

          see I was just gonna go for “promptfondlin” but I’m glad I hesitated cause this is my new favorite ban reason

          • David Gerard@awful.systems (OP, mod) · +2 · 41 minutes ago

            “I used to do this and it helped my mental state a lot. LSD refresh every 6-12 months.”

            gasbag with occasional live one, the most tragic form of bad poster

          • self@awful.systems · +4 · 1 hour ago

            I wonder if any of the people about to downvote your comments are the weird non-sapient humans who work exactly like the LLMs you seem to think exist, or maybe your posts are just the inane promptfondling horseshit we’ve seen before

    • self@awful.systems · +16 · 6 hours ago

      thinking is so easy to model when you don’t do it and assume nobody else does either

  • Optional@lemmy.world · +19 · 1 day ago

    Did someone not know this, like, pretty much from day one?

    Not the idiot executives that blew all their budget on AI and made up for it with mass layoffs - the people interested in it. Was it not clear that there was no “reasoning” going on?

    • froztbyte@awful.systems · +16 · 23 hours ago

      there’s a lot of people (especially here, but not only here) who have had the insight to see this being the case, but there’s also been a lot of boosters and promptfondlers (i.e. people with a vested interest) putting out claims that their precious word-vomit machines are actually thinking

      so while this may confirm a known doubt, rigorous scientific testing (and disproving) of the claims is nonetheless a good thing

      • Soyweiser@awful.systems · +10 · 16 hours ago

        No, they do not, I’m afraid. Hell, I didn’t even know that ELIZA caused people to think it could reason (and that this worried its creator) until a few years ago.

    • khalid_salad@awful.systems · +12 · edited · 1 day ago

      Well, two responses I have seen to the claim that LLMs are not reasoning are:

      1. we are all just stochastic parrots lmao
      2. maybe intelligence is an emergent ability that will show up eventually (disregard the inability to falsify this and the categorical nonsense that is our definition of “emergent”).

      So I think this research is useful as a response to these, although I think “fuck off, promptfondler” is pretty good too.

    • DarkThoughts@fedia.io · +4/-1 · 1 day ago

      A lot of people still don’t, from what I can gather from some of the comments on “AI” topics. Especially the ones that skew the other way: the “AI” hysteria is often driven by people who know fuck all about how the tech works. “Nudifiers” and other generated images, or explicit chats with bots that portray real or underage people, are the most common topics that attract emotionally loaded but highly uninformed demands and outrage. Frankly, the whole “AI” topic in the media is massively overblown on both fronts, but I guess it is good for traffic, and nuance is dead anyway.

      • Optional@lemmy.world · +4 · 1 day ago

        Indeed, although every one of us who has seen a tech hype train once or twice expected nothing less.

        PDAs. Quantum computing. Touch screens. Siri. Cortana. Micropayments. Apps. Synergy of desktop and mobile.

        From the outset this went from “hey, that’s kind of neat” to quite possibly toppling some giants of tech in a flash. Now all we have to do is wait for the boards to give huge payouts to the pinheads that drove this shitwagon in here, and then we can get back to doing cool things without some imaginary fantasy stapled onto them at the explicit instruction of marketing and channel sales.

        • Soyweiser@awful.systems · +10 · edited · 15 hours ago

          XML was also a tech hype for a bit.

          And I still remember how media outlets hyped up Second Life, forgot about it, and then a few months later discovered it again and more hype started. It was fun.

            • self@awful.systems · +4 · 7 hours ago

              Sarvega, Inc., the leading provider of high-performance XML networking solutions, today announced the Sarvega XML Context™ Router, the first product to enable loosely coupled multi-point XML Web Services across wide area networks (WANs). The Sarvega XML Context Router is the first XML appliance to route XML content at wire speed based on deep content inspection, supporting publish-subscribe (pub-sub) models while simultaneously providing secure and reliable delivery guarantees.

              it’s fucking delicious how thick the buzzwords are for an incredibly simple device (a rough sketch of which follows the list):

              • it parses XPath quickly (for 2004 (and honestly I never knew XPath and XQuery were a bottleneck… maybe this XML thing isn’t working out))
              • it decides which web app gets what traffic, but only if the web app speaks XML, for some reason
              • it implements an event queue, maybe?
              • it’s probably a thin proprietary layer with a Cisco-esque management CLI built on appropriated open source software, all running on a BSD but in a shiny rackmount case
              • the executive class at the time really had rediscovered cocaine, and that’s why we were all forced to put up with this bullshit
              • this shit still exists but it does the same thing with a semi-proprietary YAML and too much JSON as this thing does with XML, and now it’s in the cloud, cause the executive class never undiscovered cocaine
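
              in the spirit of the thing, here’s roughly what “routes XML content at wire speed based on deep content inspection” boils down to, as a minimal Python sketch (the XPaths and backends are invented, and the real box presumably did this in optimized C rather than lxml):

              ```python
              # a guess at what "deep content inspection" routing amounts to:
              # match an XPath against the message, pick a backend. routes and
              # endpoints are made up for illustration.
              from lxml import etree

              ROUTES = [
                  ("//order[total > 1000]", "http://big-orders.internal/submit"),
                  ("//order", "http://orders.internal/submit"),
                  ("//status-request", "http://status.internal/query"),
              ]

              def route(xml_bytes: bytes) -> str:
                  """Return the first backend whose XPath matches the document."""
                  doc = etree.fromstring(xml_bytes)
                  for xpath, backend in ROUTES:
                      if doc.xpath(xpath):
                          return backend
                  return "http://default.internal/submit"

              print(route(b"<order><total>2500</total></order>"))
              # -> http://big-orders.internal/submit
              ```
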
                • froztbyte@awful.systems · +4 · 5 hours ago

                  and now of course instead of people handcrafting xml documents by string-cating angle brackets and tags together in bad php files, we have people manually dash-cating yaml together in bad jinja and go template files! progress!
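
                  the anti-pattern in miniature, as a hypothetical sketch (assuming the jinja2 and pyyaml packages; the data is invented): the hand-assembled version parses fine, just not into what you meant:

                  ```python
                  # dash-cating yaml in a template vs. letting a serializer do it
                  import jinja2
                  import yaml

                  hosts = ["web-1", "web-2", "db: primary"]  # the last entry is the trap

                  template = jinja2.Template(
                      "hosts:\n{% for h in hosts %}  - {{ h }}\n{% endfor %}"
                  )
                  print(yaml.safe_load(template.render(hosts=hosts)))
                  # -> {'hosts': ['web-1', 'web-2', {'db': 'primary'}]}
                  #    the third entry silently became a nested mapping

                  print(yaml.safe_dump({"hosts": hosts}))
                  # the serializer quotes 'db: primary' so it stays a string
                  ```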

            • rook@awful.systems · +5 · 16 hours ago

              The trackpad and trackpoint of my aging linux laptop stop working if the thing gets its lid shut. The touchscreen continues to work just fine, however. It turns out that while two stupid things can’t make a good thing, they can sometimes cancel each other out.

              • Optional@lemmy.world · +3 · 9 hours ago

                A handy benefit no doubt, but not quite the earth-shaking revolution the touchscreen hype-train promised at the time.

              • Optional@lemmy.world · +4 · 9 hours ago

                Of course, of course. At the time though, it was expected that this would change the face of computing - no more keyboards! No more mice! No, this is more like Star Trek where you glance down at some geometric assemblage of colored shapes and tap several in random succession to immediately bring up the data you were looking for.

                That, uh, did not happen.

        • astrsk@fedia.io · +4 · 19 hours ago

          Which is my point and, forgive me, I believe the point of the research publication.

      • DarkThoughts@fedia.io · +1 · 1 day ago

        My best guess is it generates several possible replies and then does some sort of token match to determine which one may potentially be the most accurate. Not sure if I’d call that “reasoning”, but I guess it could potentially improve results in some cases. With OpenAI not being so open, it is hard to tell, though. They’ve been overpromising a lot already, so it may well just be complete bullshit.
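
        To make the guess concrete: a toy sketch of that mechanism (sample several candidates, keep the one they agree on). This is speculation like the rest of the comment, and generate() is a stand-in stub, not anything OpenAI has published:

        ```python
        # best-of-n with a crude agreement check -- the guessed mechanism
        # only, not OpenAI's actual (unpublished) design
        import random
        from collections import Counter

        def generate(prompt: str) -> str:
            """Stub standing in for sampling one reply from a model."""
            return random.choice(["42", "42", "42", "41", "17"])

        def best_of_n(prompt: str, n: int = 5) -> str:
            candidates = [generate(prompt) for _ in range(n)]
            # "some sort of token match": here, a simple majority vote
            answer, _ = Counter(candidates).most_common(1)[0]
            return answer

        print(best_of_n("What is 6 * 7?"))  # usually "42"
        ```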

        • lunarul@lemmy.world · +1 · 23 hours ago

          My best guess is it generates several possible replies and then does some sort of token match to determine which one may potentially be the most accurate.

          Didn’t the previous models already do this?

  • Optional@lemmy.world · +6 · 1 day ago

    We suspect this research is likely part of why Apple pulled out of the recent OpenAI funding round at the last minute.

    Perhaps the AI bros “think” by guessing the next word and hoping it’s convincing. They certainly argue like it.

    🔥

    • lunarul@lemmy.world · +4 · edited · 23 hours ago

      Perhaps the AI bros “think” by guessing the next word and hoping it’s convincing

      Perhaps? Isn’t that the definition of LLMs?

      Edit: oh, I just realized it’s not talking about the LLMs, but about their apologists

  • masterplan79th@lemmy.world · +2/-8 · 1 day ago

    When you ask an LLM a reasoning question, you’re not expecting it to think for you; you’re expecting that it has crawled multiple people asking semantically the same question and getting semantically the same answer from other people, all now encoded in its vectors.

    That’s why you can ask it: because it encodes semantics.
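
    For what it’s worth, the toy version of “semantically the same question lands on nearby vectors” looks like this; bag-of-words counts stand in for learned embeddings, and note the toy only measures word overlap, which is rather the objection the replies make:

    ```python
    # word-count vectors and cosine similarity: paraphrases score high,
    # unrelated text scores low. whether this is "semantics" is the debate.
    import math
    from collections import Counter

    def vec(text: str) -> Counter:
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[w] * b[w] for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb)

    q1 = "how do i reverse a list in python"
    q2 = "how do i reverse a python list"
    q3 = "best pizza toppings"
    print(cosine(vec(q1), vec(q2)))  # ~0.94: near-paraphrases
    print(cosine(vec(q1), vec(q3)))  # 0.0: no shared words
    ```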

    • sc_griffith@awful.systems · +8 · edited · 7 hours ago

      guy who totally gets what these words mean: “an llm simply encodes the semantics into the vectors”

      • self@awful.systems · +6 · 6 hours ago

        all you gotta do is, you know, ground the symbols, and as long as you’re writing enough Lisp that should be sufficient for GAI

    • self@awful.systems · +10 · 21 hours ago

      thank you for bravely rushing in and providing yet another counterexample to the “but nobody’s actually stupid enough to think they’re anything more than statistical language generators” talking point

    • leftzero@lemmynsfw.com · +10 · 21 hours ago

      Paraphrasing Neil Gaiman: LLMs don’t give you information; they give you information-shaped sentences.

      They don’t encode semantics. They encode the statistical likelihood that each token will follow a given sequence of tokens.
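
      A toy illustration of exactly that (a bigram table standing in for a neural network over subword tokens; the scale is absurdly different, but the output is the same kind of object, a distribution over next tokens):

      ```python
      # count which token follows which, then sample from those counts
      import random
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat and the cat slept".split()

      follows = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          follows[prev][nxt] += 1

      def next_token(prev: str) -> str:
          options = follows[prev]
          return random.choices(list(options), weights=options.values())[0]

      print(follows["the"])     # Counter({'cat': 2, 'mat': 1})
      print(next_token("the"))  # 'cat' about two thirds of the time
      ```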

      • LainTrain@lemmy.dbzer0.com · +0/-3 · 21 hours ago

        It’s worth pointing out that it does happen to reconstruct information remarkably well, considering it’s just likelihood. They’re pretty useful tools, like any other; it’s funny, of course, to watch Silicon Valley stumble all over each other chasing the next smartphone.

    • ebu@awful.systems · +11 · 24 hours ago

      because it encodes semantics.

      if it really did so, performance wouldn’t swing up or down when you change syntactic or symbolic elements of problems. the only information encoded is language-statistical
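
      a minimal sketch of the kind of perturbation test that catches this; ask_model is a hypothetical stand-in for whatever you’re poking at:

      ```python
      # same problem, different surface details -- an actual reasoner's
      # accuracy should be flat across variants
      import random

      NAMES = ["Sophie", "Ravi", "Mei", "Tomas"]

      def make_variant() -> tuple[str, int]:
          name = random.choice(NAMES)
          a, b = random.randint(2, 9), random.randint(2, 9)
          question = f"{name} has {a} bags with {b} apples each. How many apples?"
          return question, a * b

      def accuracy(ask_model, trials: int = 100) -> float:
          correct = sum(ask_model(q) == answer
                        for q, answer in (make_variant() for _ in range(trials)))
          return correct / trials  # swings here reflect surface form, not reasoning

      print(accuracy(lambda q: 42))  # a constant "model" scores near zero
      ```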