• jorp@lemmy.world · 23 days ago

      I get that it’s cool to hate on how AI is being shoved in our faces everywhere, and I agree with that sentiment, but the technology is better than you’re giving it credit for.

      You don’t have to diminish the accomplishments of the actual people who studied and built these impressive things to point out that businesses are bandwagoning and rushing to market to satisfy investors. Like with most technologies, it’s capitalism that’s the problem.

      LLMs emulate neural structures and have incredible natural language parsing capabilities that we’ve never even come close to achieving before. The prompt hacks alone are an incredibly interesting glimpse at how close these things come to “understanding.” They’re more like social engineering than any other kind of hack.

      • AppleTea@lemmy.zip · 23 days ago

        The trouble with phrases like ‘neural structures’ and ‘language parsing’ is that these descriptions still play into the “AI” narrative that’s been used to oversell large language models.

        Fundamentally, these are statistical weights randomly wired up to other statistical weights, tested and pruned against a huge database. That isn’t language parsing, it’s still just brute-force calculation. The understanding comes from us, from people assigning linguistic meaning to patterns in binary.
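        To make that concrete, here’s a toy illustration (my own sketch, not anyone’s actual training code) of what “tested and pruned against a huge database” boils down to: numeric weights get nudged until a brute-force calculation reproduces the patterns in the data. Nothing in it knows what the numbers mean.

        ```python
        import numpy as np

        rng = np.random.default_rng(0)
        weights = rng.normal(size=3)              # statistical weights, randomly initialised
        inputs = np.array([[0.2, 0.7, 0.1],
                           [0.9, 0.3, 0.5]])      # stand-in for the "huge database"
        targets = np.array([1.0, 0.0])            # patterns we want echoed back

        for _ in range(100):
            predictions = inputs @ weights        # brute-force calculation
            error = predictions - targets
            weights -= 0.1 * inputs.T @ error     # nudge the weights to reduce the mismatch

        print(inputs @ weights)                   # now close to the targets; no "understanding" involved
        ```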

        • jorp@lemmy.world · 23 days ago

          Brain structures aren’t so dissimilar. Unless you believe there’s some metaphysical quality to consciousness, this kind of technology will be how we achieve general AI.

          • AppleTea@lemmy.zip · 23 days ago

            Living, growing, changing cells are pretty damn dissimilar to static circuitry. Neural networks are based on an oversimplified model of neuron cells. The model ignores the fact neurons are constantly growing, shifting, and breaking connections with one another, and flat out does not consider structures and interactions within the cells.

            Metaphysics is not required to make the observation that computer programmes are orders of magnitude less complex than a brain.

            • ChickenLadyLovesLife@lemmy.world · 23 days ago

              Neural networks are based on an oversimplified model of neuron cells.

              As a programmer who has studied neuroanatomy and the structure/function of neurons themselves, I remain astonished at how unlike real biological nervous systems computer neural networks still are. It’s like the whole field is based on one person’s poor understanding of the state of biological knowledge in the late 1970s. That doesn’t mean it’s not effective in some ways as it is, but you’d think there’d be more experimentation in neural networks based on current biological knowledge.
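
              For anyone who hasn’t seen it spelled out, the “oversimplified model” being talked about is roughly this (a minimal sketch of the textbook artificial neuron, nothing fancier): a weighted sum pushed through a fixed squashing function, with no growth, no rewiring, and no internal cell machinery.

              ```python
              import math

              def artificial_neuron(inputs, weights, bias):
                  # Textbook "neuron": a weighted sum of inputs squashed through a sigmoid.
                  total = sum(x * w for x, w in zip(inputs, weights)) + bias
                  return 1.0 / (1.0 + math.exp(-total))

              print(artificial_neuron([0.5, 0.9], [0.7, -1.2], 0.1))
              ```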

                • ChickenLadyLovesLife@lemmy.world · 22 days ago

                  The one thing that stands out to me the most is that programmatic “neurons” are basically passive units that weigh inputs and decide to fire or not. The whole net is exposed to the input, the firing decisions are worked through the net, and then whatever output is triggered. In biological neural nets, most neurons are always firing at some rate and the inputs from pre-synaptic neurons affect that rate, so in a sense the passed information is coded as a change in rate rather than as an all-or-nothing decision to fire or not fire as is the case with (most) programmatic neurons. Implementing something like this in code would be more complicated, but it could produce something much more like a living organism which is always doing something rather than passively waiting for an input to produce some output.
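
                  Roughly what I mean, as a toy sketch (my own simplification, not code from any real spiking or rate-coded framework): the usual programmatic unit makes a one-shot fire/don’t-fire decision, while a rate-coded neuron is always firing at some baseline and the inputs just nudge that rate up or down over time.

                  ```python
                  def threshold_unit(inputs, weights, threshold=0.5):
                      # Standard programmatic "neuron": passive, all-or-nothing.
                      return 1 if sum(x * w for x, w in zip(inputs, weights)) > threshold else 0

                  class RateCodedNeuron:
                      def __init__(self, baseline_hz=5.0, decay=0.9):
                          self.rate = baseline_hz          # always firing at some rate
                          self.baseline = baseline_hz
                          self.decay = decay

                      def step(self, presynaptic_rates, weights):
                          # Inputs shift the firing rate; the rate also relaxes back toward baseline.
                          drive = sum(r * w for r, w in zip(presynaptic_rates, weights))
                          self.rate = max(0.0, self.baseline + self.decay * (self.rate - self.baseline) + drive)
                          return self.rate

                  neuron = RateCodedNeuron()
                  for t in range(5):
                      print(t, neuron.step([2.0, 0.5], [0.3, -0.4]))   # the information is the change in rate
                  ```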

                  And TBF there probably are a lot of people doing this kind of thing, but if so they don’t get much press.

            • NιƙƙιDιɱҽʂ@lemmy.world · 22 days ago

              The fact that you believe software-based neural networks are, as you put it, “static circuitry” betrays the limits of your knowledge on the subject. I agree that many people overblow LLM tech, but many people like yourself grossly underestimate it as well.

  • schnurrito@discuss.tchncs.de · 22 days ago

    LLMs aren’t virtual dumbasses who are constantly wrong; they are bullshit generators. They are sometimes right, sometimes wrong, but don’t really care either way and will say wrong things just as confidently as right things.

    • constantokra@lemmy.one · 23 days ago

      I work in a technical field, and the amount of bad work I see is way higher than you’d think. There are companies without anyone competent to do what they claim to do. Astonishingly, they make money at it and frequently don’t get caught. Sometimes they have to hire someone like me to fix their bad work when they do cause themselves actual problems, but that’s much less expensive than hiring qualified people in the first place.

      That’s probably where we’re headed with AIs, and honestly it won’t be much different than things are now, except for the horrible dystopian nature of replacing people with machines. As time goes on, the AIs will get fed the corrections competent people make to their output, and the number of competent people necessary will shrink and shrink, until the work product is good enough that nobody cares to get it corrected. Then there won’t be anyone getting paid to do the job, and because of AIs’ black-box nature we will completely lose the knowledge to perform the job in the first place.

  • iAvicenna@lemmy.world · 23 days ago

    I think tech CEOs can empathise with ChatGPT over how uninformed its opinions are and how well it can bullshit.

  • StaySquared@lemmy.world · 22 days ago

    Lots of companies jumping the gun… laying off so many people, only to realize they’re going to need those people back. AI is still in its infancy; using it to replace an actual human is a dumb dumb move.

  • Snowclone@lemmy.world · 23 days ago

    They put new AI controls on our traffic lights. It cost the city a fuck ton more money than fixing our dilapidated public pool would have. Now no one tries to turn left at a light, because the turn signals don’t activate. We threw out a perfectly good timer no one was complaining about.

    But no one from Silicon Valley is lobbying cities to buy pool equipment, I guess.

    • dan@upvote.au · 23 days ago

      A lot of people in Silicon Valley don’t like this AI stuff either :)

    • lazynooblet@lazysoci.al · 23 days ago

      Whilst it’s a shame this implementation sucks, I wish we would get intelligent traffic light controls that worked. Sitting at a light for 90 seconds in the dead of night without a car in sight is frustrating.

      • lemmyvore@feddit.nl · 23 days ago

        That was a solved problem 20 years ago lol. We made working systems for this in our lab at uni; it was one of our course group projects. It used combinations of sensors and microcontrollers.

        It’s not really the kind of problem that requires AI. You can do it with AI and image recognition or live traffic data but that’s more fitting for complex tasks like adjusting the entire grid live based on traffic conditions. It’s massively overkill for dead time switches.
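
        For a sense of scale, the dead-time case is roughly this much logic (a toy sketch of the sensor-plus-timer approach, not real signal-controller firmware, and the timings are made-up placeholders): hold the main road green at night and only cycle when a loop or radar sensor reports a vehicle waiting on the side road.

        ```python
        import time

        MIN_GREEN_S = 10     # assumed minimum main-road green
        SIDE_GREEN_S = 8     # assumed side-road green
        CLEARANCE_S = 3      # assumed all-red clearance

        def set_lights(main, side):
            print(f"main={main}, side={side}")    # stand-in for driving the actual signal heads

        def side_road_vehicle_waiting():
            return False                          # stand-in for an inductive loop / radar read

        def run_night_mode():
            while True:
                set_lights(main="green", side="red")      # default: main road flows
                time.sleep(MIN_GREEN_S)
                if side_road_vehicle_waiting():           # only cycle on demand
                    set_lights(main="red", side="red")
                    time.sleep(CLEARANCE_S)
                    set_lights(main="red", side="green")
                    time.sleep(SIDE_GREEN_S)
                    set_lights(main="red", side="red")
                    time.sleep(CLEARANCE_S)
        ```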

        Even for grid optimization you shouldn’t jump into AI head first. It’s much better long term to analyze the underlying causes of grid congestion and come up with holistic solutions that address those problems, which often translate into low-tech or zero-tech solutions. I’ve seen intersections massively improved by a couple of signs, some markings and a handful of plastic poles.

        Throwing AI at problems is sort of a “spray and pray” approach that often goes about as badly as you’d expect.

      • MIDItheKID@lemmy.world · 23 days ago

        Linus was ahead of the game on this one. Nvidia should start building data centers next to public pools. Cool the systems and warm the pools.

        • Captain Aggravated@sh.itjust.works · 23 days ago

          I’ve seen a video of at least one spa that does that. They mine bitcoin on rigs immersed in mineral oil, with a heat exchanger to the spa’s water system. I’m struggling to imagine that’s enough heat, especially piped a distance through the building, to run several hot tubs, and I’m kind of dubious about that particular load, but hey.

          • areyouevenreal@lemm.ee · 22 days ago

            A large data centre can use over 100 MW at the high end. That’s certainly enough to power a swimming pool or three. In fact, swimming pools are normally measured in kW, not MW.
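
            Back of the envelope, with an assumed figure for the pool (a few hundred kW is a plausible order of magnitude for heating a commercial pool, but treat it as a placeholder):

            ```python
            data_centre_load_mw = 100                    # large data centre, high end
            pool_heating_kw = 300                        # assumed commercial pool heating load

            waste_heat_kw = data_centre_load_mw * 1000   # essentially all of the draw ends up as heat
            print(waste_heat_kw / pool_heating_kw)       # roughly 330 pools' worth of heating
            ```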

  • ristoril_zip@lemmy.zip · 23 days ago

    I read a pretty convincing article title and subheading implying that the best use for so-called “AI” would be to replace all corporate CEOs with it.

    I didn’t read the article but given how I’ve seen most CEOs behave it would probably be trivial to automate their behavior. Pursue short term profit boosts with no eye to the long term, cut workers and/or pay and/or benefits at every opportunity, attempt to deny unionization to the employees, tell the board and shareholders that everything is great, tell the employees that everything sucks, …

    • snooggums@midwest.social · 23 days ago

      Then some hackers get in and reprogram the AI CEOs to value long term profit and employee training and productivity. The company grows and is massively profitable until some venture capitalists swoop in and kill the company to feed from the carcass.

  • uis@lemm.ee · 23 days ago

    CEOs (dumbasses who are constantly wrong): rush to replace everyone with AI before everyone replaces them with AI

    • skuzz@discuss.tchncs.de · 22 days ago

      Funny thing is, the CEOs are exactly the ones to be replaced with AI. Mediocre talent that is sometimes wrong. Perfect place for an AI, and the AI could come to the next decision much faster at a fraction of the cost.

      • Krauerking@lemy.lol · 22 days ago

        So, I’d say there is some slight issue with replacing all decision makers with AI, because Walmart and Amazon already do it for employee efficiency. It means the staff are micromanaged and treated like machines, the same way the computer is.

        Walmart employees are moved around the floor like Roombas so they never interact with each other, and there’s no real way for customers to get hold of someone. Warehouse workers are overworked by bullshit ideas of efficiency.

        Now I get that it could be fixed by having the AI systems designed to be more empathetic, but who is choosing how they are programmed? The board, still?

        We just need good bosses who still interact with their employees on their level. We don’t need AI “replacing” anyone pretty much anywhere, though it can be used as a helpful tool.

        • skuzz@discuss.tchncs.de · 19 days ago

          Yeah, apologies, I was being a bit glib there. Honestly, I kinda subscribe to the Star Trek: Insurrection Ba’ku people’s philosophy. “We believe that when you create a machine to do the work of a man, you take something away from the man.”

          While it makes sense to move some tasks, like dangerous mining or assembly line work, away from humans, interaction roles and decision-making roles both seem like they should remain very human.

          In the same way that nuclear missile launches during the Cold War always had real humans as the last line before a missile would actually be fired.

          I see AI as becoming specialized tools for each job. Say you’re repairing a lawn mower: you have an AI multimeter-type device that you connect to some test points and converse with in some fashion to troubleshoot. All offline, and very limited in capabilities. The tech bros, meanwhile, think they created digital Jesus, and they are desperate to figure out what Bible to jam him into. Meanwhile, corps across the planet are in a rush to get rid of their customer service roles en masse. Can you imagine 911 dispatch being replaced with AI? The human component is 100% needed there. (Albeit an extreme comparison.)

    • Match!!@pawb.social · 23 days ago

      Generative AI is amazing for some niche tasks that are not what it’s being used for

        • Waraugh@lemmy.dbzer0.com · 23 days ago

          Creating drafts for the white papers my boss asks for every week about whatever stupid shit is on his mind. It used to take a couple of days; now it’s done in a day at most, and I spend my Friday doing chores, checking my email and chat every once in a while, until I send him the completed version before logging out for the weekend.