Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week’s thread

(Semi-obligatory thanks to @dgerard for starting this)

  • o7___o7@awful.systems · 7 points · 46 minutes ago

    Me, a nuclear engineer reading about “Google restarting six nuclear power plants”

    lol, lmao even

  • o7___o7@awful.systems · 8 points · edited · 6 hours ago

    v light, only weakly techtakes material, but I’m immature enough to want to share:

    spoiler

    I just got a sales email from “Richard at Autodesk” titled “Hear from the probing experts”

    Does anyone read these things before or after they’re sent?

    • BlueMonday1984@awful.systems (OP) · 6 points · 5 hours ago

      Does anyone read these things before or after they’re sent?

      It sounds like spam; my guess is they usually aren’t read at all.

    • self@awful.systems · 10 points · 6 hours ago

      the raw, mediocre teenage energy of assuming you can pick up any subject in 2 weeks because you’ve never engaged with a subject more complex than playing a video game and you self-rate your skill level as far higher than it actually is (and the sad part is, the person posting this probably isn’t a teenager, they just never grew out of their own bullshit)

      given how oddly specific “application auth protocol” is, bets on this person doing at best minor contributions to someone else’s OAuth library they insist on using everywhere? and when they’re asked to use a more appropriate auth implementation for the situation or to work on something deeper than the surface-level API, their knowledge immediately ends

        • self@awful.systems · 1 point · 5 minutes ago

          the absolute worst type of coworker from my cubicle days: heard about a technology at a conference, decided they invented it

    • barsquid@lemmy.world · 10 points · 8 hours ago

      This person has certainly committed to this philosophy, even to the extent of spending less than one week of thought coming to this very conclusion.

      • blakestacey@awful.systems · 2 points · 24 minutes ago

        Fun fact: The plain vanilla physics major at MIT requires three semesters of quantum mechanics. And that’s not including the quantum topics included in the statistical physics course, or the experiments in the lab course that also depend upon it.

        Grad school is another year or so of quantum on top of that, of course.

        (MIT OpenCourseWare actually has fairly extensive coverage of all three semesters: 8.04, 8.05 and 8.06. Zwiebach was among the best lecturers in the department back in my day, too.)

        • self@awful.systems · 1 point · 59 seconds ago

          I almost want to go Twitter diving to see if kache has the requisite unhinged rant about how universities are only making quantum physics hard to get money/because of woke or whatever

    • YourNetworkIsHaunted@awful.systems · 9 points · 12 hours ago

      Boo! Hiss! Bring Saltman back out! I want unhinged conspiracy theories, damnit.

      It feels like this is supposed to be the entrenchment, right? Like, the AGI narrative got these companies and products out into the world and into the public consciousness by promising revolutionary change, and now this fallback position is where we start treating the things that have changed (for the worse) as fait accompli and stop whining. But as Ed says, I don’t think the technology itself is capable of sustaining even that bar.

      Like, for all that social media helped usher in surveillance capitalism and other postmodern psychoses, it did so largely by providing a valuable platform for people to connect in new ways, even if those ways are ultimately limited and come with a lot of external costs. Uber came into being because providing an app-based interface and a new coat of paint on the taxi industry hit on a legitimate market. I don’t think I could have told you how to get a cab in the city I grew up in before Uber, but it’s often the most convenient way to get somewhere in that particular hell of suburban sprawl unless you want to drive yourself. And of course it did so by introducing an economic model that exploits the absolute shit out of basically everyone involved.

      In both cases, the thing that people didn’t like was external or secondary to the thing people did like. But with LLMs, it seems like the thing people most dislike is also the main output of the system. People don’t like AI art, they don’t like interacting with chatbots basically anywhere, and the confabulation problems undercut their utility for anything where correlation to the real world actually matters, leaving them somewhere between hilariously and dangerously inept at many of the functions they’re still being pitched for.

    • YourNetworkIsHaunted@awful.systems · 7 points · 12 hours ago

      You know, I can’t tell if this is supposed to be “I know you’re saying that calling unhoused people vermin is some Nazi shit, but it’s more complicated than that” or “I know calling unhoused people vermin is some Nazi shit, and I’m honestly okay with that”.

      Gonna guess the latter given where it’s coming from and the fact that the actual “more complicated” is a salad of non sequiturs.

  • blakestacey@awful.systems · 11 points · 19 hours ago

    Max Tegmark has taken a break from funding neo-Nazi media to blather about Artificial General Intelligence.

    As humanity gets closer to Artificial General Intelligence (AGI)

    The first clause of the opening line, and we’ve already hit a “citation needed”.

    He goes from there to taking a prediction market seriously. And that Aschenbrenner guy who thinks that Minecraft speedruns are evidence that AI will revolutionize “science, technology, and the economy”.

    You know, ten or fifteen years ago, I would have disagreed with Tegmark about all sorts of things, but I would have granted him default respect for being a scientist.

    • BigMuffin69@awful.systems · 4 points · 1 hour ago

      After he started rambling about his Mathematical Universe Hypothesis, it was obvious his brain was cooked.

      As humanity gets closer to Artificial General Intelligence (AGI)

      Arrow of time and all that, innit? And God help me, I actually read part of the post as well as the discussion comments, where the prompt fondlers were lamenting that all it takes is one rogue AI to end the world because it will “optimize against you!” I assume Evil GPT is constructing antimatter bombs using ingredients it finds under the kitchen sink.

      • blakestacey@awful.systems · 4 points · 1 hour ago

        This is just straight-up gossip, but why not:

        Tegmark used to go around polling physicists at conferences about which interpretation of quantum mechanics they prefer. A colleague of mine said that they were sitting near Tegmark and saw him fudging the numbers in his notes, erasing the non-Many-Worlds tallies from respondents who said they supported Many Worlds as well as other interpretations, IIRC.