Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Semi-obligatory thanks to @dgerard for starting this.)

  • skillissuer@discuss.tchncs.de · 13 hours ago

    ai fan asks chempros about their use of lying boxes: majority opinion is that this shit is useless, leaks confidential information and is a massive legal liability https://www.reddit.com/r/Chempros/comments/1hgxvsj/ai_in_the_workplace_how_have_chemistsscientists/

    top response:

    It’s a good trick to be instantly dismissed. No, really, that’s the latest I had in terms of company policy. If you’re caught using AI for anything, you’re out the door. It’s a lawsuit waiting to happen (and a lawsuit we cannot defend against). Gross misconduct, not eligible for rehire, and all that. Same as intentionally misrepresenting data (because it is). (Pharma)

    • blakestacey@awful.systems · 3 hours ago

      From the replies:

      In cGMP and cGLP you have to be able to document EVERYTHING. If someone, somewhere messes up, the company and authorities theoretically should be able to trace it back to that incident. Generative AI is more-or-less a black box by comparison; plus how often it’s confidently incorrect is well known and well documented. To use it in the pharmaceutical industry would be teetering on gross negligence and asking for trouble.

      Also suppose that you use it in such a way that it helps your company profit immensely and—uh oh! The data it used was the patented IP of a competitor! How would your company legally defend itself? Normally it would use the documentation trail to prove that they were not infringing on the other company’s IP, but you don’t have that here. What if someone gets hurt? Do you really want to make the case that you just gave Chatgpt a list of results and it gave a recommended dosage for your drug? Probably not. When validating SOPs are they going to include listening to Chatgpt in it? If you do, then you need to make sure that OpenAI has their program to the same documentation standards and certifications that you have, and I don’t think they want to tangle with the FDA at the moment.

      There’s just so, SO many things that can go wrong using AI casually in a GMP environment that end with your company getting sued and humiliated.

      And a good sneer:

      With a few years and a couple billion dollars of investment, it’ll be unreliable much faster.

    • YourNetworkIsHaunted@awful.systems · 11 hours ago

      AI could be a viable test for bullshit jobs as described by Graeber. If the disinformatron can effectively do your job, then doing it well clearly doesn’t matter to anyone.