I’d rather people kill themselves cleanly, painlessly, and successfully than fail at it, suffer, and be made prisoners of others even more than they already were of themselves.
despite them completely hiding it in the trailers and promotional material
…
The article says this growth is happening in places where the restrictions were imposed. So I would say it is up in the US.
Uh. Didn’t predict this.
Yet another example of corporations running the world, with nothing to be done about it on an individual level.
I’m out of fucks to give. May the world burn. *turns on AC*
Meanwhile she probably just went “oh girl, a job the world doesn’t expect me to succeed at? Woo, failing upwards!!!”
I dunno what I could have done; everything I try to have an impact on is always a pittance compared to the size of the problem. But I know what I can do going forward.
I’m quitting. I’m having zero children. Good luck, have fun the rest of you.
Don’t even need to make it about code. I once asked what a term meant on a certain well-known FOSS application’s benchmarks page. It gave me a lot of unrelated garbage because it made an assumption about the term, exactly the assumption I was trying to avoid. I tried to steer it away from that, but it failed to say anything coherent and then looped back and gave that initial attempt as the answer again. I was stuck, unable to stop it from hallucinating.
How? Why?
Basically, it was information you could only find by looking at the GitHub code, and it was pretty straightforward - but the LLM sees “benchmark” and it must therefore make a bajillion assumptions.
Even if asked not to.
I have a conclusion to make. It does do the code thing too, and it is directly related. I once asked about a library, and it found a post where someone was ASKING if XYZ was what a piece of code was for - and it gave that out as if it were the answer. It wasn’t. And this is the root of the problem:
AIs never say “I don’t know”.
It must ALWAYS know. It must ALWAYS assume something, anything, because not knowing is a crime and it won’t commit it.
And that makes them shit.