This is brilliant and I’m saving it and will post a link to it the next time someone at work asks why we can’t “just use AI to do it” when a ticket gets rejected for being stupid and/or unreasonable.
However:
The first is that we have some sort of intelligence explosion, where AI recursively self-improves itself, and we’re all harvested for our constituent atoms […]. It may surprise some readers that I am open to the possibility of this happening, but I have always found the arguments reasonably sound.
Yeah, I gotta admit, I am surprised, because I have not found a single reasonable argument for this horseshit, and the rest of the article (as well as the others I read from their blog) does not read like it was written by someone who'd buy into AI foom.
data scientists can have little an AI doomerism, as a treat