I don’t think that follows, because those are temporary conditions, and consuming the drug is a choice made by an individual who isn’t yet under the influence. So it’s the person’s responsibility, before they consume the drug, to prepare their environment for when they are under the influence. If they’re so destructive under the influence that they can’t avoid committing a crime, it’s their responsibility not to take the drug at all.
They weren’t, because LLMs don’t have reasoning ability, at least not in the way you as a human do. They are generative models, so the short answer is that the model most likely made the numbers up, though there’s a chance it pulled them directly from some training data that’s probably unrelated to the user’s prompt.
What they generate is supposed to have multidimensional correlations similar to those in the data they were trained on, so there are complex relationships between the question asked and the output given, but the process looks nothing like the steps you would go through to answer the same question.
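To make that concrete, here’s a toy Python sketch of the generation loop. The probability table is entirely made up for illustration (a real LLM’s distribution is learned from enormous amounts of text and conditioned on the whole context, not just two tokens), but the loop itself is the same idea: sample the next token, append it, repeat. Note that nothing in the loop looks up or computes the number; it’s emitted purely because it’s statistically plausible in that context.

```python
import random

# Hypothetical next-token probabilities, standing in for a learned model.
# Keys are the last two tokens of the context; values are candidate
# next tokens with their probabilities.
NEXT_TOKEN = {
    ("revenue", "was"): {"roughly": 1.0},
    ("was", "roughly"): {"$": 1.0},
    ("roughly", "$"):   {"2": 0.40, "3": 0.35, "5": 0.25},
    ("$", "2"):         {"million": 0.6, "billion": 0.4},
    ("$", "3"):         {"million": 0.7, "billion": 0.3},
    ("$", "5"):         {"million": 0.8, "billion": 0.2},
}

def generate(context, max_tokens=6):
    """Autoregressive sampling: extend the context one token at a time."""
    tokens = list(context)
    for _ in range(max_tokens):
        dist = NEXT_TOKEN.get(tuple(tokens[-2:]))
        if dist is None:
            break  # no continuation known for this context
        choices, weights = zip(*dist.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return " ".join(tokens)

print(generate(["revenue", "was"]))
# e.g. "revenue was roughly $ 3 billion" -- plausible-looking, not looked up
```

The figure in the output changes from run to run because it’s drawn from a distribution, which is essentially why the numbers in an LLM’s answer can look confident while being invented.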