I haven’t used it often, but the few times I have asked it very specific programming questions, it has usually been pretty good. For example, I am not very good with regex, but I can usually ask Copilot to create regex that does something like verifying a string matches a certain pattern, and it performs pretty well. I don’t use regex enough to spend a lot of time learning it, and I could easily find a few examples online that can be combined to make my answer, but Copilot is much quicker and easier for me. That said, I don’t think I would trust it past answering questions about how to implement a small code snippet.
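To illustrate the kind of regex task described above, here is a minimal, hypothetical example (not from the comment itself) of a pattern an assistant might produce for verifying that a string matches a US-style phone number:

```python
import re

# Hypothetical pattern an assistant might generate: matches "555-123-4567"
# or "(555) 123-4567", with optional parentheses around the area code.
PHONE_RE = re.compile(r"\(?\d{3}\)?[- ]?\d{3}-\d{4}")

def is_phone_number(s: str) -> bool:
    """Return True only if the entire string matches the pattern."""
    return PHONE_RE.fullmatch(s) is not None
```

As the comment says, this is exactly the sort of throwaway pattern that is quicker to ask for than to learn to write, but it is still worth testing against a few known-good and known-bad strings before trusting it.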
Stop trying to make Clippy happen.
AI models are so broken. They are wrong most of the time in my experience. This meme is accurate for most intelligent people.
For those of you confused… Don’t worry about it. Just understand your being blatantly lied to by a computer more often than you know.
*You’re
“Tell me what happened to Bob, Clippy, and Cortana…”
What about that one that turned racist overnight so they killed it?
OMG - Tay! I forgot about her!
https://www.theguardian.com/world/2016/mar/29/microsoft-tay-tweets-antisemitic-racism
I tried asking OpenAI what the name of a song is, based on some lyrics I barely remember. It’s a song whose name has escaped me for about 15 years. Anyways, when it wasn’t just straight up lying about song names or their lyrics, it would not stop guessing the same song names, even after I told it to stop, several times.
Needless to say, I still don’t know the song name.
This is why I personally take time out of my day to help manage expectations of LLMs online.
Expect them to draw power and generate bullshit forever.
Just feed it info as if Jar Jar Binks is speaking directly to it.
I mean, can you imagine Jar Jar and Yoda having an argument? Or what if that argument leads to hot steamy sex?
MMMMmmmmmMMMM!!! CUM IN MY ASS YOU WILL!!!
MeSa No GonnA BE Able To Hold Back! No No No NO!!!
MeSa Sorry, Yoda!
On my face, it got…
Yes. I DID just put those images into your brain. Now go put it into AI’s brain.
Why
The same reason you speak gibberish to all AI call center prompts. To distort AI’s ability to understand humans, and force a human to look at the errors. Hopefully that leads to this technology being abandoned entirely.
That’s not how AIs are trained.
In a session they’re responding to what you wrote before because they have a long buffer of context for your session, but that’s just temporary and doesn’t get fed back into anything permanent.
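A minimal sketch of the point above, assuming a generic chat setup (not any specific vendor’s API): the model appears to “remember” a session only because the client resends the whole transcript with every request, and nothing updates the model itself.

```python
# Sketch only: `fake_model` is a stand-in for a real inference call.
# A real client would POST the `messages` list to an endpoint; the
# model's weights are never changed by the conversation.

def fake_model(messages):
    # Pretend reply that just counts the user turns it was shown.
    return f"reply #{sum(1 for m in messages if m['role'] == 'user')}"

session = []  # the entire "memory" of the conversation lives here

def ask(prompt):
    session.append({"role": "user", "content": prompt})
    reply = fake_model(session)  # full history resent every turn
    session.append({"role": "assistant", "content": reply})
    return reply

ask("first question")
ask("follow-up")   # "remembered" only because `session` was resent
session.clear()    # session over: the context is simply gone
```

Once the list is cleared, the next question starts from scratch, which is why nothing you type in one session “trains” the model for anyone else.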
Yup, that’s standard. If you’re about three responses in, give up; it’s already lost and incapable of focusing on the requirements. It will also lie to please, and it never admits a confidence level. You only pick up on the things you know are wrong, and considering how often that happens, the rest can only be taken with a grain of salt.
So far, I have found AI to be profoundly useless for just about any practical purpose, aside from maybe trying to bullshit your way through a school paper or something. But it’s wrong so often it really cannot be trusted, so if you don’t already know what you’re doing, it’s a huge gamble whether it gets something right or not.
It can often make a semi-decent summary of a long text that helps you decide whether it’s worth reading. I’ve found it relatively useful for that.