Look, AI/LLMs are the scourge of the internet and I wish the bubble would pop already. Heck, I downvote people who use AI to answer questions online myself.
But there is a qualitative difference between a plain LLM being forced down your throat, hallucinating left and right based on outdated training data, and the RAG that Kagi uses.
First of all, it’s not rammed down your throat: you choose whether you want it by appending a question mark to the end of your query; the default is to not show any AI content.
Second, since it’s RAG, the generation is based on real documents fetched from regular search, and it has actual citation links to the pages the information came from (these are not hallucinated but based on the search results). If you’d bothered to read my post you’d have seen me mention that you still can’t trust its output (it is still LLM technology and makes shit up), but it does work really well as an initial filter on which of the search results might be relevant to your query, so you can then actually read those pages, picking the one that best fits what you actually want based on the summary.
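To make that distinction concrete, here’s a rough sketch of what search-backed RAG looks like in principle. This is illustration only, not Kagi’s actual pipeline; web_search and llm_complete are hypothetical stand-ins for a search API and an LLM call, and the prompt wording is made up.

```python
# Rough sketch of search-backed RAG (illustration only, not Kagi's actual pipeline).
# `web_search` and `llm_complete` are hypothetical stand-ins for a search API and an LLM call,
# assumed to return a list of {"url", "snippet"} dicts and a completion string respectively.

def quick_answer(query: str, web_search, llm_complete, top_k: int = 5) -> str:
    # 1. Retrieve: run the normal search and keep the top results.
    results = web_search(query)[:top_k]

    # 2. Build a prompt that contains only those fetched documents,
    #    so the citation links point at real pages instead of being hallucinated.
    sources = "\n\n".join(
        f"[{i + 1}] {r['url']}\n{r['snippet']}" for i, r in enumerate(results)
    )
    prompt = (
        "Answer the question using only the sources below. "
        "Cite sources by their [number].\n\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )

    # 3. Generate: the model can still misread or misquote the sources,
    #    but the links themselves come from the search results.
    return llm_complete(prompt)
```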
I don’t usually turn this on for regular searches, but for technical programming things it is helpful, especially when searching for things where there’s little information. There are really two cases. Sometimes there are five different ways of solving something, and it will enumerate them with a short summary, making it faster to know which Stack Overflow answer or blog post to read for a likely solution.
The other, much more useful scenario imo is for those problems where there’s little information. For instance, I’m currently building a Bluetooth touchpad to attach to my keyboard. For this, I need to specify USB HID usages and usage pages so the OS properly picks up the device. Bluetooth touchpads are almost non-existent, especially DIY ones, so there’s not really any information on them out there. So I’ll do a search like “bluetooth hid usage for a touchpad?” and I’m immediately faced with bland, generic LLM garbage not relevant to my problem:
This immediately tells me that my query isn’t specific enough, that none of the top results contain relevant information and that I should try again. I didn’t have to waste time wading through the results.
So I do another search, a bit more specific, and get:
That looks more like what I’m looking for. Notice, though, that the result is wrong in this case: 0x0A is not Generic Desktop, 0x01 is. I picked this specifically because it’s one of the recent examples where the output was just wrong. But I don’t care about the AI summary itself; what it says tells me immediately that the actual search results are much more relevant to what I’m looking for, and the two links it cites are actually relevant to my query: they’re documentation of Bluetooth HID profiles. In the search results themselves, these are results 4 and 5. So I read those, and after some more queries I realize there just isn’t a specific code for HID touchpads; they’re just generic pointer devices.
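For reference, a “generic pointer device” in a HID report map looks roughly like the snippet below. These are the standard textbook mouse-descriptor bytes, not my exact descriptor, and the surrounding BLE HID-over-GATT setup is omitted since it depends on your stack. The part the AI got wrong is right at the top: the usage page for Generic Desktop is 0x01, not 0x0A.

```python
# Minimal mouse-style HID report map (standard example bytes, not my exact descriptor).
# Usage page 0x01 is Generic Desktop; the device is just declared as a pointer.
REPORT_MAP = bytes([
    0x05, 0x01,  # Usage Page (Generic Desktop)
    0x09, 0x02,  # Usage (Mouse)
    0xA1, 0x01,  # Collection (Application)
    0x09, 0x01,  #   Usage (Pointer)
    0xA1, 0x00,  #   Collection (Physical)
    0x05, 0x09,  #     Usage Page (Buttons)
    0x19, 0x01,  #     Usage Minimum (Button 1)
    0x29, 0x02,  #     Usage Maximum (Button 2)
    0x15, 0x00,  #     Logical Minimum (0)
    0x25, 0x01,  #     Logical Maximum (1)
    0x95, 0x02,  #     Report Count (2)
    0x75, 0x01,  #     Report Size (1)
    0x81, 0x02,  #     Input (Data, Variable, Absolute): button bits
    0x95, 0x01,  #     Report Count (1)
    0x75, 0x06,  #     Report Size (6)
    0x81, 0x01,  #     Input (Constant): padding to a full byte
    0x05, 0x01,  #     Usage Page (Generic Desktop)
    0x09, 0x30,  #     Usage (X)
    0x09, 0x31,  #     Usage (Y)
    0x15, 0x81,  #     Logical Minimum (-127)
    0x25, 0x7F,  #     Logical Maximum (127)
    0x75, 0x08,  #     Report Size (8)
    0x95, 0x02,  #     Report Count (2)
    0x81, 0x06,  #     Input (Data, Variable, Relative): X/Y deltas
    0xC0,        #   End Collection
    0xC0,        # End Collection
])
```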
So did the AI answer my question? No. Does it sometimes answer my question? Sure, but I still need to double-check. Does it allow me to iterate on my searches faster and to guess whether the answer is within the top 5 results? Absolutely.
Well, kinda hard to order when there’s no internet, so thanks, China?