Even worse is when they reply to themselves with "figured it out" and say nothing else.
I can imagine some kind of LRU cache being reasonably useful for this situation, assuming you have some latency hierarchy. For example, if the desktop has an SSD, an HDD, and some USB HDDs attached, I can imagine a smaller cache that keeps the most frequently accessed files on the SSD, followed by a bigger one on the internal HDD, with the USB HDDs as the ultimate origin of the data. Or even just have the SSD as cache and everything else as origin. I don't know if there's software that already does this kind of thing, though.
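For the SSD-as-cache case, a minimal sketch of the idea might look like this. The mount points, size budget, and class name are all made up for illustration; a real tool would also have to handle writes, crashes, and files bigger than the cache.

```python
import os
import shutil
from collections import OrderedDict

class FileLRUCache:
    """Keep recently used files on a fast tier, falling back to a slow origin."""

    def __init__(self, cache_dir, origin_dir, max_bytes):
        self.cache_dir = cache_dir      # fast tier, e.g. a directory on the SSD
        self.origin_dir = origin_dir    # slow tier, e.g. a USB HDD
        self.max_bytes = max_bytes      # how much SSD space to spend on the cache
        self.entries = OrderedDict()    # relative path -> size in bytes, oldest first
        self.used = 0

    def open(self, rel_path):
        """Return a local path for the file, copying it onto the fast tier on first use."""
        cached = os.path.join(self.cache_dir, rel_path)
        if rel_path in self.entries:
            self.entries.move_to_end(rel_path)   # mark as most recently used
            return cached
        src = os.path.join(self.origin_dir, rel_path)
        size = os.path.getsize(src)
        # Evict least-recently-used files until the new one fits in the budget.
        while self.entries and self.used + size > self.max_bytes:
            old, old_size = self.entries.popitem(last=False)
            os.remove(os.path.join(self.cache_dir, old))
            self.used -= old_size
        os.makedirs(os.path.dirname(cached), exist_ok=True)
        shutil.copy2(src, cached)
        self.entries[rel_path] = size
        self.used += size
        return cached

# Hypothetical usage: cache = FileLRUCache("/mnt/ssd/cache", "/mnt/usb/archive", 200 * 10**9)
```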
You may want to consider zipping files for transfer, though, especially if the transfer protocol is creating a new TCP connection for every file.
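Something along these lines (the directory and archive names are placeholders) bundles everything into a single archive so the transfer is one stream instead of one connection per file:

```python
import tarfile

def bundle(src_dir, archive_path="transfer.tar.gz"):
    # "w:gz" writes a gzip-compressed tarball; use plain "w" to skip compression
    # for data that's already compressed (photos, video, etc.).
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(src_dir, arcname=".")
    return archive_path
```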
Before there was reddit there were message boards, and these message boards tended to be pretty small and niche. They would have low thousands of users, if that. I don't think having low user counts is something to be afraid of, especially for sites run and paid for by volunteers.
who cares?
i don't really understand the revenue model here. i also don't understand how there's going to be enough computational power to do LLM shit for all windows users all the time? this sounds bad for the environment.
i thought it wasn't a hexbear thread? maybe it wasn't a lemmy.ml thread, but they should know better.
jesus christ we just had a huge struggle session about this.
At first I thought he was a divorcee just trying to get his dog back, but now that I know he's a divorcee spending time with his daughter, I like the story even more.