I would also keep the “on speed” part. Even if the band aren’t users
It was a great adventure. But yeah, that setup was on 24/7. Not because of compilation, but it definitely made a lot of this more feasible
Gentoo unstable was a little bit tiring in the long run. It was bleeding edge, but I often needed to downgrade packages because the rest of the libraries weren’t ready
Gentoo stable was really great. Back then pulseaudio was quite buggy. Having a system where I could tell all applications and libraries to not even link to it (so no need to have it installed at all) made avoiding its problems really easy
But when my hardware got older and compiling LibreOffice started to take 4h, I remembered how nice it was on Slackware, where you just reinstall the package you broke and you’re done
Arch looked like a nice middle ground. Most things in packages, a big focus on pure Linux configurability (plain /etc files, none of the Ubuntu (or SUSE?) “you need working X.org to open distro-specific graphics card settings”), and the AUR for things that have no official packages. Turned out it was a match :)
Windows (~6 years) -> Mandriva (Mandrake? For I think 2-3 years) -> Ubuntu (1 day) -> Suse (2 days) -> Slackware (2-3 years) -> Gentoo unstable (2-3 years) -> Gentoo stable (2-3 years) -> Arch (9 years and counting)
The only span I’m sure about is the last one. When I started a job, I decided I didn’t have the time to compile the world anymore. But the values after Windows sum up to 21 when they should be 20, so it’s all more or less correct
If you want to access your computer from outside your LAN, it would be a good idea to at least secure it or, best of all (though unfortunately the most work), learn to understand what you are doing
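As a concrete example of “at least secure it” (assuming the usual case of SSH being the exposed service), a couple of sshd_config lines already go a long way:

```
# /etc/ssh/sshd_config: a minimal hardening sketch.
# Assumes key-based login is already set up; test in a second
# session before closing the one you are in!
PasswordAuthentication no   # keys only, nothing for brute-forcers to guess
PermitRootLogin no          # log in as a normal user, use sudo when needed
```

Then reload sshd. This is only a starting point, not a full hardening guide; something like fail2ban or moving behind a VPN are common next steps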
Coming back to the topic, though, I’d start with checking these out
The characters in the title are not the regular ones, which makes it look like spam mail; there is no link, and the description sounds like corporate LLM output. If there really is some podcast somewhere, I think it deserves better
No, that is actually useful. Blocking access for anonymous users is not
If anything, the boom of LLMs like copilot and chatgpt actually shows the power of open source and open access to information. Underlying algorithms would mean nothing without open source, open access to stackoverflow, forums, etc
Microsoft acquired GitHub, and there were discussions around the future of open source on Microsoft-owned infrastructure
Personally I’m impressed it took them so long to start driving it to the ground
I moved to Codeberg
Codeberg is a non-profit, community-led organization that aims to help free and open source projects prosper by giving them a safe and friendly home
But the machine will not do the creative part. It can only fill in the time sinks around our creative ideas. Ask an LLM to tell you a joke no one has ever heard before and then google it. The creative part still has to come from humans
EDIT: and the truth is that we very rarely come up with something creative. We mostly just recombine previously encountered combinations
trying to weasel out of putting some effort into something that sounds worth putting some effort into
But that depends on what they need it for
Personally I don’t see a difference between legalese boilerplate and a 10k-word story. But that discussion might lead us nowhere
What have you learned about text creation
In many cases I don’t want nor need to learn that. I just need volume about the key points
Why is an LLM any different?
Let’s say I want my RPG players to find a corporate mail that gives them some plot info. Why not ask an LLM to write the boilerplate around the info I want to give them? Just as an example
Let’s not put any effort into anything: the machine will do it for me
So you are not using a calculator, I presume? Only math done on an abacus is not being lazy?
If you want something local and open source, I think your main problem will be the number of parameters (the “b” thing). ChatGPT-3 is (was?) noticeably big, and open source models are usually smaller. There is, of course, a debate about how much the size of the model matters versus how much the quality of the training data affects the results. But when I did a non-scientific comparison ~half a year ago, there was a noticeable difference between smaller models and bigger ones.
Having said all of that, check out https://huggingface.co/ which aims to be like GitHub for AI models. Most of the models are more or less open source; you will only need to figure out how to run one and whether you hit any bottlenecks on a Pi
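To make the “b thing” concrete: a common back-of-envelope rule (an approximation, not an exact figure) is parameter count times bytes per weight, plus some overhead for activations and the KV cache. This sketch uses a hypothetical helper and an assumed ~20% overhead factor:

```python
# Rough memory estimate for running an LLM locally.
# Rule of thumb only: params * bytes-per-weight, plus ~20% overhead
# (an assumption) for activations and the KV cache.

def model_memory_gb(params_billions: float, bits_per_weight: int = 16,
                    overhead: float = 1.2) -> float:
    """Approximate RAM needed to load a model, in gigabytes."""
    bytes_total = params_billions * 1e9 * (bits_per_weight / 8)
    return bytes_total * overhead / 1e9

# A 7b model: 4-bit quantization vs. full 16-bit precision
print(round(model_memory_gb(7, bits_per_weight=4), 1))   # ~4.2 GB
print(round(model_memory_gb(7, bits_per_weight=16), 1))  # ~16.8 GB
```

Which is exactly why quantized small models are the usual choice on a Pi-class machine: even 4.2 GB is tight on an 8 GB board once the OS is running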
Haven’t tested it, but it seems so. The Android client has the button too