- cross-posted to:
- hackernews@derp.foo
Please create a comment or react with an emoji there.
(IMO, they should've limited comments and gone with reaction counts there; it looks like a mess right now.)
It requires powerful GPUs, yes, but not always; it depends a lot on how fast you want it to run. Microsoft and OpenAI need powerful AI GPUs because they handle a huge volume of requests and data and want low latency. The model weights may also need to be kept in RAM or GPU memory for fast access during inference.
As for Llama, its weights have been released openly, and what is amazing about open source is the community. An implementation of Llama inference written entirely in C/C++ has been created: https://github.com/ggerganov/llama.cpp .
And someone even managed to make it run, fast enough, on a phone with 8 GB of available RAM: https://github.com/ggerganov/llama.cpp/discussions/750 . Though with a smaller, quantized model.
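The reason a 7B-parameter model can fit in 8 GB of RAM is quantization: llama.cpp can store weights in roughly 4-bit formats instead of 16-bit floats. A rough back-of-envelope sketch (weights only, ignoring activations and runtime overhead; the ~4.5 bits/weight figure for a 4-bit quantized format is an approximation, since quantization schemes also store per-block scaling factors):

```python
def weights_gib(n_params: float, bits_per_weight: float) -> float:
    """Approximate storage for the model weights alone, in GiB."""
    return n_params * bits_per_weight / 8 / (1024 ** 3)

n = 7e9  # a 7B-parameter model like LLaMA-7B

# 16-bit floats: too big to fit alongside the OS in 8 GB of RAM
print(f"fp16  : {weights_gib(n, 16):.1f} GiB")   # ~13.0 GiB

# ~4-bit quantization: small enough for a phone with 8 GB of RAM
print(f"4-bit : {weights_gib(n, 4.5):.1f} GiB")  # ~3.7 GiB
```

The trade-off is some loss of output quality, but in practice 4-bit quantization keeps most of the model's capability while cutting memory use by roughly a factor of three to four.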