Does anyone have experience with parallel and distributed computing in a local AI context? Not the kind that would compete with anyone in the big leagues, but maybe running “gently used” or older hardware that is just gathering dust. I’ve heard of folks building Raspberry Pi “clusters” or repurposing old Mac and Windows laptops (running various Linux distros), but I don’t know what is true and what is clickbait or wishful thinking.
Is it worth exploring further, or is it more of a pipe dream?
I’ve seen people stack Mac Minis to create some crazy setups!
Other than that, it typically just comes down to running a bunch of Nvidia graphics cards in a server rack. I haven’t found a specific video for that one, though.
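At the hobby end, one pattern that does genuinely work on “gently used” machines is request-level parallelism: each old laptop runs its own copy of a small model (e.g. via llama.cpp’s llama-server), and a coordinator fans prompts out across them. You aren’t sharding one big model, so each node needs enough RAM for its own copy, but the setup is dead simple. Here’s a minimal sketch, assuming each node serves llama.cpp’s /completion endpoint on its default port 8080; the IP addresses and prompts are hypothetical placeholders:

```python
# Request-level parallelism across a few spare machines.
# Assumes each node runs a llama.cpp server, e.g.:
#   llama-server -m small-model.gguf --host 0.0.0.0 --port 8080
# The IPs below are hypothetical placeholders for your own machines.
import json
import urllib.request
from concurrent.futures import ThreadPoolExecutor

NODES = ["http://192.168.1.21:8080", "http://192.168.1.22:8080"]

def complete(node: str, prompt: str) -> str:
    """POST one prompt to a node's /completion endpoint and return the text."""
    payload = json.dumps({"prompt": prompt, "n_predict": 128}).encode()
    req = urllib.request.Request(
        f"{node}/completion",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=300) as resp:
        return json.load(resp)["content"]

prompts = [
    "Summarize the plot of Moby-Dick in one sentence.",
    "List three uses for an old laptop.",
]

# Round-robin prompts over nodes: each node holds its own full copy of the
# model, so this parallelizes throughput, not a single big model.
assignments = [NODES[i % len(NODES)] for i in range(len(prompts))]
with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
    for answer in pool.map(complete, assignments, prompts):
        print(answer)
```

The same idea scales down to a Raspberry Pi cluster, with the caveat that each Pi is limited to very small quantized models, so it’s more of a learning exercise than a practical rig.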
Thanks, Cole.
Interesting video; always glad to see any level of democratization with AI (especially local AI!).
I still haven’t found a great DIY-level video on leveraging older or otherwise unused equipment, and it’s probably not worth spending a ton of time on given the hardware needs of even smaller models. I did, however, run across an interesting talk from Stephen Balaban at Lambda about the real thing. It’s almost ancient history at this point (2020), but it’s still super relevant in a lot of areas. It’s pretty technical, so most of it ended up on my ever-growing “list of cool things to learn more about,” but it’s great for getting an overall feel for a data-center-level GPU cluster.