[D] Playing big league at home on a budget?

ballerburg9005
April 15, 2025
reddit

I am a hobbyist and my Nvidia 660 is 10 years old and only has 2GB. Obviously that isn't going to cut it anymore.

I am thinking about options here. I don't have thousands and thousands of dollars.

And I highly doubt that spending close to a thousand dollars on a brand new card is still viable in 2020-2022.

I wanted to use WaveNet today and then found out about MelNet. I mean, maybe I could run WaveNet, but nobody in their right mind wants to after hearing MelNet results. On GitHub one guy complained he couldn't get his implementation to work due to OOM with two RTX 2080s, which he bought solely for this purpose. Then on the other repo the guy casually mentioned that tier XY doesn't fit with some 10-year-old lo-fi dataset, even with batch size 1, on a 16GB Tesla P100.

The wisdom for OOM has always been "decrease batch size". But as far as I can tell, for most of the interesting stuff from the last 8 years or so you simply can't decrease the batch size. Either because batch sizes are already so tiny, or because the code is written in a way that would require you to somehow turn it inside out, probably involving extreme knowledge of higher mathematics. I am a hobbyist, not a researcher. I am happy if I can crudely grasp what is going on.
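
For reference, this is what that standard advice looks like in practice when the code does let you touch the batch size: shrink the per-step batch and accumulate gradients so the effective batch stays the same. A minimal PyTorch sketch with a toy model and made-up numbers, just to show the trade-off:

```python
import torch
import torch.nn as nn

# Toy stand-ins so the sketch runs; real code would use your model and DataLoader.
model = nn.Linear(128, 10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
micro_batch, accum_steps = 4, 8          # effective batch of 32 at the VRAM cost of 4

optimizer.zero_grad()
for step in range(100):
    x = torch.randn(micro_batch, 128)
    y = torch.randint(0, 10, (micro_batch,))
    loss = nn.functional.cross_entropy(model(x), y)
    (loss / accum_steps).backward()      # scale so the accumulated gradient is an average
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```

The catch is exactly the one above: if the batch size is already 1, or activations of a single sample already blow past your VRAM, accumulation buys you nothing.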

Most of what's interesting in the field suffers from exactly the same issue: it simply won't run without utterly absurd amounts of VRAM.

So what about buying shitty cheapo AMD GPUs with lots of VRAM? This seems to be the sensible choice if you want to be able to run anything noteworthy at all that comes up in the next 2 years and maybe beyond. People say don't buy AMD, it's slow and it sucks, but those are apparently the same people that buy a 16GB Titan GPU for $1500 three times on eBay without hesitation, when there are also 16GB AMD GPUs for $300.

How much slower are AMD GPUs really? Let's say they are 5 times cheaper, so they could be just 5 times slower. Then I have to train my model overnight instead of seeing the result in the afternoon. That would be totally awesome, given that the alternative is to buy a $300 Nvidia GPU, which has maybe 4 or 6GB and simply can't run the code without running out of memory. And say $300 is not enough, so let's buy a $700 RTX 3080. It still only has 10GB of VRAM, not even 16GB. Then it's just as useless! What's the point of buying a fast GPU if it can't even run the code?

I don't know how much slower AMD GPUs really are. Maybe they are not 5x but 50x slower. Then of course training a model that was developed on some 64GB Tesla might take months or years. But maybe speed is not the issue, only memory. I have seen some stuff even being optimized for CPU, apparently because there weren't any big enough GPUs around. I don't really know how viable that can be (it seems it rarely, if ever, is); I have no experience.
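
For a ballpark answer to "how much slower is card X really", a crude matmul throughput test is usually enough. This is only a sketch, not a proper training benchmark, and real models can behave very differently (memory bandwidth, kernel support, etc.); note that ROCm builds of PyTorch also expose the GPU under the "cuda" device name:

```python
import time
import torch

# Crude throughput test: time a pile of large matrix multiplications.
device = "cuda" if torch.cuda.is_available() else "cpu"
n, iters = 4096, 50
a = torch.randn(n, n, device=device)
b = torch.randn(n, n, device=device)

for _ in range(3):                       # warm-up
    a @ b
if device == "cuda":
    torch.cuda.synchronize()

t0 = time.time()
for _ in range(iters):
    a @ b
if device == "cuda":
    torch.cuda.synchronize()
elapsed = time.time() - t0

flops = iters * 2 * n ** 3               # ~2*n^3 FLOPs per n-by-n matmul
print(f"{flops / elapsed / 1e12:.1f} TFLOP/s on {device}")
```

Run it on the card you have and on whatever you can rent for an hour, and you at least get a rough relative-speed number instead of guessing between 5x and 50x.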

And what about renting AWS? Let's say I am a beginner and I want to toy around for a week and probably max out 4 Teslas like 80% of the time without really getting anywhere. How expensive is that? $25, $50, $100, $500? (Found the answer: fucking $2000 https://aws.amazon.com/ec2/instance-types/p3/ ) Ok, so AWS is bullshit, but here it's 6x cheaper: https://vast.ai/console/create/ . They don't really have 4x 16GB V100 though, just one V100. $0.50 per hour * 24 * 7 = $84 per week (there are more hidden costs like bandwidth; it doesn't seem to be huge, but I never used this, so don't take it at face value). On AWS the same is over $3 per hour. So a day on vast.ai is $12, this could be viable! (see the break-even calculation further down)
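
To make the rental math explicit, here is the back-of-the-envelope calculation in a few lines of Python. The hourly rates are the rough ones quoted above (vast.ai ~$0.50/h for one V100, AWS p3 roughly $3/h per V100, ~$12/h for the 4x V100 instance); treat them as assumptions, not current prices:

```python
hours_per_day = 24

# Single V100, using the approximate rates quoted above.
for name, rate_per_hour in [("vast.ai 1x V100", 0.50), ("AWS 1x V100", 3.06)]:
    per_day = rate_per_hour * hours_per_day
    print(f"{name}: ${per_day:.2f}/day, ${per_day * 7:.0f}/week")

# The "toy around for a week on 4 Teslas" case (AWS 4x V100, ~$12.24/h on-demand).
print(f"AWS 4x V100 for a week: ${12.24 * hours_per_day * 7:.0f}")
```

That reproduces the ~$12/day and ~$2000/week figures above.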

There really isn't much info on the net about hardware requirements and performance for machine learning stuff. What bothers me the most is that people seem to be very ignorant of the VRAM issue. Either because they aren't looking ahead at what might come in 1-2 years, or because they are simply so rich they have no issue spending thousands and thousands of dollars every year instead of just $500 every couple of years. Or maybe both.

So, yeah, what are your thoughts?

Here is what I found out just today:

Until 2 years ago, TensorFlow and PyTorch wouldn't work with AMD cards, but this has changed: https://rocmdocs.amd.com/en/latest/Deep_learning/Deep-learning.html

For older cards though, ROCm only works with certain CPUs: it needs PCIe 3.0 with atomics (see: https://github.com/RadeonOpenCompute/ROCm ). So you can't simply buy any 16GB card for $300 on eBay like I suggested, even if it supports ROCm, because it will only work in "newer" PCs. The newer GFX9 AMD cards (like Radeon VII and Vega) don't suffer from this problem and work with PCIe 2.0 again... Although I have seen 16GB Vega cards for like $350 on eBay, I think that is a pretty rare catch. Looking 1-2 years into the future though, this is great, because Radeon VII prices will likely be pushed way down by the Nvidia 3000 series hype (maybe down to $180 even), and maybe the next-gen cards from AMD will even have 24 or 32GB for $500-$1000 and can still run on old machines.
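
If you do go the ROCm route, a quick sanity check is whether a ROCm build of PyTorch actually sees the card. ROCm reuses the torch.cuda namespace, so something like this (assuming the ROCm wheel of PyTorch is installed and the driver/PCIe-atomics requirements are actually met) should print the AMD GPU:

```python
import torch

# On a ROCm build, torch.version.hip is a version string; on CUDA builds it is None.
print("HIP/ROCm version:", getattr(torch.version, "hip", None))
print("GPU visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```

If this prints False or no device, the stack isn't working and no amount of cheap VRAM will help.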

According to this https://arxiv.org/pdf/1909.06842.pdf the Radeon VII 16GB performs only half as well as a Tesla V100 16GB, whereas the V100 should be roughly along the lines of an 11GB RTX 2080 Ti. So you could say that you get half the RAM, double the speed, double the price. I am not sure though if that holds. I think they were putting 16GB in those cards trying to push them for ML with ROCm, clearly addressing the problem of the time, but no one really jumped on the train, and now RevNets shrink RAM usage but need more processing power. So they released 8GB cards again with slightly better performance, and I guess we are lucky if the next generation even has 16GB, because games probably don't need it at all.

Still, with RevNets and everything said in the comments, I think on a budget you are better off playing it safe and buying the card with the most VRAM, rather than the most performance. Tomorrow some paper might come out that uses another method, then you can't trick-shrink your network anymore, and then everyone needs to buy big-ass cards again like it used to be and can do nothing but throw their fancy faster cards in the dumpster. Also, the huge bulk of ML currently focuses on image processing, while sound has only been gaining real momentum recently, and this will be followed by video processing and eventually human-like thought processes that sit atop all of that and have not even been tackled yet. It's a rapidly evolving field, hard to predict what will come and stay.

Running out of VRAM means total hardware failure; running slower just means waiting longer. If you buy the newest card every year, it's probably safe to buy the fast card, because things won't change that fast after all. If you buy a new card every 4 years or longer, then just try to get as much VRAM as possible.

Check this out: https://www.techspot.com/news/86811-gigabyte-accidentally-reveals-rtx-3070-16gb-rtx-3080.html

There will be a 3070 16GB version!

Let's compare renting one V100 at $12/day vs. buying a 3070 Ti 16GB:

The 2080 Ti was 1.42x the price of the regular 2080 and released the next summer. So let's assume the same will be true for the 3070 Ti, so it will cost $700. That is about $30/month or $0.96/day over two years, and $15/month or $0.48/day over four years (by which time you can probably rent some 32GB Tesla card for the same price and nothing recent runs on less anymore). If you max out your setup 24/7 all year, then power cost obviously becomes a huge factor on top of that figure. In my country running at 500W costs $4.21/day, or $1.60 for 9 hours overnight. If you live elsewhere it might be as little as a quarter of that price. Of course your PC may run 10h a day anyway, so it's maybe just 300W extra, and an older graphics card is inefficient for games (it eats more watts to do the same things), so you save some there as well. There is a lot to take into account when comparing.

Anyway, factoring in power cost, to break even buying the card vs. renting within two years, you would have to use it for at least 4 days a month, or almost 2 weeks every 3 months. If you use it less than that, you maybe have a nice new graphics card and less hassle with pushing stuff back and forth onto servers all the time, but it would have been more economical to rent.
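
If you want to redo that break-even with your own numbers, the whole calculation fits in a few lines. Card price, electricity cost, and rental rate are the assumptions from above; swap in your own:

```python
# Break-even sketch: owning has a fixed monthly amortization plus power on the
# days you train; renting costs a flat daily rate (power included).
card_price = 700.0        # assumed 3070 Ti 16GB price
amortize_months = 24      # write the card off over two years
rent_per_day = 12.0       # 1x V100 on vast.ai at ~$0.50/h
power_per_day = 4.21      # 500 W around the clock at my local electricity prices

own_fixed_per_month = card_price / amortize_months
break_even_days = own_fixed_per_month / (rent_per_day - power_per_day)
print(f"Owning: ${own_fixed_per_month:.2f}/month fixed + ${power_per_day:.2f}/day in power")
print(f"Break-even at about {break_even_days:.1f} training days per month")
```

With these numbers it lands at roughly 3.7 days per month, which is where the "at least 4 days a month" above comes from.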

So renting isn't that bad after all.

Overall, if you are thinking about having this as your hobby, you could say that it will cost you at least $30 per month, if not $50 or more (keeping up to date with cards every 2 years instead of 4, plus heavier use, costs more in power too). I think that is quite hefty. Personally I am not invested enough in this, even if it weren't beyond my finances. I want a new card of course, and to play some new games, but I don't really need to. There are a lot of other (more) important things I am interested in that are totally free.
