Next GPU releases: 2022 edition

BO(V)BZ

Ars Tribunus Militum
2,085
Stop giving Intel the benefit of the doubt. It's a multi-billion dollar company that has had aspirations of getting back into dGPU since 2009, and has been poaching industry talent the whole time. They've been working on this for 15 years, and continue to fuck it up due to their arrogance.
And Intel has been making GPUs the whole time, just not discrete ones. I get they aren't great GPUs and their drivers are also not great, but seriously - they had to have learned SOMETHING from all that, right? I have no idea how they allocated funding, but from everything I've seen with GPUs, it really feels like the driver team should receive the majority of the money to get a solid product out the door.
 

ttnuagmada

Smack-Fu Master, in training
73
NVidia is more akin to the Sun Microsystems of the AI bubble. They're generating massive profits, but those profits and their stock value are being inflated by the bubble. They may still survive and be profitable after the bubble bursts, but their sales and stock value will likely decline considerably.

AI isn't even close to a bubble. The only "bubble" is everyone racing to get their own chatbots out there. We haven't even seen the real push yet. It's still essentially brand new. Practically every company on the planet is currently looking or going to be looking at how it can be implemented into their business structure. I don't know if you've ever been part of implementing a major new piece of system software or system software upgrade at a large business, but it's a months/years long process.

NVDA is going to be printing money for a good while. The only concern there is the China/Taiwan thing and the fact that Nvidia has to rely on a 3rd party to make their chips. Intel regaining parity as a semiconductor fab has a lot more riding on it than just Intel's business model. Nvidia eventually being able to fab at Intel is as much a national security issue as it is anything.
 

BernieW

Wise, Aged Ars Veteran
186
AI isn't even close to a bubble. The only "bubble" is everyone racing to get their own chatbots out there. We haven't even seen the real push yet. It's still essentially brand new. Practically every company on the planet is currently looking or going to be looking at how it can be implemented into their business structure. I don't know if you've ever been part of implementing a major new piece of system software or system software upgrade at a large business, but it's a months/years long process.

NVDA is going to be printing money for a good while. The only concern there is the China/Taiwan thing and the fact that Nvidia has to rely on a 3rd party to make their chips. Intel regaining parity as a semiconductor fab has a lot more riding on it than just Intel's business model. Nvidia eventually being able to fab at Intel is as much a national security issue as it is anything.
The fact that everyone is racing to get their own chatbots and many AI startups less than a year old are valued at billions is how you know it's a bubble. An investor once told me that the way he knew a correction was inevitable was when there was a period where all stocks were increasing in value and it looked almost impossible to lose money on an investment. It happened with dotcom, it happened with cryptocurrency, it happened with small launch, and it will happen with AI.

It's still early in the AI bubble but many of these startups will fail. Which ones will be the next Google and which will be Pets.com is anyone's guess. Nvidia is riding the wave to record profits and will likely survive when the bubble bursts, but will see a huge drop in their stock and profits like Cisco and Sun during the dotcom bubble.
 
It's still early in the AI bubble but many of these startups will fail. Which ones will be the next Google and which will be Pets.com is anyone's guess. Nvidia is riding the wave to record profits and will likely survive when the bubble bursts, but will see a huge drop in their stock and profits like Cisco and Sun during the dotcom bubble.

Not really that hard to tell. OpenAI has massive, deeply integrated partnerships with Apple and Microsoft (among others). They are pretty much the Google of AI.
 

cogwheel

Ars Tribunus Angusticlavius
6,713
Subscriptor
Not really that hard to tell. OpenAI has massive, deeply integrated partnerships with Apple and Microsoft (among others). They are pretty much the Google of AI.
The bubble isn't an AI bubble, it's an LLM bubble. Nvidia is far more like Cisco or Sun since their products have plenty of uses other than LLMs. In contrast, LLMs are all OpenAI has, so when the LLM bubble pops, OpenAI might survive but they'll be massively reduced even with all their connections.
 

ttnuagmada

Smack-Fu Master, in training
73
The fact that everyone is racing to get their own chatbots and many AI startups less than a year old are valued at billions is how you know it's a bubble. An investor once told me that the way he knew a correction was inevitable was when there was a period where all stocks were increasing in value and it looked almost impossible to lose money on an investment. It happened with dotcom, it happened with cryptocurrency, it happened with small launch, and it will happen with AI.

It's still early in the AI bubble but many of these startups will fail. Which ones will be the next Google and which will be Pets.com is anyone's guess. Nvidia is riding the wave to record profits and will likely survive when the bubble bursts, but will see a huge drop in their stock and profits like Cisco and Sun during the dotcom bubble.

You'll see a lot of consolidation, I'm sure, and there will be some that crash and burn, but I think you're over-estimating how much of the AI investment money out there is riding on the success of ChatGPT clones, and under-estimating how much demand there is for it that isn't immediately apparent. Every major enterprise software company out there is trying to figure out how to implement it. You'll have some of the really big ones try to do it themselves, and others partner with your OpenAIs/Anthropics of the world. There will be endless niches and different companies that specialize in different types of implementation for every possible business case you can think of, and even more that you haven't.

In any case, Nvidia is in a good spot regardless of what happens. In terms of a dot-com bubble analogy, they're more like an internet ISP in that scenario. Regardless of what the AI landscape looks like 5 years from now, you can guarantee that the demand for AI accelerators is going to keep climbing. Just look at stuff that we already know is in the pipeline, like Sora; that kind of thing is going to cause demand for hardware to explode, even from where it is right now. And Nvidia has such a lead on everyone, on top of the fact that they almost never misfire, that I just don't see how they're not going to be printing cash as long as there's someone to fab their GPUs.

Also, as far as a Bitcoin comparison goes, we're at the stage where Bitcoin hit 30 dollars and everyone started buying up AMD GPUs.
 

ttnuagmada

Smack-Fu Master, in training
73
The bubble isn't an AI bubble, it's an LLM bubble. Nvidia is far more like Cisco or Sun since their products have plenty of uses other than LLMs. In contrast, LLMs are all OpenAI has, so when the LLM bubble pops, OpenAI might survive but they'll be massively reduced even with all their connections.

It's not even an LLM bubble. I really think everyone is underestimating how little penetration the technology has compared to how it will look 5 years from now. If there's a bubble, it's a "ChatGPT clone" bubble.
 

NervousEnergy

Ars Legatus Legionis
10,592
Subscriptor
I'm rarely interested in such things as GPUs well down the performance curve, but this blurb at TPU caught my eye and it's supremely cute:

Sakura Blizzard ITX size 4070 GPU

It'll be better when 4090 processing power fits in that small a case, but even at 4070 levels it's a very nifty size. Almost makes me want to put together a cherry blossom themed ITX build.
 

mpat

Ars Praefectus
5,978
Subscriptor
In the latest edition of "integrated graphics is catching up to discrete GPUs!!!1!!one", AMD's 890M integrated graphics (16 CUs of RDNA 3.5 on LPDDR5X) is within 5% of the mobile GeForce 3050. Which is nice enough at a 54W TDP for the entire SoC (the 3050 has a 50W TDP for the GPU alone), but we have seen this story play out so many times by now. Intel was showing their Crystalwell design (their eDRAM cache on Broadwell CPUs, 5th gen Core) beating Cape Verde (Radeon 7770) 10 years ago, yet here we are.
 

hobold

Ars Tribunus Militum
2,674
AMD's 890M integrated graphics (16 CUs of RDNA 3.5 on LPDDR5X) is within 5% of the mobile GeForce 3050.
It seems to me that the marketing departments of both the OEMs and AMD itself don't know how to position such fast-ish iGPUs. I saw a list of a dozen or so Strix Point laptop models, and all but one came with a discrete GPU. The vendors are hell-bent on pricing these devices up, up, up, and are totally missing the point that a chip like Strix Point is meant to deliver midrange performance at a lower price than last year's midrange.

If you include a discrete GPU in a laptop, you can either omit the respectable iGPU and save that cost, or include a very tiny and slow one as a power-saving strategy so the gamery dGPU can be dynamically shut down.
 

IceStorm

Ars Legatus Legionis
24,900
Moderator
Hardware Unboxed put up their regular GPU pricing video a couple days ago:

https://www.youtube.com/watch?v=ZnjO1spYcxg

They have a couple tidbits about upcoming GPUs from discussions they had at Computex:

  • nVidia looks like they may launch the 5080 first, towards the end of this year, with the 5090 following at the start of 2025.
  • AMD will apparently launch RDNA4 at CES 2025.
  • They're not sure what Intel will be doing, but there are no rumors of a high-end card. I don't think even Intel knows what Intel plans to do.

Their general feeling is that if you were looking at higher end parts like the 4080, 4090, 7900 XT and 7900 XTX, just wait. This is presuming those buyers buy high end GPUs fairly often, so they already have something like a 3080 or 3090. nVidia will be the only player in this space for next generation parts.

Midrange, they think there will be sales during "Black Friday" so that retailers can clear out inventory for the refreshes coming in 2025. RDNA4 sounds like it will be the first release in this performance tier.

Mainstream, it's just not good, and it's unlikely to change for at least a year.
 

BO(V)BZ

Ars Tribunus Militum
2,085
Are the days of $200 GPUs basically over at this point? It's not the market I'm interested in, but it's obviously MUCH bigger than the high end market. The 4060 is $300, and the 7600 is $270. Intel's lowest-end Arc card is only $100, but I'm guessing nobody is buying those.

APUs continue to get more powerful, but those do come at a price premium, so it seems likely that will be the choice - buy a high-end APU for a hundred or so dollars more than a comparable CPU, or go discrete and put that hundred toward the add-in GPU that's going to cost at least $150 more for the whole system.
 

mpat

Ars Praefectus
5,978
Subscriptor
Are the days of $200 GPUs basically over at this point? It's not the market I'm interested in, but it's obviously MUCH bigger than the high end market. The 4060 is $300, and the 7600 is $270. Intel's lowest-end Arc card is only $100, but I'm guessing nobody is buying those.

APUs continue to get more powerful, but those do come at a price premium, so it seems likely that will be the choice - buy a high-end APU for a hundred or so dollars more than a comparable CPU, or go discrete and put that hundred toward the add-in GPU that's going to cost at least $150 more for the whole system.
The newest version of the 3050 (based on GA107) is below $200. In practice the 6600 is also there, though officially the only cards that low from AMD are the Navi 24 siblings (6300, 6400, 6500XT).
 

Mister E. Meat

Ars Tribunus Angusticlavius
7,253
Subscriptor
Are the days of $200 GPUs basically over at this point? It's not the market I'm interested in, but it's obviously MUCH bigger than the high end market. The 4060 is $300, and the 7600 is $270. Intel's lowest-end Arc card is only $100, but I'm guessing nobody is buying those.

APUs continue to get more powerful, but those do come at a price premium, so it seems likely that will be the choice - buy a high-end APU for a hundred or so dollars more than a comparable CPU, or go discrete and put that hundred toward the add-in GPU that's going to cost at least $150 more for the whole system.
I too remember the days of Radeon 580s available for $200 :(
 

Xavin

Ars Legatus Legionis
30,175
Subscriptor++
Are the days of $200 GPUs basically over at this point? It's not the market I'm interested in, but it's obviously MUCH bigger than the high end market. The 4060 is $300, and the 7600 is $270. Intel's lowest-end Arc card is only $100, but I'm guessing nobody is buying those.
Pretty much. Part of that is just inflation, but another big part is that GPUs are just a bigger chunk of the "stuff" inside a PC than they used to be. You used to need a lot more discrete parts, and they cost a lot more. These days a gaming PC just needs a motherboard, CPU, RAM, SSD, and GPU, and all of those except the motherboard and GPU cost a lot less than they used to. Everything else is built in.
 

BO(V)BZ

Ars Tribunus Militum
2,085
The newest version of the 3050 (based on GA107) is below $200. In practice the 6600 is also there, though officially the only cards that low from AMD are the Navi 24 siblings (6300, 6400, 6500XT).
That's true, so that might be the third possibility - the ~$200 price point will be the last-gen cards re-released in the middle of the current gen cycle.
 
They also say the highest-end RDNA4 AMD GPU, the likely 8800XT, will perform like the 7900XT, so basically 4070TiS-tier performance. Now there's a lot of room to come in under the 4070TiS $799 MSRP with a new generation, and if it was aggressively priced at like $399 that would revitalize the mid-range. It won't be, though.

Anyway, if you're coming from an RTX3080 like me, that's a whopping 28% performance gain. Whoop-dee-freakin-doo. I'm looking for >75%, minimum. And that's the best part AMD is planning to release. They have nothing for an Nvidia user looking to upgrade after 2 generations. I feel confident Nvidia will not have the same problem, although theirs will likely cost a thousand dollars.
 

Asral

Ars Scholae Palatinae
1,145
Subscriptor
They have nothing for an Nvidia user looking to upgrade after 2 generations.
Even if they did, would you even consider buying it?

Because judging by the last few years of posts in threads like this one, I very much doubt they could profitably sell a hypothetical 5080 competitor at a price that most of the high-end Nvidia customers would demand at this point.
 

ScifiGeek

Ars Legatus Legionis
16,413
Even if they did, would you even consider buying it?

Because judging by the last few years of posts in threads like this one, I very much doubt they could profitably sell a hypothetical 5080 competitor at a price that most of the high-end Nvidia customers would demand at this point.

Oddly if I look at the Steam HW survey, the ONLY AMD RX 7000 series card that sold enough to get past the survey cutoff is the 7900 XTX. Their top card.
 
I’m not particularly price-sensitive, but I can’t see myself paying a thousand dollars for a GPU unless it’s a massive upgrade, like my upgrade from a GTX1080 to RTX3080 was; around 130% as I recall. Not 30% better, no, 130% faster. Well over double.

That’s what I see as a real skipping a generation upgrade; it’s sexy. If the RTX5080 is $999 and offers well over double the performance, well, ok then. GPUs are obnoxiously expensive, I’d think to myself, but you have the money and it’s a tasty little upgrade.

Now if it’s $999 for RTX4090-class performance, as seems overwhelmingly likely, ooh, geez. That’s only a 90% uplift and it’s a lot of money. Maybe I wait for the 5080ti or super. It’s borderline, perhaps an announced DLSS 4.0 with realtime AI texture upsampling going only to the 50-series would get me over the hump.

Now if the 50-series is a real correction like the 30-series was, I can imagine an RTX5080 with 4090-class performance for $799. And that milkshake would bring all the boys to the yard. I just don't know that the Nvidia from 2020, the one that prioritized gamers alongside datacenter, still exists. So I doubt it.
 

mpat

Ars Praefectus
5,978
Subscriptor
AMD already hints at "not cheap" by calling it a -800. That's how you make sure Nvidia can price theirs however they want; AMD won't have a credible competitor at the high end anyway.

7900XT is €800, next gen 8800XT is about as fast, so hopefully €650? Great success 👍
AMD hasn’t called it anything yet. All names are guesses by rumor sites. Personally, I don’t think it will be as much as $650. 7800XT, full Navi 32, is $499. 6700XT, full Navi 22, was $479. Vanilla 6800 was $579. I think it is somewhere in that range.

Oddly if I look at the Steam HW survey, the ONLY AMD RX 7000 series card that sold enough to get past the survey cutoff is the 7900 XTX. Their top card.
24GB VRAM in a card that costs about half of the 4090, the only other card that can offer that. Some people want lots of VRAM, and probably not mainly for gaming.
 

hansmuff

Ars Tribunus Angusticlavius
9,389
Subscriptor++
Somehow VRAM amount has really crept into the GPU discussion in the gaming scene, I guess because NV has been particularly cheap on that end and AAAs are getting hungrier. I do think it's outliers where 12GB, even at/near 4K, doesn't suffice... unless it must be ultra settings, which... well, to each their own. I dial those in when I can, too, but it won't make me spend $2K on ultra settings; I'd rather go one setting lower and not stare at screenshots wondering what could be.

Things are getting a lot more complex with the various upscaling techniques: we have another slider for "more FPS or more eye candy" that isn't exactly the same as "high-quality assets, yes or no", since an upscaled lower-res image with high-quality assets can look better than native res with medium assets.
It changes the VRAM amount discussion quite a bit.

At 3840x1600 I see my 3080Ti suffering more and more at native ultra settings on new AA/AAA games, but DLSS makes it just sort of... go away with very little degradation to my eyes. Of course not every game supports it, but AAAs mostly seem to, and those are really the games people point to as examples of VRAM starvation.

Seems to me that at least on a somewhat modern GPU from any manufacturer, you have a good set of tools to make the card live longer and still enjoy very decent quality and performance.

All that to say, I think the VRAM discussion is a little overblown in the gaming scene.
 

Drizzt321

Ars Legatus Legionis
28,573
Subscriptor++
The big reason I went with an AMD card, and why my next card (some day, 3 years? 5 years?) will likely be an AMD card, is the Linux drivers. AMD has finally caught up with their mostly/almost entirely open source drivers, software support, etc. I'd much rather have the increasingly good upstream/downstream kernel & distro support from them than put up with Nvidia's closed-source drivers and their whims. Plus it helps me reward, in my tiny way, behavior I want to see (e.g. the open source drivers).
 

Demento

Ars Legatus Legionis
13,814
Subscriptor
Not at all, it’s happening for a reason. Many games don’t run well with 8GB VRAM these days, even at 1080p
Nonsense, or only for a very small definition of "many". In my experience, even at 1440p, I run out of performance to keep frames above 50fps or so long before I run out of VRAM. Now I admit there's the odd game out there that's very poorly optimised, or rolls out the door desperately needing a patch to address some issue, but it's not common and it's not "many" games. And if you don't care about keeping your frames high, then you're in luck because running out of VRAM just impacts performance and doesn't stop the game from running. It's literally never been an issue for me. Alan Wake 2 is a great example, it's just punishing to us lesser creatures and was way under 8GB by the time I got useful frames out of it. Still Ultra textures, of course.
(3060Ti, so close to the fastest 8GB cards - the 4060Ti and 3070 are only about 10% faster)

That said, there's little excuse for anything more powerful than a purported RTX5050 having 8GB in the next generation.
 

Spunjji

Ars Scholae Palatinae
617
Subscriptor
They also say the highest-end RDNA4 AMD GPU, the likely 8800XT, will perform like the 7900XT, so basically 4070TiS-tier performance. Now there's a lot of room to come in under the 4070TiS $799 MSRP with a new generation, and if it was aggressively priced at like $399 that would revitalize the mid-range. It won't be, though.

Anyway, if you're coming from an RTX3080 like me, that's a whopping 28% performance gain. Whoop-dee-freakin-doo. I'm looking for >75%, minimum. And that's the best part AMD is planning to release. They have nothing for an Nvidia user looking to upgrade after 2 generations. I feel confident Nvidia will not have the same problem, although theirs will likely cost a thousand dollars.
You might not do too badly with Nvidia, depending on how trends hold. If they launch an xx70 Ti early again (or realign the xx70 back to that performance position, which seems unlikely given their love of segmentation) and it meets the trend of hitting prior-gen xx90 performance, then you'll get your wish at the $800-900 mark. Factoring in inflation, that's not too bad of a price/performance improvement over your 3080. If you can hold on until a probably-inevitable 5070 Super refresh, you'd do even better.

It also suggests that AMD would need to hit the $500-or-below mark for a speculative 8800 XT to make sense. That could be pretty fierce value for money, and a second-rung model could be really good for folks like me on 6700 XT-class hardware. I guess we'll see how Nvidia decides to lead on price, because at this point it's clear that AMD are determined to "compete" only in the mildest sense of the word.
 
Nonsense, or only for a very small definition of "many".
No, Daniel Owen did a pretty extensive examination of this. If you run at ultra, many (the traditional definition of many) of the latest graphically intensive games will be impacted to various degrees with 8GB VRAM.

For example, Horizon Forbidden West shows a whopping 44% advantage in 1% lows for the 16GB card at 1080p very high, indicating dramatically smoother gameplay. This is easy to miss because the average is only 9% higher. If you go to 1080p high settings, the discrepancy disappears. Also Resident Evil 4 and path-traced Cyberpunk were very memory-hungry. Lots of other games showed a 10-15% advantage in 1% lows, indicating they would like more VRAM but can deal with it for now. It's more prevalent at 1440p than at 1080p, and DLSS frame generation eats a ton of VRAM as well.

He tested Alan Wake 2 path-traced too. On the 4060ti 8/16GB cards he was using, he went to 1080p DLSS quality and found a 35% 1% low advantage for the 16GB card. Much smoother.
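(Quick aside on the metric, since it trips people up: "1% lows" are computed from the slowest frames rather than the whole run, which is why VRAM stutter can tank them while the average FPS barely moves. A rough, purely illustrative sketch of one common way it's computed from a frame-time log - conventions differ between capture tools - looks something like this:)

# Illustrative sketch only: average FPS vs "1% low" FPS from frame times in ms.
# Tools differ; this version averages the slowest 1% of frames.
def summarize(frame_times_ms):
    n = len(frame_times_ms)
    avg_fps = 1000.0 * n / sum(frame_times_ms)          # overall average
    worst = sorted(frame_times_ms, reverse=True)[:max(1, n // 100)]
    low_fps = 1000.0 * len(worst) / sum(worst)          # "1% low"
    return avg_fps, low_fps

# Example: mostly 10 ms frames (100 FPS) with a few 40 ms VRAM-thrash spikes.
frames = [10.0] * 990 + [40.0] * 10
print(summarize(frames))  # average ~97 FPS, but the 1% low is only 25 FPS

(That's the pattern in the numbers above: a handful of long frames barely dent the average but wreck the 1% lows, which is exactly what running out of VRAM feels like.)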

That isn't necessarily a problem, except $400+ GPUs are still being sold with 8GB VRAM and people would reasonably expect them to run 1080p balls to the wall. But they won't. I certainly wouldn't recommend buying an 8GB GPU in 2024 unless it was a great deal and you know what you're getting. Today it's just hitting ultra, but eventually high settings will be impacted too and who wants to be bothered?

We had an "8GB VRAM is enough for anybody for now" thread here on Ars and I agreed with you at the time, a year or two back. I figured with so many 8GB cards on the market, developers would be forced to support them. And to be fair they still are, just not at ultra or with path-tracing.

Regarding a theoretical $800 RTX5080 with RTX4090-level performance, that would be attractive. $1000 much less so.


https://youtu.be/nrpzzMcaE5k?t=729
 
No, Daniel Owen did a pretty extensive examination of this. If you run at ultra, many (the traditional definition of many) of the latest graphically intensive games will be impacted to various degrees with 8GB VRAM.

For example, Horizon Forbidden West shows a whopping 44% advantage in 1% lows for the 16GB card at 1080p very high, indicating dramatically smoother gameplay.

Very High is the Maximum setting for the game. Turn it to high and you are totally fine, even at 1440p. The visual differences are likely unnoticeable unless you're doing split-screen slow-motion comparisons.

That isn't necessarily a problem, except $300+ GPUs are still being sold with 8GB VRAM and people would reasonably expect them to run 1080p balls to the wall.

I completely disagree with this mindless "just run at max settings" mentality, unless you have a completely overkill monster like a 4090. There have been games in the past that would cripple even high-end GPUs at max settings. Plus, these days even medium settings can look amazing.

If you want to mindlessly run max settings, you are either looking at a high-end GPU, or maybe just getting a console if you want to avoid tweaking settings.

My favorite videos are the ones where they take a game and go through all the settings to create an optimized set that still looks nearly indistinguishable from max settings but uses less VRAM and performs significantly better.

Though I definitely would NOT recommend buying a $300+ GPU with 8GB. I'd either buy an entry level 8GB card, or pay enough to move to 12GB or more (I bought a 4070).

Those 3GB VRAM chips can't come fast enough for next gen. That should move anything but the lowest end to 12GB.
 
Very High is the Maximum setting for the game. Turn it to high and you are totally fine,
Yes, I mentioned that fourteen or fifteen times myself: just turn down the settings.

The real difference from our discussion 2 years ago is that 1080p gets hit too, but the conclusion is the same. Don't run at ultra.

If I'm paying >$400 for a GPU in late 2024, I damn well do expect games to run like buttah at 1080p. Maybe not path-traced, but otherwise yes, abso-fucking-lutely. The problem is not my expectations, the problem is those $400 GPUs simply aren't priced well, they're sidegrades, the Turing-redux. Hopefully Blackwell is Ampere-redux.
 

Demento

Ars Legatus Legionis
13,814
Subscriptor
I mean, I'm playing Forbidden West at all High settings right now. Nicely fluid, and good to see they optimised a bit better than some other recent ports did. Though Ghost of Tsushima ran like butter too, so I shouldn't be too harsh on the PS ports.

Just did a test and bumped FW's texture detail to VH. I still come in at 7GB of VRAM in use with all other settings at High. With DLSS and AMD framegen, it's 100+ FPS, though I've no idea how much of that is the frame generation. I just have a habit of leaving things at High, but it looks like FW could possibly handle VH throughout. I can't be bothered to test it all out though. It's very probable that the person in question tested it at release, rather than 8 patches later.
 

Demento

Ars Legatus Legionis
13,814
Subscriptor
That video was from 3 weeks ago. Suggest you check your 1% lows, not just the averages.
That's with me having a live FPS counter up. FW doesn't have a performance test mode. Again, VH textures only. The rest is High and I'm not going to devote more time to this. Frankly, if I can't see the dips, if they exist, does it matter?

Edit to add: I do think 8GB is Bare Minimum, btw. It's just not the total shitshow that some people seem to think it is. Run at high, never notice the difference, enjoy the greater framerates. I think only the 3070/Ti sticks out as something that should never have had 8GB. It was a perfectly reasonable total for the xx60 cards at the time.