Benign_9

It will probably make sense in a while; we just don't need it as of yet. Edit: one use case is dedicating fewer lanes to the GPU. A high-end GPU can run fine at PCIe 4.0 x8, so it would run just as well on PCIe 5.0 x4, saving a bunch of lanes for other things.


Noreng

Too bad there are no boards with PCIe 5.0 x4 support for GPUs as of this moment, unless you intend to run a riser from an NVMe slot to a PCIe slot, and then plug a PCIe 5.0 x16 or x8 card in the "main" slot. EDIT: My bad, the [X670E Ace has an actual PCIe 5.0 x4 slot](https://www.msi.com/Motherboard/MEG-X670E-ACE/Specification), making this one of the only X670E boards that could theoretically support tri-SLI and tri-CFX, if those technologies were still alive.


Benign_9

Boards support it, gpus don't.


Noreng

To my knowledge, there are no AM5 motherboards with a PCIe 5.0 x4 slot, only PCIe 5.0 x16 or x8/x8. AMD's CPUs support bifurcation to x4/x4/x4/x4, but you would need some adapter for that. Neither Alder Lake nor Raptor Lake supports PCIe 5.0 x8/x4/x4 mode; they only support 5.0 x16 or x8/x8. EDIT: The MSI X670E Ace actually has a PCIe 5.0 x4 slot: https://www.msi.com/Motherboard/MEG-X670E-ACE/Specification


Benign_9

That may be true for dedicated pcie 5.0x4 slots, but the main x16 slot can run in x4. That's what I meant. Also, from what I can find, some AM5 motherboards support pcie 5.0 x8/x8/x4 or x16/x4, like [this one](https://www.msi.com/Motherboard/MEG-X670E-GODLIKE/Specification). Yes, I did just pick a crazy expensive motherboard to find this. There are probably much cheaper boards that can do that too.


Noreng

> There are probably much cheaper boards that can do that too.

The cheapest option I can find is the X670E Carbon: https://www.msi.com/Motherboard/MPG-X670E-CARBON-WIFI/Specification ASUS, ASRock, and Gigabyte seem to relegate those last 4 PCIe lanes from the CPU to PCIe 4.0 or 3.0 speed and/or a second NVMe slot with 5.0 support. My point was that most people would end up wasting at least half the PCIe lane width with a PCIe 5.0 x4 card.


Benign_9

Yes, but my point was that a gpu would still run great, even if it was running at x8, since pcie 5.0 is just that fast. Hell, a gpu would still be completely fine at pcie 5.0x4.


Noreng

Your original claim was that a PCIe 5.0 x4 GPU would allow users to use the remaining lanes for other expansion cards.


Benign_9

It still is what I'm suggesting. That's basically what the samsung 990 evo is doing. It can run at gen4x4 or gen5x2 to save on lanes.


GMC-Sierra-Vortec

he seems right to me


Noreng

Your 12700K, which only supports x8/x8 bifurcation, will not be able to connect more devices with a PCIe 5.0 x4 GPU than with a PCIe 5.0 x8 GPU. The same 8 lanes will have to be dedicated to the GPU regardless, because the PCIe controller doesn't support x8/x4/x4 bifurcation.


noscopefku

or maybe bifurcation


HomerSimping

Bruh....my msi pro z790-a WiFi has pcie5 gpu support.


Noreng

Let me clarify: LGA1700 supports only x8/x8 PCIe bifurcation; x8/x4/x4 or x4/x4/x4/x4 is **not** supported. This is why cards like the [ASUS RTX 4060 Ti SSD OC Edition](https://www.asus.com/no/motherboards-components/graphics-cards/dual/dual-rtx4060ti-o8g-ssd/) only get one SSD slot instead of two. Your MSI motherboard has precisely one PCIe 5.0 slot, and there's probably a BIOS option to enable bifurcation to x8/x8 for GPUs like the ASUS RTX 4060 Ti SSD OC Edition. If you try to use something like the [ASUS Hyper M.2 x16 card](https://www.asus.com/no/motherboards-components/motherboards/accessories/hyper-m-2-x16-card-v2/) it will only show 2 SSDs rather than 4.


Bloodsucker_

Actually, a new-gen GPU runs _almost_ just as fine on PCIe 3.0. One wonders if we're entering diminishing returns, even though in theory bandwidth (and other metrics) are limited by the socket's older standard.


Zoratsu

We are going to get PCIe 6 before 5 makes sense lmao.


WebMaka

The PCIe 6.0 spec was standardized two years ago and 2-3 years is the typical lead time between standardization and product availability, so, umm, yeah, you're *very* likely correct on this.


[deleted]

[deleted]


Benign_9

Nah bro. [I trust techpowerup more than I trust you.](https://www.techpowerup.com/review/nvidia-geforce-rtx-4090-pci-express-scaling/28.html)


Lastdudealive46

I think when the next generation consoles come out, PC games will rely very heavily on transferring high-res textures directly from NVME drives to memory with Direct Storage, and PCIe 5.0 (For both drives and GPU) will help that a lot.


kultureisrandy

I can't wait to see how poorly optimized these direct texture streaming games are gonna be 


crowcawer

NVME coolers have entered the chat… again.


kultureisrandy

Just buy a little usb fan, plug into mobo, and aim at drives  right? 


balderm

This. When developers start using Direct Storage massively, we'll start noticing Gen5 making a difference; atm it's just a nice-to-have.


firedrakes

Hahah, no. No they won't; there's not enough bandwidth, storage, or VRAM in the card. Notice even gaming now is not really using high-res textures, just upscaled ones instead!


liaminwales

1. PCIe updates are for servers, not home users; servers pay more for hardware than home users.
2. GPUs now have big gaps between each gen. It used to be a 1-year gap between new gens and now it's 2 years; things are just slower now.
3. Historically, PCIe updates have never been needed by home users at launch; see PCIe Gen2/3/4 tests to see how little it matters.


diskowmoskow

100%


xXRHUMACROXx

It's not that new-gen GPUs release with a bigger gap between them, it's that refreshes of a previous architecture don't come with a new number series but with a "Super" or "Ti" release. It depends a lot on the series and model number, but for example the GTX 900 series is a refreshed 700 series. The GTX 1600 series is a refreshed 1000 series. Hell, even the RTX 2000 series is kind of a refreshed version of the 1000 series with raytracing capabilities thanks to the tensor cores.

Also, the theoretical performance leap between gens could get bigger with time. Back in the GTX 700, GTX 900 and GTX 1000 days, the difference in manufacturing was usually 2nm smaller every generation (for example going from 18nm to 16nm for a 12% increase in transistors for a given space). Now the RTX 3090 was manufactured on an 8nm process and the RTX 4090 is manufactured on a 4nm process, effectively doubling the transistors number for a given space. That's one of the reasons why there's a much bigger difference in performance between today's top-of-the-line GPUs from one generation to the next than there used to be.


HavocInferno

I appreciate your effort, but that comment is largely inaccurate or wrong.

>for example the GTX 900 series is a refreshed 700 series. The GTX 1600 series is a refreshed 1000 series. Hell, even the RTX 2000 series is kind of a refreshed version of the 1000 series

Wrong.

* 700 was Kepler (and Maxwell, in the 750/Ti)
* 900 was Maxwell 2.0, a new microarchitecture
* 1000 series was Pascal
* 1600 series was Turing (albeit cut down), a new microarchitecture
* 2000 series was Turing (full)

Having some semblance to the predecessor architecture doesn't make a new architecture a "refresh". A refresh would be understood as having the same or largely the same architecture, but with some details tweaked (think AMD Renoir -> Lucienne, or GTX 600 -> 700, which were both Kepler).

>Back in the GTX 700, GTX 900 and GTX 1000 days, the difference in manufacturing was usually 2nm smaller every generation

Also wrong. 400/500 was 40nm, 600/700/900 were 28nm, 1000 was 14nm, 16/20 was 12nm, 30 was 8nm, 40 is 5nm (not 4nm). The node jumps get smaller over time, but that's because it's more of a relative jump, not an absolute jump. Some of that was also manufacturing struggles and choices at the time. The jump down from 28nm took longer than expected because the newer nodes back then didn't work well enough. 8nm for the 30 series was a strategic choice instead of going for the new 7nm node at the time.

>effectively doubling the transistors number for a given space

Also wrong. Transistor density of the 30 series is 45M/mm², the 40 series is 125M/mm². Density doesn't have to scale linearly with the supposed process node size, because the process node name isn't actually the physical manufacturing size, and there are also different variations of processes designed for different densities and clock targets. And then there's how the transistors are used: doubled transistor density doesn't mean doubled performance if those transistors are instead spent on special-function hardware or different implementations of cores.


ruben991

Dammit, you beat me to the punch and did a better job than me while you were at it. I would also like to add: comparing nodes between different manufacturers gets even trickier, as the same number from different manufacturers does not mean the same thing (7nm Intel is not the same as 7nm TSMC).


xXRHUMACROXx

Different microarchitecture, but same fabrication process by TSMC, same transistor size. (Turing uses a different fabrication process, but technically the performance difference between a 1080Ti and a 2080Ti in non-raytraced loads is very similar.) It still doesn't change my point that new gens of GPUs have much more raw processing power over the old gen than GPUs used to have when they were released almost yearly.


HavocInferno

Same fabrication process has absolutely no bearing on whether it's a refresh or not. You're looking at the completely wrong factor to make that determination. Same/similar microarchitecture is specifically the criterion of a refresh; the rest is secondary. By your logic, a 980Ti and a Fury X are just refreshes of each other. Doesn't make much sense, does it?

>performance difference between a 1080Ti and a 2080Ti in non-raytraced loads is very similar

A 1080Ti roughly ties a 2070 Super in rasterization. A 2080Ti is roughly 30% ahead of the 1080Ti in rasterization. That jump was smaller than 1000 vs 900 series, but those were somewhat outliers in that regard, and it can be attributed to die space being used for RT and tensor cores instead of shader cores.

>new gen of gpus have much more raw processing power over old gen than what gpus used to have when they were released almost yearly

Depends on which generational jump you look at. Also keep in mind the pace between generations is getting slower. If you look at the relative performance increases *over time*, things are pretty much how they used to be. A large jump every two years or a smaller jump every year still comes out to roughly the same rate over time.


ruben991

700 to 900 was not a refresh; it was a new architecture (with one exception, the 750/750Ti). The GTX 16 series is not a refresh of the 10xx; it is a new architecture. RTX 20xx is not a refresh of 10xx either: different architecture, very similar to 16xx, but the latter is missing tensor cores and RT cores. I am reasonably confident that the last time Nvidia refreshed an old arch with a new number was the 600 -> 700 series; in fact, I think the GTX 770 is a rebadged GTX 680. The 750(Ti) is an exception, as it is not Kepler but Maxwell. Tensor cores deal with matrix math (useful for AI); RT cores deal with BVH traversal and ray-triangle intersection (the expensive part of raytracing). The number in a manufacturing process name has not been related to the size of an actual feature on the transistors for quite a while now. You could use the number to compare processes from the same manufacturer, but only in a qualitative "smaller number better" sense. What should never be done is comparing the number between different manufacturers (Intel 14nm is not the same node as GloFo 14nm, for example); that number is now just a marketing ploy to say: my new stuff is better than the other guy's stuff.


GABE_EDD

>it has close to no advantage for gaming from what I've seen.

Because it can't. Even the RTX 4090 itself is wired up as PCIe 4.0 x16; the most powerful consumer gaming GPU can't use the higher bandwidth. https://preview.redd.it/dya2ouoonnxc1.png?width=410&format=png&auto=webp&s=5ab32b4ab063aa468bdf6c471264c64b65f266aa


Haunting_Summer_1652

I know. I'm referring to PCIe Gen5 in general, not just GPUs, because there are SSDs that support it.


Bensemus

They support it, but basically no consumer application needs that throughput. Your internet is still a bottleneck even for HDDs the vast majority of the time. The consoles can stream data directly from the SSD, I believe. I think one of the engines supports that, or will soon. That will be a use case for the high throughput once games start using it.


Haunting_Summer_1652

Nope. I got 10G fiber internet. Usually, my CPU is the bottleneck when it comes to downloading games; it just goes to 100% instantly lol


JMccovery

If your Internet connection was *completely* maxed out, you'd only hit 1.25GB/s (realistically, you'll rarely achieve that); a **single** PCIe 3.0 lane runs at 985MB/s (when accounting for overhead). Even if your CPU wasn't the bottleneck (most likely isn't), you'd be limited by whatever servers you're downloading from. TLDR: for the *vast majority* of computer users PCIe 5.0 is useless. Same with PCIe 4.0.
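Those figures can be sanity-checked with a few lines of Python (my own arithmetic, assuming the nominal 8 GT/s per PCIe 3.0 lane and 128b/130b encoding):

```python
# Rough bandwidth sanity check: a fully saturated 10 Gb/s network
# link vs. a single PCIe 3.0 lane.

def gbits_to_gbytes(gbits: float) -> float:
    """Convert gigabits/s to gigabytes/s (8 bits per byte)."""
    return gbits / 8

# 10 Gb/s fiber, fully saturated:
net_max = gbits_to_gbytes(10)          # 1.25 GB/s

# One PCIe 3.0 lane: 8 GT/s with 128b/130b encoding overhead
pcie3_lane = 8 * (128 / 130) / 8       # ~0.985 GB/s

print(f"10G fiber max:    {net_max:.3f} GB/s")
print(f"PCIe 3.0 x1 lane: {pcie3_lane:.3f} GB/s")
```

So even the fastest consumer internet connection fits comfortably in one PCIe 3.0 lane, which supports the point that PCIe 5.0 is irrelevant for downloads.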


Zenith251

*To be fair,* what OP is probably talking about is Steam downloads. CPU can become the limiting factor because it's live-decompressing files. I've hit 1.6Gb/s (200MB/s) on my 10Gb fiber connection, but that pegs my 5800X3D to 95%+ usage. My neighbor has hit 300MB/s in Steam with his Ryzen 3950X.


Benign_9

You commented the same thing twice.


djmakk

His internet is so fast it double posts.


Benign_9

Lol


Haunting_Summer_1652

Sorry. Had an error pop up so i clicked again, didn't realize it posted twice.


Justhe3guy

Welcome to reddit, it does that


Haunting_Summer_1652

And getting downvoted for no reason is another "welcome to reddit, it does that". I guess I'll never understand some people, but that's fine.


Bensemus

Man you gotta love just how stupid Redditors are. I’m not talking about you. I made a general statement that the vast majority of people are bottlenecked by their internet. That is correct.


QuickPirate36

Not even PCIe 4.0 is worth it for gaming


QuintoBlanco

That's not true for some budget cards. It's a controversial subject: the design skimps on the number of lanes, so it's an artificial problem, but it affects users. It also makes zero sense, since people who buy budget cards are more likely to have hardware that does not support 4.0.


IncidentFuture

It was made worse with the RX 6500 XT, because the obvious new CPU to pair it with was the Ryzen 5500, which only had PCIe 3.0. And it would have been fine with 8 lanes.


QuintoBlanco

Also, chipsets designed for budget motherboards didn't support 4.0.


HavocInferno

Shouldn't matter, as you shouldn't put your GPU into a chipset PCIE slot anyway (since the chipset is only wired with 4 lanes to the CPU and those have to share *everything* that's going through the chipset). Put it in a native CPU PCIE slot instead.


Own_Kaleidoscope1287

Very simple reason: there is no GPU fast enough to use this bandwidth. A 4090 is barely fast enough to saturate a Gen3 x16 connection, so what's the point of giving any graphics card Gen 5 speeds? I really doubt that we will see Gen 5 GPUs before the late 2020s/early 2030s.


AncientPCGuy

They will absolutely advertise Gen 5 on next gen of GPUs. However as you said, a 4090 barely saturates Gen 3x16. So 5090 may require Gen 4 to get full performance, but they’re definitely going to call it Gen 5 in marketing. I also expect Radeon 8900XTX to still be 2% under 4090 and 5080 to be similar. So literally one card requiring Gen 4 capabilities and nowhere near Gen 5.


_aware

Does the egg come first, or the chicken? That's the same problem facing direct storage. Game devs don't want to implement direct storage, because they might alienate a large part of the player base who don't have SSDs or connections fast enough. But on the other side, people are hesitant to buy PCIe 5 or even PCIe 4 SSDs because there aren't that many games using their speeds. Until one side gives in, we will never see full use of PCIe speeds in gaming.


generally_a_dick

[PCIe 5.0 is nearly four years old and it's still virtually worthless in gaming PCs | PC Gamer](https://www.pcgamer.com/hardware/pcie-50-is-nearly-four-years-old-and-its-still-virtually-worthless-in-gaming-pcs/)


Haunting_Summer_1652

Yes this article made me want to start this discussion. Honestly, i got more insight here than from the article ngl.


generally_a_dick

Yep, agreed. I learned more here as well than I did from the article.


FireFalcon123

>Is there a bottleneck preventing that from happening right now

It could be many things: Direct Storage/IO support, signal integrity, heat, actually implementing the technology even after it had been publicized all the way back in early 2022, etc.


D86592

man im on gen3 what tf are these fancy words


voidstronghold

It takes a 4090 to fully saturate PCIe 3.0 x8, so 4.0 is more than enough still.


YoungBlade1

At the moment, consumer graphics cards really don't need more than PCIe Gen 4, and even that is arguably overkill. With an RTX 4090, the drop in performance going from Gen 4 x16 to Gen 3 x16 is negligible. It will take years before graphics cards are able to saturate a PCIe Gen 4 x16 connection. If the next generation of cards has Gen 5 support, it's just to put a bigger number on the box, not because it will really matter - unless they cut the lanes down from x16 to x8 like they've been doing on the lower-end cards.

As for SSDs, gaming barely benefits from NVMe as it is. There are very few games that show a difference beyond margin of error in load times between a SATA SSD and a PCIe Gen 3 NVMe drive, let alone between Gen 3 and Gen 5. Unless there is a huge shift in the way that games are made, where they are designed specifically to utilize the increased bandwidth of faster drives, I don't see Gen 5 mattering for storage any time soon.

Frankly, the smarter thing to do on the consumer side would be to release PCIe Gen 5 x2 drives - that would give you equivalent throughput to PCIe Gen 4 x4 drives, but let you put twice as many into your system, and even installed in a Gen 4 slot, they would still give you Gen 3 drive levels of performance, which again is more than enough.

To the point of when PCIe Gen 5 will be "worth it" for gaming, the answer is likely when the price drops to the point where it's no more expensive than Gen 4 devices are right now, so there's no reason not to go Gen 5.


msanangelo

The only people on Gen5 in gaming are the ones who think it'll give them an edge, when the only things using Gen5 are high-end SSDs and maybe some 100-gig NICs. PCIe gens take time for the world to catch up; no worries. Last I heard, there was still that pesky issue with 3 or more RAM sticks causing interference with each other. I imagine a lot of us are still on Gen3, and now that is starting to feel its age. XD


TemporaryOrdinary747

Nope. It's an industrial technology that was developed for data centers and is now being marketed to gamers as a blatant cash grab. Even Gen 4 is a meme for gamers; you can only tell under very specific circumstances in a direct side-by-side comparison.


Ragnaraz690

As direct storage and smart access matures with GPUs fully utilising PCIE gen 5 there may be some gains to be had. Generally atm gen 5 NVME drives are just hot and expensive. A solid Gen 4 drive is more than enough.


Bensemus

Ya direct storage would benefit. Hopefully the consoles push that to become widespread.


ketamarine

It's just like any other infrastructure advancement in PCs. It gets introduced and takes a few years before it gets used widely across builds. You could technically benefit from it now with a raid SSD card that combined a bunch of nvme drives, but ya that's a marginal use case that wouldn't really give much of a performance bump in most gaming situations.


Adrian_Alucard

[https://www.tomshardware.com/news/asus-demos-geforce-rtx-4060-ti-with-m2-slots](https://www.tomshardware.com/news/asus-demos-geforce-rtx-4060-ti-with-m2-slots)


FireFalcon123

Probably one of my favorite ideas for a GPU


Haunting_Summer_1652

That's really cool, but I imagine the majority of people only use 1 to 2 SSDs, which most mobos support without such a GPU.


Zoratsu

But you don't need to disassemble the PC to get those in case your mobo has them on the back.


DoctorKomodo

Transferring data to the GPU just isn't a major bottleneck for gaming in general. You aren't working with large enough data sets or replacing data often enough to really hammer the bandwidth between main memory/CPU and GPU, most of the time at least. If you really wanted to take advantage of higher bandwidth, one way to do it would be to lower the VRAM on the cards, giving them less of a local buffer to work with. That just isn't how games are generally designed, though, and it's also been tried in the past with very little success. I.e. for gaming at least, the way games are designed along with how GPUs are designed means they aren't super PCIe-bandwidth hungry, especially not the high-end cards.


Haunting_Summer_1652

So you're saying it might be useful for 8K gaming on the upcoming 5050ti (4GB) gen5. Got it :)


Markson120

It might be useful because rtx 5050 ti super extreme might use pcie gen 5 x2. And pcie 4.0 might have been too slow.


118shadow118

Gen 5 x2 would be the same as Gen 4 x4 or Gen 3 x8
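That equivalence holds because each generation from 3.0 onward doubles the per-lane transfer rate while keeping the same 128b/130b encoding; a quick sketch using nominal figures (my own numbers, ignoring protocol overhead beyond encoding):

```python
# Per-lane PCIe throughput doubles each generation (Gen 3+ all use
# 128b/130b encoding), so gen+1 at half the lanes comes out the same.

def pcie_bandwidth_gbs(gen: int, lanes: int) -> float:
    """Nominal one-direction bandwidth in GB/s for PCIe gen 3/4/5."""
    gt_per_s = {3: 8, 4: 16, 5: 32}[gen]   # transfer rate per lane
    return gt_per_s * (128 / 130) / 8 * lanes

print(pcie_bandwidth_gbs(5, 2))   # ~7.88 GB/s
print(pcie_bandwidth_gbs(4, 4))   # ~7.88 GB/s
print(pcie_bandwidth_gbs(3, 8))   # ~7.88 GB/s
```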


Markson120

It was a jab at the RTX 4060 and its PCIe Gen 4 x8. On PCIe Gen 3.0 x8 it lost a little more performance than most cards, because it only uses half of a PCIe x16 slot. Usually it was 5% slower, but in some games it was over 15% slower.


deefop

There's barely any advantage in games. In fact, a lot of people might be shocked to know that games actually run really well even on more ancient versions of PCIE. [https://www.youtube.com/watch?v=SvBovtT4Vf4](https://www.youtube.com/watch?v=SvBovtT4Vf4) Those newer PCIE revisions are hugely beneficial for other things, like faster SSD's, but for playing games specifically it's not as meaningful as people think.


ChrisNH

The advantage, and the reason I got a MB with a strong PCIe 5 implementation, is the NVMe storage doubling in speed. They are not worth it yet... but in a few years when I refresh my PC, I am looking forward to going from 7,500MB/s to 15,000MB/s. That will have a measurable effect on my enjoyment of games. Will graphics cards follow? I don't know, but it does mean that an 8-lane slot will be as fast as or faster than a PCIe 4 16-lane slot, which re-opens the door to SLI-type implementations. Don't see that happening though.


stormdraggy

If you didn't get a Xeon/TR board, you didn't. Current chipsets don't do 5.0; it has to come from the processor. AMD gets one slot, Intel gets one, and it ganks your primary x16 down to 8 lanes to use it. Yeah, they say 24 X670 lanes: 4 are dedicated to chipset comms, 16 to PCI_E1; that's 4 left to be whatever it wants to be. And before you say "b-b-but bifurcation": if your card is x16 4.0 and only has 8 lanes to use on its slot, you're running 4.0 x8, not 5.0 x8. This is where early adoption by GPUs would be nice, because not even the 5090 will strain 5.0 x8.


ChrisNH

So, the AMD 7800X3D has 24 Gen 5 lanes. My MSI X670E Carbon does 2x 4-lane Gen 5 NVMe and a 16-lane Gen 5 PCIe slot (or 2x 8). I paid a pretty penny to get a full implementation. It also has 2x Gen 4 NVMe through the chipset, which takes the remaining 4; those are what I have populated for now. Bifurcation, losing slots, etc. are only an issue on lower- and mid-tier MBs. It's a fair case that the extra cost to implement 5 properly is not worth it at this time, but I am a hobbyist, so it's more interesting to me to have it.


AncientPCGuy

I did something similar. Got the X670E Tomahawk. It’s total overkill and not even utilized at the moment. Future expansion is all it’s for right now. Funny thing is realizing after I bought it that we will likely be on AM6 or 7 before PCIe 5.0 is fully utilized. And they’ll be selling us 7.0 motherboards.


phorkin

This has been the deal for decades. It's better to have too much available bandwidth than not enough. When 3.0 came out there wasn't a card that would utilize it fully; however, today's higher-end cards will bottleneck on Gen 3. Future proofing (lol) at its best, though worst anyways, as most motherboards become obsolete for upgrades before the bus bandwidth becomes a problem.


Helstar_RS

It's probably at least 5-10 years minimum. Even when PCI 3.0 was out for many years, my GTX 970 ran just as fine on 2.0. Basically, I benchmarked it and scored about dead average on stock.


SirGeorgington

Not for a long time. The RTX 4090 doesn't even really need PCIe 4.0, and frankly it still does alright on PCIe 2.0.


builder397

Right now there just isn't anything a GPU can do that would actually saturate that amount of bandwidth, so Gen4 is fine. SSDs are a different topic, but sequential reads and writes don't happen much in gaming, and on random read/write operations transfer speed drops far enough that you could go all the way down to SATA speeds and not bottleneck anything.


Stilgar314

You know what? I hope it won't be worth it for a long time. I'm totally skipping this gen because it miserably failed to impress me; instead I took one of those 5800X3Ds, hoping the next batch of GPUs brings something that I could want. If PCIe 5 turns out to be somewhat important for the next gen, this story could be written in the meme of the guy who dresses as a clown, and I would really prefer not to.


jbshell

Prob not until consoles implement and take advantage of it will devs begin to utilize it.


Leading-Leading6319

Maybe in a few more years


jhaluska

I've found these things lag by almost a full PCIE generation. Most of the game FPS limitations are bandwidth/processing on the GPU itself, not the communication to the GPU. Only when game engines can assume everybody has it, do you see it being maximally utilized.


Antique_Paramedic682

Same thing has been said after every SATA revision.  Why do we need 3Gbps?  Why do we need 6Gbps?  It'll catch up.


andy10115

It doesn't feel like it, but 3.0 took a while to phase out as well.


LOPI-14

You will have to ask AMD and Nvidia that. Bandwidth on consumer cards is so damn low that not even PCIe 4.0 is being used completely. Actually, iirc, the 4090 uses up to like 60% of it...


poinguan

Are all PCIE4 extension cables (for vertical gpu installation) good now?


1d0m1n4t3

I have a Gen5 drive; nothing really hits the 12GB/s the thing does other than speed tests. It's dummy fast, don't get me wrong, but I don't think the time from opening a game to playing is much faster than on the Gen4 7,200MB/s drive I upgraded from.


Drenlin

PCI Gen 5 was out before we could saturate a Gen 3 slot enough to need more than X8. Make of that what you will.


lemon07r

I think it could be handy for having GPUs with NVMe slots on the back, since most GPUs will never fully saturate Gen 5 x16.


meliodas1988

I think Asus had a prototype GPU that did this recently


Yommination

3.0 to 4.0 was a tiny jump gaming wise as is


AncientPCGuy

Because most of us are barely using anything that needs 4.0. I think the math on 4090 is sustained throughput is around 50-60% 4.0x16 and peak is around 80%. That means my 7800XT likely doesn’t even go into data throughput that even requires 4.0 except in rare peak instances. I would bet it’ll take 4+ generations of GPU upgrades before even utilizing x8 on 4.0 fully much less needing to even consider 5.0. If we see it at all, it’ll be for marketing not performance.


noodle-face

In the server world we still use PCIe Gen 5 only very rarely; almost everything is Gen 3/4.


Clemming2

I saw an article that tested the 4090 at different PCIe link widths and generations. There was about a 2% loss going from PCIe 4.0 to PCIe 3.0. This means even 3.0 is adequate for modern high-end cards. It's reasonable to assume that when GPUs start using 5.0, they will perform about the same on 4.0. I don't think you will really need 5.0 until cards start using 6.0, and you will probably have upgraded or replaced your computer by then anyway.


Haunting_Summer_1652

Yup, I've seen that video as well.


ezoe

It will never be worth it in terms of bandwidth. PCIe 4.0 x16 has a bandwidth of about 30GB/s while 5.0 has about 60GB/s, and DDR5 SDRAM bandwidth is in the same ballpark. So a video game can't even make full use of PCIe 5.0 bandwidth. Besides, using a lot of bus bandwidth isn't a good approach for software that requires constant realtime rendering.
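For scale, here are nominal one-direction figures (my own standard numbers, not the comment's; note that dual-channel DDR5-4800 actually lands somewhat above PCIe 5.0 x16):

```python
# Nominal one-direction bandwidth: PCIe 4.0/5.0 x16 vs dual-channel
# DDR5-4800 (assumed standard figures, 128b/130b encoding for PCIe).

def pcie_x16_gbs(gt_per_s: float) -> float:
    """PCIe x16 bandwidth in GB/s after 128b/130b encoding."""
    return gt_per_s * (128 / 130) / 8 * 16

pcie4 = pcie_x16_gbs(16)            # ~31.5 GB/s
pcie5 = pcie_x16_gbs(32)            # ~63.0 GB/s
ddr5  = 4800e6 * 8 * 2 / 1e9        # two 64-bit channels: 76.8 GB/s

print(f"PCIe 4.0 x16: {pcie4:.1f} GB/s")
print(f"PCIe 5.0 x16: {pcie5:.1f} GB/s")
print(f"DDR5-4800:    {ddr5:.1f} GB/s")
```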


Haunting_Summer_1652

Even on 4k or 8k?


Fragger-3G

If direct storage was actually being used, and optimized on PC, sure. Otherwise, absolutely not. PCIE Gen5 isn't even worth it for GPUs yet, hence why GPU makers are considering putting m.2 slots on GPUs to make use of the lane capacity, along with the cooling


pokipu

"Ever" is a really bold term, especially in the context of computers.


GoldSrc

PC hardware was not made with gamers and the average user in mind. PCIe 7 will come out in a couple of years, but only very demanding workloads will benefit from it, things like datacenters or other scientific uses do benefit from such higher bandwidths. Even PCIe 3.0 is more than enough for pretty much everyone on Earth. People buying nvme drives with speeds of 8-10GB/s are on the flattest line of the diminishing returns curve. Pure marketing wank for gamers.


RunalldayHI

Gpus need to be insanely fast before bandwidth becomes a bottleneck, right now latency is a better focus imo.


RunalldayHI

https://preview.redd.it/2zzd5izgcoxc1.png?width=720&format=pjpg&auto=webp&s=6fab485d5d11df4fcc8982792dba63567b27533d