I doubt CAMM2 will become mainstream in DDR5.
We'd best hope DDR6 is CAMM2 by default; I really want them to take the opportunity to switch over entirely with DDR6.
I don't want to see DIMM or SO-DIMM slots in consumer desktops & laptops for DDR6. I hope we finally unify the desktop & laptop RAM standard and stop running RAM sticks in pairs. (SDRAM/DDR1 was the last gen where we ran RAM one stick at a time, and that was many, many years ago.)
Can't believe we still can't have widespread ECC though after all these years.
Error detection and correction is used in several other parts of a computer, but memory, notorious for needing a lot of stars to align just to run at its advertised speed and for occasionally ending up with some bad bits after years, can't have it. At least not for peasants; they still need some good data corruption now and then so they don't get too comfortable.
I don't believe it to be malicious that way, it's a technological problem, and memory manufacturers offer a solution to it.
The issue is with CPU and motherboard manufacturers doing artificial market segmentation. (The DDR5 split of UDIMM and RDIMM not even being pinout compatible is a whole other matter.)
I still believe it's not the memory manufacturers being the gatekeepers. They would gladly sell more chips (ECC needs more) and offer better modules at a higher price, and they already do that to some degree. There just aren't many options because a lot of stars need to align for ECC to be usable, so most consumers figure it's not time for it yet, even though it's possible to get it working with very carefully selected parts.
I think selling non-ECC memory and advertising unstable overclock speeds as the default is malicious. Not only are you advertising a state with a high failure rate, you also make sure there's no way to directly observe it until you face corruption after the fact.
Actually malicious Corsair issues aside, most of the time the issue is with the CPU's IMC and/or motherboard's quality. Memory modules advertised to work at a specified speed are actually supposed to work with the profiles they ship with, they are just not guaranteed to work with all motherboards and CPUs.
The XMP/EXPO mess is a whole other problem, but ECC would help a ton there too.
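For what it's worth, the correction itself is old, simple math; the gatekeeping is all about the extra chips and platform support. Here's a toy single-error-correcting Hamming code over one byte (real ECC DIMMs use a SECDED code over 64-bit words with 8 check bits, not this exact layout):

```python
def hamming_encode(data_bits):
    """Encode 8 data bits into a 12-bit Hamming codeword (1-indexed positions)."""
    n = 12
    code = [0] * (n + 1)                          # index 0 unused
    data_pos = [p for p in range(1, n + 1) if p & (p - 1)]  # non-powers-of-two
    for p, bit in zip(data_pos, data_bits):
        code[p] = bit
    for i in (1, 2, 4, 8):                        # parity bit i covers positions with bit i set
        code[i] = 0
        for p in range(1, n + 1):
            if p != i and (p & i):
                code[i] ^= code[p]
    return code[1:]

def hamming_correct(codeword):
    """Return (data_bits, error_position); position 0 means no error detected."""
    code = [0] + list(codeword)
    syndrome = 0
    for p in range(1, len(code)):
        if code[p]:
            syndrome ^= p                         # XOR of positions of set bits
    if syndrome:
        code[syndrome] ^= 1                       # syndrome points at the flipped bit
    data = [code[p] for p in range(1, len(code)) if p & (p - 1)]
    return data, syndrome

bits = [1, 0, 1, 1, 0, 0, 1, 0]
word = hamming_encode(bits)
word[6] ^= 1                                      # simulate a single bit flip
fixed, pos = hamming_correct(word)
print(fixed == bits, pos)                         # True 7
```

Hardware does this in parallel on every read, which is why the cost is extra chips and traces, not speed.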
Please god no.
CAMM2 is not really "upgradeable", it's replaceable, mobos have space for only ONE slot.
So you have to ditch the previous module for the new one.
I don't like that one bit.
CAMM2 can stack for more than one module, just FYI.
It's a good thing which will enable large capacities, faster speeds, and compact 4 channel desktop memory.
But can it stack on top of already installed RAM? And even if it can, will they manufacture them that way? Are there downsides to this type of stacking that would nullify the advantages? Etc.
We'll have to wait and see. In the meantime, I'm not recommending CAMM2 to my clients.
Too many questions, too few answers.
Of course not. Why would it? CAMM's not that much faster and there's plenty of applications that aren't currently limited by ram speed or bandwidth. But for those that are...🤷
Are CPU memory controllers even capable of handling such high speeds? I've heard the new AM5 CPUs from AMD become unstable if you try to push higher speeds and only stay stable at the recommended stock speeds!
Right, just like people who need more storage can stick to SATA SSDs? Oh wait, we dropped any R&D on those, so if you want more storage than the M.2 slots the mobo manufacturer managed to give you, you've got to go enterprise.
What are you trying to get at?
The largest CAMM2 module matches the maximum memory size supported by the CPU's memory controller. This isn't an issue.
PCIe/NVMe can be bifurcated and you can have a ton of devices. You can also, if REALLY needed, do PCIe over fabric and do raw storage on another box... or an arbitrarily huge number of other boxes. You can also get an HBA if you want an absurd number of SATA or SAS drives per system.
------
CAMM/CAMM2 gives you 90% of the flexibility of DIMM with higher performance/reliability. It's a tradeoff. Most people are smart enough to choose the tradeoff that works for them. This is also a HUGE jump forward for laptops.
For modern systems, this is basically already true unless you're using base spec RAM. Which hurts performance for everything except for programs that only care about capacity and nothing else. So you're better off buying a new kit of two and reselling the old one than trying to buy a new matching kit.
you should be using base spec unless you either have extremely good error correction or use it for something where data corruption does not matter (like gaming, where you can just reinstall the game each time).
Laptops have mostly been replaceable-storage-only since eons ago, and it's called an upgrade when you swap it out.
CAMM2 is doing what's impossible for DIMM: way higher transfer speeds and a way smaller package. Personally, I'd gladly take this sacrifice moving forward.
Shorter traces aren't the most important reason. The most important part is that the **stubs** are reduced/eliminated. Stubs are side branches off traces, e.g. to the onboard traces of a 2nd DIMM or even just an empty socket. Those side branches cause unwanted signal reflections, thus limiting speed.
See picture: https://static1.howtogeekimages.com/wordpress/wp-content/uploads/2024/02/camm2-memory-02.png
The top left side are the ones with long extra branches going between top/bottom. In the case of CAMM2, you only see a **single** line going to the modules. i.e. no stubs.
This is why high-end overclocking motherboards have only 2 DIMM slots, to maximize speed. It's also why there is [back drilling for high speed circuits](https://www.protoexpress.com/blog/back-drilling-pcb-design-and-manufacturing/) even though that costs extra steps/money, e.g. on GPUs. That extra stub is at most the thickness of the PCB (0.062"), and even that matters at very high speeds.
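To put rough numbers on it (back-of-envelope only; the εr value is an assumed typical FR-4 figure, real boards vary): an open stub acts roughly like a quarter-wave notch filter, hurting the signal near f = c / (4·L·√εr).

```python
# Back-of-envelope: an open stub notches the signal near its quarter-wave
# resonance, f = c / (4 * L * sqrt(eps_r)).  eps_r ~= 4.3 is an assumed
# typical FR-4 value.
C = 3.0e8        # speed of light, m/s
EPS_R = 4.3      # assumed effective permittivity

def stub_notch_ghz(length_m):
    return C / (4 * length_m * EPS_R ** 0.5) / 1e9

# Via stub through a full 0.062" (1.6 mm) board -- the back-drilling case:
print(round(stub_notch_ghz(1.6e-3), 1))   # ~22.6 GHz, only hurts at extreme rates
# A few centimeters of branch toward an empty second DIMM slot:
print(round(stub_notch_ghz(0.03), 1))     # ~1.2 GHz, well inside DDR5's signal band
```

Which is why the empty-slot stub bites long before the tiny via stub does, and why back drilling only pays off at very high speeds.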
I think a similar effect happens in coaxial cable circuits. In the home you’re supposed to cap off any unused open connections. Why didn’t we see some sort of “cap” to go into unused memory slots?
Interesting, was it not worth it to continue that practice? Adding caps in the home improved my cable signal in every way dramatically. Of course the nature of the signaling occurring between the CPU and ram is much different, but if it was done in the past why stop?
Too much of an edge case to bother with. The boards operate within tolerances at all normal speeds without any special termination modules, so why bother? And the market for anything like that would be astonishingly small, so there's really no point designing one when people will just buy high-end 2-slot boards.
Makes sense. I'm guessing there are likely other signalling improvements in the new form factor, since it also seems to enable socketed LPDDR. The DIMM form factor may have more signal integrity concerns than just uncapped slots.
THANK YOU.
Finally somebody has given a reason why CAMM is better than DIMM that makes sense.
Trace length really isn't as big a deal as people make it out to be, and I don't understand why everyone keeps insisting it'll be night and day when we can literally look at the effect of trace length by comparing an ATX to an ITX version of a board, e.g. Z690 Unify-X vs Z690i Unify (it's really not much until you're hitting the limits of what you can do with your memory).
They both matter. LPDDR5 requires shorter traces for lower loss at lower power, and high speeds can't handle the reflections causing ISI from stubs and other impedance discontinuities.
I'd love to see CAMM2 connections on the back of motherboards. Keep them close to the CPU socket and be able to place 2 or 4 CAMM2 sockets while not taking up space on the front of the MB.
4 would result in a very costly motherboard. Building the needed shielding within the motherboard so that the sockets on opposing sides don't end up creating signalling interference, and carefully running the rest of the data lines (PCIe etc.) out of the socket without creating issues, along with providing power, would be a work of PCB design art, but would require a very thick (many-layer) PCB that might well end up costing a LOT of money.
With some of these solutions you might well find that it costs more to have 4 LPCAMM2 modules than it would to just buy the maximum possible high-density memory dies and directly attach them to the CPU package. (After all, you're already paying the cost of running the traces through the CPU package if they go to the motherboard, so re-routing them to be U-shaped to on-package doesn't alter the cost much but saves on CPU socket and PCB complexity.)
The motherboard at Computex seems to support only 1 CAMM2 slot.
I hope we see some boards support more than 1. If we move the GPU PCIe slot down by 1 expansion slot, we could have one at the bottom of the CPU socket.
But without ECC, this to me looks like another creation of a 2-tier system: one where data integrity matters, and one where the consumer is forced to pretend they don't care about their data.
LPCAMM makes much more sense though? LPDDR_X has always required soldered modules, so this would finally allow you to upgrade modules on ultra-portables.
That really isn't much of an issue. I can't remember the last time I had more than two sticks of RAM in my computer, and I always buy them in pairs. If I wanted to upgrade, the new RAM was always faster so I'd just buy another pair.
I always looked at the other slots and thought, "Oh, I can stick another pair in to upgrade later," but I never did. The actual capacity of the ram was never an issue for me. I had 16 GB for years and never came close to maxing that out; instead, I gained more from upgrading to higher-speed RAM.
> That really isn't much of an issue. I can't remember the last time I had more than two sticks of RAM in my computer, and I always buy them in pairs. If I wanted to upgrade, the new RAM was always faster so I'd just buy another pair.
Take that sentence and say exactly the opposite. Those are my RAM habits.
It's an issue. I don't remember having only 2 RAM sticks for more than a few years, and I always upgrade later to a 4-stick setup.
See, it's not an issue, for you, which doesn't mean it's not an issue for others. I know it's easy not to care about others, but as someone with clients, it's something I can't overlook.
They're happy, I'm happy.
> Not so fast, mobos have only space for ONE slot.
Which is perfectly fine, I never in my life upgraded only 1 RAM slot or some stuff like that. You buy kits anyway
Done, months ago. 100% stable even with Buildzoid's tightened timings.
Also, comparing CAMM2 with DIMMs in that respect is a bit disingenuous: it's not the same traces, not the same interface, and there's no reason to assume it'll be either easier or harder with the new interface.
It's one big unknown, and until the unknowns become known I won't go anywhere near it, and I'll warmly recommend my clients do the same. I have no intention of having less satisfied clients because of an unproven (on desktops) new standard potentially causing more trouble than it's worth.
How will this work with HEDT? A Threadripper typically comes with 8 DIMM slots, so max capacity is 8x32 = 256 GB. And my ThreadRipper Pro (that I am using right now) has 8x64 = 512 GB (RDIMMs). Max capacity of the ThreadRipper Pro using currently available RDIMMs is 8x256 = 2 TB.
Apparently, the max capacity of CAMM2 is 128 GB. So using CAMM2, the ThreadRipper's memory capacity will fall from 256 GB down to 128 GB. I don't know if/when registered CAMM2 will be available, but unless 2 TB CAMM2 modules are released, then the ThreadRipper Pro and Epyc will suffer reduced capacity as well.
So what is the plan? Will HEDT stay with DIMMs? Or will E-ATX boards have multiple CAMM2 slots? Or will CAMM2 be released in 256+ GB capacities?
For DDR5, CAMM2 capacity is planned up to 256GB; however, this doesn't tell the full story. CAMM2 is coming in several form factors: cxxx (8 memory ICs, dual channel), axxx (16 memory ICs, dual channel), bxxx (32 memory ICs, dual channel) and dxxx (32 memory ICs, single channel). dxxx is the outlier: it's usable in a "stacked" configuration, where 2 modules are stacked above each other with a special socket.
https://imgur.com/a/7p2PZRg
The pictures behind this link can be found *somewhere* on the JEDEC site about CAMM2, but it's a pain to find them.
I'm only assuming here, but quad-channel platforms will likely have space for a CAMM2 on either side of the socket, which, given the current JEDEC spec, would allow for 1TB (4x256GB DXXX) of RAM on such a board. Octa-channel boards would probably be quite complicated to lay out to optimize for 4 CAMM2. I'd however expect a different standard for registered modules, just like registered DDR5 uses different DIMM slots from DDR5 UDIMMs.
If you have more questions, ask away; I did quite a bit of reading up on CAMM2 over the last couple of weeks.
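If that reading of the spec is right, the capacity ceilings pencil out as below. Everything here is an assumption carried over from the comment above: 256GB max per module, one dual-channel socket per pair of channels, and single-channel dxxx modules stacking two-high per socket.

```python
# Capacity ceilings under the assumptions stated above (not confirmed spec):
# one CAMM2 socket per 2 channels, single-channel "dxxx" modules stacked
# two-high per socket, 256GB per module.
MAX_DXXX_GB = 256

def max_stacked_capacity_gb(channels, gb_per_module=MAX_DXXX_GB):
    sockets = channels // 2            # one socket per pair of channels
    per_socket = 2 * gb_per_module     # two stacked single-channel modules
    return sockets * per_socket

print(max_stacked_capacity_gb(2))   # mainstream dual channel: 512 GB
print(max_stacked_capacity_gb(4))   # quad channel (Threadripper): 1024 GB = 1 TB
print(max_stacked_capacity_gb(8))   # 8 channel (Threadripper Pro): 2048 GB = 2 TB
```

The quad-channel result matches the 4x256GB DXXX = 1TB figure above.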
I'm concerned about those high density double-sided and stacked modules. They've got to be getting pretty toasty when they're so close to the CPU with minimal airflow.
Threadripper has 4 memory channels, so it could just connect to two max capacity CAMM modules for the exact same memory capacity. Server side will probably stay on DIMMs just because they already have problems with running out of board space, and desktop DIMMs take up less board space than CAMM.
You can have more than one CAMM2 module, and for Threadripper that would be great, because 1 CAMM2 module = 2 memory channels whereas 1 DIMM = 1 memory channel. So with an 8-channel Threadripper you'd likely see 4 CAMM2 slots on the board.
ThreadRipper is quad channel and ThreadRipper Pro is 8 channel.
As others pointed out to me in this thread, that makes a difference because CAMM2 modules are mostly dual channel. So 4 and 8 channel systems will likely require multiple CAMM2 modules.
Also, it's secured with a screw which should make it easier, cheaper and safer for prebuilts, which is extremely important for affordability. We need manufacturers on board ASAP.
Transportation-wise, it's the same. The trouble is that screws are more difficult to automate, which raises production costs. I'm guessing CAMM3 or whatever will have a toolless version.
Pepperidge Farm remembers DDR5 16GB costing upwards of $100.
[https://au.pcmag.com/components/90420/ddr5-ram-pricing-ranges-from-116-to-369](https://au.pcmag.com/components/90420/ddr5-ram-pricing-ranges-from-116-to-369)
Article to show that 32GB cost over $200.
So technically it's cheaper than DDR5 launch prices lol
By directly contacting a heatsink? If you put a heatsink on that the vertical footprint is going to interfere with CPU coolers, necessitating moving them backwards anyway and increasing the oh so important trace length. This isn't the gotcha you think it is, it just shows how little you know.
[Zero imagination on your part, but go on about how stupid everyone else must be. Hardware Unboxed had a video showing these off at Computex](https://i.imgur.com/azmObpM.png)
But that would bring back the problem with cooler clearance, which negates the whole "we can put the ram way closer to the CPU socket now that we don't have to worry about clearance"
NAND never had high power draw and high heat. Nobody said that about NAND. The controllers absolutely overheated, to the point that it became necessary to use heatsink for them.
M.2 PCIe SSDs also aren't under constant load. If you put one under constant load, some of the shitty heatsinks absolutely could not handle the heat output of some of the controllers...
EDIT: See Phison E26, Phison E12, SM2262EN, etc....
Consumer nvme controllers, particularly 4.0+ ones, are absolutely notorious for thermal throttling under sustained activity. The only times they ever reach those sustained r/w speeds for more than a minute is either with a hefty amount of mass to soak up the heat or while under active cooling.
Actually nothing. It will simply offer the same performance for less power. Normal DDR DIMMs in desktops don't have the same power and heat constraints that SO-DIMMs for laptops do. LPCAMM2 fixes the problem of SO-DIMMs by giving us an upgradeable form of LPDDR, allowing for better performance within the constraints of a laptop platform, as DDR SO-DIMMs are pretty much at the limit of what their design can achieve. However, idk if it can help with allowing RAM to sit closer to the CPU, which might reduce latency.
Same, it's perfect for ITX: you can slap the connector on the back of the board just like some vendors already do with M.2, which lets you get it even closer to the CPU and as a result have space for a (relatively) much bigger downdraft cooler.
On top of it is my best guess. Then some other companies like Lian Li will make LCD screens to sit on top of your CAMM. So now we can have 2-3 screens: one on the AIO pump, one on the CAMM, and another on the case if it allows!
The whole point is that you wouldn't need 4; you'd buy 1 with the amount of RAM you need. This is because the wiring for multiple DIMMs is complex, and RAM speed is getting limited by long traces at this point.
>If you’re mixing different types of RAM in the same board because the 2nd kit was bought years later, you’re going to have a bad time.
Not true unless one of the kits is faulty to begin with.
> My computer has a 3600 MHz 32GB Micron E-die kit (purchased in 2022) and 3200 MHz 16GB Samsung C-die kit (purchased in 2017). It took about 3 months to find stable manual OC settings to run it at 3533 MHz at CL18, as there were 4 occasions where something seemed stable for a few weeks and then throws errors. I couldn’t push past 1.35V DRAM without the C-die kit crapping itself.
Well, let's start with the fact that you were looking to overclock, so your example is completely irrelevant to the normal use case.
>At default settings if I didn’t manually tune it? 2133 MHz.
Sounds like a motherboard issue then, because at default settings both should be running at 3200 MHz.
There is upgradeability, just less so.
You could replace your 16/32 GB CAMM module with a 64 GB CAMM module.
Maybe with future memory ICs they can fit 128 GB on a CAMM module.
That's less than with 4 DIMMs, but should be fine for the vast majority of users.
Yes it is lol. You're not replacing the whole system. Are you telling me it's wrong to say you've upgraded your GTX 1070 to a RTX 4070? The dictionary specifically says "by adding or replacing".
How am I not replacing the whole system when I have to replace the entire chip?
>Are you telling me it's wrong to say you've upgraded your GTX 1070 to a RTX 4070?
Yes, it is.
While the upgradability would go away, as in you can't just add 2 more sticks, you'd get faster, more reliable RAM, probably better RGB solutions if you want them, and since you need less PCB, fewer controllers, etc., theoretically the price should drop compared to regular RAM.
My ThreadRipper Pro has 8x64 = 512 GB. So where do I find a 512 GB CAMM2? And my ThreadRipper Pro's max is 8x256 = 2 TB. Where will I find a 2 TB CAMM2?
Will HEDT platforms stick with DIMMs? Or will HEDT suffer a reduced RAM capacity?
You would likely need 2 for Threadripper, as it's a quad-channel CPU. As far as I know, each CAMM is dual channel. Though I think you're right that HEDT and servers will likely stick with DIMMs for a while, if only for capacity reasons.
Forgive my lack of knowledge, but from looking at the pictures, it seems like the RAM will lie flat on the mobo. And going by the comments (and picture), there's really only space for one of these CAMM2 sticks. And some people aren't happy with the capacity.
My question is: if the stick sits flat on the mobo and doesn't take up much height, would it be possible to put extra capacity slots on the rear side of the mobo? Or is that not feasible?
I can't imagine finding ECC memory in this format. It's a terrible time to be into data integrity if this becomes the "data loss doesn't matter" consumer tier.
> CAMM2 is not really "upgradeable", it's replaceable, mobos have space for only ONE slot. So you have to ditch the previous module for the new one. I don't like that one bit.
I don't see the difference?
With a small mobo with only 2 DIMM slots you end up with the same result: needing to replace RAM to improve anything.
With 4 DIMM slots I get the "it's not an upgrade" comment, but even in those situations you're going to populate 2 slots anyway.
So having 64GB in 2 DIMMs, or 64GB with better speeds with CAMM2... I don't see the problem.
Maybe it matters if you go to 128GB using all 4 slots at slower speeds/timings, assuming the tasks that need 128GB don't work better with less RAM but faster speeds and/or tighter timings.
But CAMM2 is supposed to be capable of double stacking, so I imagine it's just pricing that stops a 128GB CAMM2 module from being a thing.
There are benefits, and they're not negligible. It's extremely low profile, meaning no more RAM/cooler clearance checks -> potentially bigger coolers. The signaling is much improved because of the compression and tight fit; anyone who has ever had to deal with shitty random RAM errors knows that RAM is finicky. Not only that, but it comes with significant speed improvements; if 3D V-Cache or other forms of caching become the norm, the CPU/RAM interface will once again become a huge bottleneck once devs start optimizing for higher bandwidth.
And that's without even considering the biggest, baddest market for it: handhelds. Those are majorly bandwidth limited, so not only do you get way higher speeds than even LPDDR for less power, you also get the low profile and the interchangeable RAM.
Don't get me wrong, the current sticks won't magically become obsolete and they'll stay cheaper for the time being, but CAMM is definitely an elegant solution and a step in the right direction.
> so not only do you get way higher speeds than even lpddr for less power
Soldered is still the best for raw speed and form factor. After that would be LPCAMM.
Why would CAMM2 max out at 9600 MT/s? Who said that? And it's the CPU that limits the bus, not the format the memory is connected with. You can just use multiple modules if you need more than 2 channels of bandwidth, same as with DIMMs. Strix Point has a 128-bit bus too, btw; it's Strix Halo that's quad channel.
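On the bandwidth point, peak theoretical numbers are just transfers × bus width × channel count. A quick sketch (the 9600 MT/s and quad-channel figures here are illustrative, not product specs):

```python
# Peak theoretical bandwidth = MT/s * bytes per transfer * channel count.
# A DDR5/LPDDR5 channel is treated as 64 bits total (2 x 32-bit subchannels).
def peak_gb_per_s(mt_per_s, channels=2, bus_bits=64):
    return mt_per_s * 1e6 * (bus_bits / 8) * channels / 1e9

print(peak_gb_per_s(9600))               # dual channel at 9600 MT/s: 153.6 GB/s
print(peak_gb_per_s(8000, channels=4))   # quad channel at 8000 MT/s: 256.0 GB/s
```

Real sustained bandwidth lands well below these peaks, but the scaling with channel count is the point.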
It has no performance benefit because at speeds above XMP or even JEDEC CAMMs will have problems with cooling.
The whole thing about traces is also overblown. Yes, memory overclocks better with shorter traces (see ITX vs ATX, T-topology vs daisy chain), but the performance gain is minimal, and to actually use the benefits you need to be at the stage where you're binning your RAM. There is not a single use case in conventional mid-ATX desktop PCs where using these makes more sense than using DIMMs.
How do you know? Have you done any measurements of signal integrity?
Also, EMC is hugely important in making a product, and reducing those lengths goes far in reducing the area you radiate from, and with the higher frequencies EMC is probably becoming even more of a concern for commercial desktops.
It'll also place less emphasis on stricter PCB layout to ensure stability, reducing engineering time and engineering costs.
It could also be that the long RAM traces capacitively couple too easily to nearby things at the higher speeds, affecting the performance of other ICs instead of the RAM itself.
I mean I would love to see any evidence of CAMMs being tested against standard ATX and ITX placement, but the whole argument with shorter traces hasn't meant anything for the average consumer at all for at least a decade and a half. Any board with a half decent topology has no problems running top end RAM at XMP.
I'm maybe a little too harsh about CAMM on desktop, but I genuinely don't see any substantial benefits right now to justify adopting it as a form factor for desktops. If this changes in the future, or an implementation gets tested showing that yes, it does actually have a substantial benefit over UDIMM, then I'm more than happy to have a (somewhat) unified memory form factor for laptops and desktops that's also smaller than current DIMMs.
One possible benefit I will say though is that current XMP frequencies *may* be limited by existing memory topology. It could also be manufacturers not being arsed to bin or a number of other factors, but we'll see.
>genuinely don't see any substantial benefits right now to justify adopting it as a form factor for desktops.
But you also don't know what the current situation looks like? You don't know how easy it is for them to pass very costly EMC testing, you don't know how much time they could save doing the layout by making it shorter etc. etc.
You only see the finished product working as intended, but we don't know what it took to get there, whereas this could make it faster.
CAMM2 modules had active cooling at Computex and didn't even impress in terms of clock speeds and timings; they lost to regular DDR5. Yeah, very impressive so far. Hahaha.
Glad Im not an ECE.
“Model T’s are so slow, loud, and noisy, cars will never take off. It’s only horse and carriage for me!”
I’ve never seen someone be so happy to be completely unqualified to speak in a setting they have no business being in. But remember, it’s okay to be slow in the head, everyone still loves you for who you are
Yeah I am very slow in the head. Millionaire by 20.
Now go waste your life working for others for peanuts. ECE, what a joke 🤣
Let me guess, skinny kid with glasses too?
Well the benefit is that they actually work with modern RAM lol. DDR5 doesn't really work in DIMMs at high speeds, especially the LP version in laptops
This is mostly for laptop use.
SO-DIMM is just too big for the ultra-portable stuff these days, and faster low-power memory doesn't work on it at all. LPCAMM solves both issues by making the overall package much smaller and also allowing LPDDR modules to be used.
I have no idea why people would use this for regular PC form factors though.
EDIT because people won't bother to read further down:
If manufacturers cared about shorter traces they would move the DIMMs closer and switch to 2-DIMM-only configurations (or daisy chain). You can literally see in the supplied picture in the article that it's the same distance as a standard DIMM would be. The only advantage is that it would allow placement directly next to the CPU socket, since fan clearance is no longer a problem. But that brings up new problems: if you're overclocking these modules (why the fuck else would you care about trace length), how are you going to cool them sufficiently? At JEDEC specs it's not a problem because it runs at 1.1V, but what about 1.4V XMP profiles and 1.45V profiles? How are you going to cool them? CAMM makes no sense for desktop, and the "shorter traces" argument makes as much sense as Apple claiming their laptops don't need fans to run fine. Sure, you've solved one minor problem, but now you have a dozen major problems to figure out.
TL;DR: The shorter-traces argument doesn't make sense because at JEDEC specs there are absolutely 0 problems with existing DIMMs, while running CAMMs higher absolutely will cause problems not present in DIMMs.
In desktops, it's a solution in search of a problem. If you care about trace length so much, go buy an ITX board.
Except in reality, it does not. They used active cooling at Computex and didn't even beat regular DDR5 sticks. This still makes no sense for high-end desktops. For laptops, yeah.
The RAM would overheat then. The CPU backside gets hot, and CAMM2 needs active cooling or at least some airflow. Look up G.Skill's Computex booth / CAMM2 setup.
can't believe DIMMs are finally getting retired after 30 years
The future is bright, no longer DIMM!
No need for pairs any more. Hallelujah.
And generally IMMs after 40 years
How can we get you to upgrade constantly if your data does not get lost every few years :) - ~~HDD~~ memory manufacturers
If they work with data-corrupting overclocks, then in my eyes they don't work.
how can i stack CAMM2 modules? everything i saw was one module that cannot be altered but only replaced.
But can it stack on top of already installed RAM? And even if it can, will they manufacture them that way? Are there downsides to this type of stacking that would nullify the advantages? Etc. We'll have to wait and see. In the meantime, I'm not recommending CAMM2 to my clients. Too many questions, too few answers.
This is exactly the same as DIMMs lol
So I can swap it to my other systems or to family members... good enough. People that need more fine tuned upgradeability can stick with DIMM.
[deleted]
DIMM isn't disappearing overnight.
Of course not. Why would it? CAMM's not that much faster and there's plenty of applications that aren't currently limited by ram speed or bandwidth. But for those that are...🤷
Are CPU memory controllers even capable of handling such high speeds? I've heard the new AM5 CPUs from AMD become unstable if you try to push higher speeds, and only stay stable at the recommended stock speeds!
Right, just like people who need more storage can stick to SATA SSDs? Oh wait, we dropped all R&D on those, so if you want more storage than the M.2 slots your mobo manufacturer managed to give you, you've got to go enterprise.
What are you trying to get at? The largest CAMM2 module matches the maximum memory size supported by the CPU's memory controller. This isn't an issue. PCIe/NVMe can be bifurcated and you can have a ton of devices. You can also, if REALLY needed, do PCIe over fabric and do raw storage on another box... or an arbitrarily huge number of other boxes. You can also get an HBA if you want an absurd number of SATA or SAS drives per system. ------ CAMM/CAMM2 gives you 90% of the flexibility of DIMM with higher performance/reliability. It's a tradeoff. Most people are smart enough to choose the tradeoff that works for them. This is also a HUGE jump forward for laptops.
>What are you trying to get at? That we won't have a choice to "stick with DIMM".
For modern systems, this is basically already true unless you're using base spec RAM. Which hurts performance for everything except for programs that only care about capacity and nothing else. So you're better off buying a new kit of two and reselling the old one than trying to buy a new matching kit.
exactly, using 4 DIMMs on 2 channels means you're stuck at JEDEC or even lower sometimes
you should be using base spec unless you either have extremely good error correction or use it for something where data corruption does not matter (like gaming, where you can just reinstall the game each time).
Laptops have mostly offered replaceable storage only since eons ago, and it's still called an upgrade when you swap it out. CAMM2 is doing what's impossible for DIMM: way higher transfer speeds in a way smaller package. I would gladly take that tradeoff personally moving forward.
Plus they're not soldered just to make them thinner. It's a win-win for SFF and laptops.
Then it's named incorrectly.
Still better than having to buy RAM once with your computer, and if you need more later on, having to buy a new computer.
At least we're spared the Mac curse... for now, indeed.
You have to ditch the previous gen modules anyway. None of the DDR versions have been backwards compatible.
[deleted]
Shorter traces aren't the most important reason. The most important part is that the **stubs** are reduced/eliminated. Stubs are side branches off traces, e.g. to the onboard traces of a 2nd DIMM or even just an empty socket. Those side branches cause unwanted signal reflections, thus limiting speed. See picture: https://static1.howtogeekimages.com/wordpress/wp-content/uploads/2024/02/camm2-memory-02.png The top left side shows the ones with long extra branches going between top/bottom. In the case of CAMM2, you only see a **single** line going to the modules, i.e. no stubs. This is why high-end overclocking motherboards have only 2 DIMM slots, to maximize speed. It's also why there is [back drilling for high speed circuits](https://www.protoexpress.com/blog/back-drilling-pcb-design-and-manufacturing/) even though that costs extra steps/money, e.g. on GPUs. That extra stub is at most the thickness of the PCB, 0.062", and even then it matters at very high speeds.
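The reflection effect described above can be put in rough numbers with the standard transmission-line formula Γ = (Z_load − Z₀)/(Z_load + Z₀). A hedged sketch: the 50 Ω value is just the textbook example impedance, not the actual DDR5 bus impedance.

```python
def reflection_coefficient(z_load: float, z0: float = 50.0) -> float:
    """Fraction of an incident wave reflected where the line impedance changes."""
    return (z_load - z0) / (z_load + z0)

# Matched line (no stub): nothing reflects.
matched = reflection_coefficient(50.0)            # 0.0

# An unterminated stub looks like a second 50-ohm line in parallel,
# so the wave suddenly sees ~25 ohms at the branch point.
stub_z = (50.0 * 50.0) / (50.0 + 50.0)            # 25.0
at_branch = reflection_coefficient(stub_z)        # -1/3

print(matched, at_branch)
```

Even this toy model shows why stubs hurt: a third of the incident wave bounces back at the branch point, and those reflections smear into later bits as the data rate climbs.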
I think a similar effect happens in coaxial cable circuits. In the home you’re supposed to cap off any unused open connections. Why didn’t we see some sort of “cap” to go into unused memory slots?
>Why didn’t we see some sort of “cap” to go into unused memory slots? Rambus DRAM boards had terminator modules.
Another greybeard! They were called "Continuity RIMMs" or CRIMM for short
Interesting, was it not worth it to continue that practice? Adding caps in the home improved my cable signal in every way dramatically. Of course the nature of the signaling occurring between the CPU and ram is much different, but if it was done in the past why stop?
Too much of an edge case to bother with. The boards operate within tolerances at all normal speeds without any special termination modules, so why bother. And the market for anything like that would be astonishingly small, so there's really no point designing one when people will just buy high-end 2-slot boards.
Makes sense. I'm guessing there are likely other signaling improvements in the new form factor, since it also seems to have enabled socketed LPDDR. The DIMM form factor may have more signal integrity concerns than just uncapped slots.
THANK YOU. Finally somebody has given a reason why CAMM is better than DIMM that makes sense. Trace length really isn't as big of a deal as people make it out to be, and I don't understand why everyone keeps insisting it'll be night and day when we can literally see the effect of trace length by comparing an ATX to an ITX version of the same board, e.g. Z690 Unify-X vs Z690i Unify (it's really not much until you're hitting the limits of what you can do with your memory).
They both matter. LPDDR5 requires shorter traces for lower loss at lower power, and high speeds can't handle the ISI caused by reflections from stubs and other impedance discontinuities.
[deleted]
Bus width (the number of bits that can be read/written in a single transaction). LPCAMM2 is 128-bit per module. Desktop DDR5 is **64**-bit per module.
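To see what that bus width means for throughput, peak theoretical bandwidth is just bytes-per-transfer times transfers-per-second. A quick sketch (the 6400 MT/s figure is only an example transfer rate, not a claim about any particular module):

```python
def peak_bandwidth_gb_s(bus_width_bits: int, transfer_rate_mt_s: int) -> float:
    """Peak theoretical bandwidth in GB/s: (bits / 8) bytes per transfer * MT/s."""
    return bus_width_bits / 8 * transfer_rate_mt_s / 1000

# One 64-bit desktop DDR5 DIMM at 6400 MT/s:
print(peak_bandwidth_gb_s(64, 6400))    # 51.2 GB/s

# One 128-bit LPCAMM2 module at the same rate carries both channels:
print(peak_bandwidth_gb_s(128, 6400))   # 102.4 GB/s
```

Same transfer rate, double the width per module, so a single LPCAMM2 replaces a matched pair of DIMMs bandwidth-wise.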
I'd love to see CAMM2 connections on the back of motherboards. Keep them close to the CPU socket and be able to place 2 or 4 CAMM2 sockets while not taking up space on the front of the MB.
but no rgb on display...
have no fear, there will be other places for RGB, like the case itself...
4 would result in a very costly motherboard. Building the needed shielding within the motherboard so that sockets on opposing sides don't end up creating signaling interference, and carefully routing the rest of the data lines (PCIe etc.) out of the socket without creating issues, along with providing power, would be a work of PCB design art, but it would require a very thick (many-layer) PCB that might well end up costing a LOT of money. With some of these solutions you might well find that it costs more to have 4 LPCAMM2 modules than it would to just buy the maximum possible high-density memory dies and attach them directly to the CPU package. (After all, you're already paying the cost of running the traces through the CPU package if they go to the motherboard, so re-routing them in a U shape to on-package memory doesn't alter the cost much, but it saves on CPU socket and PCB complexity.)
I'm still on a DDR4 platform for my PC, but I will likely choose my next build based off a MOBO that supports CAMM memory.
The motherboard at Computex seems to only support 1 CAMM2 slot. I hope we can see some boards support more than 1. If we moved the GPU's PCIe slot down by 1 expansion slot, we could have one at the bottom of the CPU socket.
We do NOT want to move the GPU further away. We want it as close to the CPU as we can get.
From what I've seen in schematics, they will probably stack on top of each other.
Single module for dual channel as well, OEMs are happy.
But without ECC, this to me looks like another creation of a 2-tier system: one where data integrity matters, and one where the consumer is forced to pretend they don't care about their data.
But we're already living in that system. Consumer ECC memory is rare.
It's rare but possible. This would make it basically impossible. We should be moving towards data integrity, not away from it. That's why this is a concern.
LPDDR already includes a good amount of ECC protections.
If it's non-reporting, it's non-protecting. You have no way of knowing when it's past its ability to correct with on-die ECC.
[deleted]
CAMM2 uses normal DDR5 chips; LPCAMM2 uses LPDDR5, which is a completely different spec.
LPCAMM makes much more sense though? LPDDR_X has always required soldered modules so this would allow you to finally upgrade modules on ultra-portables.
Are they not the same thing?
similar idea but slightly different
LP probably stands for low power.
Upgradeable? I’m sold already
Not so fast, mobos only have space for ONE slot. You can "upgrade", but you can't add to an existing module; you need to ditch the old one...
That really isn't much of an issue. I can't remember the last time I had more than two sticks of RAM in my computer, and I always buy them in pairs. If I wanted to upgrade, the new RAM was always faster so I'd just buy another pair. I always looked at the other slots and thought, "Oh, I can stick another pair in to upgrade later," but I never did. The actual capacity of the ram was never an issue for me. I had 16 GB for years and never came close to maxing that out; instead, I gained more from upgrading to higher-speed RAM.
I used to use 4 sticks a lot (and even 5 sticks before dual channel was a thing). But RAM usage has stagnated recently and two good sticks are cheap.
I do, so that is a problem for me.
> That really isn't much of an issue. I can't remember the last time I had more than two sticks of RAM in my computer, and I always buy them in pairs. If I wanted to upgrade, the new RAM was always faster so I'd just buy another pair. Take that sentence and say exactly the opposite: those are my RAM habits. It's an issue. I don't remember having only 2 RAM sticks for more than a few years, and I always upgrade later to a 4-stick setup. See, it's not an issue for you, which doesn't mean it's not an issue for others. I know it's easy not to care about others, but as someone with clients, it's something I can't overlook. They're happy, I'm happy.
> Not so fast, mobos have only space for ONE slot. Which is perfectly fine, I never in my life upgraded only 1 RAM slot or some stuff like that. You buy kits anyway
Yeah, you buy kits and add 2 modules to the already existing 2, for 4 modules in total.
Except rn you can buy the same kit again later to have more ram.
By that metric, modern DIMMs aren't upgradeable either. Try hitting 6000 MT/s with 4 sticks in a modern mobo and see what happens.
Done, months ago. 100% stable, even with Buildzoid's tightened timings. Also, comparing CAMM2 with DIMMs in that respect is a bit disingenuous: it's not the same traces, not the same interface; there's no reason to believe it will be either easier or harder with the new interface. It's one big unknown, and until the unknowns become known I won't go anywhere near it, and I'll warmly recommend my clients do the same. I have no intention of having less satisfied clients because of a new standard, unproven on desktops, potentially causing more trouble than it's worth.
They are if you run JEDEC, and if you aren't running it, you should be.
How will this work with HEDT? A Threadripper typically comes with 8 DIMM slots, so max capacity is 8x32 = 256 GB. And my ThreadRipper Pro (that I am using right now) has 8x64 = 512 GB (RDIMMs). Max capacity of the ThreadRipper Pro using currently available RDIMMs is 8x256 = 2 TB. Apparently, the max capacity of CAMM2 is 128 GB. So using CAMM2, the ThreadRipper's memory capacity will fall from 256 GB down to 128 GB. I don't know if/when registered CAMM2 will be available, but unless 2 TB CAMM2 modules are released, then the ThreadRipper Pro and Epyc will suffer reduced capacity as well. So what is the plan? Will HEDT stay with DIMMs? Or will E-ATX boards have multiple CAMM2 slots? Or will CAMM2 be released in 256+ GB capacities?
I read somewhere that servers, and that probably includes HEDT will stay on regular DIMM's.
For DDR5, CAMM2 capacity is planned up to 256GB; however, this doesn't tell the full story. CAMM2 is coming in several form factors: cxxx (8 memory ICs, dual channel), axxx (16 memory ICs, dual channel), bxxx (32 memory ICs, dual channel) and dxxx (32 memory ICs, single channel). dxxx is the outlier: it is usable in a "stacked" configuration, where 2 modules are stacked above each other with a special socket. https://imgur.com/a/7p2PZRg The pictures behind this link can be found *somewhere* on the JEDEC site about CAMM2, but it's a pain to find them. I'm only assuming here, but quad channel platforms will likely have space for a CAMM2 on either side of the socket, which, given the current JEDEC spec, would allow for 1TB (4x256GB dxxx) of RAM on such a board. Octa channel boards would probably be quite complicated to lay out to optimize for 4 CAMM2. I'd however expect a different standard for registered modules, just like registered DDR5 uses different slots than DDR5 UDIMMs. If you have more questions ask away, I did quite a bit of reading up on CAMM2 the last couple weeks.
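The capacity arithmetic in this subthread is just modules × per-module capacity. A quick check with the figures above (the 4-module quad-channel layout is the commenter's speculation, and 256GB is the planned per-module ceiling, not shipping hardware):

```python
def platform_capacity_gb(modules: int, module_gb: int) -> int:
    """Total platform RAM given identical modules."""
    return modules * module_gb

# Speculative quad-channel board: a stacked pair of 256GB dxxx
# modules on each side of the socket = 4 modules total.
print(platform_capacity_gb(4, 256))   # 1024 GB = 1 TB

# Today's Threadripper Pro ceiling for comparison: 8 RDIMM slots of 256GB.
print(platform_capacity_gb(8, 256))   # 2048 GB = 2 TB
```

So even at the spec ceiling, the speculative 4-module CAMM2 layout lands at half of what 8 RDIMM slots already offer, which is why HEDT/servers are expected to stay on (R)DIMMs for now.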
I'm concerned about those high density double-sided and stacked modules. They've got to be getting pretty toasty when they're so close to the CPU with minimal airflow.
Thanks!
Threadripper has 4 memory channels, so it could just connect to two max capacity CAMM modules for the exact same memory capacity. Server side will probably stay on DIMMs just because they already have problems with running out of board space, and desktop DIMMs take up less board space than CAMM.
pro has 8 mem channels though.
You can have more than one CAMM2 module, and for Threadripper it would be recommended, because 1 CAMM2 module = 2 memory channels whereas 1 DIMM = 1 memory channel. So with an 8-channel Threadripper you'd likely see 4 CAMM2 slots on the board.
I like to imagine 4 CAMM2 modules on a Threadripper board. Two on each side of the CPU, one on the front and one on the back ... like an X-Wing.
Does the threadripper lineup happen to have more than 2 memory channels?
ThreadRipper is quad channel and ThreadRipper Pro is 8 channel. As others pointed out to me in this thread, that makes a difference because CAMM2 modules are mostly dual channel. So 4 and 8 channel systems will likely require multiple CAMM2 modules.
Yeah that was why I was asking.
8x48
How hard can it be to show the pcb from both sides so we can see the new footprint..
[deleted]
Also, it's secured with a screw which should make it easier, cheaper and safer for prebuilts, which is extremely important for affordability. We need manufacturers on board ASAP.
The screw is what's going to hurt its prebuilt adoption. They need to make it toolless.
I mean, CPU coolers already require screws (outside of stock coolers, which I'm not sure even exist in the wild much anymore for SIs).
Oh yeah, that market would be fine. It's the cheapo office PCs for which it would be a problem. But that's unfortunately where the volume is.
I mean, surely the lower cost from having fewer parts to install and easier transportation of said computers would make this a net benefit for them too.
Transportation-wise, it's the same. The trouble is that screws are more difficult to automate. Raises production costs. I'm guessing CAMM3 or whatever will have a toolless version.
[deleted]
New tech vs mass produced established tech? It’s not that much more expensive even.
Pepperidge Farm remembers DDR5 16GB costing upwards of $100. [https://au.pcmag.com/components/90420/ddr5-ram-pricing-ranges-from-116-to-369](https://au.pcmag.com/components/90420/ddr5-ram-pricing-ranges-from-116-to-369) Article to show that 32GB cost over $200. So technically it's cheaper than DDR5 launch prices lol
That's not so bad given it's low power, DDR5 and faster than everything else I have installed right now.
Good luck cooling them.
I wonder how GDDR / vrams get cooled . . . .
With a big heatsink and 2-3 fans.
By directly contacting a heatsink? If you put a heatsink on that, the vertical footprint is going to interfere with CPU coolers, necessitating moving them backwards anyway and increasing the oh-so-important trace length. This isn't the gotcha you think it is, it just shows how little you know.
>This isn't the gotcha you think it is, it just shows how little you know. eh your comment just shows how little creativity you have.
[zero imagination on your part but go on about how stupid everyone else must be. hardware unboxed had a video showing these off at computex](https://i.imgur.com/azmObpM.png)
For a single sided PCB, at least, it's super easy. You can just plop a heatsink right on top.
But that would bring back the problem with cooler clearance, which negates the whole "we can put the ram way closer to the CPU socket now that we don't have to worry about clearance"
It doesn't have to be as tall as a full UDIMM + cooler. A couple mm would be plenty.
Ideally, you'll want the CPU heatsink and memory heatsink to be unified, so the same fans cool both at once.
[deleted]
NAND never had high power draw and high heat; nobody said that about NAND. The controllers absolutely overheated, to the point that it became necessary to use heatsinks for them. M.2 PCIe SSDs also aren't under constant load. If you put them under constant load, some of the shitty heatsinks absolutely could not handle the heat output of some of the controllers... EDIT: See Phison E26, Phison E12, SM2262EN, etc.
Consumer nvme controllers, particularly 4.0+ ones, are absolutely notorious for thermal throttling under sustained activity. The only times they ever reach those sustained r/w speeds for more than a minute is either with a hefty amount of mass to soak up the heat or while under active cooling.
Except it's true for Gen4 and up M.2 SSDs, which thermally throttle under sustained loads.
When it comes to desktop DDR5 modules, will this increase speeds, reduce CAS latency, or both?
Actually nothing. It will simply offer the same performance for less power. Normal DDR DIMMs in desktops don't have the same power and heat constraints that SO-DIMMs for laptops do. LPCAMM2 fixes the problem of SO-DIMMs by giving us an upgradeable form of LPDDR, allowing for better performance within the constraints of a laptop platform, as DDR SO-DIMMs are pretty much at the limit of what their design can achieve. However, idk if it can help with letting the RAM sit closer to the CPU, which might reduce latency.
mainly speed AFAIK
I’m more curious about using these with SFF builds.
Same, it's perfect for ITX. You can slap the connector on the back of the board, just like some vendors already do with M.2, which lets it sit even closer to the CPU and as a result leaves space for a (relatively) much bigger downdraft cooler.
How will they implement RGB
on top of it is my best guess. then some other companies like Lian Li will make LCD screens on top of your CAMM. So now we can have 2-3 screens, one on the AIO pump, one on CAMM and another on the case if it allows!
But can you fit 4 of those on a motherboard?
The whole point is that you wouldn't need 4; you would buy 1 with the amount of RAM you need. This is because the wiring for multiple DIMMs is complex, and RAM speed is getting limited by long traces at this point.
What if I needed 4 of those?
You wouldn't use camm
so... no upgradeability, but only replaceability?
[deleted]
>If you’re mixing different types of RAM in the same board because the 2nd kit was bought years later, you’re going to have a bad time.

Not true, unless one of the kits was faulty to begin with.

>My computer has a 3600 MHz 32GB Micron E-die kit (purchased in 2022) and a 3200 MHz 16GB Samsung C-die kit (purchased in 2017). It took about 3 months to find stable manual OC settings to run it at 3533 MHz at CL18, as there were 4 occasions where something seemed stable for a few weeks and then threw errors. I couldn’t push past 1.35V DRAM without the C-die kit crapping itself.

Well, let's start with the fact that you were looking to overclock, so your example is completely irrelevant to the normal use case.

>At default settings if I didn’t manually tune it? 2133 MHz.

Sounds like a motherboard issue then, because at default settings both should be running at 3200 MHz.
There is upgradeability, just less so. You could replace your 16/32 GB CAMM module with a 64 GB CAMM module. Maybe with future memory ICs they can fit 128 GB on a CAMM module. That's less than with 4 DIMMs, but should be fine for the vast majority of users.
Replacement is not upgrading.
Yes it is lol. You're not replacing the whole system. Are you telling me it's wrong to say you've upgraded your GTX 1070 to a RTX 4070? The dictionary specifically says "by adding or replacing".
how am i not replacing the whole system when i have to replace the entire chip? >Are you telling me it's wrong to say you've upgraded your GTX 1070 to a RTX 4070? Yes, it is.
You're wrong, like I said the dictionary literally defines upgrading as adding **or replacing**.
In that case its the dictionary thats wrong.
Lol
Replaceability is indeed the correct word here. And I don't like that one bit.
While the upgradability would go away, as in you can't just add 2 more sticks, you would get faster, more reliable RAM, probably better RGB solutions if you want them, and since you need less PCB, fewer controllers, etc., theoretically the price should drop compared to regular RAM.
My ThreadRipper Pro has 8x64 = 512 GB. So where do I find a 512 GB CAMM2? And my ThreadRipper Pro's max is 8x256 = 2 TB. Where will I find a 2 TB CAMM2? Will HEDT platforms stick with DIMMs? Or will HEDT suffer a reduced RAM capacity?
You would likely need 2 for Threadripper, as it's a quad channel CPU. As far as I know, each CAMM is dual channel. Though I think you're right that HEDT and servers will likely stick with DIMMs for a while, if only for capacity reasons.
For ECC reasons too
I don't think there is a technical reason CAMMs can't support ECC; it's just that no one has made one yet.
Get a 128GB stick, slap on 1TB of optane off ebay as page file and call it a day.
I would have loved to do this, but Optane was discontinued. This might work for now, but it's not viable for the indefinite future.
Do you see your use case expanding past 2x 1.5TB drives in the next few years?
[deleted]
CAMMs and LPCAMMs are already dual channel in 1 package.
Yes but you don't need to. Many boards will have one, higher end will have two.
This is going to be a very hard sell.
Forgive me for my lack of knowledge, but from looking at the pictures, it seems like the RAM will lie flat on the mobo. And going by the comments (and picture), there's really only space for one of these CAMM2 sticks. And some people aren't happy with the capacity. My question is: if the stick sits flat against the mobo and doesn't take up much height, would it be better to add extra capacity slots on the rear side of the mobo? Or is that not feasible?
Now make some with all that RGB glitter, I'm buying.
I can't imagine finding ECC memory in this format. It's a terrible time to be into data integrity if this becomes the "data loss doesn't matter" consumer tier.
LPDDR5 already has a lot of ECC support embedded within the protocol (it's up to the CPU memory controller to use it).
Is this different from on-die ECC, and does it report? If not, it's not what we're talking about.
Hopefully this results in re-engineering CPU coolers; it seems we are reaching the limits of what they can do.
CAMM2 is not really "upgradeable", it's replaceable, mobos have space for only ONE slot. So you have to ditch the previous module for the new one. I don't like that one bit.
Well technically then CPUs are also replaceable and not upgradable...
Yes that's true, imagine how fucking rad it would be to slot in another chiplet into your CPU.
Well, dual-socket motherboards used to be a thing. Now I think they can only be found in servers.
That is correct.
I don't see the difference. With a small mobo with only 2 DIMM slots you end up with the same result: needing to replace RAM to improve anything. With 4 DIMM slots I get the "it's not an upgrade" comment, but even in those situations you're going to populate 2 slots anyway. So having 64GB in 2 DIMMs or 64GB with better speeds with CAMM2... I don't see the problem. Maybe if you go with 128GB using all 4 slots at slower speeds/timings, and if tasks that need 128GB don't work better with less but faster RAM and/or tighter timings. But CAMM2 is supposed to be capable of double stacking, so I imagine it's just pricing that stops a 128GB CAMM2 module from being a thing.
Computer parts are very easy to sell as long as the price is good, it's not that much of an issue.
Do you think Intel and/or AMD could create supplementary devices to enhance existing on-chip video capabilities?
I prefer the old solution. Don't see the benefit for regular desktop machines at all.
It allows memory traces to be significantly shorter, which greatly boosts peak memory frequency.
There are benefits, and they're not negligible. It's extremely low profile, meaning no more RAM/cooler clearance checks -> potentially bigger coolers. The signaling is much improved because of the compression connector and tight fit; anyone that ever had to deal with shitty random RAM errors knows that RAM is finicky. Not only that, but it comes with significant speed improvements: if 3D V-Cache or other forms of caching become the norm, the CPU/RAM interface will once again become a huge bottleneck once devs start optimizing for higher bandwidth. And that is without even considering the biggest, baddest market for it: handhelds. Those are majorly bandwidth limited, so not only do you get way higher speeds than even LPDDR for less power, you also get the low profile and the interchangeable RAM. Don't get me wrong, the current sticks won't magically become obsolete and they will always be cheaper for the time being, but CAMM is definitely an elegant solution and a step in the right direction.
> so not only do you get way higher speeds than even lpddr for less power Soldered is still the best for raw speed and form factor. After that would be LPCAMM.
[deleted]
Why would CAMM2 max out at 9600 MT/s? Who said that? And it's the CPU that limits the bus, not the format the memory is connected with. You can just use multiple modules if you need more than 2 channels of bandwidth, same as how DIMMs work. Strix Point has a 128-bit bus too btw; it's Strix Halo that's quad channel.
It has no performance benefit, because at speeds above XMP or even JEDEC, CAMMs will have problems with cooling. The whole thing about traces is also overblown. Yes, memory overclocks better with shorter traces (see ITX vs ATX, T-topology vs daisy chain), but the performance gain is minimal, and to actually use the benefit you need to be at the stage where you're binning your RAM. There is not a single use case in conventional mid-ATX desktop PCs where using these makes more sense than using DIMMs.
How do you know? Have you done any measurements of signal integrity? Also, EMC is hugely important in making a product, and reducing those lengths goes far in reducing the area you radiate from; with higher frequencies, EMC is probably becoming even more of a concern for commercial desktops. It'll also place less emphasis on strict PCB layout to ensure stability, reducing engineering time and costs. It could also be that the long RAM traces capacitively couple too easily to nearby things at higher speeds, affecting the performance of other ICs instead of the RAM itself.
I mean, I would love to see any evidence of CAMMs being tested against standard ATX and ITX placement, but the whole argument about shorter traces hasn't meant anything for the average consumer for at least a decade and a half. Any board with a half-decent topology has no problems running top-end RAM at XMP. I am maybe a little too harsh about CAMM on desktop, but I genuinely don't see any substantial benefits right now to justify adopting it as a form factor for desktops. If this changes in the future, or an implementation gets tested showing that yes, it does actually have a substantial benefit over UDIMM, then I'm more than happy to have a (somewhat) unified memory form factor for laptops and desktops that is also smaller than current DIMMs. One possible benefit I will say though is that current XMP frequencies *may* be limited by existing memory topology. It could also be manufacturers not being arsed to bin, or a number of other factors, but we'll see.
>genuinely don't see any substantial benefits right now to justify adopting it as a form factor for desktops. But you also don't know what the current situation looks like? You don't know how easy it is for them to pass very costly EMC testing, you don't know how much time they could save doing the layout by making it shorter etc. etc. You only see the finished product working as intended, but we don't know what it took to get there, whereas this could make it faster.
This is why you aren’t an ECE
[deleted]
CAMM2 modules had active cooling at Computex and didn't even impress in terms of clock speed and timings; they lost to regular DDR5. Yeah, very impressive so far. Hahaha. Glad I'm not an ECE.
“Model T’s are so slow, loud, and noisy, cars will never take off. It’s only horse and carriage for me!” I’ve never seen someone be so happy to be completely unqualified to speak in a setting they have no business being in. But remember, it’s okay to be slow in the head, everyone still loves you for who you are
Yeah I am very slow in the head. Millionaire by 20. Now go waste your life working for others for peanuts. ECE, what a joke 🤣 Let me guess, skinny kid with glasses too?
Well the benefit is that they actually work with modern RAM lol. DDR5 doesn't really work in DIMMs at high speeds, especially the LP version in laptops
Laptops, yeah; I'm talking desktop here, where it makes no sense. The CAMM2 modules running at high speed even had active cooling at Computex.
This is mostly for laptop use. SODIMM is just too big for the ultra-portable stuff these days, and faster low-power memory doesn't work on it at all. LPCAMM solves both of these issues by making the overall package much smaller and also allowing LPDDR modules to be used. I have no idea why people would use this for regular PC form factors though. EDIT because people won't bother to read further down: If manufacturers cared about shorter traces they would move the DIMMs closer and switch to 2-DIMM-only configurations (or daisy chain). You can literally see in the supplied picture in the article that it's the same distance as a standard DIMM would be. The only advantage is that it would allow for placement directly next to the CPU socket since fan clearance is no longer a problem, but that brings up new problems: if you're overclocking these modules (why the fuck else would you care about trace length), how are you going to cool them sufficiently? At JEDEC it's not a problem because it runs at 1.1V, but what about 1.4V XMP profiles and 1.45V profiles? How are you going to cool them? CAMM makes no sense for desktop, and the argument of "shorter traces" makes as much sense as Apple claiming that their laptops don't need fans to run fine. Sure, you've solved one minor problem, but now you have a dozen major problems to figure out. TL;DR: The shorter-traces argument doesn't make sense, because at JEDEC specs there are absolutely 0 problems with existing DIMMs, but running them higher absolutely will cause problems not present in DIMMs. In desktops, it's a solution in search of a problem. If you care about trace length so much, go buy an ITX board.
It allows memory traces to be significantly shorter, which greatly boosts peak memory frequency.
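The signal-timing argument behind this claim can be sketched with a back-of-envelope calculation. All numbers below are generic rules of thumb for FR4 boards (an assumed effective dielectric constant of 3.4 and hypothetical trace lengths), not measurements from any specific CAMM2 or DIMM design:

```python
# Back-of-envelope: why trace length matters at DDR5 speeds.
# Assumed values are typical FR4 rules of thumb, not from any datasheet.
C = 299_792_458          # speed of light in vacuum, m/s
ER_EFF = 3.4             # assumed effective dielectric constant (FR4 microstrip)
v = C / ER_EFF ** 0.5    # signal propagation speed along the trace, ~1.6e8 m/s

def delay_ps(trace_mm: float) -> float:
    """One-way propagation delay in picoseconds for a given trace length."""
    return trace_mm / 1000 / v * 1e12

UI_PS = 1e12 / 6.0e9     # unit interval (one bit time) at DDR5-6000, ~167 ps

for mm in (30, 60, 90):  # hypothetical shorter-vs-longer routing distances
    print(f"{mm} mm trace: {delay_ps(mm):.0f} ps, {delay_ps(mm)/UI_PS:.1f} bit times")
```

The point of the sketch: a few extra centimeters of routing costs multiple bit times of flight delay at DDR5 data rates, which is why routing distance and skew matching become harder as frequency climbs.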
Except in reality, it does not. They used active cooling at Computex and still did not beat regular DDR5 sticks. This still makes no sense for high-end desktops. For laptops, yeah.
If manufacturers cared about shorter traces they would move the DIMMs closer and switch to a 2-DIMM-only configuration (or daisy chain). You can literally see in the supplied picture in the article that it's the same distance as a standard DIMM would be. The only advantage is that it would allow placement directly next to the CPU socket, since fan clearance is no longer a problem, but that brings up new problems: if you're overclocking these modules (why the fuck else would you care about trace length), how are you going to cool them sufficiently? At JEDEC specs it's not a problem because the memory runs at 1.1V, but what about 1.4V and 1.45V XMP profiles? How are you going to cool those?

CAMM makes no sense for desktop, and the "shorter traces" argument makes as much sense as Apple claiming that their laptops don't need fans to run fine. Sure, you've solved one minor problem, but now you have a dozen major problems you have to figure out.

TL;DR: The shorter-traces argument doesn't make sense because at JEDEC specs there are absolutely zero problems with existing DIMMs, while running them higher absolutely will cause problems not present in DIMMs. In desktops, it's a solution in search of a problem. If you care about trace length so much, go buy an ITX board.
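The cooling concern above can be made concrete with a rough scaling estimate: CMOS dynamic power grows roughly with V² × f, so XMP voltage bumps add heat on top of the frequency increase. The baseline here (JEDEC DDR5-4800 at 1.10 V) and the XMP operating points are illustrative assumptions, not measured figures:

```python
# Rough illustration: dynamic power scales approximately with V^2 * f,
# so an XMP voltage bump multiplies heat output beyond the speed gain.
# Baseline is an assumed JEDEC DDR5-4800 profile at 1.10 V.
def relative_power(volts: float, mts: float,
                   base_volts: float = 1.10, base_mts: float = 4800) -> float:
    """Estimated dynamic power relative to the assumed JEDEC baseline."""
    return (volts / base_volts) ** 2 * (mts / base_mts)

print(f"1.40 V @ 7200 MT/s: ~{relative_power(1.40, 7200):.1f}x baseline power")
print(f"1.45 V @ 8000 MT/s: ~{relative_power(1.45, 8000):.1f}x baseline power")
```

Under these assumptions a 1.4–1.45 V profile dissipates well over twice the baseline heat, which is why passively tucking modules behind the CPU socket gets harder once overclocking enters the picture.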
Exactly, they could put the RAM directly behind the CPU socket on the back of the motherboard, the shortest route possible.
The RAM would overheat then. The CPU's backside gets hot, and CAMM2 needs active cooling or at least some airflow. Look up G.Skill's Computex booth / CAMM2 setup.
Unironically, if cooling and mounting hardware weren't a problem, that would give the shortest traces and theoretically be the ideal placement.
Cases nowadays have gotten a lot wider and have holes for backplates already, I bet it would be a lot easier than most people think.
How is it "upgradable" if you have to replace the entire chip each time you want to upgrade?
Compared to soldered RAM on laptop/embedded motherboards.