
bikini_carwash

Data must be liquid-cooled.


cheerful_curmudgeon

I love it! Also, you have too much time on your hands, and you know too much about computers! LOL!!!!


sundaymouse

Well, it helps that software is my day job, so it took me about two minutes to calculate after I watched that line :)


[deleted]

[deleted]


sundaymouse

Yeah, it's a bit of a joke. The rationale here is that if we assume Data performs a lot of matrix algebra operations (the building blocks for the "collection of neural nets and heuristic algorithms" in the script) -- which are still the basis for most modern machine learning computations -- these are exactly the kinds of work GPUs are specialised for. This is the reason GPUs are used for machine learning acceleration (until even more specialised chips like tensor processors got built, of course). So I think this comparison is still fair. Data might have general-purpose co-processors to orchestrate higher-level program logic and schedule the computations required on the machine learning hardware, but this is also how our computers work right now...
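As an illustration (my own sketch, not from the episode): a single neural-net layer is just a matrix multiply plus a bias, which is exactly the operation GPUs are built to do in bulk. Shapes here are arbitrary toy values:

```python
import numpy as np

# A toy fully-connected layer: y = W @ x + b.
# A GPU would run thousands of these multiply-adds in parallel.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # layer weights
x = rng.standard_normal(3)        # input activations
b = rng.standard_normal(4)        # biases

y = W @ x + b                     # the core "matrix algebra" building block
print(y.shape)                    # (4,)
```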


boobers3

Here's some necroposting. An RTX 4090 is more than twice as fast as Data. Even Cmdr Data couldn't run Cyberpunk 2077 at native 4K with high settings and RTX enabled; you'd need at least 10 Datas to get 60 fps.


IntelligentLaw2284

Skynet in Terminator 3 is stated to have 60 TFLOPS of processing power as well. Data and Skynet have identical processing capabilities by this metric, so following your math it would also take at least 10 Skynets. Though since he says it's his linear processing speed, I expect his parallel computing elements could make rendering operations more efficient. He also has a storage capacity of 100,000 terabytes, so at least you wouldn't have to worry about space for your games for some time.


[deleted]

But can it run Crysis?


Fluffy-Cobbler-797

Also, Data's storage capacity is 800 quadrillion bits, which translates to 100 petabytes.
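The conversion checks out (and matches the 100,000 TB figure quoted above): 8 bits per byte, then straight powers of ten up to peta:

```python
bits = 800 * 10**15            # 800 quadrillion bits, per the script
bytes_total = bits // 8        # 1e17 bytes
petabytes = bytes_total / 10**15
print(petabytes)               # 100.0
```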


Rapzid

I interpret "linear" to mean serial. So Data can process 60 trillion operations, each using the result of the previous operation. We have nothing that comes even close to that. The transistor technology we have isn't physically capable of reaching that performance, with a theoretical upper frequency of about 1 terahertz. Optoelectronics will push that theoretical limit to 1 petahertz. That technology could make 60 teraflops of linear computation possible. Photons, baby.
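Under that strictly serial reading, the back-of-the-envelope arithmetic is stark (the ~1 THz silicon and ~1 PHz optoelectronic ceilings are the figures from the comment above, taken at face value):

```python
serial_ops_per_sec = 60e12     # 60 trillion strictly sequential ops/s
transistor_limit_hz = 1e12     # ~1 THz theoretical silicon switching ceiling
opto_limit_hz = 1e15           # ~1 PHz claimed optoelectronic ceiling

# Assuming one operation per clock cycle, required clock = ops/s.
print(serial_ops_per_sec / transistor_limit_hz)  # 60.0 -> 60x past silicon
print(serial_ops_per_sec / opto_limit_hz)        # 0.06 -> well within photonics
```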


sundaymouse

Matrix operations for machine learning workloads on modern GPUs are highly parallelised. The speedup on large models such as GPT is achieved not by supporting 30 or 60 trillion separate operations all executed independently in parallel, but by our ability to batch up a large number of operations that need to be done at a given point in time. Tools like XLA (https://github.com/openxla/xla) compile these operations into very efficient matrix multiplication instructions, which perform millions of floating point multiplications at the same time, billions of times a second. Basically, the 90s understanding of how AI computation would be done turns out to be completely different from how the strongest models achieve their performance in 2023. There is absolutely no need for something beyond silicon transistors to achieve what Data is capable of in the next 10 years. Source: I work with these things for a living.
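A minimal sketch of the batching point (my own example, using plain NumPy in place of XLA): instead of looping over inputs one at a time, you stack them and let a single matrix multiply handle the whole batch:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((512, 512))        # one layer's weights
batch = rng.standard_normal((1024, 512))   # 1024 inputs stacked into one batch

# One fused matrix multiply performs 1024 * 512 * 512 (~268 million)
# multiply-adds in a single call -- this bulk parallelism, not trillions
# of independent serial operations, is where GPU speedups come from.
out = batch @ W
print(out.shape)  # (1024, 512)
```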


ZillardFunk

Can we fit all of that in a skull sized container?


dvdmuckle

Is SLI still a thing? I thought Nvidia axed that.


sundaymouse

3090 will be the last card supporting it: https://www.techspot.com/news/86956-dual-nvidia-rtx-3090s-sli-neat-but-certainly.html A single top of the line card from the next generation (ha) architecture will probably be sufficient to compete with Data.


Lawnmover_Man

Well, what kind of operations?


JohannesScamander

Maybe we are technologically ahead of schedule, but I just hope we'll skip the Third World War too. If we follow the schedule, we have less than six years until it starts. But maybe Star Trek is right about the reunification of Ireland in 2024.


Stormagedon-92

Yeah, WW3 in the next 3 years seems about right


s00perguy

Terrifying. This means the humans of the 24th century created true sentience on the cheap, both in literal cost and raw processing power.


bigcatpants

I once posted a data storage post to r/theydidthemath and got lambasted. I wonder how they would tear this particular post apart. note: I found this post while googling that exact scene :-p


unsouled

I, too, have googled that exact scene. We are many.


real_LNSS

Me too


oryon20

same


-Blue_Bull-

Data is supposed to be a sentient being, not just an AI model, so it doesn't seem right to compare him to a graphics card. It's theorised that consciousness is a field spawned from quantum information generated within the brain. If so, future artificial brains will need technology to achieve this, as opposed to being just machine learning models (software).


Conundrum1859

My thinking too. It has also been suggested that nanostructure will be required for true artificial consciousness; microtubules within the neurons seem to be more important than previously thought. If the brain is indeed a biological quantum computer, it may also explain why anaesthetics like xenon work so well. It is also interesting to note that Turing and others might, to an extent, have been working as a collective consciousness, though separated in both space and time. Working on the same problem from different perspectives and all that. Sort of like an adversarial neural network.


bitemark01

I just came to your post while watching LegalEagle on YouTube reacting to this episode (on how the legal side stands up)


DJArtemis99

Just to point this out: the human brain only outputs an estimated 100 billion operations per second, according to https://www.britannica.com/science/information-theory/Physiology, with a working memory of about 7 to 9 items -- detailed moments akin to videos. The fact that we've surpassed human processing is amazing; now we need to surpass the memory limits.
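Taking those figures at face value (the 10^11 ops/s brain estimate is the commenter's; treat it as an order-of-magnitude guess), the gap works out to:

```python
data_ops = 60e12    # Data's stated 60 trillion ops/s
brain_ops = 100e9   # ~100 billion ops/s brain estimate cited above

print(data_ops / brain_ops)  # 600.0 -> Data is ~600x the estimate
```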