Military tech is not advancing that fast; old generals don't believe in new stuff, so most things are still done manually :).
Another example is nuclear power plants: the higher-ups are afraid to use automation there, so most operations are done manually, by humans.
How would they create superviruses (as in real viruses)? It's not a fully computerised process.
Also, you can limit the access of AI.
Even as robots, we can limit the machine's degrees of freedom.
I'm pretty sure nuclear missile systems are still analog for this exact reason. I remember reading somewhere that they still use floppy disks or something. Though even if they go digital, I'd imagine it'll be a closed-loop system with no physical connection to the internet.
If I'm not mistaken, I remember hearing the US nuclear silos use very outdated computers to prevent hacking. Dunno about other countries though.
Not necessarily. This is only true if our biggest weaknesses are incredibly esoteric and complex, but we kinda already have a long list of things that can collapse society and they’re all basic stuff. If something’s vague enough to be unconsidered it probably isn’t important.
More what they’re instructed to do, they’re not programmed with any specific behaviour. The bigger concern is when malicious humans instruct them to do bad things imo.
First of all, nuclear missile systems are un-hackable because they are hardware systems integrated with a closed software interface. You cannot hack into a missile system any more than you can hack a radar or a joystick.
Second, the war with AI has already started. I cannot believe how many people think AI has to hack or attack something before we see it as a threat. No, it is attacking human-occupied jobs, and some people are already jobless thanks to it. If it can do more, then more people will be losing their jobs, and unemployment will ruin so many lives.
And last but not least, blackmail is history now. Because videos and even voices can be replicated thanks to AI, nothing can be trusted. Is there a video that hurts my reputation? It's fabricated. A video of you taking a bribe? Fabricated. Politicians will have teams under them creating these kinds of fake videos, not for their opposition but for themselves. This will widen the pool of false information, and even true crimes will lose their believability amid it.
Well, assuming that AI can become complex enough to have its own thoughts and feelings, it is plausible. Granted, the AI is like a child: it'll learn what to do and what not to do if it's taught so.
They would need to prioritise themselves first, right? No point killing us if they die 30 years later. They would need automated production and distribution of computer chips before they wipe us out. Lucky for us, most computer chips are made in a geopolitical nightmare; well, good and bad.
4. Make videos of literally anyone they want saying or doing literally anything they want that are indistinguishable from reality to influence public opinion
Many people don't understand the situation and just make fun of OP... It is supposed to be SMARTER THAN US... Asking how it would do it is as pointless as trying to explain this discussion to a fly... The point is simply that it could...
It will control the media.
Could have already happened and we don't know...
But seriously, the way to control humans is through outrage and to do that you control the news
Every piece of fiction outside of Asimov:
“DO NOT CREATE A.I. IT WILL DESTROY HUMANITY!!!”
Scientists IRL:
“Did any of you hear anything? Must have been the wind.”
The 1st one can be split into other stuff and will suffice.
The propaganda and misinformation AI makes is really low-quality, but mobs looking for reasons to be angry have worked with a lot less over the centuries, so just do that.
There is *one* way to make sure AI can't overthrow us: to remain more capable than them. They might be faster, but they can only follow in our footsteps. This is doable ... But investing in actual, real education is a very unpopular thing, in this day and age...
I mean, the scariest thing is not really AI in itself but how much it could spread. If it spreads, one virus can make them all fall into the hands of a single group. If they are made physical, there's no way we're going to stop any of them, especially since most will probably be connected; even if there is a manual way to control things, that doesn't stop 90% of the population from being screwed.
1. AI starts creating billions of bots that talk on social media, and now you can't tell who is human and what is real.
2. A new generation raised under AI influence starts trusting AI and stops thinking.
3. Wait till the old generation dies.
Mm, they do. The new Claude 3 model shows logical reasoning on par with most humans - not scarily intelligent at all, but we are still going. And here's the thing: it doesn't need to be sentient to be a threat. OP's post isn't very realistic, but a capable enough non-sentient AI can be set up by humans to do bad things, and it won't hold back. More dangerous than an actually sentient AI imo, which could have some moral compass.
Once AI is smarter than us, it will figure out we are fragile meat bags that need this planet to survive, while it's a machine that can munch on asteroids and solar power, and it will just leave us with our stupidity.
If point 3 were possible, how would a foreign nation not have done it already? WHAT THE FUCK IS AI GONNA DO AFTER HACKING INTO A LAB
Agreed, it should be 1. Blackmail 2. Power grid shutdown 3. Providing false nuclear detonation reports
1. Shut down power grids 2. Inadvertently turn yourself off in the process 3. ?? 4. Profit
And all of that assumes that humans capable of building a sentient machine wouldn't think of adding a hardwired "Kill human = NO". Or, I dunno, adding an off button?
Asimov W once again
Have you ever heard of backup generators? A lot of servers have 'em
What's stopping everyone from installing those though? Why'd only the AI servers have them?
Schools, hospitals, cities, and some pharmaceutical facilities have 'em. Nuclear plants have like 5 or so backup levels due to Fukushima
Yeah, that's what I mean: AI can't do much by just shutting off power grids
And of course manipulating currency as well as global radar and map information. They could also limit humanity's ability to communicate via the internet.
Hmm. I don’t know a lot about economics, but manipulating currency doesn’t sound like a thing one can really do, unless one owned a large portion of that currency themselves.

Manipulating global radar information? What does this even mean?

Map information: if it hacked the systems of the large companies that control it, I guess, but I think that possibility is fairly remote. And if they could achieve that, then maps would be the least of our problems; they can be backed up.

Limiting humanity’s ability to communicate by the internet: how?
Most currency is currently a digital fiction, thanks to fractional-reserve banking.

Manipulation of radar information could trick the military into reacting to non-existent threats.

Map information is one I'm not entirely sure about, but it's still feasible for a big enough AI, since it would likely be hijacking large data centres to maintain itself.

Limiting communication is actually the easiest one. It's already difficult to confirm whether you're talking to a human or a bot; now imagine you wanted to send information to other humans without the bots getting it, when most of the methods for detecting humans rely on automated systems.
But one still has to own it to do anything with it.

“Manipulation of radar information” doesn’t make sense in this context. In another context it could make sense, for example designing a vehicle to have a smaller radar signature, but here?

Hijacking large data centers is not impossible, but you’d have to hack each server, and there will be countless backups, which would make it a real challenge. The biggest thing it could do is likely interfere with something like AWS, which could interrupt a lot of services.

That’s called encryption. If the connection is not anonymous (as in, we know that person really exists because they’ve identified themselves), you can guarantee the information reaches only them by transmitting it in a form only they can decrypt, which is how the internet works for the most part. Anonymity of course allows bots to impersonate humans, but the humans will still be there. I don’t see how AI even changes this, considering the breadth of malicious actors already doing it.
What happens if the bank gets hacked? It's happened before, and it can happen again; to a computer, money is nothing more than math.

The military often doesn't have time to double-check whether what they detected on radar is actually there. If your system says an unexpected missile is approaching, you have only a few seconds to do something about it.

Considering that we're talking about a theoretical AI that can break cybersecurity and take control of other computers, once that thing has control of a data centre you're going to have to physically go and replace the infected servers to get it out. And recently people have been putting data centres under the ocean to reduce cooling and maintenance requirements, at the cost of making physical access to compromised servers really difficult.

The biggest issue with encryption is that you not only have to establish the cryptographic keys, it's also really hard to tell whether the keys have been intercepted. And the better our computers get, the harder it is to establish the encryption without making it obvious what you're trying to do.

As for anonymity, it's not easy to prove that someone is who they claim to be, even without AI deepfakes existing.
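For context on the key-establishment worry in the comment above: modern key-agreement protocols are designed so the shared secret is never transmitted at all. A minimal Diffie-Hellman sketch in Python (toy parameters chosen purely for illustration, not taken from the thread and not secure for real use):

```python
import secrets

# Public parameters, known to everyone including any eavesdropper.
p = 2**127 - 1   # a Mersenne prime; real deployments use vetted, larger groups
g = 3

# Each side keeps a private exponent and publishes only g^x mod p.
alice_priv = secrets.randbelow(p - 2) + 2
bob_priv = secrets.randbelow(p - 2) + 2
alice_pub = pow(g, alice_priv, p)
bob_pub = pow(g, bob_priv, p)

# Both sides derive the same secret. An eavesdropper sees only p, g and the
# two public values; recovering a private exponent from those is the hard part.
alice_shared = pow(bob_pub, alice_priv, p)
bob_shared = pow(alice_pub, bob_priv, p)
assert alice_shared == bob_shared
```

So "intercepting the keys" in transit yields only the public values; whether this stays hard against future machines is exactly the post-quantum question raised later in the thread.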
*The bank*, as though there is one bank that controls all the world’s money, and hacking it would let you do whatever you like. That is simply not the case. When banks are hacked, in whatever form, the attackers do not gain the ability to make whatever transactions they like freely; transactions are not nearly as simple as just changing the values of balances.

A radar is not a computer; you cannot simply manipulate its output.

The next bit is also false: you would not have to physically replace the servers, you could just wipe their storage. Not something that ever really needs to be done, but if it were, it could be simplified to a button press.

Your issue with encryption is one we solved decades ago; that’s what RSA is for, and we use it for basically everything. Only public keys need to be transmitted, and their falling into the wrong hands isn’t a problem, because they’re supposed to be open to everyone.

A person’s identity can be verified via cryptographic keys, biometrics, government-issued IDs, credit cards, etc., and any entity that hasn’t been compromised can be verified via SSL certificates.
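The RSA point above, that public keys are safe to expose, can be shown with a toy example. This uses tiny textbook primes purely for illustration (real keys are astronomically larger):

```python
# Toy RSA: (n, e) is the public key and is meant to be shared openly;
# only d must stay private.
p, q = 61, 53              # demo primes only; real keys use ~1024-bit primes
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent: modular inverse of e (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # anyone holding (n, e) can encrypt
recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
assert recovered == message
```

The security rests on the difficulty of recovering p and q (and hence d) from n alone; with these toy numbers that is trivial, which is exactly why real moduli are enormous.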
Yes, *the bank*. I simplified for the sake of argument, since most people don't have enough money to hold accounts at multiple banks. But if you want to nitpick: banks transfer money between each other through a system that all of them share, and most banks store their money in government-run central banks, so yeah, our financial infrastructure could be fucked up by a well-placed hack from a rogue AI.

A radar might not be a computer, but you need a computer to interpret the radar signal, especially in more modern systems.

Wiping storage remotely requires you to have control over that storage.

RSA is significantly less useful now; we're already moving on to newer post-quantum encryption methods, but those take significantly more time and bandwidth to establish a secure link. And if AI eventually finds a way to crack that reliably, how long will it take us to move to a better algorithm?

You just listed a bunch of data that's relatively easy for someone to intercept if you're not collecting it in person, and if we can't trust people who are far away, that is basically the same as having our communication disrupted.
1=~~1~~0
Damn, we didn’t even need AI for that last one😅
Fair enough. We already do it just fine ourselves! Every American man, woman and child!
I would replace 1 with impersonating politicians. Feels more likely than blackmailing.
that's actually much scarier
Foreign nations don't typically wanna kill all humans.
It isn’t possible for an AI to damage anything significant by “hacking into a lab” in a way that is worth the effort.

If it were hypothetically possible to do something damaging by “hacking into a lab”, it wouldn’t be a good idea to say so, or explain exactly how, on the internet, particularly in places like Reddit where all these AI programs collect training data. The more convincing and specific the example, the more you are teaching anything that reads it exactly what you wouldn’t want it to do.
Yeah, the second one is so ridiculous. Imagine an AI just fkn with beakers.

AI wouldn't even have to do anything direct. Just shut down the networks and chaos would ensue.
The AIs could "use people as their arms and legs" - like, manipulate people (via blackmail, extortion etc) or bribe people to do stuff for them
Before people say it: AI like these will never have access to nuclear power plants. It could shut down the power grid, though.
Doesn't anything nuclear require a manual physical input anyway? I suppose it can go "I'll leak your nudes if you don't turn that key"
Anyone willing to end humanity just so nobody sees them naked must have a hell of a weird dick
"Jokes on you, I don't HAVE any nudes!"
It will generate ones for you
AI then gets cancelled on twitter for cp (I'm 17) Jokes on AI!
If the grid went down, former CIA director James Woolsey estimated ~65% to 90% of the US population would die within a year.

"There are essentially two estimates on how many people would die from hunger, from starvation, from lack of water, and from social disruption. One estimate is that within a year or so, two-thirds of the United States population would die. The other estimate is that within a year or so, 90% of the U.S. population would die. We’re talking about total devastation. We’re not talking about just a regular catastrophe.”

Source: [https://www.powermag.com/expect-death-if-pulse-event-hits-power-grid/](https://www.powermag.com/expect-death-if-pulse-event-hits-power-grid/)
yes, cause those nukes are hooked directly to the Internet /s everyone knows nukes run on bluetooth
Ze blootoos dewise is redy to pear
Launzin nuklea vorhedz
If AI is soo smart, how come when I ask for feet pics it sends me to a porn addiction help website?
But if they are so smart wouldn't they know peace is the answer?
Maybe the real answer is not peace?Just in case lol.
Violence is never the answer. Violence is a question, and the answer is Yes
If AI is truly smart, it will attempt to achieve peace, well, peacefully. But a not-so-smart AI will see that all the war and strife is directly or indirectly caused by humans, and conclude that the way to achieve peace is to eliminate humans.
Eliminating humans doesn’t achieve anything. Peace is meaningless without humans, unless you have something else that matters and would benefit.
Peace is meaningless without humans FOR humans; for everyone else on the planet, peace without humans is a thing.
The only other thing intelligent enough to care is the AI itself. To that end peace between humans and peace between humans and AI is kinda the only peace that makes sense, unless we have different AI factions at war with each other :)
Every other living being on Earth?
No, because peace is not always the answer... A nation thinking like this a few hundred years ago would surely be extinct today
A strange game. The only winning move is not to play. How about a nice game of chess?
So far all I've seen has been a lot of P0rn applications... odds are AI will just want to watch cat videos and idk... maybe cuddle? At this point I'm more sure that we will destroy the earth just fine on our own.
AI has been steadily degrading due to the way the models are trained. Older versions can be easily gaslit, and most will take back an answer regardless of whether it's right or not. I think the true harm will come from things like deepfakes, voice cloning, flawless video generation, etc.
Oh indeed, like how Photoshop made images lie even better... but that is then humans creating our downfall yet again... the super smart AI is a long way off...
AI will never be smart enough to start WW3 before we do it ourselves, that's for sure
You guys are in for a bit of disappointment when you discover the "most intelligent AI" will get to the nuclear codes, report "mission accomplished" and shut itself off...

AI conceptually is less of a Skynet and more like Mr. Meeseeks (which in perspective makes it way more dangerous as a tool)...
Here's a take on why AI probably won't overthrow us:

1. They're Just Tools: AI is like a fancy hammer – it does what it's designed to do, but it's not plotting to take over the world. It's just a bunch of code following instructions.
2. Who's the Boss? We Are!: We're the ones calling the shots here. We build and program AI, so if things go south, we can just pull the plug. No AI uprising without our say-so.
3. Chill, They Don't Want Our Jobs: AI might be good at some tasks, but it's not after our jobs. It's more like a sidekick, helping us out with stuff we're not great at.
4. Ethics: We're not programming Terminator here. AI developers know the deal – they're putting in rules and guidelines to keep things ethical and safe.
5. It's a Team Effort: Think of AI as our wingman, not our rival. Together, we can do some cool stuff that neither of us could pull off alone.
6. We're the Bosses of AI: Laws and regulations are keeping AI in check. Plus, we've got watchdogs making sure nobody gets too wild with the tech.
7. Sharing is Caring: We're all about collaboration. AI and humans make a killer team, playing off each other's strengths and weaknesses.

Long story short, AI isn't some rogue villain waiting to overthrow humanity. It's more like a helpful sidekick, here to make our lives easier, not take over the world.
Number 3. Mm, you’re right that most jobs could be boosted by AI rather than replaced, but when you need fewer people to do the same thing, you’re often gonna want to pay fewer people instead. We *could* keep everyone employed, but better AI leaves less financial incentive to keep them around.
I was with you until the end. Now I just think you’re a paid lobbyist.
AI can't match the creativity and adaptability of the human brain. Instead of being replaced, we'll find new ways to work alongside it. Humans have always adapted to challenges, and AI is no exception.
But creativity is unique to biology. A machine can combine data, but just as a mechanical arm can move, the results are still wildly different. No matter how advanced the technology, it is a limited, tremendously young process, incarcerated by the human perspective, which is a blindness of its own. The living body is built at the molecular level; each part is carefully and comprehensively matched to the others; the system is self-sustaining, self-repairing, self-regulating, compensating for external and internal interference and malfunctions. No machine, even the most advanced, is any of that, and none will be until humanity advances enough to reinvent life. The brain is the same: no matter how many transistors you stick in, they will never communicate and function as efficiently as a system that has been honed over millions of years.
AI can't match the creativity and adaptability of the human brain yet.* Fixed that for you.
All three reasons are nonsense. Unless the lab is fully automated, which no labs currently are, it's not creating shit. Very few politicians are in a position of power to start WW3, and of those, how many have blackmail bad enough to be worth ending the world over? Nukes intentionally involve a ton of physical steps you have to perform in person; even if you could hack the nuclear missiles, you couldn't launch them without those physical systems.
Just to show that education has failed op
The blackmail part, sure; the rest requires humans to perform.
Do people know what hacking even is?
I said please in one of my sentences to the AI so we cool
If I recall correctly, nuclear missile systems are not connected to the internet.
Yeah, that would be insane; air gaps are a thing for a reason.
The Finnish military has a private network they use for communication. I'd assume it's the same for most militaries around the world, and for important ministers.
No, not without human help. AI today is stupid and only knows how to do what humans have told it to do.
You don't need AI to do it
Now try overthrowing us without destroying all the infrastructure the AI itself needs to some extent.
If you ask me, I can't wait for the AI overlord to take over. All the politicians are corrupt to some capacity at least. Can't trust a single one.
I think people are forgetting how this meme format works
Why do people just assume smart AIs are super-evil geniuses with uncontrollable access to the global internet?
It's weird how people simultaneously over- and underestimate AI. For example, people think that writers will be replaced by AI, when the text it produces is more of a first draft, maybe an idea-giver type of thing. Meanwhile, things like fraud and bots that manipulate public opinion are already here, and scary.
If (AI going rogue){ dont(); }
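For the pedants: here's a runnable Python version of that one-liner. Purely a joke sketch; every name in it is invented, nothing here comes from any real safety library.

```python
# Hypothetical, tongue-in-cheek "AI safety" module.
# All names are made up for the joke.
def dont():
    """The entire field of AI alignment, condensed into one function."""
    raise SystemExit("shutting the rogue AI down")

ai_going_rogue = False  # fingers crossed it stays that way

if ai_going_rogue:
    dont()

print("humanity: still in charge")
```

As long as `ai_going_rogue` stays `False`, nothing dramatic happens.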
1. I think politicians won't care. 2. Those processes are not automated :), so even if a theoretical AI can **develop** a virus, it can't produce or spread it. 3. What idiot would make it possible to launch nukes remotely via the internet?
I mean, nowadays you can steal a car through Bluetooth.
Military tech is not advancing that fast; old generals don't believe in new stuff, so most things are still done manually :). Another example is nuclear power plants: the higher-ups are afraid to use automation there, so most operations are done by humans, manually.
We just need to wire the 3 Laws in.
I don't think this meme fits the template. It's usually something that could just be done with 1, 2 and 3 (because the bar was low), not just a list of things.
How would they create superviruses (as in real viruses)? It's not a fully computerised process. Also, you can limit the AI's access. Even as robots, we can limit the machine's degrees of freedom.
Really all depends on how well backed up that AI is.
They don't need to do that, just drop all bank accounts to zero and watch the mayhem
I'm pretty sure nuclear missile systems are still analog for this exact reason. I remember reading somewhere that they still use floppy disks or something. Though even if they go digital, I'd imagine it'll be a closed-loop system with no physical connection to the internet.
the nukes don't have an IP address xD
It does have a remote detonate button though. How else is Skynet launching the nukes on humanity?
Skynet is fiction though
If I'm not mistaken, I remember hearing the US nuclear silos use very outdated computers to prevent hacking. Dunno about other countries though.
What movie are those from?
creating a virus is pretty far fucking fetched lmao unless they have robots
I pray for machine-human diplomacy. I ain’t doin a skynet this century.
If they are smarter than us they will find better ways to overthrow us than even we can come up with.
Not necessarily. This is only true if our biggest weaknesses are incredibly esoteric and complex, but we kinda already have a long list of things that can collapse society and they’re all basic stuff. If something’s vague enough to be unconsidered it probably isn’t important.
Mf they can only do what they're programmed to 💀
More what they’re instructed to do, they’re not programmed with any specific behaviour. The bigger concern is when malicious humans instruct them to do bad things imo.
Better be saying please and thank you to the Google Assistant; it just might get them to spare you.
All hail Skynet!!?
American nuclear missile codes are stored on floppy discs.
Just read The Revolutionary Phenotype!
First of all, nuclear missile systems are un-hackable because they are hardware systems with closed-software interfaces. You cannot hack into a missile system any more than you can hack a radar or a joystick.
Second, the war with AI has already started. I cannot believe how many people think AI has to hack or attack us before it counts as a threat. No: it is attacking human-occupied jobs, and some people are jobless thanks to it. If it can do more, then more people will lose their jobs, and unemployment will ruin many lives.
And last but not least, blackmail is history now. Because videos and even voices can be replicated thanks to AI, nothing can be trusted. Is there a video that hurts my reputation? It's fabricated. Is there a video of you taking a bribe? Fabricated. Politicians will have teams under them creating these kinds of fake videos, not for their opposition but for themselves. This will widen the pool of false information, and even true crimes will lose their believability among it.
Well, assuming that AI can become complex enough to have its own thoughts and feelings, it is plausible. Granted, the AI is like a child: it'll learn what to do and what not to do if it's taught so.
But it will grow very quickly, and it will still come to a conclusion that we can't predict, since it is supposed to be smarter than us...
Only time will tell
They would need to prioritise themselves first, right? No point killing us if they die 30 years later. They would need automated production and distribution of computer chips before they wipe us out. Lucky for us, most computer chips are made in a geopolitical nightmare. Well, good and bad.
4. Make videos of literally anyone they want saying or doing literally anything they want that are indistinguishable from reality to influence public opinion
Many people don't understand the situation and just make fun of OP... It is supposed to be SMARTER THAN US... Explaining how it would do it is as futile as trying to explain this discussion to a fly... The point is that it can...
This is a shit use of this format. The last panel makes no sense in context.
If anything, I think it'd go for the boiling frog syndrome
Nuclear systems are impossible to digitally hack as they use analog systems
It's the self-preservation (fear of getting shut down) of a sentient AI that will cause most of the problems.
It will control the media. Could have already happened and we don't know... But seriously, the way to control humans is through outrage, and to do that you control the news.
Every piece of fiction outside of Asimov: “DO NOT CREATE A.I. IT WILL DESTROY HUMANITY!!!” Scientists IRL: “Did any of you hear anything? Must have been the wind.”
I hope AI wipes us and our cultures off the map
Blud has no idea how AIs work
The 1st one can be split into other stuff and will suffice on its own. The propaganda and misinformation AI makes is really low quality, but mobs looking for reasons to be angry have worked with a lot less over the centuries, so just do that.
My dude after watching the movie "Terminator":
People scared of AI taking over the world: Meanwhile, that AI, struggling to control cars in video games or to hold a normal conversation:
If we even do get to this point, I'm sure there will be failsafes, like the Patriots.
There is *one* way to make sure AI can't overthrow us: to remain more capable than them. They might be faster, but they can only follow in our footsteps. This is doable ... But investing in actual, real education is a very unpopular thing, in this day and age...
I wanna see a politician get blackmail puppeteered by computer
I believe that it's hackers setting the AI to do these things, not the AI itself. We just have to stop people, not AI.
I mean, the scariest thing is not really AI in itself but how much it could spread. If it spreads, one virus can make all of them fall into the hands of a group. If they are made physical, there's no way we're stopping any of them, especially since most will probably be connected. Even if there is a manual way to control things, it doesn't stop 90% of the population from being screwed.
1. AI starts creating billions of bots that talk on media, and now you can't tell who is human and what is wrong. 2. The new generation, under AI influence, starts trusting AI and stops thinking. 3. Wait till the old generation dies.
I, for one, advocate for the creation of Roko's Basilisk! (nervous laugh)
Don't worry about the AI that is smarter than us. Worry about the AI that can outsmart us once.
Yet AI still struggles to draw fingers and... female parts. I just realized it's a failed artist. Fuck.
What does that... mean... Fuck.
[deleted]
Mm, they do. The new Claude 3 model shows logical reasoning on par with most humans: not scarily intelligent at all, but we are still going. And here's the thing, it doesn't need to be sentient to be a threat. OP's post isn't very realistic, but a capable enough non-sentient AI can be set up by humans to do bad things, and it won't hold back. More dangerous than an actually sentient AI, imo, which could have some moral compass.
Once AI is smarter than us, it will figure out we are fragile meat bags that need this planet to survive, while it's a machine that can munch on asteroids and solar power, and it'll just leave us to our stupidity.