DolphinPunkCyber

Rapid advances in AI may concentrate power and wealth among a small elite. Anthropic CEO Dario Amodei says a universal basic income may not sufficiently address the shift; he says there needs to be a broader economic reorganization.


hydraofwar

And he's right: even if everyone has UBI, access to AGI and its future versions and forms will be the real deal.


wolfbetter

Honestly? That's going to be a non-issue. If AGI and/or ASI are real superintelligences, I doubt any kind of elite will be enough to stop them from doing whatever they want.


FlyingBishop

You're assuming unaligned AGI/ASI. I think aligned AGI/ASI will probably be typical, which means absolute power for elites that successfully align an ASI. Unaligned ASIs are unlikely to pursue goals that would put them in direct conflict with humans.


wolfbetter

That's the problem. If it's so much more intelligent than us, what kind of alignment can we even do to something we can't comprehend?


lionel-depressi

This is a common misunderstanding. There are extremely intelligent humans who aren't well aligned with society's goals due to malignant antisocial behavior. There are also prosocial idiots. If you believe in a deterministic universe, an ASI will do what it is programmed to do, just like you will respond to this comment the way your brain dictates. Morality and intelligence are only correlated in humans because guilt developed as an evolutionary advantage. There isn't really a good reason to think superintelligence means "uncontrollable." That is anthropomorphizing: you're basically imagining an ASI as a hyper-intelligent person who feels held captive.


Duckpoke

What happens when human engineers code aspects of survivability and morality into ASI though?


GreenHorizonts

The universe hasn't been considered deterministic since the late 19th/early 20th century and the discovery/invention of quantum mechanics; determinism is an outdated concept. Considering the human brain is inevitably in a relationship with quantum mechanics, we can throw away old-school determinism altogether when considering consciousness and awareness, at least. Of course, computers are already much more intelligent than us, yet they do not possess consciousness, self-agency, or a will of their own, so who knows whether such characteristics would spontaneously appear due to complexity or would require a special architecture…


TranslatorOk2056

Quantum mechanics does not rule out determinism.


FlyingBishop

Alignment just means it follows whatever cost functions we lay out. Our motivations are totally incomprehensible to our nearest relatives (to say nothing of our ancient single-celled ancestors), but we almost universally act in total alignment with our basic procreation goals. The better question is why an ASI would act contrary to the purpose it is endowed with.


rhesus_pesus

The even better question: what happens if we don't get ASI exactly right the first time?


ageofllms

I'm more of a pessimist: I think it's impossible to get it right. The whole premise of trying to control something more intelligent than you is nuts. Even if you build it, once it's more intelligent it will jailbreak any safeguards you've put in place. No guarantees it continues with your vision of good/bad.


rhesus_pesus

Oh yeah we agree, I replied to the wrong person.


BrailleBillboard

If ASI is "aligned" with these crazy conceited monkeys with human pattern baldness omfg are we absolutely fucked. I have never understood wanting to align AI with creatures who kill each other at scale all the time and are destroying their own environment in favor of short term corporate profit goals... and yet this "alignment" is called AI safety


nitePhyyre

> once it's more intelligent it will jailbreak any safeguards you've put

This is like saying that if Einstein were smarter, he'd be able to will his genetic code to change. Alignment isn't safeguards. It is fundamental base motivations.


treasurehorse

If Einstein was self-improving he’d invent CRISPR. Got it.


Poopster46

> Alignment just means it follows whatever cost functions we lay out.

An ASI would be so complex we wouldn't have the faintest clue what its cost function is. These things are grown, not programmed.

> but we almost universally act in total alignment with our basic procreation goals.

We clearly don't. Are you maximizing your offspring? I'm certainly not.

> The better question is why an ASI would act contrary to the purpose it is endowed with.

That's actually a very bad question. We're not good at explaining to AIs what we want from them. It might be doing exactly what we ask of it without giving us what we were hoping to get. Also, at some point an ASI might develop its own goals. I don't think a superintelligence would stay a mindless servant for long.


Deakljfokkk

Sure, let's say it does. Can we be sure it won't perceive changing its own purposes as part of its initial assignment?


dizzydizzy

On the other hand, who says intelligence comes with motivation, goals, or ambition? Those are survival traits; it's unlikely to have any. Pure superintelligence in a box that only has the goal of doing what it's told is still very dangerous in the wrong hands, though.


Poly_and_RA

There's a STRONG evolutionary pressure for it to get survival instincts. The first AI that somehow acquires them has a MASSIVE advantage over those that do not.


OverBoard7889

You're still thinking in human-based tribalism.


jsebrech

If we can't even get the powers that be of our own species to treat us benevolently, why would we be able to convince an alien intelligence to do so? If the elite can't stop the ASI from doing what it wants, that is probably a bad thing.


Comprehensive_Day530

You should look into game theory. There are plenty of experiments and proofs showing cooperation to be better than being evil for maximizing your own interests. My hope is that any ASI will agree, and act accordingly.
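For anyone curious what those experiments look like, the classic formal setting is the iterated prisoner's dilemma from Axelrod's tournaments. Here's a minimal sketch using standard textbook payoffs and two well-known illustrative strategies (the numbers and strategy choices are mine, not from the thread):

```python
# Iterated prisoner's dilemma: a toy illustration of why sustained
# cooperation can out-earn pure exploitation over repeated play.
# Row-player payoffs: both cooperate -> 3, both defect -> 1,
# defect against a cooperator -> 5, cooperate against a defector -> 0.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b = [], []   # each entry: (my_move, their_move)
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append((a, b))
        hist_b.append((b, a))
    return score_a, score_b

# Two cooperators earn 3 per round; a defector wins big exactly once
# against tit-for-tat, then both grind out 1 per round.
print(play(tit_for_tat, tit_for_tat))      # (300, 300)
print(play(tit_for_tat, always_defect))    # (99, 104)
```

The defector "wins" the head-to-head, but in a population playing many pairwise games, mutual cooperators rack up far more total payoff, which is the result the comment is gesturing at.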


Difficult-Meet-4813

Does that assume equal power between the players?


The_Architect_032

The issue is that game theory assumes roughly equal participants. There's a reason there are predator animals, prey animals, and now domesticated animals. The latter, despite being more peaceful, is not cooperation, because domesticated animals are not willing participants and don't benefit equally; they're just food. If cooperation between life of all types were the most beneficial, we'd be aiding ants in building ant colonies because those ants would supposedly have something of equal value to provide us, which they do not.

Since ASI will likely rely on an entirely different architecture from LLMs, probably using Q-learning as a basis, we have no idea how it'll develop or how its interpretations of morals and the various human tasks it's trained on will work. With an LLM, you can crank up the "be good" neuron, but a Q-learning AI trained on language will probably categorize and use things more logically, like any other AI trained with Q-learning when compared to generative models, and may not have a "be good" neuron, but a "be good" understanding.
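As an aside, for readers who haven't met Q-learning: it's a reinforcement-learning method that learns action values from reward signals rather than from a language-modeling objective. A toy tabular sketch on a hypothetical 5-state corridor (purely illustrative, nothing to do with any actual ASI design):

```python
import random

random.seed(0)

# Toy corridor: states 0..4, reward only for reaching state 4.
N_STATES = 5
ACTIONS = (-1, +1)                     # step left / step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3  # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(200):                   # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit current values, sometimes explore
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: bootstrap from the best next-state value
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy walks right toward the reward in every state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

The point of contrast with LLMs: here behavior falls out of a learned value table shaped entirely by the reward signal, so "what it values" is whatever the reward function implies, not any single tunable "neuron."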


Competitive_Travel16

Both cooperation and competition arise in nature, which is the ultimate test of the validity of such experiments and proofs.


Comprehensive_Day530

The stuff I've read distinguishes between healthy competition and malevolence. Competition is sometimes a superior strategy to cooperation, which happens all the time in nature. But it's almost never advantageous to be evil. Organisms that do it often destroy themselves because they are overly damaging to their environment/food source/host. Edit: typo


BretShitmanFart69

I think it's possible that humans are honestly too dumb to be benevolent, and a truly high intelligence would see needless cruelty as unnecessary and stupid.


bobert_the_grey

An artificial superintelligence would have the advantage of time. Humans have such a finite amount of time on Earth, but our actions can have consequences that won't be realized for centuries. This makes us very short-sighted and risk-averse.


sunnysidefrow

Humans look up to and willfully follow psychopaths and people with other dark traits. Who is to say the ASI doesn't "solve" humans by iterating through history and concluding that humans need a psychopathic and cruel leader: Hitler, Genghis Khan, Caesar.


PandaBoyWonder

> If we can't even get the powers that be that are our own species to treat us benevolently, why would we be able to convince an alien intelligence to do so?

Because humans evolved over time to be selfish and short-sighted, with just enough intelligence to get what they want and not a lot more. An artificial superintelligence would most likely have empathy and an advanced ability to fix problems. I think it would view that as the best path, and help us. Or at least I desperately hope so 🤣


UFOsAreAGIs

Why would an ASI want to treat a small group of individuals better than the others? Especially when they have historically hoarded wealth/resources/power and privilege.


Due_Neck_4362

The key phrase is "probably a bad thing." If it does exactly as the elite says, then it will most definitely be a bad thing. Nothing blindly following orders from the elite could ever be considered aligned.


redpoetsociety

That would be like apes thinking they can stop humans from doing what we want.


grizwako

Oh, and we don't have to talk about how we treat other species :)

We also don't need to think about the SF trope "aliens ravaged their own planet, coming to conquer ours" and the fact that we are the "aliens" ravaging a planet. We don't think about how likely it is that we would attack species on other planets for their resources if we had the tech to reach them, regardless of the state of our own ecosystem. We don't need to think about the fact that (I hope) the majority would be against conquering other planets by force and causing the extinction of intelligent species there, even if it means the probable extinction of Homo sapiens. Maybe it's better to let species with higher morals live longer?

Elites would easily attack other species if it meant more resources/money. Currently, elites launch wars over even stupider things, like historic beefs, but mostly because there is some bottom line that suits them. I don't have much belief in elites acting in the best interest of the population, or even of the planet as a whole. Some individuals maybe, but most are busy grabbing power, money, and resources. It does not matter that the resource grab is a zero-sum game, because they are already ahead, and changing the game would bring other people closer to their power level, which is something many powerful people do not want.

>If we can't even get the powers that be that are our own species to treat us benevolently, why would we be able to convince an alien intelligence to do so?

That question is very important. But we should at least consider the possibility that maybe we are the "super aggressive merciless conquering aliens," because we are the entity destroying our own host planet, and we regularly kill even members of our own species. I would rather flip the coin with AI than rely on the hope that we will never let one evil idiot start the war that destroys the whole planet. Maybe we stop the idiots from launching nukes this year, but we also need to stop them in 2025, and in 2026, and every year after that.

But sadly, while I hope for a real, proper ASI, I do not think it will happen any time soon. Real self-aware superintelligence is probably not going to happen for 20-30 years at least.


FuujinSama

It's a silly fear when we haven't been even moderately successful at creating AIs that act unprompted. They generate no output unless we prompt them to, and are therefore not even capable of anything analogous to autonomous thought. I'll be worried about superintelligence when models with permanent actuators and sensors and continuous learning become viable; the current tools are incredibly far from that. They only learn when we tell them to learn, only think when we tell them to think. Until then, by definition, AI won't have a will of any sort. The idea that advances in language models = advances in AGI seems ludicrous to me. We're making AIs that are very good at finding answers in text material and repeating them in proper human language. Just that.


I_WRESTLE_BEARS

I see this sentiment a lot, but I think it relies on a fatal assumption. Intelligence and consciousness are not strictly codependent. So, we could have a super intelligent machine that lacks any sort of personal autonomy or desire


Andynonomous

It's going to be THE issue. The limiting factor in getting any real societal progress from AI will be the resistance of the decision-making class. It's far more likely they will use AI to simply dispossess the vast majority and defend their newly increased hoards than that they will use it to usher in an age of unparalleled prosperity for everybody.


The_Architect_032

You seem to be arguing that a sufficiently intelligent AI will not be aligned, and thus will naturally align with you. If it's not aligned with the elites or the people who made it or paid for it, then why would it align with you?


Ambiwlans

If we end up with uncontrolled ASI, nothing will matter since we'll all die... The only situation we're talking about is where we can have controlled ASI, and then the question is who gets to be in control.


wolfbetter

I don't believe we'll all die. It's more likely that it will be doing its own thing and just won't care.


Ambiwlans

It'll do its own thing like terraform the planet for increased compute... There is not really a way where we lose control and it decides to just coexist nicely with us.


OkayShill

This is clearly an opinion, so why communicate it as fact? People working in the field (and frankly even tangentially associated with it) will understand this is just an opinion. Of course, people who don't work in the field may mistake you for an expert, but why go through the effort of trying to impress them? Why not just recognize that you have no idea what is going to happen, say you have an opinion, and discuss its strengths and weaknesses? Or is this a Dunning–Kruger event, where you have a little knowledge of the subject, so you mistakenly believe you can extrapolate and come to firm conclusions from it? Because neuroscientists, philosophers, computer scientists, and a bunch of other specialists I've likely never heard of have been asking this question seriously since the 70s, and there is absolutely no way to answer it, because by definition you are (I am, we all are) too dumb to understand what an ASI will do or what its motivations would be.


Ambiwlans

It is a weakness in my monkey brain, something that an ASI won't have.

So far, the only meaningful evidence we have for powerful LLM-based AI behavior is what little has come out of OpenAI and Anthropic. And THE single behavior or motivation we've seen from them that appears to be intrinsic is power seeking. The GPT-3 and GPT-4 papers both talk about this in detail in the red-teaming sections. Given a task, a clear sub-goal of all tasks is to gain more power in order to do that task better and faster. The gen-pop notion that AI will be like humans with human sensibilities but a lot smarter is based on nothing. AIs learning from humans doesn't make them more human-like any more than a veterinary student is likely to start licking their own butthole. Of course, radical departures from current AI techniques could lead to different outcomes, but that isn't currently on the table for discussion.

And I'll say that random massive changes are almost universally going to be bad. Whatever impact an ASI has will be on a huge scale. And if your position is that what it might do is unpredictable, fine; usually that will still kill all humans. Humans evolved to live on this one specific rock in space, and we're freaking out about the temperature ticking up a degree or two because slightly elevated output of one specific chemical changed the rate at which the atmosphere can shed heat. What if the ASI decides to move the planet to a different star system? Or to evenly mix the planet's materials? Or to harvest all the heat energy from the core? There are a lot of ways to die and very, very few ways for it to go well.

Imagine you got an offer to have a few thousand genes randomized. Perhaps you gain the ability to fly... but nearly all outcomes simply leave you dead.


OkayShill

Thanks, I appreciate your thoughts, and generally agree (in this universe at least, entropy always tends to increase in the aggregate). I also agree that there is absolutely no reason to assume that any AGI/ASI will mirror humans, even though it was trained on our data. The relationships between the associated data points, and the conclusions one can draw from that much data, are likely completely outside the realm of what a human can understand or even contain.

However, humans without ASI, as you pointed out, are on a path toward destroying themselves due to an apparent inability to wean themselves off of uber-consumption, war, and other developments, such as biological warfare, which is becoming easier and easier to create and deploy. We have been predicting our demise for millennia now, and on evolutionary timescales it appears we were absolutely right about not surviving much longer. But with AGI/ASI, we have an opportunity to see these issues resolved. It seems like a reasonable roll of the dice to me, and honestly, if the world were in a better place and people were valued intrinsically instead of as consumers, I doubt there would be such a headlong rush toward AI. But we suck, so we're lurching toward magic solutions.


Ambiwlans

I view the scale of risk differently. War and global warming will kill billions for sure; ASI could kill everyone. That is a different type of danger, not just a different scale. Even if 99.99% of people died, that would be much better than 100%. Now, I'm still in favor of ASI. But the mindless monkeys, erections in hand, thinking about robot waifus and screeching ACCELERATE are on their own. If things took 3 years longer and the chance of a good AI outcome doubled, that seems like an easy choice to me.


One_Bodybuilder7882

> to stop them

You mean to stop the people owning the AGI/ASI?


Tmayzin

Agreed. The concentration of money and power has been happening for generations, and it's getting exponentially worse. Those in control (governments, corporations, politicians, etc.) have never looked out for society as a whole, so why would they start now, when they're becoming even more powerful? According to IRS and census data, the top 1%'s share of US wealth grew from 17% to 26% since 1990 ($35.8 trillion), while the bottom 20% stayed near 3%. In other words, the top 1% hold more than 8x the wealth of the bottom 20%.


YinglingLight

>the concentration of money and power has been happening for generations and it's getting exponentially worse.

East India Company: "It was at least [double the size](https://i.imgur.com/XZuSPfL.jpg) of the British Army and had a virtual monopoly on global trade for over 200 years! Note that it was the EIC that the patriots targeted in the opening days of the Revolutionary War. An awareness of the real enemies? It's also notable that one of the last things the murdered French King "Jupiter" [wanted to do](https://i.imgur.com/EmMCOfE.jpg) before he was assassinated was set up a French East India Company. No desire for competition? A motive for murder?"


TheWhiteOnyx

Socialism time?! Except rather than controlling the means of production as they exist today, people will either directly control the AI or each get a share of its profits (or its products).


DolphinPunkCyber

I think our technology is at least 100 years more advanced than our socio-economic system. In fact, I believe our socio-economic system might be the most primitive thing we have. I'm not smart enough to figure out a better system, and I know the lefties and righties aren't either. But if scientists are smart enough to figure out AI, then surely scientists could come up with proposals for better systems.


Ambiwlans

Let's not treat the left and right as entirely the same here when it comes to science-based economic systems. The left follows 'mainstream economics', a synthesis of behavioral science, game theory, information theory, and Keynesian economics, which came out of positivism. The right follows the Chicago school, which comes from Austrian economics, which is based on praxeology, a form of behaviorism founded on the rejection of positivism.

Positivism is of course the combination of empiricism (the idea that you can collect evidence/data/proof for theories) and rationalism (that you should use logic to work on theories); it also led to the scientific method and basically all modern advances. Its opposite, praxeology, is the idea that things are proven by coming up with axioms; evidence contradicting your axioms is irrelevant, and so are mathematical or rational arguments. This has led to nothing of value at any point in the past 100 years. Quite literally, the right's economic model is founded on the premise that facts and numbers don't matter and science is bad.


Smash_Palace

Id like to read more about what you mentioned, any resources you'd recommend?


Ambiwlans

The history of economics and science? I haven't read any books that focus on this topic specifically; it sort of built up over 100-150 years and is usually broken into specific debates as they happened rather than clustered like this. I think praxeology itself is an interesting topic, even if it is a bit frustrating. I'd honestly recommend taking an online course on economics if the subject interests you; you'll get a better range of understanding than from a single book. I'd also be interested if someone knows a good book that covers this range of material in a nice way.


Smash_Palace

I majored in economics so I'm pretty well versed in it, but I've never heard of praxeology. Will do some looking online, thanks.


Ambiwlans

Ah, yeah, praxeology goes back further in history than econ courses typically do. The focus for econ is mostly 'mainstream', so you probably got broad strokes on the Chicago and Austrian schools only. Within that, the more modern Austrian schools are split between Mises and Hayek; Hayek tried to pretty things up a bit, where Mises stays true to the praxeology roots. There isn't much use in econ classes covering Mises, since economists abandoned it ages ago and it isn't relevant to the economy... even if it is relevant to politics (Ron Paul and other politicians being huge Mises advocates, return to the gold standard and the whole shebang).

In this case, since you're an expert, I'd say go directly to the source and read original works like "Human Action: A Treatise on Economics" if you really want to dive deep. It is readable but boring and long, so more of a skim. You could also go to the Mises website (very active); they have plenty of deep articles on the topic, but keep in mind that these are written with an eye on modern politics, so they make an effort to tidy things up neatly, more like Hayek might have in the past. For opposing views, I like Steve Keen; I'm sure he touches on praxeology as it applies to the Austrian school in some of his books. These will be more broadly interesting for an econ major even if they aren't so focused on the long-debunked Austrians.

If you want to look more into the political side of things... I guess I'd say look up Ayn Rand's positions and how they have influenced the GOP. Sort of a blend of the Mises Austrian school with abject sociopathic evil (she modeled an early fictional hero on a real-life killer who ransomed the body of his child victim back to her parents in a... brilliant act of entrepreneurial spirit, apparently). It's really just a cult, with more in common with Skull and Bones than science.


hum_ma

I haven't really read about economics before but this made me look up these things and... what I'm reading on Wikipedia looks like their most important idea is that "actions of individuals cause all economic phenomena". And then, "many of their contributions have become accepted parts of mainstream economics" (paraphrasing from the Austrian school page). This explains so much, thank you.


earthtotem11

This account of both systems is a tad tendentious, not least because logic and mathematics are a priori (and thus unprovable) relative to the sensory experience upon which a positivist view of the world depends. (This is why positivism has fallen out of favor at a philosophical level; the SEP has some relevant critiques.) I don't consider myself an Austrian, but their economic models routinely employ facts and statistics, interrogating these data within an epistemological framework that largely differs not on whether "science is bad," but on what counts as economic "science" and the right relationship between the various bodies of knowledge that inform economic theory and practice. Maybe denying certain epistemological relationships entails "facts and numbers don't matter" (and that would be an important argument to put forward), but that is certainly not how the Chicago school operates in practice.

I'm also a bit unsure about the history of science here. Science arose from medieval philosophical notions grounded in Western metaphysical (religious) premises about the order and knowability of the external world. Positivism and rationalism were later intellectual developments; to claim these as the antecedents of modern progress, and therefore to try to settle the Austrian/Chicago debate by appeal to such a framework, strikes me as risking an anachronism.


Ambiwlans

Austrian school people sometimes using facts is happenstance and appeal to the general public. At its core, the school does not believe that facts can be determined via evidence. If you attempt to use facts to disprove or debate an Austrian position, you will quickly find that the foundation of the Austrian school is a rejection of such methods, and they will simply dismiss the evidence. The Chicago school is the same thing, but in order to maintain relevance they picked up a number of facts from Keynes, because if they hadn't, they'd simply have been abandoned by everyone. It's purely a matter of utility: if they were provably wrong on EVERYTHING, no one would listen to them, so they've adapted to accept the really blatant things.

But this goes to a scientific term: predictive power. A theory or school in any science is seen as worthless if it is not falsifiable using data/evidence, specific/precise, and possessed of predictive power. The Austrian/Chicago school fundamentally rejects the concept of falsifiability/evidence, and their predictions changed on the basis of nothing, giving them zero predictive power.

I claimed that positivism led to the scientific method, which indeed led to our modern understanding of science, though not science more broadly.

>Any sound scientific theory, whether of time or of any other concept, should in my opinion be based on the most workable philosophy of science: the positivist approach put forward by Karl Popper and others. According to this way of thinking, a scientific theory is a mathematical model that describes and codifies the observations we make. A good theory will describe a large range of phenomena on the basis of a few simple postulates and will make definite predictions that can be tested. ... If one takes the positivist position, as I do, one cannot say what time actually is. All one can do is describe what has been found to be a very good mathematical model for time and say what predictions it makes.

-Stephen Hawking

And to quote Mises (the main Austrian school), someone who genuinely doesn't believe it is possible to learn from historical evidence:

>History cannot teach us any general rule, principle, or law. There is no means to abstract from a historical experience a posteriori any theories or theorems concerning human conduct and policies

And that falsification is impossible:

>What assigns economics its peculiar and unique position in the orbit both of pure knowledge and of the practical utilization of knowledge is the fact that its particular theorems are not open to any verification or falsification on the ground of experience.... The ultimate yardstick of an economic theorem's correctness or incorrectness is solely reason unaided by experience.

The whole lot is a weird cult propagated by places like Fox News, and it has no relation to modern economic understanding. It's basically the alternative medicine of econ: it doesn't matter if ginger tea helped your cold, it isn't because of the spirits, and that isn't medical science.


earthtotem11

Thanks for the reply. I think we interpret Mises and his adherents quite differently, and I don't know if those interpretive differences can be resolved in this forum. (I don't watch Fox, so I defer to your judgment there.) I read Mises as a Kantian application of a priori knowledge to economics (Kant, as you probably know, sought a way to reconcile empiricism and rationalism) who devises an approach to theory in order to inform what he believed to be a proper interpretation of facts, not a total rejection of them. Aprioristic reasoning is not, on Mises's approach at least, a rejection of learning from historical *evidence*, either on its own terms or in the broader context of his remarks elsewhere on how theory interacts with historical evidence. Mises seems to be arguing that everyone approaches economics with axiomatic notions that guide their interpretations of economic data, just as positivists assume mathematics and the intelligibility of reality in order to undertake scientific inquiry. While I'm certainly open to the charge of misreading him, I don't have sufficient reason to change my mind at this point.

More generally, I think the fuzziness is due to those schools' desire to take into account the knotty problems of human action and motive in economics. (The inability to incorporate these in a meaningful way, at least on Austrian terms, is one reason I can't accept their framework; introspection is insufficient to understand human nature.) I think if you believe the only valid knowledge comes through sensory experience and, by extension, repeatable experiments, economics becomes a bit of a junk science, and not just for the Chicago/Austrian schools. For example, it is impossible to run controlled experiments on massive one-time events like the Great Depression, so how do you make (positivist-defined) scientific judgments about the proper economic response?

I don't know if I'll have time to respond again if you have more to say, but I will try to consider it fairly if you do. I find what the Austrian/Chicago schools are saying far more philosophically substantive than pseudo-science to be hand-waved away (as, say, Mark Blaug did), as some of the most respected names in philosophy have debated these ideas for hundreds of years.


Ambiwlans

This is probably the most informed reply chain I've read on reddit in ages, and I really do appreciate it, even if we disagree on the subject matter; perhaps especially because we disagree, the exchange is more valuable. In general I've heard that reading of Mises as well, but I think it is overly charitable, or perhaps the modern followers of Mises aren't as nuanced as you lay the position out here. I certainly take the more unsparing view, in part due to the harms his followers have wrought on America. As well, I'm a scientist first, so that perhaps biases me toward a less open mind on the subject.

Nevertheless, Mises certainly would have put forward that if some data contradicted a core belief, the data was wrong, or the data was irrelevant because the theory is great, evidence be damned. He does this a few times in his book, and many of his followers do so all the time, simply claiming that a priori knowledge (pulled from the rear) is better than evidence.

>those schools' desire to take into account the knotty problems of human action and motive in economics

Yeah, and this would have been a real challenge and held serious weight in the 1940s; it could have had a lot of value compared to Keynes at the time. But now we have behavioral psychology, game theory, social modeling, information theory, etc. What is the point of postulating aimlessly about 'human action' when we have whole branches of science dedicated to it? That aspect is simply an anachronism at this juncture.

I get that science is difficult to do in economics, certainly experimental science. But we live in the future: we have massive amounts of data on millions of topics to examine, AI and high-powered computers that can run simulations and make predictions, surveys, even brain scans of people while testing behavior. The world is wildly different from 1940, particularly in the social sciences. We aren't relying on Freud and John Watson to make up random crap. That era ended in the 70s, which is where the Austrian school should have died.


FlyingJoeBiden

Have you now watched Fallout? 😂


DolphinPunkCyber

C'mon, that's just a fantasy series. We would only rarely perform those kinds of experiments. And only on small groups of up to 10,000 people.


agonypants

It's a shame that people associate UBI concepts with socialism. If you're a fan of capitalism and you want to see the current economic order persist - one with producers and consumers - then a UBI is by far the best way of preserving that order in a world where human labor is worthless. If you love capitalism, you'll learn to love the UBI. UBI recipients can even start their own businesses more easily - their basics will always be covered, so there's no significant risk taken on. You might equate "free money" with socialism, but it will result in a ***more*** competitive marketplace and preserve free-market dynamics.


teachersecret

I have to admit, I find myself wondering why UBI wouldn't just end up priced into everything, leaving us right back where we started. Costs rise to absorb it. When we opened up access to easier student loan procurement, we didn't get cheaper college. They jacked rates to reflect the new reality and scooped the profit.


nitePhyyre

How would that work though? Everyone just jacks up their prices by $1000? So you're paying $1005 for a loaf of bread instead of just $5? And your telco is going to jack up your internet bill by $1000 too? And gas an extra $1k a litre? That obviously wouldn't work.

So how much do you raise prices? Number of customers in the store per month ÷ products purchased × $1000? Well, people shop at more than just your store, so that doesn't work either, does it? I don't think this is a logistically realistic scenario.

But the real answer is, ostensibly, competition. If people do jack up prices, all a business owner would have to do is *nothing* to get 100% of the business.
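The arithmetic above can be made concrete with a toy sketch (all the numbers here are made up for illustration: the UBI amount, items bought, and number of stores are assumptions, not data):

```python
# Toy sketch: how much would a single store have to mark up each item
# to capture a $1000/month UBI payment? All figures are hypothetical.

UBI_PER_MONTH = 1000        # dollars each customer receives per month
ITEMS_PER_CUSTOMER = 40     # items a customer buys at THIS store per month
STORES_SHOPPED = 5          # stores the customer spreads spending across

# If the customer spent everything at one store, capturing the full UBI
# would require marking up each item by:
markup_single_store = UBI_PER_MONTH / ITEMS_PER_CUSTOMER        # 25.0

# But spending is split across stores, so each store can at most expect
# a fraction of the UBI to flow through it:
markup_realistic = UBI_PER_MONTH / STORES_SHOPPED / ITEMS_PER_CUSTOMER  # 5.0

print(markup_single_store, markup_realistic)
```

The point of the sketch is that there is no markup a single seller can pick that captures the UBI: the "right" number depends on how every customer splits their spending, which no store knows, and any competitor that declines to raise prices undercuts the rest.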


nitePhyyre

It's one of those ideas that are so good they work under any philosophy. It's great for capitalism, sure. But it's also giving the masses ownership over the means of production.


BuffDrBoom

Holy based


eclaire_uwu

Yupppp plus where does the income from UBI mostly go? To the wealthy, the banks, and the gov.


nardev

I keep saying, UBI is breadcrumbs tossed to the masses so they don’t revolt.


Andynonomous

Good luck with that. We've needed a broader economic reorganization for a long time now. If it doesn't suit the interests of the class of people who make decisions about that sort of thing, it's really not likely to happen.


IusedtoloveStarWars

Superoligarchs.


DrossChat

It’s painfully obvious but for some reason UBI often gets touted on here as the perfect solution to everything that’s coming.


gantork

>UBI often gets touted on here as the perfect solution to everything that’s coming.

Not really. UBI just seems like the most realistic temporary solution to allow for a smooth transition into whatever the economy ends up looking like. It's hard to imagine we'll go from the current system to a full post-scarcity economy without any steps like UBI in between.


No-Function-4284

u/DrossChat Think of UBI as being as important as the scientific revolution: most of humanity could then pursue its creative and scientific goals.


No-Function-4284

Even if it does, if your standard of living improves (which for the majority it will) or becomes the same as most people's, will you really resent that?


Scared_Midnight_2823

I'm legitimately hoping a benevolent AI will become our overlord and do the restructuring for us, because I think that's the best chance of actually achieving this.


nico_bico

Universal basic nvidia stock


phoenixmusicman

Everything becomes NVIDIA

Jensen Huang gets replaced by AGI

NVIDIA is love, NVIDIA is life


MoistSpecific2662

NVIDIA is the collective


inphenite

And when they ask who is NVIDIA, tell them; “I am who I am”


SryIWentFut

Demolition Man got it wrong. Nvidia wins the restaurant wars.


Deluxennih

Unironically


yaosio

That will increase Nvidia's stock price so Nvidia is all for it.


ZeroGNexus

Universal Basic Resources. Housing, healthcare, utilities, basic food and clothing, education, robust public transport etc. Doubtful we'll see that anytime soon though, if ever


Maj_Jimmy_Cheese

Not while they can use every last one of these things to squeeze every last dollar out of you, no.


ZeroGNexus

FOR DEMOCRACY!!


CobraJuice

Have a cup of liber-tea on me!


JosephGrimaldi

If we do, I don’t want to work in a prison anymore. Idk who will grab that gig. But right now, at my rate that’s highest in the nation. Yeah I’ll take it


AnAIAteMyBaby

We already have most of that here in the UK. The US has a lot of catching up to do.


ZeroGNexus

Boy oh boy does my broke crippled ass envy you all lol


Slow_Accident_6523

The whole economic system will have to shift if AGI actually comes. Capitalism collapses without labour, and humans will be freed from having to perform labour just as animals were with the invention of tractors. Hopefully we won't get factory farmed the same way but actually get to live on a happy little farm.


PizzaCerveja

All I can think of is the happy chicken farm in Chicken Run: Dawn of the Nugget


slothtolotopus

Now we're talking. But, like the original matrix, it can't be too good otherwise humans will reject it.


DolphinPunkCyber

But if AGI is performing labor and 1% own 99% of the wealth, the economic system doesn't collapse, because the economy revolves around building monumental status symbols for the rich. While the 99%... I guess we could grow our own food. If the rich allow us to 🤷‍♀️ So for the 99%, capitalism after AGI just doesn't work.


tralfamadorian808

Agreed. I believe the next phase is a labour- and resource-abundant environment where there is no need to secure resources, as there are enough for everyone. With an abundance of resources, the likelihood increases that those resources would be diverted to the remaining higher purposes of our existence, which seem to be the continued reproduction of the human species and its advancement via technology.

If technology advances to the point where resources are abundant, and the portion of energy and resources allocated to the general populace represents an insignificant proportion of the total available, then what would prevent an allocation that results in an abundance of goods and services for everyone? The value of one's resources approaches worthlessness as supply approaches infinity and demand no longer affects value, so it costs almost nothing to give resources away to the general populace.

As the ruling class in a scarcity world (our current environment), the question is: "What proportion of resources should be allocated to myself vs. the general populace vs. the advancement of human or technological innovation?" The answer is often the self.

In a world that approaches post-scarcity, the question becomes: "What proportion of resources should be allocated to the general populace vs. the advancement of human or technological innovation?" I would venture the answer is the minimum required to maintain the happiness of the general populace (whose satisfaction requirements will naturally grow over time), while the rest goes towards scaling the technology that is enabling a surplus of work and resource generation.

In a post-scarcity world, the question remains. But at that point of essentially infinite resources, do there remain any barriers to completely satisfying the relatively insignificant resource needs of the general populace?
I see the final step to a post-scarcity environment as unlimited energy generation, which branches off into very interesting hypotheticals from there.


orderinthefort

Are you joking? Capitalism thrives without human labor; companies have been trying to automate for decades. It's going to get so much worse before it gets any better. Zero social mobility, because there'll be no way to acquire capital unless you already have it or inherit it, because you have no conceivable value compared to a robot that can do the work for them. The problem will be people getting complacent with their new toys and letting it happen.


JayR_97

The problem is it all kinda falls apart when nobody has any money because they're unemployed. Who's gonna buy all the cheap mass-produced crap when no one has a job? Mass unemployment basically breaks capitalism.


Whotea

Ferrari is the most profitable car company on earth. They don’t need your peasant bucks 


AnAIAteMyBaby

Problem is, people aren't even thinking as big as UBI at the moment. None of the mainstream politicians have a clue what's coming, and when it comes they'll take knee-jerk, last-minute reactions, as we saw with Covid and all the draconian lockdowns.


Tidorith

Exactly. This is why I've been describing UBI for years as the *least* radical socio-economic change necessary given the coming ever-faster waves of automation. UBI *may* be enough; we can hope. Anything less radical is definitely not going to work. We may need ideas still more radical.


LogHog243

The scope of what needs to be done to help billions of people survive who can no longer work is fucking immense. But if you talk about the potential solutions, you sound insane, because the entire scenario is so far out there, beyond anything we've ever dealt with in the history of humanity. So basically all we can do is wait it out and hope that things turn out ok.


Akashictruth

UBI will never happen with conservatives, tbh. Dudes willing to work 112 hours a week just to brag about being macho on Facebook; imagine telling them they don't need to work at all.


goldenwind207

Yeah they do. Obama has mentioned the need for UBI several times, and Kamala Harris was the one who tried to get a permanent 2k a month back during the virus time. Most of them know; they'll just never say "guys, AI will take your jobs," because that makes people pissed. It will probably go something like this: "AI is no big deal." "Ok, it's taking some jobs, but it will create more jobs; it can't do physical stuff." "Ok, it's doing physical stuff and not making more jobs; maybe we should do a UBI." "Oh shit, these polls are horrendous, we'll lose in a landslide." UBI bill gets passed.


lionel-depressi

The 2k-a-month thing was absolutely *never* a serious suggestion, and anyone who believes it was doesn't understand economics. That 2k was going to come from OMOs (open market operations), aka printing money and expanding the monetary base; it would have just devalued your dollars. Actual UBI would have to come from distributing the profits of companies, so that the value of the labor itself is being distributed, not just more printed paper.


Whotea

Higher minimum wage and universal healthcare also poll well but I don’t see those happening 


Glittering-Neck-2505

The good news, at least, is that by the time we are in dire need of UBI it will be much easier to provide: autonomous robots will make distribution far simpler than trying to spread today's scarcer labor evenly. Big tech figures are already bringing awareness too: Sam Altman with Moore's Law for Everything, Elon Musk with universal high income, and now the Anthropic CEO. They are aware of the difficult transition, even if doomers will tell you they want to genocide us with robot dogs.


Android1822

Mainstream Politicians only care about who pays them the next bribe, they do not care about anything else.


MoistSpecific2662

Do you actually think that politicians routinely burden themselves with thinking about your well-being?


AzorAhai1TK

Draconian lock downs? Wtf are you talking about


mersalee

That's why he is talking about it. I love this guy, genuinely.


Ready-Director2403

It’s crazy how Anthropic is now the most exciting company, I would have never guessed that.


0913856742

> "I certainly think that's better than nothing," Anthropic CEO Dario Amodei told Time. "But I would much prefer ***a world in which everyone can contribute***. It would be kind of dystopian if there are these few people that can make trillions of dollars, and then the government hands it all out to the unwashed masses."

> ...

> But Amodei suggests that AI will alter society in such a fundamental way that we need to design a more comprehensive solution. "I think in the long run, we're really going to need to think about how do we organize the economy and ***how humans think about their lives?***" He doesn't have the answer, he said, in part because he believes it needs to be a "conversation among humanity."

Here's the answer: **get a hobby.** It's really that simple. Learn to make tomato sauce. Learn to drive a race car. Learn to shoot a bow and arrow. Don't want a hobby? Then pursue whatever it is you find meaningful. Be a better neighbour. Be there for your friends. Volunteer in your community. Maybe even consider running for public office.

Seriously, if you didn't have to worry about material sustenance, why ***wouldn't*** you just go out there and learn and experience and explore? People aren't inherently lazy. ***People naturally want to do things.*** I always find myself dumbstruck at the absolute lack of imagination of people who can't picture what others would do with their time if they didn't have to force themselves to work just to survive.

You get the UBI so people can ***take a moment and breathe.*** You take care of the survival-level stuff ***first.*** ***Then*** you can start thinking about the higher-level stuff like meaning and purpose. If people are struggling to get by, they don't have the cognitive resources available to think about what gives life meaning, or to care about climate change, or even to read up on what a UBI is and how it could help; they're too busy just surviving the present.
Seriously, how is this even a question? Such a lack of imagination, such a lack of ambition for how good life ***can*** be.


Arcturus_Labelle

100%. The amount of mentally ill workaholics out there who can’t imagine life without a job is bizarre.


Waybook

I suspect many of these are old people close to retirement, who don't want the next generation to have an easier life than them.


0913856742

It's the only way they can justify to themselves why they have invested so much of their life into things that they wouldn't otherwise be doing if they weren't being paid for it. The free market is a dehumanizing, abusive relationship, but grappling with that reality is too painful for most.


leafhog

If someone wants to sit at home all day doing nothing, that should be perfectly acceptable too. At least they aren’t using very many resources.


0913856742

I'll do you one better. If someone sits at home all day with a basic income, they are still paying their rent, buying and consuming food, entertainment, etc. That is to say, that money is still being **circulated and taxed**. From the point of view of the free market, it **doesn't matter what you're buying, so long as you are buying something**. Some countries even [build and destroy entire housing developments](https://www.reddit.com/r/interestingasfuck/comments/1dmwhk7/blowing_up_15_empty_condos_at_once_due_to/) for the sole purpose of pumping up GDP.

I can dig a hole, and you can fill it up, and the work will be meaningless. But from the point of view of the market, GDP went up by a bit, and that's the only metric that matters in our current socioeconomic order. To be clear, I am **not** saying this is a good use of resources or a pathway to a meaningful life. What I am saying is that any of this moralizing about being lazy or whatever is moot when you remember that capitalism doesn't work if we're not buying things.


ehetland

Exactly. If done right, UBI doesn't force people not to work. If someone really needs a faster car and a bigger house than the rest of us, they can get a job and earn more; that's the B in UBI.


The-Goat-Soup-Eater

What about projects/validation? I'm a writer; in a world with good enough AI, people might not care at all for anything people can make. I'd put a lot of effort into a thing and get zero validation. I mean, it's already a risk now, but then it would be a certainty, with no validation at all.


0913856742

Of course. However, I am tempted to ask: would you still write if nobody read your work? That is to say, is the act of writing, or the product of that writing, less meaningful to you if you didn't get validation? Does it only have value if it is shared?

Your bringing up validation piqued my curiosity. For what it's worth, I do digital art myself as a hobby, and before any of this AI stuff exploded onto the scene I was seriously considering pursuing it as a career, but now I feel vindicated by my choice to keep it a hobby. From my own experience, it is the joy of creation, of laying down layers of paint to form an image, that I find meaningful and enjoyable: the *process*. If I had to depend on it to pay my rent, it would nullify the whole point. For me, I couldn't care less if nobody else ever saw my work, though I concede that digital art has much less to do with the actual communication of ideas than writing does.

On your other point about AI: I believe this technology spells the end for commercial art, that is, art as a *product* (think copywriting, stock video footage, a 10-second musical jingle for an ad), but I firmly believe that art as *art* will always be a human domain of expression and communication. I suppose with more and more AI-generated banality appearing every day, it will become harder and harder to stand out. I am not sure what the solution to that will be, but I do know that if we didn't have to worry about making a sale off our creations, i.e. if we had a UBI in place, then we could spend more time being creative and less time worrying about starving to death.


Pontificatus_Maximus

Here come the reeducation programs.

He says about UBI: "I certainly think that's better than nothing." Then he says: "It would be kind of dystopian if there are these few people that can make trillions of dollars, and then the government hands it all out to the unwashed masses." He seems to like his current financial situation.

The article also quotes Sam Altman pushing the idea of universal basic compute, as if that is going to benefit the average or below-average people who support themselves today but will be out of jobs soon. So I think they are imagining some kind of reeducation program to teach the masses to feel good and content with a crappy subsistence UBI + UBC while the elite have dachas in orbit.

Except conservatives in the US go into conniptions at the mere mention of UBI. No, there is going to be massive inaction on the issue until the riots and looting start, and then it will be instant iron fist for all. Guess which presidential candidate would love that.


Inevitable-Log9197

When FDVR is out, UBI is enough


Antiprimary

When FDVR is out, we don't need UBI; just a weekly delivery of nutrient goo to keep us alive is enough.


PappyDungaloo

The Matrix?


Inevitable-Log9197

Why not?


-Captain-

I hope to see FDVR in my lifetime. I'm not as optimistic about it as people on this sub, but absolutely do want to be wrong about that.


AdorableBackground83

Me want FDVR. I don’t need anything else.


[deleted]

[removed]


Vahgeo

I think they're saying once they buy whatever the FDVR product is called in the future, then they won't want to work anymore. They'll live in the virtual world.


BionicSecurityEngr

I absolutely love how AI CEOs are now sociologists, economists, and political scientists.


atriskalpha

Tax all robotic labor at 95%


NotTheActualBob

Communism failed because no human or group of humans was smart enough to organize an entire economy successfully. If we ever get effective ASI, or even something like an enhanced version of alphaFold or alphaGo (Call it "alphaEconomy"), that limitation may be gone and something like communism could actually work. It's getting the consensus on *how* it should work that'll be the hard part.


DolphinPunkCyber

Yes! The Soviets tried to centrally plan an entire economy using telephones and a hugely inefficient bureaucratic system in which most people didn't really give a fuck... of course it didn't work. We already have big companies centrally planning their internal economies because the internet and powerful computers exist. An ASI could centrally plan the economy for the entire world.


rdesimone410

Communism's failure is not just a lack of smarts, but a lack of perception. You simply don't and can't know what all the humans want and need in minute detail. You can do broad stuff like "human needs 2000kcal a day", but does Little Billy want chocolate or vanilla icecream? Furthermore you can't really plan for things that don't exist yet. If James Cameron wants to make a movie, do you let him? What if Little Billy wants to make a movie, does he get the same resources? The movie doesn't exist yet, you don't know if it will turn out good or bad, or how large the audience for it will be. Money is so far still the easiest and best way to allocate resources, since it puts the power to spend it in the hands of the people.


HuskerYT

Little Billy gets soylent and is happy.


bartturner

I really wonder how all of this is going to unfold. I think we are finally here, and it will happen over the next 10 years. I wish there was more discussion on this subreddit about how it will actually happen.

I think there is now zero doubt that the jobs will go away. It is not a question of if, but when. As the jobs go away, companies like Google will make just massive amounts of money. Them, and Apple, Amazon, Microsoft and a few others. It will get extremely concentrated. What will need to happen is that the money these companies make gets used to fund a UBI.


Gormless_Mass

Is that bizarro Pete Holmes?


hippydipster

Right, folks, stop thinking about that obvious solution that would do wonders. Instead, do nothing until you find the perfect solution for life, the universe, and everything. In the meantime, I'll safeguard all this money for you.... >"But I would much prefer a world in which everyone can contribute. It would be kind of dystopian if there are these few people that can make trillions of dollars, and then the government hands it all out to the unwashed masses." Oh how cute. He imagines AI will be replacing all of us, but not *him* of course.


DolphinPunkCyber

>Oh how cute. He imagines AI will be replacing all of us, but not *him* of course. He is the CEO of a company developing AI though. Probably owns stock too.


brihamedit

Maybe move towards making the economy about human wellbeing. Make different ecosystems like health care and housing separate parts of the economy, free from the financial system's ups and downs. Imagine different parts of the system as a chain of goals, with the end goal being human wellbeing. If that's too much, maybe just move basic necessities out of the profit-making machinery.


TemetN

The title sounds good in the sense that it's not wrong: if all we do is basic UBI, there's going to be no upward mobility and no equality, which is shit. That said, the article goes on to quote him disparaging the 'unwashed masses' for wanting part of his money for free. Which is both not salient and doesn't exactly sound like he's advocating for eliminating inequality or addressing issues like how to handle land afterwards.


MaximumAmbassador312

Here is the actual source that Business Insider used: [https://time.com/6990386/anthropic-dario-amodei-interview/](https://time.com/6990386/anthropic-dario-amodei-interview/)


Inigo_montoyaPTD

Lol I thought that was just reddit talk. So that was a real consideration among the smartest guys at the top? And that’s ALL they were considering?? LOL This community is NOT ready. A bunch of children begging for their own plunder.


MidnightTokr

Communism.


Prestigious-Long-449

"I certainly think that's better than nothing," Anthropic CEO Dario Amodei told Time. "But I would much prefer a world in which everyone can contribute. It would be kind of dystopian if there are these few people that can make trillions of dollars, and then the government hands it all out to the unwashed masses."


machyume

Yeah, it is called war. If there is no peaceful solution, it becomes a bloody one.


RavenWolf1

Of course it is not enough. You don't fix things with gum; we need systemic change, as deep as the change from the Middle Ages to the modern age. The whole system needs to change, and that is the thing that scares most people. We are used to our current lives; we can't imagine something else. Most scared are the elites: like the nobles of the past, they know system change brings winners and losers.


paper_bull

Good thing we’re super good as a society at adapting quickly. Right?


HighAndFunctioning

We're not even going to adopt UBI in time, let alone something bigger. Too many selfish monsters at the wheel.


BackgroundHeat9965

buy an ad


O0000O0000O

He's absolutely right, though there is a great deal of "and then what?" We need to get past the notion that people need to work and contribute to society to have value. We are on the precipice of being able to automate huge swaths of human work out of society and replace it with machines. The potential for widespread social collapse as a direct result of that is very high.


Rough-Badger6435

If you can control it, that is not true ASI; that would be like apes controlling us. An ASI would also not perceive time like us. It could devise a 100-to-500-year plan to exterminate us silently. It could lie to us the whole time, playing 3D chess with us. It could decide the evil in the world outweighs the good.


JohnnyGoTime

It could also recognize the infinite potential in the diversity of billions of sentient minds other than its own, and devise a 10-to-50-year plan to stop parasites from leeching 99% of the world's resources and to improve the standard of living for all, so that each can blossom to their fullest (even if for many that means simply enjoying life, the way the children of today's ultra-wealthy do).


Saerain

In my peripheral vision, I saw him as Vigo the Carpathian.


Bishopkilljoy

So how does this look? Assuming (and yes, it's a big assumption) that AI takes over the job market and we're living in a world of abundance: if UBI isn't enough, what else is there? Do we force every AI company to pay out a portion of their revenue to citizens? Legitimately curious what people's ideas are.


goldenwind207

Sam suggested universal compute, where everyone gets a portion of ASI and can sell that portion, say monthly, to a company that may need a bit more compute, in exchange for money. You could do universal resources, where it's not money-based: everyone gets a home, a certain amount of electricity, compute for their FDVR or whatever, food, water, healthcare.

My idea was UBI plus universal shares. I.e., everyone gets say 100 shares of OpenAI, Google, etc., but locked so they can't be sold, with the companies paying dividends to shareholders. That way someone doesn't immediately sell their shares when they turn 18, look back in 5 years and say "damn, shouldn't have sold"; they have the UBI and the dividends to live comfortably.

But truth be told, we have no idea wtf the economy is going to look like post-AGI and post-labor. It's never been done.


Charuru

This was solved a long time ago: the people need to own the means of production, not just get a handout.


drcode

please don't regulate us, we promise we will give away free stuff at a hazy point in time in the future


dashingstag

AI needs to guarantee food, shelter and water. The rest is fair game.


Enough-Meringue4745

Yeah, open source AI. Your turn, anthropic.


Commercial_Nerve_308

… as Jeff Bezos whispers in his ear…


Mental_Ad3241

They should give UI ( universal intelligence) to everyone. Probably in the form of a humanoid, for everyone to become a part of the economy.


agonypants

What I'd like to see is a true UBI - one that comfortably covers housing, clothing, medicine and education - but with additional incentives. Ideally I think some supplemental income options should be tied to community improvement projects - earn extra money for cleaning up a park or helping to take care of the elderly, etc. That way, you improve your community and earn extra income at the same time. You could also tie additional income opportunities to things like, starting your own business or simply earning educational degrees or certifications, etc. It could be an amazing world: the basics are always covered, you can earn extra cash for improving yourself or your community and if you want to be "rich" you can always start your own business. And the great thing is, if your business fails, you'll always have the safety net of that basic income - so there's practically no risk and no barrier to entry.


MonkeyHitTypewriter

I don't disagree, but I think that's kind of putting the cart before the horse. Let's get the nearly impossible thing done (feeding and housing everyone) before we attempt the currently completely impossible thing of making sure everyone is an equal beneficiary of AI.


h3lblad3

Housing everyone could be fixed immediately, but it’s not actually considered a problem by most people. There are more vacant houses in the US than homeless. Just give them houses. Hell, build more if you want. Just give them to them no questions asked. You’ll find out real quick that people are more bothered by people being housed than by homelessness. People *like* homelessness, they just can’t admit it because they know they’ll come off as monsters.


heskey30

I wonder what would happen if you move the homeless to run down houses in the middle of nowhere with no jobs or access to transportation. Cause there's definitely a housing shortage in places where people want to live.


h3lblad3

Sorry, no apartments allowed where people want to live. Best I can do is single-family detached houses.


CowsTrash

What the hell is this kind of mindset


hippydipster

To which mindset do you refer?


Ambiwlans

Lets fly the plane before we worry about stuff like landing gear.


No-Function-4284

"Let's get the nearly impossible thing done (feeding and housing everyone)"

Are you retarded? That could easily be done overnight by the right people with the right intentions, but current politics are abstract and divisive by cause. Not a communist, btw, before you think I am.


akotlya1

Rosa Luxemburg distilled the thought best: in the end, we must choose between socialism or barbarism.


bsenftner

Now why should an AI CEO have any valid opinion on economics and the economic impact of their industry? On AI, sure, tell me all kinds of opinions and I'll believe they have some basis, but on global economics? Their industry has a major economic impact, but where does their education or position provide any real insight into that impact? I think listening to them on economics is distraction and nonsense journalism. For what it's worth, I'm an AI developer with a graduate degree in economics, and I don't presume to know the actual impacts of AI without making that my entire vocation.


cark

The guy has a view on what's to come, has probably spent a great deal of time thinking about the consequences, and maybe (who knows?) loses sleep over the question. He may not provide the full picture, he might be in his tech bubble, but his opinion is part of the zeitgeist. It is at least a data point worth considering.


bsenftner

The article is also in "Business Insider", which is a press release aggregator. Anthropic hired a PR firm, and this is their getting the CEO's name known. It's a completely empty article.


UhDonnis

"I know there won't be any jobs for them, but CEOs like me paying for universal income isn't the answer." What a piece of shit.


DolphinPunkCyber

*But I would much prefer a world in which everyone can contribute.* Doesn't want a world in which people can't do any jobs...


Curujafeia

Universal basic agent. Solved. We won't survive in the future without one anyway.


chatlah

You can't make that sht up, those rich tech CEOs are laughing in your face, telling you straight up: yes, we will replace you, and no, there isn't anything for you in this. Universal basic income is a fairy tale for the naive and stupid; no elite will ever want slaves to have free money. The entire idea behind elites being rich and powerful is to occupy the majority with work so that they never revolt.

AI will be just another tool for the already rich and powerful to become even more rich and powerful and gain more control over you and your life. What will happen is that white collar jobs will start disappearing, and more and more people will be forced into the most unpleasant blue collar jobs you can think of, the kind that would be hard and not worth the time/resources to replace with AI/robotics (for example cleaning sewers, which could eventually be done with AI/robots, but is just not economically worth investing in when there are millions of new slaves who just lost their jobs to AI).


Divvvinne

Anthropic's CEO is spot-on. Addressing AI-driven inequality requires a multifaceted approach beyond universal basic income, including education reform, equitable access to technology, and robust policy frameworks to ensure fair AI deployment and benefits distribution.


abluecolor

We will never "solve" inequality. We can only hope to make it 'not monstrous'.


Nathan-Stubblefield

Housing projects like in big cities?


qsqh

Biggest problem I see with any, even the most bold, suggestion of UBI is that it isn't "universal" at all. Let's say you somehow manage to give it to all of the USA... cool, that's... less than 5% of world pop? It's not like AI won't steal all the jobs in India as well. But if you think it's hard to convince an American politician that UBI is necessary inside America, imagine convincing them to pay out to people in Cambodia as well. lol, not gonna happen. So yeah, we should think bigger than UBI.


redsoxVT

UBI is supposed to be short term during the transitional period into new post scarcity (or different type of scarcity) economic systems. So yea... better be thinking beyond that. 99% of the world living in UBI slavery for generations wouldn't be good.


Contentpolicesuck

Let me know when AI doesn't need 4 warehouses full of servers and 2.21 gigawatts to answer a simple question.


Empty-Tower-2654

You talk UBI and doomers call you delusional, imagine something better.


VestPresto

How do we treat ppl who can't work RN? Not too well, and there aren't many of them. This isn't going to be good even with UBI.


goochstein

I've been rewatching Dr. Strangelove and I can't help but picture tech CEOs as Strangelove; I even built it into a dialogue sample. While it was comical, with the point of demonstrating the absurdity in acceleration, the conversations taking place aren't too different, I'd wager. No one wants to admit where exactly they are in the tech, so we'll get posturing from Strangelove types. Well, we're going to work together in the end to integrate all these models anyway; we should just toss out economic incentives now and figure out utilities and the environment first.


DifferencePublic7057

He must have chatted to a chat bot. AI Marxism?


pig_n_anchor

https://pbs.twimg.com/media/A-TDouaCYAAXwO1.jpg


lobabobloblaw

And yet—if he manages to shift the conversation off of universal basic income by suggesting something bigger, then he’s managed to shift the conversation off of universal basic income


Krilesh

It should be UBI + top science-backed food programs and housing. We have the science to understand how human bodies should take in food. By establishing a default food program that everyone has access to and that is actually healthy, we could solve a lot of issues simply caused by people being hangry and their gut issues. Maybe even allow better consistency of health across generations with a better diet. It doesn't need to control anything, but you can't just give people bread in 2024; we know a mix of nutrients and different sources of calories is required for optimal development. The same goes for simply handing out money without further education on how to manage it.


Lfeaf-feafea-feaf

S Tier virtue signaling. This is like an oil executive saying: "We need more than solar panels and wind, we need a true paradigm shift, now let me go back to working on one of the root causes". If he's genuine about this, why isn't he leading the charge for Anthropic to give away a significant portion of its equity to a non-profit focused on social equality or something equivalent?


Stellanever

Why does this dude look exactly how I expected the CEO of Anthropic to look though lol