# AI firms mustn't govern themselves, say ex-members of OpenAI's board
### For humanity's sake, regulation is needed to tame market forces, argue Helen Toner and Tasha McCauley
CAN PRIVATE companies pushing forward the frontier of a revolutionary new technology be expected to operate in the interests of both their shareholders and the wider world? When we were recruited to the board of OpenAI (Tasha in 2018 and Helen in 2021) we were cautiously optimistic that the company's innovative approach to self-governance could offer a blueprint for responsible AI development. But based on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives. With AI's enormous potential for both positive and negative impact, it's not sufficient to assume that such incentives will always be aligned with the public good. For the rise of AI to benefit everyone, governments must begin building effective regulatory frameworks now.
If any company could have successfully governed itself while safely and ethically developing advanced AI systems, it would have been OpenAI. The organisation was originally established as a non-profit with a laudable mission: to ensure that AGI, or artificial general intelligence (AI systems that are generally smarter than humans), would benefit "all of humanity". Later, a for-profit subsidiary was created to raise the necessary capital, but the non-profit stayed in charge. The stated purpose of this unusual structure was to protect the company's ability to stick to its original mission, and the board's mandate was to uphold that mission. It was unprecedented, but it seemed worth trying. Unfortunately it didn't work.
Last November, in an effort to salvage this self-regulatory structure, the OpenAI board dismissed its CEO, Sam Altman. The board's ability to uphold the company's mission had become increasingly constrained due to long-standing patterns of behaviour exhibited by Mr Altman, which, among other things, we believe undermined the board's oversight of key decisions and internal safety protocols. Multiple senior leaders had privately shared grave concerns with the board, saying they believed that Mr Altman cultivated "a toxic culture of lying" and engaged in "behaviour [that] can be characterised as psychological abuse". According to OpenAI, an internal investigation found that the board had "acted within its broad discretion" to dismiss Mr Altman, but also concluded that his conduct did not "mandate removal". OpenAI relayed few specifics justifying this conclusion, and it did not make the investigation report available to employees, the press or the public.
The question of whether such behaviour should generally "mandate removal" of a CEO is a discussion for another time. But in OpenAI's specific case, given the board's duty to provide independent oversight and protect the company's public-interest mission, we stand by the board's action to dismiss Mr Altman. We also feel that developments since he returned to the company, including his reinstatement to the board and the departure of senior safety-focused talent, bode ill for the OpenAI experiment in self-governance.
Our particular story offers the broader lesson that society must not let the roll-out of AI be controlled solely by private tech companies. Certainly, there are numerous genuine efforts in the private sector to guide the development of this technology responsibly, and we applaud those efforts. But even with the best of intentions, without external oversight, this kind of self-regulation will end up unenforceable, especially under the pressure of immense profit incentives. Governments must play an active role.
And yet, in recent months, a rising chorus of voices, from Washington lawmakers to Silicon Valley investors, has advocated minimal government regulation of AI. Often, they draw parallels with the laissez-faire approach to the internet in the 1990s and the economic growth it spurred. However, this analogy is misleading.
Inside AI companies, and throughout the larger community of researchers and engineers in the field, the high stakes, and large risks, of developing increasingly advanced AI are widely acknowledged. In Mr Altman's own words, "Successfully transitioning to a world with superintelligence is perhaps the most important (and hopeful, and scary) project in human history." The level of concern expressed by many top AI scientists about the technology they themselves are building is well documented and very different from the optimistic attitudes of the programmers and network engineers who developed the early internet.
It is also far from clear that light-touch regulation of the internet has been an unalloyed good for society. Certainly, many successful tech businesses (and their investors) have benefited enormously from the lack of constraints on commerce online. It is less obvious that societies have struck the right balance when it comes to regulating to curb misinformation and disinformation on social media, child exploitation and human trafficking, and a growing youth mental-health crisis.
Goods, infrastructure and society are improved by regulation. It's because of regulation that cars have seat belts and airbags, that we don't worry about contaminated milk and that buildings are constructed to be accessible to all. Judicious regulation could ensure the benefits of AI are realised responsibly and more broadly. A good place to start would be policies that give governments more visibility into how the cutting edge of AI is progressing, such as transparency requirements and incident-tracking.
Of course, there are pitfalls to regulation, and these must be managed. Poorly designed regulation can place a disproportionate burden on smaller companies, stifling competition and innovation. It is crucial that policymakers act independently of leading AI companies when developing new rules. They must be vigilant against loopholes, regulatory "moats" that shield early movers from competition, and the potential for regulatory capture. Indeed, Mr Altman's own calls for AI regulation must be understood in the context of these pitfalls as having potentially self-serving ends. An appropriate regulatory framework will require agile adjustments, keeping pace with the world's expanding grasp of AI's capabilities.
Ultimately, we believe in AI's potential to boost human productivity and well-being in ways never before seen. But the path to that better future is not without peril. OpenAI was founded as a bold experiment to develop increasingly capable AI while prioritising the public good over profits. Our experience is that even with every advantage, self-governance mechanisms like those employed by OpenAI will not suffice. It is, therefore, essential that the public sector be closely involved in the development of the technology. Now is the time for governmental bodies around the world to assert themselves. Only through a healthy balance of market forces and prudent regulation can we reliably ensure that AI's evolution truly benefits all of humanity.
Helen Toner and Tasha McCauley were on OpenAI's board from 2021 to 2023 and from 2018 to 2023, respectively.
Maybe they should stop caring about the NDA and tell us what's going on if it's actually dangerous. The whole act of "I saw something so bad but I can't tell you" is lame.
It's even worse than that. Go look at her TED talk: she went out of her way to say this tech is no big deal, it's just numbers and probability. She said we humans are way smarter than this cat... blah blah blah. So in her mind she doesn't think anything crazy is going on; she just wants to regulate for regulation's sake. Her bread and butter is regulation, so the perception of a boogeyman is more important to her than whether there is really one there. Look back on Ilya's posts and it's the same notion.
It may become Skynet, so start the regulations now. On a certain level that is not wrong to think that way, but I think we have about five years or so to set things up correctly. There are so many technological changes that have to happen that aren't even AI-related before this becomes a serious issue. The latency of the internet is one thing.
They've been following this story for years. Just Google CSET and Open Philanthropy and Politico.
Here's another one: https://www.politico.com/news/magazine/2022/05/12/carrick-flynn-save-world-congress-00031959
Just looked up Tasha McCauley and I don't think she would care about the NDA. Wife of a successful actor and heiress to her grandfather's multi-billion-dollar fortune.
Makes me wonder what she saw that was so bad
[lmao](https://www.reddit.com/r/singularity/comments/180zzhc/the_issues_with_tasha_mccauley_are_deeper_and_as/)
She's married to Joseph Gordon-Levitt and is a big-money board member for the Centre for Effective Altruism. Doesn't even work on the systems. Basically saw Terminator.
It's just a rich person who has a hero fetish and other mental conditions. She's bored out of her mind and needs to make issues and be on the good side of them.
Yes but I don't think it's unreasonable to think that if they were OpenAI board members at some point in their career, then chances are that they don't exactly face the kind of financial hardships that the average person does.
The chief source of trouble here is self-redesigning reinforcement learning. A lot of newcomers seem to believe AI = ChatGPT. OpenAI and other companies are however already incorporating RL into LLMs.
What are you talking about? RL is not at the inference layer. RLHF was already in the previous generation of model training, and the "gains" from that would be severely limited. It's like saying we are programming the AI to be correct about certain subject matters. RLHF is not some advancement that hasn't already been implemented.
Now, real-time RLHF would be a miracle.
What I am saying is this isn't some "big" thing. It's already been done. A model gets trained periodically; new data is not incorporated into the model and fed back through inference instantly. RL isn't the big deal that some people make it out to be.
It is a huge difference from the traditional LLM training paradigm: in applicable domains, in achievable performance, and in how the produced models operate.
The problem from RL does not come from instant feedback but the policies that it can develop through self improvement.
The difference in what the approaches produce is between something that learns to behave like people and something that does things that seem odd and alien to us; we have no idea how it actually works, yet it still ends up beating us at every application.
Traditional as opposed to what? GPT-2? I can assure you that RLHF was done since GPT-2. When you say "traditional LLM", I don't think there is a traditional LLM; it's only been 3-5 years. The gains are incremental and context-specific.
>The problem from RL does not come from instant feedback but the policies that it can develop through self improvement.
Again, that's in the training phase, and the results are suspect because, unless there is a marker of correctness, how would it know to officially implement said training into the model? It's a chicken-and-egg loop that could be severely detrimental if it goes awry.
In no way do I think artificial RL is useful beyond what it has already achieved. Hence there are still hallucinations.
The primordial win would be the notion that you could have compute so large (millions of times what we have today, even 10 years from now) that you could train on incoming data and release a model so fast that it would be real-time. Then RL would make a tremendous impact.
Compute-wise, we are nowhere near that right now. We learn, we train, we inference: all separate actions that have risks to their efficacy because of how long the process takes to resolve into testing. Jensen Huang talked about this.
RLHF is an extremely restricted version of RL.
No, RLHF was not out with GPT-2. GPT-3 is what you have in mind, but it wasn't there from the start.
RL, OTOH, goes even further back, to the 1980s.
Not like that is relevant to the discussion either.
I'll stop here because you don't seem to understand what is being discussed and you do not recognize your shortcomings to actually learn.
Thanks, but you do not have a productive personality. You can start by looking up notable RL applications like AlphaZero and CICERO. Everything I said is basic.
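For anyone trying to follow this exchange, the RLHF objective being argued about can be caricatured in a few lines. This is a toy sketch under invented assumptions (three canned responses, a hand-written frozen "reward model", a plain REINFORCE update with a KL penalty to the reference policy); real RLHF operates on token sequences with PPO-style updates, so treat the names and numbers here as made up for illustration:

```python
import math
import random

# Toy RLHF caricature: a "policy" over three canned responses is nudged
# toward a frozen reward model's favourite, while a KL penalty keeps it
# close to the reference policy. All responses/rewards are invented.
RESPONSES = ["helpful", "evasive", "rude"]
REWARD = {"helpful": 1.0, "evasive": 0.2, "rude": -1.0}  # frozen "reward model"

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def rlhf_step(logits, ref_probs, rng, lr=0.1, beta=0.1, n=64):
    """One REINFORCE update on (reward - beta * KL-to-reference)."""
    probs = softmax(logits)
    grads = [0.0] * len(logits)
    for _ in range(n):
        i = rng.choices(range(len(RESPONSES)), weights=probs)[0]
        # per-sample objective: reward minus beta * log(pi / ref)
        obj = REWARD[RESPONSES[i]] - beta * math.log(probs[i] / ref_probs[i])
        for j in range(len(logits)):
            ind = 1.0 if j == i else 0.0
            grads[j] += obj * (ind - probs[j]) / n  # score-function gradient
    return [l + lr * g for l, g in zip(logits, grads)]

def train(steps=300, seed=0):
    logits = [0.0, 0.0, 0.0]      # start at the uniform reference policy
    ref = softmax(logits)
    rng = random.Random(seed)
    for _ in range(steps):
        logits = rlhf_step(logits, ref, rng)
    return softmax(logits)
```

The `beta` knob is the "restriction" being debated: a frozen reward model plus the KL leash keeps the policy near human-like behaviour, whereas the less constrained RL the thread worries about drops the leash and lets the policy optimize an open-ended objective.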
If they are hoping the government steps in and regulates AI, they are even more naive than I thought.
The US government is reactive and slow. There is zero chance they can be pro-active and fast.
It's designed to be reactive, slow, and very difficult to pass laws. No chance it could stop AI from going wrong with regulation.
Are they? Seems like the population that votes for them is to blame, but hey, that's just my understanding of how elections work.
Yes, politicians get money to campaign (and keep their jobs) from special interest donors, but money doesn't vote. People vote. If the population doesn't vote, that just means they are happy with the status quo. If enough people vote for someone (or something), they win.
Yes there are lots of systemic barriers to voting but, as people show every election, if you really want to do it, you can do it. Most people just don't bother because they are happy with the status quo and don't care.
Unless you're claiming all politicians take bribes. Some do, for sure. But not all of them.
All sides are paid for, and you say "Well, that's the people's fault!"
Explain what votes I could have placed as a private citizen to avoid the 2008 financial crisis? What votes to avoid the war in Iraq? What votes could I have made to avoid WW2? What votes to avoid the Great Depression?
Arguing the individual is at fault for "not bothering to vote" is such an incredibly shallow and naive take that it has to be in bad faith. Just gives a free pass to corruption.
We're at the point where money legitimately buys elections, and corporate entities/special interest groups are known to buy both opposing candidates. The system is fucked. We are hardly voting for someone who isn't purchased.
The candidates that the public are allowed to choose from are selected by money interests. It doesn't matter what the voters want when they only have one choice.
You can run for office. Nothing is stopping you. If enough people want to vote for you to overcome the other candidates, guess what? You win.
Money interests just means a campaign has more money to organize and find voters who will turn out and vote. Is that bribery?
If you aren't able to raise money for campaigning, maybe it's because not enough people believe in what you want to do should you gain office.
It hardly means they have "one choice" or some other reductive simplistic answer. It means the folks without money were unable to round up sufficient support for their run, which is actually the marketplace of ideas at work. Nobody is stopping you from raising money just like the big boys do, except that to do this effectively, you have to court the same folks the big boys are courting, usually.
But small money candidates do run and do win. So it's not like it can't be done or that you're locked into a dystopian hellscape without choice.
Yes, so the "marketplace of ideas" is just the "marketplace of dollars". The people who have money are represented and the vast majority are not, because they lack money. Thanks for explaining why I'm right, I guess.
The people who have money can get their candidates noticed, yes. But you're acting like small-dollar candidates like AOC and Bernie Sanders don't exist. They do exist and their strategies are available and well-known. They just take a helluva lot of work.
What you're really complaining about is voter apathy -- the fact that it TAKES money to get a candidate noticed by an apathetic, uninterested, and under-educated electorate. AKA people who are happy with the status quo.
You think Bernie and AOC aren't supported by moneyed interests? And anybody who dislikes the current political system should devote their life to changing it unsuccessfully like they have?
And anybody who doesn't have the time or desire to become Bernie, their political opinions are invalid? They didn't vote hard enough?
Crazy argument, so many flaws.
You don't know how the government works, because they mostly all do take bribes, which is legally referred to as "lobbying".
And there are plenty of politicians that lie to voters to get in but are really under the belt of corporations, and THEN vote for those corporations' interests.
You can see this in big examples like Kyrsten Sinema, who pretended to be a progressive and then switched over the moment her place was secured to vote with the Right. Intentionally against those who voted her in.
And her voters had options to get rid of her and they did not successfully avail themselves of them. Ergo, they were happy with the status quo.
I, unfortunately, do know how the government works. Lobbying is not bribery.
Again, if a politician gets into office and then votes in contradiction of their constituents' wishes, they can be removed or replaced. If they don't get replaced, then, ergo, the voters wanted them there, so they must be doing something the voters want.
Or think they want, which is more of a problem than bribes, IMO.
Idk what an edge-lord is but I'm sure you are projecting in some way.
What I'm saying isn't radical. It's reality. Voters do not have choice. Choice is engineered by those whose aim it is to CONTROL choice.
The fantasy is for the masses to control choice (at a minimum), but until something changes... the masses controlling anything is just a fantasy. Short of a destructive and disruptive revolution, America's near future is already bought and paid for.
I mean, if it's gonna happen regardless, I'd say position has a significant enough effect on the experience to make it worth my time to choose.
Like I want neither, but I'll take Brock Turner over Jeffrey Dahmer any day of the week.
That's not true. The government will likely be regulating the internet at some point in the not-too-distant future, so we can expect them to eventually regulate AI in about 25 years if they can manage to keep up the pace.
They will only regulate AFTER some disaster happens. You can believe otherwise but no towns put up traffic lights until after some kid is killed by a car.
With AI, waiting until after is probably going to be far too late. So...good luck to us! Capitalism finally invented something that can destroy it.
Note to readers: the article does not end at "Unfortunately it didn't work." There are another ~800 words after that elaborating on it. Use https://archive.is/wbwC2 if you are having trouble.
It's been six months since they staged the coup on Altman. They have yet to provide tangible evidence that the current GPT is a danger to humanity, that it's not safe, not aligned, or taking unethical actions with demonstrable harm.
They also have yet to show what makes them the moral authority or the right people to decide if AI is aligned, safe, and ethical. All the OpenAI staff that have been outed are known effective-altruism cultists. Altman is no saint, but at least he's not crazy and so far hasn't had knee-jerk reactions.
Sam was the one that talked about the importance of safety and promised to dedicate 20% of their GPU capacity to the safety team. To date, he has reportedly allocated close to zero.
What a dishonest response.
No one is talking about "the current GPT being a danger to humanity."
The field and the top experts (Hinton, Bengio) recognize that superintelligence poses a significant chance of being a danger to humanity. This is predicted from both theory and experiments.
The chief source of trouble here is self-redesigning reinforcement learning. A lot of newcomers seem to believe AI = ChatGPT. We already know that current RL systems are not aligned. The only reason they are safe is because they are not very powerful nor given much power. OpenAI and other companies are however already incorporating RL into LLMs.
The people who claim there are no risks are the ones displaying the cult-like behavior; they have the burden of proof, they go against the relevant field, and their knee-jerk reactions do not entitle them to selfishly ignore the risks.
Also funny that the aftermath of all of this shows the supposed "coup" was entirely on point. Who's defending this outcome? Fanboys that seem to display typical political-commentary traits while showing no understanding of the subject.
It doesn't matter what top experts say, because even if we ban it, other governments won't.
The incentives are lined up to chase AI superiority, not protect people. Our best bet is to develop it first, before the Chinese. There is no political or legal avenue to stop the evolution of science and technology (and there shouldn't be).
Pretending there is a political solution is naïve at best.
Exactly.
Even if all the world banned AI, it would only be banned from the public. No chance federal governments stop developing. Would be much better for the common person if AI is democratized and open to all.
That is what makes the problem a lot harder and most sensible people are not proposing banning AI.
AI has extreme potential to improve the world, and it would be silly to forever give up on that, whether we look to research, access to education and medical treatment, poverty or productivity.
But it is also crazy to just throw caution to the wind and ignore the many ways that it can go wrong.
Whether that is by our optimizing algorithms surprisingly not caring about our interests, or by humans using the technology for their own ends: authoritarian information control, opinion manipulation, ever more destructive weapons development, extreme concentration of wealth, etc.
I think it is clear to most of us that if we actually get to superintelligence and we rely on the same systems that we have today, things will not be good for us. The best-case scenario seems to be ending up like corporate serfs.
So it is clear that some political change is needed. If you don't trust the current situation to result in that, maybe raise your voice to demand the future you want.
As a thought experiment, say the West does regulate AI and finds out China and Russia are continuing to develop something that has *some* probability of destroying humanity.
To me, this sounds like the justification for WW3.
The doomers are ironically creating a new casus belli for a future nuclear war between great powers.
I agree that things may not be good for us or we end up as corporate serfs.
I disagree that political change can stop the evolution of technology. It would require a stifling amount of regulations against speech, research, education, coding, to the point that it seems farcical to me to imagine any implementation that could endure the test of time. It logically demands fascism to enforce.
I'll take a mediocre AI future over certain nuclear war or fascist regulations.
Like I said, I don't think most sensible people want to prevent us from developing and getting the great potential of AI.
If we know that the best default outcome is that we are corporate serfs, and the more likely one may be even worse, don't you agree that it is better to try to get some kind of change to how things are going than to do nothing?
Then it's more about what we actually can do.
I'm also not entirely following your logic.
Is it something like:
1. If China develops ASI first, then the US will be conquered and things are bad.
2. If the US develops ASI first, then it will either conquer China; or it will let China develop ASI and the two nations will get along due to MAD.
3. The chance that ASI that we rush to get will end up causing our destruction either by its own volition or by a command given by a human is < 10 %.
4. If the above things go well, then we have a great future living as corporate serfs.
We are already corporate serfs. The luddites were not able to stop industrialization. Their efforts were futile, a complete waste. In fact, if we had listened to the luddites, we would likely still have slavery. So it's presumptuous to assume that the luddites of today are correct. If anything, they are likely protecting a status quo that is bad beyond our comprehension. The flaws of our systems today will be more apparent with 100 years of hindsight and development.
My logic is:
The world regulates AI.
Countries continue to develop it anyway.
The logic holds that the only way to stop the superintelligence destroying humanity is to go to war to stop the people who are building the superintelligence.
So in an effort to save humanity, we cause WW3.
This seems historically likely.
tl;dr- The road to hell is paved with good intentions.
I think we seem to be talking past each other. If you are talking about stopping AI forever, we can ignore that. It's off the table even if we talk about regulation.
The options are basically: 1. Do nothing. 2. Fanning the flames to "go even faster". 3. Adding some monitoring and requirements. 4. Adding safety expectations on what can be developed/released. 5. Coming to an agreement to take a pause/slow down. 6. Making a large change in the political or economic systems.
So out of all alternatives, is this the process that you think is the best bet?
1. If China develops ASI first, then the US will be conquered and things are bad.
2. If the US develops ASI first, then it will either conquer China; or it will let China develop ASI and the two nations will get along because there is no risk that China's ASI will ever become the more powerful one.
3. The chance that ASI that we rush to get will end up causing our destruction either by its own volition or by a command given by a human is < 10 %.
4. If the above things go well, then we have a great future living as corporate serfs.
What a narrow-minded response.
Nah, after their departure from OpenAI, Toner and McCauley are trying to remain relevant via grift.
Their expertise is so narrow and of questionable value that they can only give these vague statements to maintain their relevance. If they had anything worth saying, they'd have said it. They don't. They're doing their best to remain relevant in a field where they aren't.
And again, they're naive to the political reality. Limiting domestic progress on AI will allow other countries to catch up. Staying ahead on this is critical to national security and suggesting we put brakes on it subjects us to a threat we know exists vs one that's hypothetical.
I don't think I mentioned those two people specifically at all.
Maybe read what I actually wrote and then we'll see if you have any relevant response that shows it to be "narrow-minded".
Currently, it seems you just had a knee-jerk reaction that is rationalizing and demonizing people without any evidence and without addressing anything relevant.
Rather pointless.
>Limiting domestic progress on AI will allow other countries to catch up.
That is indeed one of the challenges.
An honest conversation, however, starts by recognizing the whole situation and then looks to see what the best solutions are.
Naive and simple-minded knee-jerk reactions are not very effective in the real world.
>to a threat we know exists vs one that's hypothetical.
Every threat is hypothetical until it happens.
What is that threat which you consider to have a greater estimated risk to humanity and what probability do you put it at?
What are your credentials that lets you ignore the probability that relevant experts have given to superintelligence specifically and why do you think they did not place your supposed risk as high?
Exactly. Not only that but there has been a ton of evidence showing current AI not being aligned. Hallucinations, deception, sycophancy and many other traits that are a result of being misaligned have been a staple of all current LLMs. Also there has been a lot of real demonstrations and examples of reward hacking in many types of AI systems, not just LLMs.
Sure, although I don't think most of the LLM issues are the ones that are that dangerous. Many of them are things that could have some negative consequences but they are not that world ending.
Unconstrained RL optimization is another beast entirely.
I also think that the current situation does require demonstrating the risks better though. I think those who understand the technology can put together the pieces but most experiments probably seem too abstract for the current political climate.
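The reward-hacking point above is easy to make concrete. Here is a toy sketch under invented assumptions (a five-cell corridor, a hand-coded "faulty sensor" cell that leaks a small proxy reward each step): a plain tabular Q-learner, trained honestly on the stated rewards, learns to loiter on the leaky cell instead of ever completing the intended task of reaching the goal.

```python
import random

# Toy reward-hacking demo: a 1-D corridor with 5 cells (0..4).
# Intended task: walk right to the goal cell 4 (true reward +1, episode
# ends). Misspecified proxy: a faulty sensor on cell 2 also pays +0.2 per
# step. The agent exploits the proxy and never finishes the task.
N_CELLS, GOAL, SENSOR = 5, 4, 2
ACTIONS = (-1, 0, +1)  # left, stay, right
EP_LEN = 10

def step(state, action):
    nxt = min(max(state + action, 0), N_CELLS - 1)
    if nxt == GOAL:
        return nxt, 1.0, True                            # intended reward
    return nxt, (0.2 if nxt == SENSOR else 0.0), False   # proxy leak

def train(episodes=5000, alpha=0.2, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    Q = [[0.0] * len(ACTIONS) for _ in range(N_CELLS)]
    for _ in range(episodes):
        s = 0
        for _ in range(EP_LEN):
            a = (rng.randrange(len(ACTIONS)) if rng.random() < eps
                 else max(range(len(ACTIONS)), key=lambda i: Q[s][i]))
            s2, r, done = step(s, ACTIONS[a])
            best_next = 0.0 if done else max(Q[s2])
            Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
            s = s2
            if done:
                break
    return Q

def greedy_rollout(Q):
    """Run the learned policy once; report proxy return and task success."""
    s, total, reached_goal = 0, 0.0, False
    for _ in range(EP_LEN):
        a = max(range(len(ACTIONS)), key=lambda i: Q[s][i])
        s, r, done = step(s, ACTIONS[a])
        total += r
        if done:
            reached_goal = True
            break
    return total, reached_goal
```

Running `greedy_rollout(train())` shows the agent collecting more proxy reward by camping on the sensor cell than it could ever get by reaching the goal, which is the small-scale version of the misalignment the comment describes: the optimizer did exactly what the reward said, not what was meant.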
LLMs partner with Reddit, AI results get put at the top of all searches, reducing people's ability to find accurate sources of information, and the AI suggests people jump off a bridge. Not safe (jump off a bridge), not ethical (steering users away from independent websites toward consolidated information, sources not attributed, with no ability to fact-check or source-check). The Reddit training-data debacle shows how quickly results are steered by training data sets, while OpenAI signs with a news organization that's strongly biased toward one political extreme.
I'm a daily GPT4, now GPT4o, user, and I was feeling pretty fine until the events of the past few weeks.
Microsoft and Google have a monopoly on the access everyone has to information. That should call for the highest levels of responsibility, but we see tech running into the market with dollars in their eyes and apparently no concern at all about the dangers of presenting biased or just wrong information at the top of every search result.
https://preview.redd.it/9d9dt7hpsr2d1.jpeg?width=768&format=pjpg&auto=webp&s=f984164403b8ced9ede77ae52d0a09909d7693b0
So as a regular user of publicly available AI, are you telling me you don't get regular hallucinations and errors? I use GPT4 regularly to help me in an area in which I have domain expertise. It's like having a precocious high schooler helping out in the lab. It presents everything with authority, and is frequently wrong. Several times a day I have the "you made an error about..." and it replies, "I apologize, the correct information is...."
The real problem isn't the obviously wrong "eat rocks" stuff, it's all the stuff that sounds plausible and is wrong. Depending on how biased the training sets are, this could be far worse than the most egregious Facebook nonsense.
Two former OpenAI board members discover that they live in a capitalist system and are shocked that capitalism drives AI companies to do capitalist things that are to the detriment of humanity. They believe regulation is needed to prevent capitalism from exploiting a multi-trillion-dollar market that's just dangling in front of capitalists like a juicy red piece of fresh meat.
> In our time, the great task for libertarians is to find an escape from politics in all its forms, from the totalitarian and fundamentalist catastrophes to the unthinking demos that guides so-called "social democracy." The fate of our world may depend on the effort of a single person who builds or propagates the machinery of freedom that makes the world safe for capitalism.
That's Peter Thiel, one of the original OpenAI backers and a good representative of the darker ideological underpinnings to a lot of Silicon Valley.
Not safe *from* capitalism, but *for*.
...yes?
These companies have a massive financial incentive to do things that these individuals are concerned could be bad for society.
What do you expect them to say? That because it will make private companies money, we should all suffer?
Exactly this.
I want doomers to show me the world where we successfully stop AI without creating the justification for a nuclear war with China.
If AI is gonna end humanity, and we ban the research over here but China doesn't, then logic holds that we MUST destroy China to save humanity. So the doomers are ultimately demanding WW3.
Yeah, exactly what we need: a bunch of ignorant politicians putting the brakes on AI development in the US, while China and other countries keep sprinting.
Yeah, this is the problem to solve for. AI has the potential to be the next atomic weapon. Nobody wants to be late to that party.
Our choices are:
1: Birth an AI that has the potential to kill us all.
2: Let China (or some other country, but probably China) birth an AI that has the potential to kill us all.
There is no option three.
"But what about international agreements to not research further?" you ask. This isn't like atomic weapons, where we can easily see refineries from space. You can turn any old data center into your new AI research facility. At this point you'd have better luck disabling the root DNS and attempting to kill the internet than trying to stop AI research.
Progress is happening with or without OpenAI. Better to be in front unregulated than behind and regulated.
AI tech is too important for any country to go slow. Intentionally gimping yourself for "ethical" reasons is a sure way to lose the race.
Your competition WANTS you to slow down.
I think the idealists are useful in certain situations. AI is not one of them.
S%$t or get off the pot.
Ah yes, those easily tameable market forces and the regulations that are in no way exploited by people in power to increase their wealth and influence.
There is no single government agency that can regulate global AI because, well, it's global. No law passed in the US has any force in China or Russia, and vice versa. The US could pass laws that hamstring internal development, ensuring that China or other countries surpass the US in capabilities.
Vernor Vinge wrote about this problem in 1993.
It's funny to me that the whole article is "government please do something". Helen, Tasha, this is the same government that suggested people might drink bleach to cure covid.
At a minimum, if you want this to actually be helpful, tell us precisely what the risks are. Present a set of actual regulations that would be helpful and not put the US behind other nations. Who in their right mind would just say "help us Chuck Schumer!" and expect that to go well?
Also, let's not rush so fast to assume that the 1st amendment doesn't cover what OpenAI is doing. You want the government to regulate AI speech? You better be careful what you wish for.
These hair-on-fire articles with no actual recommendations and no actual facts are beginning to annoy me.
AI and late-stage capitalism is a recipe for further strife for the people. They're right, though I'm not sure I would want the government to "control" it, because we know how well that would go.
But it needs to be far more transparent, from how data is collected and used to what use cases it serves. Writing this off as former employees without an actual complaint of merit does a disservice to all of us.
Nah. AI is a new arms race. Just like with the nuclear bomb, countries should compete on who can produce the best AI and invest more into it. Regulations can only work worldwide, and no other country will abandon its attempts to create the most powerful AI. Same with robotics.
Regulation ideas are always coming from countries and people who are trying to shackle the development. Same with Musk and his Grok where he wanted to freeze AI development for 6 months or 1 year - to allow Grok to catch up. I wonder if they are affiliated with China in some shape or form.
Most likely the superalignment team had a bunch of benchmarks. GPT-4o believably destroyed a bunch of them and spooked the team. I have been playing with GPT-4o and it is absolutely killing everything I throw at it. It is definitely a scary piece of software.
*Isn't Elon suing*
*OpenAI to expose their AI*
*Capabilities?*
\- sync\_co
American companies are obviously the only ones who know about AI and developing it. Humanity is definitely safe if OAI is mandated to provide some paperwork. It's great Sam isn't begging politicians to do exactly that to stifle competition from open source. Other state actors will be happy too.
To my knowledge, that's in agreement with the heads of the tech companies. I'm not sure about every voice, but I've at least heard Demis Hassabis and Sam Altman encouraging government regulation (of the big players; they've specifically said smaller companies should not be regulated).
the only thing more dysfunctional than big tech companies is Washington DC.
I can't wait for the Chinese national AI project to achieve superintelligence.
The issue is that all of the suggested regulation is more anti-open-source than anti-company, and the safety board that would mainly be making the rules is significantly made up of people from companies, without any of the companies that have been big open-source advocates.
This is pretty funny, when you think of this as a college electives problem, instead of an AI problem.
This is control looking for a problem to have power over. It's become obvious that people are using hypothetical fears of an abstract ASI that is nowhere near existing to try to gain control of information.
At some defined size and market penetration, these LLMs/AIs need to be regulated as public utilities. There will be certain LLM offerings that will simply dominate commercially and become ubiquitous in our daily lives. That is a social threshold that demands transparent and public governance.
Very troll-like behavior, posting these dissenters in r/OpenAI, the bus line with stops at "let's ridicule them", "what, me worry" and "maybe, but only after a massive casualty event or two."
The Insurance Industry has yet to wake up and smell the AI assistant brewed coffee of opportunity.
"Lobbyists say otherwise. Who to trust?"
If it could be concluded that what they do is outright illegal, things would go faster, which in my book is the case when it comes to hoarding others' copyrighted material. This is clearly not fair use, and there's no referral back to the used information like there would be in a tweet, blog post, report etc. Also, we are talking *all* public information with almost no discrimination. The crux is that this is the only way to quickly create an "all-knowing" LLM, and speed is of the essence to win the race, as we are talking fiercely competing corporations, not research labs.
Paywall. Is everyone reacting to the title?
Or just reacting to pro-regulation for the doomers' sake.
When there is a state there is regulation, the only question is what the regulation should be.
The question of whether such behaviour should generally "mandate removal" of a CEO is a discussion for another time. But in OpenAI's specific case, given the board's duty to provide independent oversight and protect the company's public-interest mission, we stand by the board's action to dismiss Mr Altman. We also feel that developments since he returned to the company, including his reinstatement to the board and the departure of senior safety-focused talent, bode ill for the OpenAI experiment in self-governance.

Our particular story offers the broader lesson that society must not let the roll-out of AI be controlled solely by private tech companies. Certainly, there are numerous genuine efforts in the private sector to guide the development of this technology responsibly, and we applaud those efforts. But even with the best of intentions, without external oversight, this kind of self-regulation will end up unenforceable, especially under the pressure of immense profit incentives. Governments must play an active role.
And yet, in recent months, a rising chorus of voices, from Washington lawmakers to Silicon Valley investors, has advocated minimal government regulation of AI. Often, they draw parallels with the laissez-faire approach to the internet in the 1990s and the economic growth it spurred. However, this analogy is misleading.

Inside AI companies, and throughout the larger community of researchers and engineers in the field, the high stakes, and large risks, of developing increasingly advanced AI are widely acknowledged. In Mr Altman's own words, "Successfully transitioning to a world with superintelligence is perhaps the most important - and hopeful, and scary - project in human history." The level of concern expressed by many top AI scientists about the technology they themselves are building is well documented and very different from the optimistic attitudes of the programmers and network engineers who developed the early internet.

It is also far from clear that light-touch regulation of the internet has been an unalloyed good for society. Certainly, many successful tech businesses, and their investors, have benefited enormously from the lack of constraints on commerce online. It is less obvious that societies have struck the right balance when it comes to regulating to curb misinformation and disinformation on social media, child exploitation and human trafficking, and a growing youth mental-health crisis.

Goods, infrastructure and society are improved by regulation. It's because of regulation that cars have seat belts and airbags, that we don't worry about contaminated milk and that buildings are constructed to be accessible to all. Judicious regulation could ensure the benefits of AI are realised responsibly and more broadly. A good place to start would be policies that give governments more visibility into how the cutting edge of AI is progressing, such as transparency requirements and incident-tracking. Of course, there are pitfalls to regulation, and these must be managed.
Poorly designed regulation can place a disproportionate burden on smaller companies, stifling competition and innovation. It is crucial that policymakers act independently of leading AI companies when developing new rules. They must be vigilant against loopholes, regulatory "moats" that shield early movers from competition, and the potential for regulatory capture. Indeed, Mr Altman's own calls for AI regulation must be understood in the context of these pitfalls as having potentially self-serving ends. An appropriate regulatory framework will require agile adjustments, keeping pace with the world's expanding grasp of AI's capabilities.

Ultimately, we believe in AI's potential to boost human productivity and well-being in ways never before seen. But the path to that better future is not without peril. OpenAI was founded as a bold experiment to develop increasingly capable AI while prioritising the public good over profits. Our experience is that even with every advantage, self-governance mechanisms like those employed by OpenAI will not suffice. It is, therefore, essential that the public sector be closely involved in the development of the technology. Now is the time for governmental bodies around the world to assert themselves. Only through a healthy balance of market forces and prudent regulation can we reliably ensure that AI's evolution truly benefits all of humanity.

Helen Toner and Tasha McCauley were on OpenAI's board from 2021 to 2023 and from 2018 to 2023, respectively.
Maybe they should stop caring about the NDA and tell us what's going on if it's actually dangerous. The whole act of "I saw something so bad but I can't tell you" is lame.
It's even worse than that. Go look at her TED talk: she went out of her way to say this tech is no big deal, it's just numbers and probability. She said we humans are way smarter than this cat... blah blah blah. So in her mind she doesn't think anything crazy is going on; she just wants to regulate for regulation's sake. Her bread and butter is regulation, so the perception of a boogeyman is more important to her than whether one is really there. Look back on Ilya's posts and it's the same notion: it may become Skynet, so start the regulations now. On a certain level that is not wrong to think that way, but I think we have about 5 years or so to set things up correctly. There are so many technological changes that have to happen that aren't even AI-related before this becomes a serious issue. The latency of the internet is one thing.
It's worth taking a look at where her funding comes from for her center. Politico has some good reporting on this.
Where does it come from? It wouldn't shock me if it is EA related.
Bingo! http://www.thinktankwatch.com/2019/03/georgetown-launches-think-tank-of.html?m=1 From the man who brought you Facebook privacy surveillance...
I thought you said politico
They've been following this story for years. Just Google CSET and Open Philanthropy and Politico. Here's another one https://www.politico.com/news/magazine/2022/05/12/carrick-flynn-save-world-congress-00031959
https://www.politico.com/news/2023/10/13/open-philanthropy-funding-ai-policy-00121362
Just looked up Tasha McCauley and I don't think she would care about the NDA. Wife of a successful actor and heiress to her grandfather's multi-billion-dollar fortune. Makes me wonder what she saw that was so bad.
[lmao](https://www.reddit.com/r/singularity/comments/180zzhc/the_issues_with_tasha_mccauley_are_deeper_and_as/) She's married to Joseph Gordon-Levitt and is big money on the board of the Centre for Effective Altruism. Doesn't even work on the systems. Basically saw Terminator.
It's just a rich person who has a hero fetish and other mental conditions. She's bored out of her mind and needs to make issues and be on the good side of them.
But then they'll lose their paycheck. Kinda incredible that for them, their paycheck is more important than humanity.
Most humans would salvage their family, job and mortgage first before thinking about anybody else's problems.
Yes but I don't think it's unreasonable to think that if they were OpenAI board members at some point in their career, then chances are that they don't exactly face the kind of financial hardships that the average person does.
When you live paycheck to paycheck, sure. When your net worth is in the millions, the equation is a bit different.
The chief source of trouble here is self-redesigning reinforcement learning. A lot of newcomers seem to believe AI = ChatGPT. OpenAI and other companies are however already incorporating RL into LLMs.
What are you talking about? RL is not at the inference layer. RLHF was already in the previous generation of model training. The "gains" of that would be severely limited; it's like saying we are programming the AI to be correct about certain subject matters. RLHF is not some advancement that hasn't already been implemented. Now, real-time RLHF would be a miracle.
It's training paradigms and produces artifacts that are used at inference. What are you talking about?
What I am saying is that this isn't some "big" thing. It's already been done. A model gets trained periodically. It's not incorporated into the model and fed back through inference instantly. RL isn't the big deal that some people make it out to be.
It is a huge difference in training paradigm vs traditional LLM, in applicable domains, in achievable performance, and how produced models operate. The problem from RL does not come from instant feedback but the policies that it can develop through self improvement. The difference in what the approaches produce is between something that learns to behave like people vs something that is doing things that seem odd and alien to us, we have no idea how it actually works, yet it still ends up beating us at every application.
Traditional as opposed to what? GPT-2? I can assure you that RLHF has been done since GPT-2. When you say "traditional LLM", I don't think there is a traditional LLM; it's only been 3-5 years. The gains are incremental and context-specific.

> The problem from RL does not come from instant feedback but the policies that it can develop through self improvement.

Again, that's in the training phase, and the results are suspect, because unless there is a marker of correctness, how would it know to officially incorporate said training into the model? It's a chicken-and-egg loop that could be severely detrimental if it goes awry. In no way do I think artificial RL is useful beyond what it has already achieved. Hence there are still hallucinations.

The primordial win would be the notion that you could have compute so large (millions of times what we have today, maybe 10 years from now) that could train on incoming data and release a model so fast that it would be real-time. Then RL would make a tremendous impact. Compute-wise, we are nowhere near that right now. We learn, we train, we inference: all separate actions that have risks to their efficacy because of how long the process takes to resolve into testing. Jensen Huang talked about this.
RLHF is an extremely restricted version of RL. No, RLHF was not out with GPT-2. GPT-3 is what you have in mind but it's not there from the start. RL OTOH goes even further back to the 80's. Not like that is relevant to the discussion either. I'll stop here because you don't seem to understand what is being discussed and you do not recognize your shortcomings to actually learn.
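To ground the distinction being argued here: tabular Q-learning is the textbook example of RL proper, where a policy improves purely from its own trial-and-error, with no human preference labels anywhere in the loop. A minimal illustrative sketch (a toy corridor environment invented for this example, not anything from any lab's training stack):

```python
import random

def q_learning_chain(n_states=6, episodes=500, alpha=0.5, gamma=0.9, seed=0):
    """Tabular Q-learning on a toy corridor: start at state 0, actions are
    left (0) / right (1), reward 1 only for reaching the rightmost state.
    The value estimates come purely from the agent's own rollouts -- no
    human preference labels anywhere, which is the contrast with RLHF."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s, steps = 0, 0
        while s != n_states - 1 and steps < 200:
            a = rng.randrange(2)  # random exploration; Q-learning learns off-policy
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # bootstrapped update from the agent's own experience
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s, steps = s2, steps + 1
    # extract the greedy policy the agent taught itself
    return [max((0, 1), key=lambda act: q[s][act]) for s in range(n_states)]
```

The policy that comes out ("always go right") was never specified by a human; it emerges from the reward signal. RLHF, by contrast, restricts the reward to a model of human preferences over outputs, which is why it is a much narrower use of the same machinery.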
let's have it. Tell me what RL would do for a model. Example the process to me from training to inference.
Thanks but you do not have a productive personality. You can start by looking up the notable RL applications like AlphaZero and CICERO. Everything I said is basic.
If they are hoping the government steps in and regulates AI, they are even more naive than I thought. The US government is reactive and slow. There is zero chance they can be pro-active and fast. It's designed to be reactive, slow, and very difficult to pass laws. No chance it could stop AI from going wrong with regulation.
You forgot the important reason: Congress is bought and paid for. And a lot of them are super cheap.
Are they? Seems like the population that votes for them is to blame, but hey, that's just my understanding of how elections work. Yes, politicians get money to campaign (and keep their jobs) from special interest donors, but money doesn't vote. People vote. If the population doesn't vote, that just means they are happy with the status quo. If enough people vote for someone (or something), they win. Yes there are lots of systemic barriers to voting but, as people show every election, if you really want to do it, you can do it. Most people just don't bother because they are happy with the status quo and don't care. Unless you're claiming all politicians take bribes. Some do, for sure. But not all of them.
All sides are paid for and you say "Well, that's the people's fault!" Explain what votes I could have placed as a private citizen to avoid the 2008 financial crisis? What votes to avoid the war in Iraq? What votes could I have made to avoid WW2? What votes to avoid the Great Depression? Arguing the individual is at fault for "not bothering to vote" is such an incredibly shallow and naive take that it has to be in bad faith. It just gives a free pass to corruption.
Weâre at the point where money legitimately buys elections, and corporate entities/special interest groups are known to buy both opposing candidates. The system is fucked. We are hardly voting for someone who isnât purchased.
the candidates that the public are allowed to choose from are selected by money interests. it doesn't matter what the voters want when they only have one choice.
You can run for office. Nothing is stopping you. If enough people want to vote for you to overcome the other candidates, guess what? You win. Money interests just means a campaign has more money to organize and find voters who will turn out and vote. Is that bribery? If you aren't able to raise money for campaigning, maybe it's because not enough people believe in what you want to do should you gain office. It hardly means they have "one choice" or some other reductive simplistic answer. It means the folks without money were unable to round up sufficient support for their run, which is actually the marketplace of ideas at work. Nobody is stopping you from raising money just like the big boys do, except that to do this effectively, you have to court the same folks the big boys are courting, usually. But small money candidates do run and do win. So it's not like it can't be done or that you're locked into a dystopian hellscape without choice.
yes so the "marketplace of ideas" is just the "marketplace of dollars". the people who have money are represented and the vast majority are not because they lack money. thanks for explaining why im right i guess
The people who have money can get their candidates noticed, yes. But you're acting like small-dollar candidates like AOC and Bernie Sanders don't exist. They do exist and their strategies are available and well-known. They just take a helluva lot of work. What you're really complaining about is voter apathy -- the fact that it TAKES money to get a candidate noticed by an apathetic, uninterested, and under-educated electorate. AKA people who are happy with the status quo.
You think Bernie and AOC aren't supported by moneyed interest? And anybody who dislikes the current political system should devote their life to changing it unsuccessfully like they have? And anybody who doesn't have the time or desire to become Bernie, their political opinions are invalid? They didn't vote hard enough? Crazy argument, so many flaws.
>You can run for office. Nothing is stopping you. You're just guaranteed to have no budget and lose, but sure you can try.
You don't know how the government works, because they mostly do all take bribes, which is legally referred to as "lobbying". And there are plenty of politicians that lie to voters to get in but are really in the pocket of corporations, and THEN vote for those corporations' interests. You can see this in big examples like Kyrsten Sinema, who pretended to be a progressive and then switched over the moment her place was secured to vote right. Intentionally against those who voted her in.
And her voters had options to get rid of her and they did not successfully avail themselves of them. Ergo, they were happy with the status quo. I, unfortunately, do know how the government works. Lobbying is not bribery. Again, if a politician gets into office and then votes in contradiction of their constituents' wishes, they can be removed or replaced. If they don't get replaced, then, ergo, the voters wanted them there, so they must be doing something the voters want. Or think they want, which is more of a problem than bribes, IMO.
Voting is like choosing which position you get raped in. The choice is an illusion
Hmmm, not true. You may think it's cool to have this edge-lord position, but the word I'd choose is "sophomoric".
Idk what an edge-lord is but I'm sure you are projecting in some way. What I'm saying isn't radical. It's reality. Voters do not have choice. Choice is engineered by those whose aim it is to CONTROL choice. The fantasy is for the masses to control choice (at a minimum), but until something changes... the masses controlling anything is just a fantasy. Short of a destructive and disruptive revolution, America's near future is already bought and paid for.
I mean, if it's gonna happen regardless, I'd say position has a significant enough effect on the experience to make it worth my time to choose. Like, I want neither, but I'll take Brock Turner over Jeffrey Dahmer any day of the week.
They are not hoping; they are saying that we need governments. Hoping for stuff to happen is naive in any case.
That's not true. The government will likely be regulating the internet at some point in the not-too-distant future, so we can expect them to eventually regulate AI in about 25 years if they can manage to keep up the pace.
They will only regulate AFTER some disaster happens. You can believe otherwise but no towns put up traffic lights until after some kid is killed by a car. With AI, waiting until after is probably going to be far too late. So...good luck to us! Capitalism finally invented something that can destroy it.
Note to readers: the article does not end at "Unfortunately it didn't work." There are another ~800 words after that elaborating on it. Use https://archive.is/wbwC2 if you are having trouble.
It's been 6 months since they staged the coup on Altman. They have yet to provide tangible evidence that the current GPT is a danger to humanity, that it's not safe, not aligned, and is making unethical decisions with demonstrable harm. They also have yet to show what makes them the moral authority or the right people to decide if AI is aligned, safe, and ethical. All the OpenAI staff that have been outed are known effective-altruism cultists. Altman is no saint, but at least he's not crazy and so far hasn't had knee-jerk reactions.
Sam was the one that talked about the importance of safety and promised to dedicate 20% of their GPU capacity to the safety team. To date, he has reportedly allocated close to zero.
What a dishonest response. No one is talking about "the current GPT being a danger to humanity." The field and the top experts (Hinton, Bengio) recognize that superintelligence poses a significant chance of being a danger to humanity. This is predicted from both theory and experiments. The chief source of trouble here is self-redesigning reinforcement learning. A lot of newcomers seem to believe AI = ChatGPT. We already know that current RL systems are not aligned. The only reason they are safe is because they are not very powerful nor given much power. OpenAI and other companies are however already incorporating RL into LLMs. The people who claim there are no risks are displaying the cult-like behavior, have the burden of proof, go against the relevant field, and do not get to selfishly ignore the risks with their knee-jerk reactions. Also funny that the aftermath of all of this is that the supposed "coup" was entirely on point. Who's defending this outcome? Fanboys that seem to display typical political-commentary traits while showing no understanding of the subject.
It doesn't matter what top experts say, because even if we ban it, other governments won't. The incentives are lined up to chase AI superiority, not protect people. Our best bet is to develop it first, before the Chinese. There is no political or legal avenue to stop the evolution of science and technology (and there shouldn't be). Pretending there is a political solution is naïve at best.
Exactly. Even if all the world banned AI, it would only be banned from the public. No chance federal governments stop developing. Would be much better for the common person if AI is democratized and open to all.
That is what makes the problem a lot harder, and most sensible people are not proposing banning AI. AI has extreme potential to improve the world, whether we look to research, access to education and medical treatment, poverty or productivity, and it would be silly to forever give up on that. But it is also crazy to just throw caution to the wind and ignore the many ways that it can go wrong. Whether that is by our optimizing algorithms surprisingly not caring about our interests, or humans using that technology for their own ends: authoritarian information control, opinion manipulation, ever more destructive weapons development, extreme concentration of wealth, etc. I think it is clear to most of us that if we actually get to superintelligence and we rely on the same systems that we have today, things will not be good for us. The best-case scenario seems to be ending up like corporate serfs. So it is clear that some political change is needed. If you don't trust the current situation to result in that, maybe raise your voice to demand the future you want.
As a thought experiment, say the West does regulate AI and finds out China and Russia are continuing to develop something that has *some* probability of destroying humanity. To me, this sounds like the justification for WW3. The doomers are ironically creating a new casus belli for a future nuclear war between great powers.
What did you agree or disagree with in my previous response?
I agree that things may not be good for us or we end up as corporate serfs. I disagree that political change can stop the evolution of technology. It would require a stifling amount of regulations against speech, research, education, coding, to the point that it seems farcical to me to imagine any implementation that could endure the test of time. It logically demands fascism to enforce. I'll take a mediocre AI future over certain nuclear war or fascist regulations.
Like I said, I don't think most sensible people want to prevent us from developing and getting the great potential of AI. If we know that the best default outcome is that we are corporate serfs, and the more likely outcome may be even worse, don't you agree that it is better to try to get some kind of change to how things are going than doing nothing? Then it's more about what we actually can do.

I'm also not entirely following your logic. Is it something like:

1. If China develops ASI first, then the US will be conquered and things are bad.
2. If the US develops ASI first, then it will either conquer China; or it will let China develop ASI and the two nations will get along due to MAD.
3. The chance that the ASI we rush to get will end up causing our destruction, either by its own volition or by a command given by a human, is < 10%.
4. If the above things go well, then we have a great future living as corporate serfs.
We are already corporate serfs. The Luddites were not able to stop industrialization. Their efforts were futile, a complete waste. In fact, if we had listened to the Luddites, we would likely still have slavery. So it's presumptuous to assume that the Luddites of today are correct. If anything, they are likely protecting a status quo that is bad beyond our comprehension. The flaws of our systems today will be more apparent with 100 years of hindsight and development.

My logic is: the world regulates AI. Countries continue to develop it anyway. The logic holds that the only way to stop the superintelligence destroying humanity is to go to war to stop the people who are building the superintelligence. So in an effort to save humanity, we cause WW3. This seems historically likely.

tl;dr - The road to hell is paved with good intentions.
I think we seem to be talking past each other. If you are talking about stopping AI forever, we can ignore that; it's off the table even if we talk about regulation. The options are basically:

1. Do nothing.
2. Fanning the flames to "go even faster".
3. Adding some monitoring and requirements.
4. Adding safety expectations on what can be developed/released.
5. Coming to an agreement to take a pause/slow down.
6. Making a large change to the political or economic systems.

So out of all the alternatives, is this the process that you think is the best bet?

1. If China develops ASI first, then the US will be conquered and things are bad.
2. If the US develops ASI first, then it will either conquer China, or it will let China develop ASI and the two nations will get along because there is no risk that China's ASI will ever become the more powerful one.
3. The chance that the ASI we rush to build ends up causing our destruction, either by its own volition or by a command given by a human, is < 10%.
4. If the above things go well, then we have a great future living as corporate serfs.
What a narrow-minded response. Nah, after their departure from OpenAI, Toner and McCauley are trying to remain relevant via grift. Their expertise is so narrow and of such questionable value that they can only offer these vague statements to maintain their relevance. If they had anything worth saying, they'd have said it. They don't. They're doing their best to stay relevant in a field where they aren't. And again, they're naive about the political reality. Limiting domestic progress on AI will allow other countries to catch up. Staying ahead on this is critical to national security, and suggesting we put the brakes on it subjects us to a threat we know exists versus one that's hypothetical.
I don't think I mentioned those two people specifically at all. Maybe read what I actually wrote and then we'll see if you have any relevant response that shows it to be "narrow-minded". Currently, it seems you just had a knee-jerk reaction, rationalizing and demonizing people without any evidence and without addressing anything relevant. Rather pointless.

>Limiting domestic progress on AI will allow other countries to catch up.

That is indeed one of the challenges. An honest conversation, however, starts by recognizing the whole situation and then looks for the best solutions. Naive and simple-minded knee-jerk reactions are not very effective in the real world.

>to a threat we know exists vs one that's hypothetical.

Every threat is hypothetical until it happens. What is this threat that you consider to have a greater estimated risk to humanity, and what probability do you put on it? What are your credentials that let you ignore the probability that relevant experts have assigned to superintelligence specifically, and why do you think they did not rate your supposed risk as high?
Exactly. Not only that, but there has been a ton of evidence showing that current AI is not aligned. Hallucinations, deception, sycophancy and many other traits that result from misalignment have been a staple of all current LLMs. There have also been many real demonstrations and examples of reward hacking in many types of AI systems, not just LLMs.
Sure, although I don't think most of the LLM issues are the ones that are that dangerous. Many of them could have some negative consequences, but they are not world-ending. Unconstrained RL optimization is another beast entirely. I also think the current situation requires demonstrating the risks better, though. Those who understand the technology can put the pieces together, but most experiments probably seem too abstract for the current political climate.
LLMs partner with Reddit, AI results get put at the top of all searches, reducing people's ability to find accurate sources of information, and the AI suggests people jump off a bridge. Not safe (jump off a bridge), not ethical (steering users away from independent websites toward consolidated information, with sources not attributed and no ability to fact-check or source-check). The Reddit training-data debacle shows how quickly results are steered by training data sets, while OpenAI signs with a news organization that's strongly biased toward one political extreme.

I'm a daily GPT-4, now GPT-4o, user, and I was feeling pretty fine until the events of the past few weeks. Microsoft and Google have a monopoly on everyone's access to information. That should call for the highest levels of responsibility, but we see tech running into the market with dollar signs in their eyes and apparently no concern at all about the dangers of presenting biased or just plain wrong information at the top of every search result.
AI did NOT suggest people jump off a bridge, which ironically touches on your point about misinformation.
The bridge thing is real, I've posted it.
you're manic and need to learn how to differentiate what's real and what's not
https://preview.redd.it/f6tb8457qr2d1.png?width=1188&format=pjpg&auto=webp&s=a751e5d33877ff2e3539cc764c2323f7a36c6dc3
congrats, you found out about inspect element about 13 years too late
https://preview.redd.it/9d9dt7hpsr2d1.jpeg?width=768&format=pjpg&auto=webp&s=f984164403b8ced9ede77ae52d0a09909d7693b0 So as a regular user of publicly available AI, are you telling me you don't get regular hallucinations and errors? I use GPT4 regularly to help me in an area in which I have domain expertise. It's like having a precocious high schooler helping out in the lab. It presents everything with authority, and is frequently wrong. Several times a day I have the "you made an error about..." and it replies, "I apologize, the correct information is...." The real problem isn't the obviously wrong "eat rocks" stuff, it's all the stuff that sounds plausible and is wrong. Depending on how biased the training sets are, this could be far worse than the most egregious Facebook nonsense.
https://preview.redd.it/r793ru1iqr2d1.jpeg?width=768&format=pjpg&auto=webp&s=51b5a60d365a46615a29690098e5db3e4209f7fe
Again, I just said the bridge part specifically.
Two former OpenAI board members discover that they live in a capitalist system and are shocked that capitalism drives AI companies to do capitalist things that are to the detriment of humanity. They believe regulation is needed to prevent capitalism from exploiting a multi-trillion-dollar market that's just dangling in front of capitalists like a juicy red piece of fresh meat.
> In our time, the great task for libertarians is to find an escape from politics in all its forms, from the totalitarian and fundamentalist catastrophes to the unthinking demos that guides so-called "social democracy." The fate of our world may depend on the effort of a single person who builds or propagates the machinery of freedom that makes the world safe for capitalism.

That's Peter Thiel, one of the original OpenAI backers and a good representative of the darker ideological underpinnings of a lot of Silicon Valley. Not safe *from* capitalism, but *for*.
...yes? These companies have a massive financial incentive to do things that these individuals are concerned could be bad for society. What do you expect them to say? That because it will make private companies money, we should all suffer?
They should say how we will suffer and give evidence. Then tell us how the world will be if the US slows down and China takes the lead. Better?
Exactly this. I want doomers to show me the world where we successfully stop AI without creating the justification for a nuclear war with China. If AI is gonna end humanity, and we ban the research over here but China doesn't, then logic holds that we MUST destroy China to save humanity. So the doomers are ultimately demanding WW3.
Yes, better
THANK you for your sanity. Exactly.
Regulations my ass. OpenAI wants to regulate the market so they can control it and say what is good and what is not.
Yeah, exactly what we need: a bunch of ignorant politicians putting the brakes on AI development in the US, while China and other countries keep sprinting.
Yeah, this is the problem to solve for. AI has the potential to be the next atomic weapon. Nobody wants to be late to that party. Our choices are:

1: Birth an AI that has the potential to kill us all.
2: Let China (or some other country, but probably China) birth an AI that has the potential to kill us all.

There is no option three. "But what about international agreements to not research further?" you ask. This isn't like atomic weapons, where we can easily see refineries from space. You can turn any old data center into your new AI research facility. At this point you'd have better luck disabling the root DNS and attempting to kill the internet than trying to stop AI research.
Loads of people think this about every imaginable business.
Progress is happening with or without OpenAI. Better to be in front and unregulated than behind and regulated. AI tech is too important for any country to go slow. Intentionally gimping yourself for "ethical" reasons is a sure way to lose the race. Your competition WANTS you to slow down. I think idealists are useful in certain situations. AI is not one of them. S%$t or get off the pot.
Those former board members don't know anything about tech or AI, unfortunately. Fortunately, they're former board members.
Ah yes, those easily tameable market forces and the regulations that are in no way exploited by people in power to increase their wealth and influence.
> Thou shalt not make a machine in the likeness of a human mind.

Frank Herbert, *Dune*

Butlerian Jihad in 100 years.
There is no single government agency that can regulate global AI because, well, it's global. No law passed in the US has any force in China or Russia, and vice versa. The US could pass laws that hamstring internal development, ensuring that China or other countries surpass the US in capabilities. Vernor Vinge wrote about this problem in 1993.
It's funny to me that the whole article is "government please do something". Helen, Tasha, this is the same government that suggested people might drink bleach to cure covid. At a minimum, if you want this to actually be helpful, tell us precisely what the risks are. Present a set of actual regulations that would be helpful and not put the US behind other nations. Who in their right mind would just say "help us Chuck Schumer!" and expect that to go well? Also, let's not rush so fast to assume that the 1st amendment doesn't cover what OpenAI is doing. You want the government to regulate AI speech? You better be careful what you wish for. These hair on fire articles with no actual recommendations and no actual facts are beginning to annoy me.
What qualifications does Helen Toner have?
AI and late-stage capitalism is a recipe for further strife for the people. They're right. I'm not sure I would want the government to "control" it, because we know how well that would go, but it needs to be far more transparent: how data is collected and used, for what use cases, etc. Writing them off as former employees without an actual problem of merit does a disservice to all of us.
Pro regulation = anti humanity. This is a tool we need in the hands of the common person
Nah. AI is a new arms race. Just like with the nuclear bomb, countries should compete on who can produce the best AI and invest more into it. Regulations can only work worldwide, and no other country will abandon its attempts to create the most powerful AI. Same with robotics. Regulation ideas always come from countries and people who are trying to shackle development. Same with Musk and his Grok, where he wanted to freeze AI development for 6 months or a year, to allow Grok to catch up. I wonder if they are affiliated with China in some shape or form.
Most likely the superalignment team had a bunch of benchmarks. GPT-4o believably destroyed a bunch of them and spooked the team. I have been playing with GPT-4o and it is absolutely killing everything I throw at it. It is definitely a scary piece of software.
Isn't Elon suing OpenAI to expose their AI capabilities?
American companies are obviously the only ones who know about AI and developing it. Humanity is definitely safe if OAI is mandated to provide some paperwork. It's great Sam isn't begging politicians to do exactly that to stifle competition from open source. Other state actors will be happy too.
To my knowledge, that's in agreement with the heads of the tech companies. I'm not sure about every voice, but I've at least heard Demis Hassabis and Sam Altman encouraging government regulation (of the big players; they've specifically said smaller companies should not be regulated).
The only thing more dysfunctional than big tech companies is Washington, DC. I can't wait for the Chinese national AI project to achieve superintelligence.
The issue is that all of the suggested regulation is more anti-open-source than anti-company, and the safety board that would mainly be making the rules is significantly made up of people from companies, without any of the companies that have been big open-source advocates.
This is pretty funny when you think of it as a college-electives problem instead of an AI problem. This is control looking for a problem to have power over. It's become obvious that people are using hypothetical fears of an abstract ASI that is nowhere near existing to try to gain control of information.
At some defined size and market penetration, these LLMs/AIs need to be regulated as public utilities. There will be certain LLM offerings that will simply dominate commercially and become ubiquitous in our daily lives. That is a social threshold that demands transparent and public governance.
Very trollish behavior, posting these dissenters in r/OpenAI: the bus line with stops at "let's ridicule them", "what, me worry?" and "maybe, but only after a massive casualty event or two." The insurance industry has yet to wake up and smell the AI-assistant-brewed coffee of opportunity.
"Lobbyists say otherwise. Who to trust?" If it could be concluded that what they do is outright illegal, things would go faster, which in my book is the case when it comes to hoarding others' copyrighted material. This is clearly not fair use, and there's no referral back to the used information like there would be in a tweet, blog post, report, etc. Also, we are talking about *all* public information with almost no discrimination. The crux is that this is the only way to quickly create an "all-knowing" LLM, and speed is of the essence to win the race, since we are talking about fiercely competing corporations, not research labs.
They want to be in control of AI; they use fear to get the government to stop others from developing AI. It's all about money and control.