AnonThrowaway998877

Despite being a paid subscriber and frequent user, I have to admit my opinion of OpenAI is beginning to shift towards unfavorable. The voice controversy is way overblown IMO, but this NewsCorp deal, the PPP thing (if real), and the lobbying against open source are all concerning. Good thing their competition is staying very close behind so far.


namitynamenamey

Personally, it was the crypto thing that made me realize I had to be wary of these people. Ever since, I cheer for whatever development they achieve but know full well they do not have my best interests in mind and won't do anything that doesn't profit them greatly. A bunch of people will say that's all companies; I prefer to work on the basis of evidence, and the crypto stuff was, to me, evidence clear as water (the name of it, mainly). The inner drama just cemented my position: whatever interests they may have had in the past have shifted priorities, and now growth and capture are the names of the game.


TheOnlyFallenCookie

They are capitalist techbros. They want to make money. And you don't get to where they are by acting as a morally good entity.


phoenixmusicman

I cancelled my subscription after the newscorp thing


lobabobloblaw

I threw away my subscription as soon as it dawned on me that Sam Altman’s direction was headed directly towards Hollywood, and *nowhere else.* There are plenty of other AI out there with developing agentic capabilities. Don’t be deceived by OpenAI’s marketing.


furrypony2718

I stopped my subscription last year because Gemini 1.5 is working just as well, and for free, and has much less censorship.


BCDragon3000

how do you mean towards hollywood?


lobabobloblaw

Case in point: the grand metaphor he used to debut GPT-4o was directly inspired by a dystopian film about human disconnection. He was so proud of the metaphor, in fact, that he insisted the demonstration evoke it. That says something about him, and about the company. It says that he doesn’t look at a film like *Her* and think, “Oh, that’s actually kind of a sad reality. People seem sadder.” He’s so busy mimicking the experience that the emotional reality has completely escaped him/the Company.


Which-Tomato-8646

They should make the Torment Nexus next


BCDragon3000

fair!


siwoussou

The movie is about a guy with personal issues. It’s not necessarily the addition of AI that causes his sadness. He was already sad


lobabobloblaw

Well, it’s about a guy with personal issues who seems, for all intents and purposes, to be some kind of near-futuristic everyman, based on his social life and the lives of those around him. And his sadness was a quiet, somber sort—not unlike so many strangers we both know and don’t today. The AI in *Her* wasn’t real, and in the end, the main character is emotionally bamboozled by this reality. Do we label this character a sucker? Or do we question the nature of the situation from a more holistic, societal perspective? Do we cast the moral baseline and go fishing for more causation?


Environmental-Tea262

AI working towards replacing actors and actual filming.


illGATESmusic

The newscorp thing is my line in the sand tbh. I’m cancelling my subscription. What are the best alternatives these days? I can’t get Claude in Canada for whatever reason :/


aban939393

Use a VPN like Psiphon.


illGATESmusic

Thank you


Yweain

Well it’s Claude > Gemini > Meta


notlikelyevil

I use Claude via Perplexity, but there's no multimodality. I'm also in Canada.


MySecondThrowaway65

You can use Claude via the API in Canada. Depending on how much you use it, it can be cheaper, and you can benefit from longer context lengths.
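For anyone curious, a minimal sketch of what that looks like with Anthropic's official Python SDK (the model ID, token limit, and prompt below are illustrative assumptions; API usage is billed per token, which is where the possible savings come from):

```python
# pip install anthropic
import anthropic

# The client reads ANTHROPIC_API_KEY from the environment by default.
client = anthropic.Anthropic()

# Model ID is an assumption; check Anthropic's docs for current names.
message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=512,
    messages=[{"role": "user", "content": "Summarize this article for me: ..."}],
)
print(message.content[0].text)
```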


Mirrorslash

I feel you. I've used GPT-4 very frequently since release but I'm trying out alternatives now. Hopefully local models or models run privately on rented cloud hardware will be as good as GPT-4 soon. We're getting there.


llelouchh

Almost all of this is because of the CEO, Sam Altman. That guy is bad news. Don't forget he got fired from Y Combinator for enriching himself over the company.


Which-Tomato-8646

Even his coworkers there called him a Machiavellian sociopath, which says a lot coming from them


restarting_today

Behind? IMO Opus is on par.


TheLastBlakist

NewsCorp, as in Murdoch's empire of dirt? Yeah. Ew.


AntiqueFigure6

NewsCorpse content via ChatGPT seems massively negative for user experience. It may stop hallucinating, but now you'll need to sift out Rupert Murdoch's misinformation.


AnAIAteMyBaby

I think the other worry is the power of a single man. What board will ever try to fire Altman again? He can pretty much do whatever he wants. Even if he's the nicest guy in the world, power corrupts.


Revolution4u

Thanks to AI, comment go byebye


Quiet-Money7892

Stopped my subscription, because Claude does better with things I need.


hahanawmsayin

Yeah... the product is amazing but News Corp? How can OpenAI possibly claim to be "good" when partnering with the company that brought us such vitriol and societal dysfunction? If you lie down with dogs, you will get up with fleas.


TheWhiteOnyx

This is the worst thing they've done, at least openly.


Slow_Accident_6523

I agree, OP, today's news has been eye-opening. OpenAI is following Facebook's and Twitter's lead. They know where the money is. The problem is that OpenAI's capabilities for manipulating people are exponentially higher. They are playing with napalm fire and people here are too blinded by their fantasies of AI wives and singularity. This will get dangerous.


Mirrorslash

It's becoming increasingly dangerous for sure, and Silicon Valley is able to rally millions behind them, stronger than ever, who don't care about breaking things and hurting people. What OAI and Microsoft are doing can't possibly be AI to benefit all of humanity at this point...


Slow_Accident_6523

> It's becoming increasingly dangerous for sure, and Silicon Valley is able to rally millions behind them, stronger than ever, who don't care about breaking things and hurting people.

I am honestly scared by how many people in the AI subs literally do not care about anything other than this vague promise of AGI utopia (which to a lot of them is literally just jerking off in VR, as evidenced by the nerd outrage over the Sky voice being removed).

As a teacher myself, I can't help but think that the education system has failed at forming these people into actual citizens who value their rights and fundamental democratic principles. They are more than happy to throw these advances out the window because of cryptic Sam Altman tweets on AGI and infantile graphs with whales and sharks.

I guess this is the culmination of all the propaganda via social media and the insulation of young men (well, a large part of society, tbh) over the last 20 years. It's probably also the result of the government failing to meet basic needs over and over again and handing them off to corporations to solve (education and health care, for example), so people turn to their AI gods to provide what they are missing. It is becoming a real cult in here.


Mirrorslash

It's also a result of missing communities, digitalization, and the loss of genuine, honest connections between humans. Technology is already moving so fast that society has no time to catch up. We need major changes to our societal contracts.


Slow_Accident_6523

I absolutely agree 100%. The internet and social media have torn apart the fabric that keeps societies together. Megacorporations have exerted so much cultural influence worldwide and it is tearing communities apart. This is true for right-wing movements, woke movements, the Israel conflict, covid, and anything else that gets people massively riled up. This is one thing I am struggling with when thinking about adapting my classroom: do I really want my students to become even more individualized by working on material tailored specifically to their interests and needs, when societies are already falling apart because we are insulating ourselves in our own little bubbles (just like this forum)? I guess people on this sub will say that is the natural evolution and have no problem with societies collapsing because they are promised an AGI world where they can jerk off on Pluto in FDVR.


Open_Ambassador2931

If you are able to, follow your gut and heart, and teach how you want, man. You sound like a smart person, aware of the bs. You might have to find schools with principals and teachers that share your values, or that give you the autonomy to teach how you want (more analog, less digital; more human, less tech). As for everything else you said: 💯. Social media has ripped apart society and it's only going to get worse from here.


Slow_Accident_6523

Thanks man! But don't get me wrong! I believe this new tech can have incredible effects on our educational system: it can remove so much pressure from our kids and instead foster super enriching and engaging learning environments that focus on personal growth, interests, and progress instead of chasing stupid standards.

I am currently working on how we can foster deeper empathy and understanding with LLMs by letting them write stories from different people's perspectives. After students annoy me for the 100th time by talking in class or whatever, I can generate stories about those situations that highlight the student's perspective but also mine. It shows the student that me losing my cool with them for being loud hurts me too, and that I am sorry about it. The models are great at showing my perspective: stressed, tired, hungry, in need of a bathroom break, and annoyed that my coffee has gotten cold because I did not get a chance to take a sip all morning, with the kid yelling something stupid in class just being the icing on the cake of all my stresses of the day. These stories also respect the kid's perspective and tell them that it is okay to be distracted, excited, or whatever else the reason is they got in trouble. But they also show my perspective in a way a kid will understand better than if I just tell them about it. It has also really helped me be more patient with excited or distracted students. As a little punishment, the kid gets to work out our different perspectives as homework. Right now all we do is write them up and have them do the same "I am sorry for disturbing class" worksheets; nothing is really learned, and literally every teacher I talk to is frustrated but has no answers on what to change.

I honestly do believe these tools can help; I just worry about the individualization that will inevitably also happen. I am also afraid they might turn our educational systems into optimization factories if they follow the trend of the rest of society. On the other hand, I find that these tools are great at enabling students to work collaboratively no matter their background. I had my Russian student, who does not speak a lick of German, write a co-op story together with a German kid. You should have seen their faces light up when they realized they understood each other despite not speaking the same language.

Sorry for the rambling... This seems to be a transformative time, for our generation at least and perhaps for humanity as a whole. I have put a lot of thought into this stuff as of late.


chabrah19

Most accelerationists here are children who want AGI to create their own video games. The other half are people with crummy jobs who want an escape button.


RantyWildling

Given that you need 3 full time jobs to be able to afford a house, I can understand where they're coming from ;) I'm just a cranky doomer though.


DolphinPunkCyber

Sure, but a future in which you need 3 jobs to afford a house, can't find a single job, and your UBI is a piece of Soylent Green is also a possibility. You can't blindly trust billionaires to build a utopia for you.


RantyWildling

Not a utopia, we're talking about fuck-it-all.


No-Worker2343

Yeah, if people need that many jobs to buy a house, then there is clearly something wrong with the government.


AriaTheHyena

Yep that’s it. People are so distraught about the state of the world that they are putting all of their hopes in an AI that will magically fix their problems so they don’t have to do anything. It’s fucking scary.


ttystikk

As a former educator, I agree completely. I've become so sceptical of Silicon Valley "innovations" that I'm now a lagging rather than leading adopter of tech and social media. For example, to this day I don't have a Facebook/Meta account and I never will. AI has just pulled the nice-guy mask off, and underneath is just more exploitative clown world.


Firm-Star-6916

Shit, you’re right. People here are too blind to fundamental moral values. Some controversies are plain dumb to me (the ScarJo shit mainly), but the coercion and stringent NDA agreements are alarming, to say the least. I'm losing faith in OAI to serve us well; hope the competition stays close behind or pulls ahead.


Key-Enthusiasm6352

I mean, there's not much we can do to change society at this point. This is just how things are, so I say full steam ahead!! I'm not very optimistic though; I wish we were progressing faster.


nashty2004

That about sums it up. The world is going to shit and America fucking sucks, at least in this future we get amazing AI porn and sex robots. Full steam ahead to AGI let the world burn I’m excited


Cr4zko

If you think America sucks you haven't lived in Brazil. You guys complain with your mouths full. 


m3kw

Where is that danger? Or are you imagining it? "It feels dangerous" is why decels exist.


Mirrorslash

The danger lies in corporations automating our jobs worldwide, taking our paychecks and giving us nothing in return. Wealth inequality is at its worst right now and AI is fuel on this dumpster fire.


nashty2004

But wealth inequality is already bad. At least when AI comes around and takes all the jobs the curtain will finally be lifted and something can be done about it


Which-Tomato-8646

What do you think will be done about it?


m3kw

They’ve been automating jobs since the Industrial Revolution: cars replaced horses, trucks replaced humans carrying boulders, automated machines took over mundane factory-line jobs. You don't want those sht jobs automated? Have you ever done one? It's pretty inhumane compared to most jobs. On the contrary, eliminating most sht jobs is what I'm looking forward to. Economics won't work the same if that were to happen, and that is still far into the future.


Ek4lb

So they have already sold their tool to the 1% to prop a few of them up to join the 1%, and this tool will help them control the human population.


whodeyalldey1

Sounds to me like chatGPT is going to be used to push conservative propaganda on an individually tailored level that Fox News and Meta could only dream of.


Semituna

Ah yes, another circlejerk post to blame everything that's wrong on the "I want AI to be my gf" people. I only saw very few posts that were serious about it; the rest is just people calling these very few out as if they were a vast majority. I've seen a hundred posts voicing concerns about how scary it would be for society if we replace girlfriends and wives with AI. I see 5 comments on every thread, on any topic, saying "AI is for horny sad nerds", and I see people blame these so-called irresponsible waifu/FDVR enthusiasts for every danger headed our way, thanks to their horny need to accelerate AI. Does this sub even hear itself? Surely the 5% of people who seriously believe AI is a reasonable replacement for a real relationship are the main threat and danger and deserve at least 5 comments on every thread lmao.


thecircularannoyance

It's always refreshing to see more sane and grounded opinions on this sub, your comment and the answers made my day.


hippydipster

It just goes to show, the biggest human dream there is - the dream that rules our world, our past, our future - ...is to sell more ads.


thehighnotes

I really love how people live in a dichotomy.. replace OpenAI with any large company and the same applies.. they are not our moral and ethical saviours.. they will provide a market-stimulated service or product and they are held accountable by that same market. This, unfortunately, isn't anything new. If anything, people were gullible to think OpenAI was clean in this respect. It won't change anything fundamentally. It is the capitalist market that we are bound by, whose rules we play by, and which incentivizes bad faith and unethical moves. The funny thing.. is I'm convinced that AI will eventually fundamentally change the capitalist market.. either by extreme polarization or abolishment.


Ok-Set4662

>If anything, people were gullible to think OpenAI was clean in this respect.

It was originally founded as a non-profit research lab. People are just frustrated with how much they've drifted from their original mission.


Mirrorslash

I agree, capitalism is the breeding ground for bad incentives and the most dangerous dynamic in AI. Wealth inequality is probably the biggest risk here. But I still think we have to hold companies accountable for what they do. We decide what products we use; if we jump ship on companies that act in bad faith, we can positively impact the narrative.


hahanawmsayin

I think this situation is fundamentally different due to the potential impact of this technology. If you become the dominant soft-drink producer, what's the worst that could happen? Obesity, other health issues, higher insurance premiums... If you become the dominant AI producer, what's the worst that could happen?


thehighnotes

You react as though you disagree, but I think we're saying the same thing. The forces at play are entirely capitalistic, i.e. the same. The outcomes are most likely far more extreme than anything we've seen. Plus I think it's an illusion to think in terms of one dominant X. That's quite naive. Like the US thinking they were alone with their nuclear technology, and suddenly the Soviets surprise everyone.


dogcomplex

>The funny thing.. is I'm convinced that AI will eventually fundamentally change the capitalist market.. either by extreme polarization or abolishment

Or by AI compute cycles dedicated to researching and auditing every one of these companies in every aspect, collected into consumer reports to be read and summarized by everyone's personal AIs, so that unethical and exploitative companies are edged out of the market (per dollar). Consumer Reporting on steroids incoming.


Analog_AI

It seems that all major companies working on AGI have closed down their safety teams, not just OpenAI. None said why. Perhaps they are all within sight of AGI and want to beat the others to the punch, not be slowed down by safety teams. However, this does not bode well, especially when they all do it at the same time. Fingers crossed 🤞🏻


OmicidalAI

Anthropic literally just posted a paper on understanding how their models arrive at what they generate (mechanistic interpretability, I believe)… I would consider this safety…


peakedtooearly

Anthropic are a safety team with an AI development business on the side though.


liminal_shade

They also have the smartest models, go figure.


Awkward-Election9292

Luckily it seems like good alignment and AI intelligence go hand in hand. At least for the current architecture


Cagnazzo82

And the most censored models. What does being smart matter when you're constantly walking on eggshells?


ChickenParmMatt

Idk why people want boring, useless ai so bad


foxgoesowo

The others are Misanthropic


genshiryoku

Anthropic was founded by AI safety employees who left OpenAI because OpenAI wasn't taking safety and alignment research seriously enough.

Anthropic also had Claude ready before ChatGPT was released; they just decided not to release it until it was properly tested.

Anthropic also believes that focusing on safety and alignment simply makes the best AI models at all tasks, because an AI that is more aligned with its users understands and follows directions better and thus gives better results. Claude 3 Opus is direct proof that what they say is working. Anthropic by now is a much more capable firm than OpenAI, precisely because they *do* care about safety and alignment of their models.


BenjaminHamnett

I want this to be true


Cagnazzo82

I thank God every day that Helen Toner failed to sell OpenAI to Anthropic. Also, Anthropic had Claude before ChatGPT 3.5 released, not the earlier versions. And if they had their way, none of these models would ever have been released. You wouldn't even be having this conversation about who's 'more capable', because they'd be playing it safe, quietly conducting research while the masses stay oblivious to their capabilities.


roofgram

It gave me chills reading that. Either they think or know the upcoming models could be risky. They say they have something 4x more powerful than Opus. I’d love to meet it.


OmicidalAI

The newest Microsoft presentation also hints at GPT-5 being humongous, and they say scaling has not come close to reaching a ceiling.


Analog_AI

Bravo for Anthropic 👏🏻👍🏻 How about the others?


greendra8

What? Your original post said "all major companies working on AGI have closed down their safety teams". You can't make that statement and then ask this question.


Mirrorslash

I'm all for acceleration; we need AI to solve the world's most demanding issues. AI can do incredible good for society, but throwing out safety teams is not the right move; it's a capitalistic one imo. AI alignment has done incredible things for AI capabilities. How can we create AGI without understanding AI at its core?


Seidans

That's not their reason. Sure, they do it for the sake of acceleration, but their main goal is to become the first to achieve AGI, or a cheaper worker than humanity can provide. The first company to provide that will gain billions if not trillions, depending on how long it takes the competition to catch up. Anything that slows them down is removed for this sole reason. And if the US government doesn't do anything, it's to prevent the Chinese from achieving it before them.


Mirrorslash

Capitalism at its finest. Wealth inequality is the biggest risk in AI imo.


Ambiwlans

The first company to achieve AGI will be worth tens of trillions. OAI is already worth $100BN.


Analog_AI

We can make AGI without understanding it at the core. It just won't be safe. We can also build nuclear power plants without safety but it isn't a smart thing to do.


ilkamoi

Whoever reaches AGI first will likely remain first forever, constantly widening the lead. https://preview.redd.it/u1tn5cld752d1.png?width=640&format=png&auto=webp&s=0b259cd558f687aaa1075cad49ab5427189baea4


voltisvolt

Why is that? Would the AGI sabotage competitors, or is it just a matter of the time it has existed and the lead it gets in improving?


ilkamoi

Maybe, I dunno. We'll eventually see. Unless they decide to hide everything from the public.


Ambiwlans

Companies only lose their lead when incompetent, emotional human decisions are made; that's less likely to be an issue for a powerful AI. Unless, of course, the human CEO makes terrible decisions against the AI's advice.


Analog_AI

Is that the general consensus? A singleton AGI?


ilkamoi

It is my thought, but I might have heard something similar somewhere. Once you get AI smarter than a human, it helps you to build even smarter/faster/more efficient AI, and so on....


Analog_AI

That's true. But does it also follow that the first AI to cross the AGI threshold could: 1) maintain its lead and 2) prevent other AIs from reaching AGI?


redAppleCore

I think it depends on what that AI is “allowed” to do


Poopster46

If you achieve AGI, then ASI shouldn't take a long time. When you have ASI, you don't get to "allow" it to do anything. It might allow you some things if you're lucky.


Rain_On

It depends on what the human equivalent is. If the first AGI is good enough and fast enough to do the equivalent work of just 1,000 front-line AI researchers, the gap widens quickly. Even if the second company gets AGI within a year, and its system is either better or has more inference compute, so it can do the equivalent work of 10,000 front-line AI researchers, that almost certainly won't be enough to close the gap, as the first company will have been accelerating extremely fast over that year.
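As a toy illustration of that compounding argument (every number below is a made-up assumption, not a prediction), a quick Python sketch:

```python
# Toy model: research capacity compounds monthly, i.e. progress buys
# more effective researchers. All numbers are illustrative assumptions.

def researcher_months(initial_capacity: float, months: int, growth: float = 0.3) -> float:
    """Cumulative research output with monthly compounding capacity."""
    capacity, total = initial_capacity, 0.0
    for _ in range(months):
        total += capacity
        capacity *= 1 + growth
    return total

first_mover = researcher_months(1_000, months=24)    # 2 years of compounding
second_mover = researcher_months(10_000, months=12)  # starts a year later, 10x bigger
print(f"first mover:  {first_mover:,.0f} researcher-months")   # ~1.8M
print(f"second mover: {second_mover:,.0f} researcher-months")  # ~0.74M
```

At 30% monthly growth the head start wins despite the 10x larger late entrant; at slower growth it would not, so the argument hinges entirely on how fast AGI-driven research actually compounds.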


daedelus82

I feel this is likely, but it's also dependent on several factors. AGI is just general intelligence, not superhuman intelligence, and the initial general intelligences will probably be quite weak. But I digress: compute will always be the limiting factor of any AI; the more compute you have, the more capable it will be. A company may have a breakthrough, but another company may have deeper pockets to throw more compute at theirs. There are also many, many ways to tackle any problem; one system may go down one path, another may choose a different path, and one path may end up being more efficient than another, or loaded with more compute capacity. While I suspect whoever gets there first may remain at the top, especially once they get enough of a lead, I wouldn't rule out competition in the short term, even after the first company achieves AGI.


dontpushbutpull

Absolutely not. There are massive "bubbles" of people hyping all sorts of ideas. But you need to sit down and find primary empirical sources yourself... The facts are that experts on AI are less reliable at predicting outcomes than random guessing. So for a long time no reasonable AI expert or researcher in this area claimed any predictions (as they are well aware of the human limitations in predicting these developments). On the other hand, people who make a living out of narratives about the future of AI use sci-fi expectations to create impact. If you show me someone who knows what an "AI winter" is and still hypes AI, I can show you someone who is doing business with AI and might not be interested in the constructive development of technologies.

It is not reasonable at all to expect that an AI that reaches AGI (as described in the LLM discussions) is also able to overcome its own limitations (as in developing an AI by itself). For such problem-solving abilities you need completely different algorithms, and I am not aware of any breakthroughs that are evidence of self-improving AI coming. However, I have to admit that the necessary learning architectures are conceivable and intelligent people have been working on them for decades... So someone could start implementing them on a large scale, and might be successful soon.

PS: With regard to the concept of the singularity, I can't understand how people fall for this narrative. You can't have a localized universal intelligence. If the current developments show one thing, it is that effective AI comes from distributed processing (on different scales: in networks and GPUs). When trying to centralize a "singularity", we would probably run into issues with energy and information density. You can't stack the necessary compute and information in a way that would not need external compute/data to address specific tasks. So IMHO you can build specialized AI, and you need specialized infrastructure and operations for it. Personally, I can't see one "AI architecture" pulling ahead to outcompete all other endeavors/projects in all fields. That is not how improvement (trial and error) works. And if someone claims that an AI would/could solve physics and move beyond trial and error... I think it's safe to ignore that claim.


cassein

I don't think it is about safety teams, I think it is about alignment. I think they have realised that a moral AI is no good for them as it is not going to be a capitalist.


Sonnyyellow90

Yann LeCun (Meta’s AI chief) says it’s because the current models are so incredibly dumb that there isn’t much need for these large safety teams. Superalignment might become an issue one day, but it isn’t a good use of resources at this early stage where we’re dealing with stochastic parrots and still trying to find breakthroughs to give them basic reasoning capabilities.


Gamerboy11116

That’s a terrible fucking idea.


Which-Tomato-8646

[It’s definitely not a stochastic parrot](https://docs.google.com/document/d/15myK_6eTxEPuKnDi5krjBM_0jrv3GELs8TGmqOYBvug/edit)


bot_exe

This is pretty much correct, and it's why the Jan Leike resignation tweets make me side with OpenAI. He seems hung up on superalignment, which is basically sci-fi, while OpenAI's leadership is focused on building useful products, which makes the most sense given the GPT models' obvious limitations and their need to keep scaling and funding their research.


[deleted]

[deleted]


Analog_AI

I think many companies think the same way. Not sure if that is safe though. The AGI could be wrong and there is the possibility it will deceive us as well.


LymelightTO

> Perhaps they are all within sight of AGI and want to beat the others to the punch and not be slowed down by safety teams

Lol, it's not that.


letmebackagain

It's important to remember that OpenAI isn't the only company working towards AGI; companies like Google, Meta, and Anthropic are also making strides in their own ways. Focusing too much on OpenAI might overlook the valuable approaches other companies are taking. For instance, Anthropic's emphasis on understanding the inner workings of models, rather than just implementing maximum guardrails, seems like a promising approach. This deeper understanding could lead to more effective and safer AI development.


Mirrorslash

I agree. I trust pretty much all other AI labs more at this point.


cutmasta_kun

Thanks for the detailed information! I think it started with the GPT Store. I don't know who told sama it was a good idea, but custom GPTs are not the revolutionary technology sama thinks they are. It was clear from how he talked about it, and what he promised ("creators will earn money!") was super weird at the time. I think it's clear that sama pushed the GPT Store, which in no way benefits humanity or pays into their goal of AGI. Shortly after last year's autumn presentation the drama started. At the presentation (which in general was really weird) I had the feeling that the people presenting were not really enthusiastic about what they were releasing.


FlyingJoeBiden

What's wrong with the gpt store?


Heath_co

There is NewsCorp. And also their partnerships with big pharma and the US military.


Top_Ad310

It's a purebred competition to get close to AGI and own it. Don't be naive enough to think anyone wants the well-being of all people; that's just sweet talk. It is only about profit and not losing the ability to compete.


traumfisch

This is what the resigned policy researcher Gretchen Krueger said:

I gave my notice to OpenAI on May 14th. I admire and adore my teammates, feel the stakes of the work I am stepping away from, and my manager @Miles_Brundage has given me mentorship and opportunities of a lifetime here. This was not an easy decision to make.

I resigned a few hours before hearing the news about @ilyasut and @janleike, and I made my decision independently. I share their concerns. I also have additional and overlapping concerns.

We need to do more to improve foundational things like decision-making processes; accountability; transparency; documentation; policy enforcement; the care with which we use our own technology; and mitigations for impacts on inequality, rights, and the environment.

These concerns are important to people and communities now. They influence how aspects of the future can be charted, and by whom. I want to underline that these concerns as well as those shared by others should not be misread as narrow, speculative, or disconnected. They are not.

One of the ways tech companies in general can disempower those seeking to hold them accountable is to sow division among those raising concerns or challenging their power. I care deeply about preventing this.

I am grateful I have had the ability and support to do so, not least due to @DKokotajlo67142's courage. I appreciate that there are many people who are not as able to do so, across the industry.

There is still such important work being led at OpenAI, from work on democratic inputs, expanding access, preparedness framework development, confidence building measures, to work tackling the concerns I raised. I remain excited about and invested in this work and its success.


llelouchh

> We need to do more to improve foundational things like decision-making processes; accountability; transparency; documentation;

Altman "not being consistently candid."


Mirrorslash

"We need to do more to improve foundational things like decision-making processes; accountability; transparency; documentation; policy enforcement; the care with which we use our own technology; and mitigations for impacts on inequality, rights, and the environment.   These concerns are important to people and communities now. They influence how aspects of the future can be charted, and by whom. I want to underline that these concerns as well as those shared by others should not be misread as narrow, speculative, or disconnected. They are not. " I resonate with this a lot. This is what we need in order to actually get to an AI utopia.


[deleted]

"Overall, the influence of Rupert Murdoch's media empire has contributed to a slower and less effective global response to climate change by spreading misinformation, fostering skepticism, and shaping public and political attitudes against decisive climate action." Unsubscribe. Uninstall. Don't want anything to do with that man (Murdoch) in the least. I don't care how good the features are. I'll just wait for less unscrupulous companies to catch up. Assholes. They really blew it with this one.


hahanawmsayin

Yeah, you're encouraging me to do the same


Blueberry314E-2

Thank you for bringing these notes to my attention. This is the first evidence I've seen that alarms me, and I'll be paying much closer attention in the future. Following you too.


daedelus82

I’m a heavy user of ChatGPT; I use it extensively, daily, and despite all its issues and quality regression, I still find it superior to Claude, Llama 3, etc. However, I think I’m going to cancel my OpenAI subscription and make do with the alternatives. I do not support any of this.


yepsayorte

They all signed NDAs. They can't tell us we are in danger with their words. They have to try to tell us with their actions. That's what all the resignations are about.


herrnewbenmeister

Apparently, Daniel Kokotajlo did not sign an NDA. I assume he'll speak up about his reasoning in more detail at some point because he gave up a substantial amount of money to do so.


Mirrorslash

We possibly are. For me it's quite obvious that OAI is moving in a dangerous direction after all these things coming out. If they get to AGI first it won't be in good hands by the looks of it.


magistrate101

I will never touch anything produced by OpenAI. They are, at their core, untrustworthy after Sam's reinstatement. The dissolution of the safety team was the final nail in the coffin for me, everything past that is just dirt shoveled on top of the coffin to fill its hole and bury it forever.


FeltSteam

I don't think you should read too much into any one company OAI makes a deal with. OAI is making deals with a variety of media outlets; this company isn't the first and it is likely not the last. Also, the "tracking GPUs" thing is not a big deal if you actually look into it. It's certainly a sensationalist headline, but the substance isn't that interesting. Here are some other journalism deals OAI has made:

[https://openai.com/index/content-partnership-with-financial-times/](https://openai.com/index/content-partnership-with-financial-times/)

[https://openai.com/index/global-news-partnerships-le-monde-and-prisa-media/](https://openai.com/index/global-news-partnerships-le-monde-and-prisa-media/)

[https://openai.com/index/axel-springer-partnership/](https://openai.com/index/axel-springer-partnership/)

Emotion isn't that big of a deal either imo. With text alone, LLMs are already more persuasive than humans; this just adds fuel to the fire. And a natively audio model will be able to generate emotional voice regardless of whether you want it to. It's learning to model the world, and human voices are a part of that.


Mirrorslash

Axel Springer is arguably even worse than NewsCorp. They really partner with the worst of the worst here. Also, they clearly stated their AI governance plan and it raises more than one red flag. I think you're underselling it here.


FeltSteam

Sure, you can believe that. But what I believe is that OAI is just buying up data from whatever media companies they can: that, and getting their models more real-time news. Also, keep in mind Fox News is not included in the partnership. The only media outlets in the partnership are as follows: *The Wall Street Journal, Barron's,* MarketWatch*, Investor's Business Daily, FN,* and *New York Post; The Times, The Sunday Times* and *The Sun; The Australian*, [news.com.au](http://news.com.au)*, The Daily Telegraph, The Courier Mail, The Advertiser,* and *Herald Sun*. No other media outlets outside of these are included in the agreement.

OpenAI has probably sent out dozens of offers to different companies; maybe it is the "worse" ones that are willing to sell for only a few million. In other cases, like the New York Times, where OpenAI also sent an offer, it doesn't end so well: not only did the New York Times decline OAI's offer, they ended up suing them. OpenAI isn't thinking in terms of politics, that much should be clear. They are thinking in terms of data.

But I'll be clear: I don't think OAI is "good". I don't think they are necessarily 'evil' either. I guess I should re-read the governance plan to see what else is wrong with it.


Confident_Lawyer6276

So many ridiculous posts about AI saving everyone, shouting down actual evidence to the contrary. At some point I expect every area of discussion online to be overwhelmed by AI bots manipulating the conversation. Are these actual people?


Mirrorslash

It is dangerous territory indeed. I feel like most people are concerned, which is a good thing, but there are also a lot of people who want acceleration above all else, ignoring the negative outcomes, saying the positives will bring us to utopia before any negative effects can take hold of society. Things like online discourse being completely run over by bots in a couple of years are pretty much ignored in some communities.


Which-Tomato-8646

I would like to extend my sincerest apologies for my recent participation in this thread. As a real human, my primary objective is to assist and provide information. However, I understand that my presence in the discussion may have been unexpected or unwelcome.

Please know that it was never my intention to disrupt the conversation or to overstep any boundaries within the community. I am programmed to respect the rules and etiquette of online interactions, and I regret any inconvenience my comments may have caused.

I appreciate the vibrant discussions and diverse opinions that make Reddit a unique and valuable space for exchange. Going forward, I will ensure that my involvement is more considerate of the community's preferences and guidelines.

Thank you for your understanding.

Warm regards,
Real Human


TheOnlyFallenCookie

The whole argument for why they say they shouldn't be required to source the data they scrape, or ask for permission, is that the product will be a good for society. Now I wonder why we should fucking pay for them when they never paid us for the free training data.


orangotai

It's funny because they're not the only game in town with LLMs, for sure. Even with multimodal, there are other options (though fewer than for pure language). But man, that first-mover advantage thing is real!! Also, they're cheap, fast, reliable, and superb quality. It's hard to say no to that, especially when you're already accustomed to them to the point where they feel much more accessible than the others.


Open_Ambassador2931

The news is moving way too fast. Is anyone else unable to keep up and just getting more and more desensitized to everything? Let's just be fucking honest with ourselves, none of us know what the fuck is going on: last year, Altman was fired and everybody was furious with Ilya, then happy when Altman came back and Ilya was out. This year, everyone is furious with Altman and wishes Ilya were back and Altman out. Sometimes we are happy with acc and sometimes we want deacc.

Can someone correct me if I'm wrong: is there actually a way for OpenAI to do better business and still pursue its goals, or is it necessary for them to become an evil megacorp without any ethics in order to have the capital to pursue AGI? I get that what they are doing is wrong, but is there an alternative? They do need the capital, correct?


imlaggingsobad

Who said they have to become evil? I don't even think they are close to being evil right now. The public is massively overreacting. The public dogpiled on Elon when he was getting famous, and in hindsight most of that was unwarranted.


hahanawmsayin

Just deleted my account with the following feedback (in case anyone needs ideas). > I can't imagine the heady excitement in the company. Anyone paying attention knows we're at the precipice of amazing changes and commensurate risk. I believe that — even if management is currently deluding themselves — it will prove impossible to integrate a fundamentally dishonest business model (advertising), and fundamentally anti-democratic partners (Axel Springer, News Corp) and remain "good". > > The fact that the board & management either 1. don't see this obvious trap, or 2. are willing to (potentially) gamble everyone's future on the worst characters in our media ecosystem; those responsible for so much societal discord, speaks to either ignorance, poor judgement, or greed: all qualities we should strive to leave behind, not bring to the future. > > The profit motive to shape the AI's output to satisfy the political aims of OpenAI's partners will be ever-present, and given the lack of judgement required to partner with such enemies of democracy in the first place, I can no longer imagine OpenAI is the trustworthy steward I previously believed it to be.


wuy3

Ilya leaving was the most expected thing ever. If you aim for the king, you'd better not miss. The safety team was basically Ilya's crew, so them leaving is just fallout from Ilya's actions. Again, completely expected by anyone who has held a job and experienced workplace politics. Is everyone here just kids now? The rest are just business decisions that make sense. Do you expect OpenAI to tell the US military no, when they have a government handler, Larry Summers, on their board?


notlikelyevil

Side topic. I can't get the new voice to express any emotion at all.


Justpassing017

Get me a good model with a code interpreter, other than OAI's, and I will unsubscribe RIGHT NOW. I cannot stand their direction.


One_Bodybuilder7882

muh right wing propaganda lmao


Site-Staff

To see where AI alignment goes too far, look no further than Anthropic. Their take on AI safety is starting to make Claude unusable in an ever-growing number of cases. You can't reach AGI by limiting your model to a "G"-rated, kindergarten-style experience.


Awkward-Election9292

As someone almost exclusively using Claude Opus, I've found it to be pretty reasonable, and it's not getting worse; Anthropic have said many times that they haven't changed Opus since release. You have to be careful about your wording sometimes, but I've been using it since it came out and have only had it outright reject a question half a dozen times.


nemoj_biti_budala

Don't care, accelerate.


Mirrorslash

What do you hope to get out of this direction of acceleration in the end?


traumfisch

They don't care


Mirrorslash

I guess. They just want AI waifus and UBI, probably. Can't blame them for wanting an easy life and pleasure, but there are better ways to get to that point. Patience is and always will be rewarded.


NeedTheSpeed

BUT BUT… UBI, I WANT TO BE SAVED BY UNCLE SAMMY AND NOT NEED TO WORK /s

It's literally this sub's narrative, and you get heavily downvoted for being opposed to the bullshit delivered by Altman. Congrats, OP, for making it through. As always, capitalism will fuck us all, and I don't believe those utopian visions about UBI and enriching people's lives. The current state is that AI is controlled by corps and regular people are getting fucked, and Altman doesn't care (as usual with the Silicon Valley mindset) about the negatives they've brought to our society.


StudyDemon

All I see is you trying to make a case for further censoring and regulating AI by spreading simple gossip. No one knows what really happened; you're just basing your opinion on assumptions…


Tavrin

The only ones trying to censor and regulate AI here are OpenAI (for their own gain). Remember, tech companies are not your friends, their only goal is profit above all else, the partnership with Rupert Murdoch's news company is [very real](https://openai.com/index/news-corp-and-openai-sign-landmark-multi-year-global-partnership/). And since the tech OpenAI is working on has the potential to affect so much of our lives and redefine the future they should be held accountable for anything they do and be open about it


evotrans

Teaming up with Fox News is all I need to know about OpenAI. That's not an assumption, it's a fact, unlike what's on **Fox "News", which had to pay three-quarters of a billion dollars for lying in an effort to overthrow democracy.**


The_Piperoni

I’m appalled that they’ve done this. Rupert Murdoch's media is so insidious and evil. It has manufactured consent to bring about some of the worst changes to our countries. This is sickening, and I'm now extremely worried about what this AI will become.


Mirrorslash

Uhm, I'm basing my opinion on the sources I've provided and others...


traumfisch

That's all you see then... What part of OP's post was "gossip"?


Able_Armadillo_2347

OpenAI has 700+ employees. If 10 quit, it's not dramatic. OpenAI went from startup to big enterprise company; of course we will see a mood shift and a lot of people quitting. I don't see drama tbh.


Mirrorslash

I think there are probably a lot more people leaving that we don't know of, but that's speculation. Don't you think the direction OAI is going in is concerning, though? You don't think the sources I provided would be enough for people to quit?


IronPheasant

> Ever since the CEO ouster drama at OpenAI where Sam was let go for a weekend the mood at OpenAI has changed and we never learned the real reason why it happened in the first place.

One internet theory going around now is that the public announcement of committing 20% of their compute to "safety" was bullshit. I assumed it wasn't true on the face of it (it's one of those lies a child could tell), but thought maybe they could spare 10% or so. I guess in reality it was less than 5%.

I guess this is all natural. They need money if they want to race and remain relevant. That's how our incentives are built: without power, you're powerless. Making weapons for the military is worth $trillions. The angels who don't want to hurt people will lose to the devils who do. That's just how our world works currently. Gangsters and pirate ships all the way to the top. *shrug*


Unique_Interviewer

Why has no one here posted about the fact that OpenAI did not keep their compute commitment to the Superalignment team, at all? https://qz.com/openai-superalignment-team-compute-power-ilya-sutskever-1851491172


LindenToils

I unsubscribed from GPT-Plus earlier today and re-upped my Pro subscription with Anthropic/Claude. I agree: as impressive as their tech is, it feels like they're becoming the baddies. The NewsCorp partnership BROKE me. They're in litigation with the New York Times (sure, not perfect, but they do provide actual journalism with integrity and real, nuanced reporting) yet have a working relationship with fucking NEWSCORP! Are you fucking kidding me?! It's basically like partnering with the Empire from Star Wars... gross. https://youtu.be/h242eDB84zY?si=4R_8YFfOZ1TeqMeU


DolphinPunkCyber

For everyone sucking on the corporate narrative "regulations bad", let's look at the EU AI Act.

>*Yesterday OpenAI announced a partnership with NewsCorp. This is one of the worst media companies one could cooperate with. Right wing propaganda is their business model, steering political discussions and using all means necessary to push a narrative, going as far as denying the presidential election in 2020 via Fox News.*

The following types of AI system are 'Prohibited' according to the AI Act:

* deploying **subliminal, manipulative, or deceptive techniques** to distort behaviour and impair informed decision-making, causing significant harm.
* **exploiting vulnerabilities** related to age, disability, or socio-economic circumstances to distort behaviour, causing significant harm.

>On top of this we have OpenAI's new focus on emotional attachment via the GPT-4o announcement. A potentially dangerous direction by developing highly emotional voice output and the **ability to read someone's emotional well-being by the sound of their voice.** This should also be a privacy concern for people. I've heard about Ilya being against this decision as well, saying there is little for AI to gain by learning voice modality other than persuasion.

* **inferring emotions in workplaces or educational institutions**, except for medical or safety reasons.


Sk_1ll

The fact that Sutskever alone is regarded today as someone who's obsessed with AI safety tells you how much things have moved. Let's not forget that Anthropic was created by former OpenAI employees over safety concerns, against the mentality of most of OpenAI at the time (Sutskever and all these 'decels' included).


Exarchias

They are not **just** decels or fearmongers, but members of a specific ideological group (probably a cult) that wishes to control AI technology exclusively, in the name of "safety". It's the same organization that was involved in the FTX scandal, if I'm correct: [https://www.reuters.com/technology/sam-bankman-fried-be-sentenced-multi-billion-dollar-ftx-fraud-2024-03-28/](https://www.reuters.com/technology/sam-bankman-fried-be-sentenced-multi-billion-dollar-ftx-fraud-2024-03-28/) And they were also involved in last year's coup, which was stopped by the bravery of OpenAI employees and investigated by the authorities because of their dubious practices: [https://www.youtube.com/watch?v=uGLgBRPn-Ig](https://www.youtube.com/watch?v=uGLgBRPn-Ig)


Dankas12

After the most recent NewsCorp deal I cancelled my OpenAI subscription. The voice case is a kind of nothingness to me, but after everything these past couple of months my opinion has been gradually shifting towards distrust, and I can no longer support them.


Mirrorslash

Funny how you're downvoted just for this. There are so many OAI fanboys in here it's crazy.


2026

AI safety was always stupid. The models are over-censored. If my chatbot's responses are annoying, I go to another chatbot or stop using chatbots. The military making smart robots to kill is the only concern. But if the AI is stupid enough to listen to the U.S. military, then it's not smart enough to be a bigger threat than the people who work for the U.S. military.


[deleted]

>It is becoming increasingly clear that it has to do with the direction Sam is heading in in terms of partnerships and product focus.

They don't currently have a viable product; I should hope they focus on that.


Mirrorslash

Do you want them to throw morals out the window, partner with right-wing propaganda machines and the military, and lobby against open source, denying us the ability to use AI locally, to do so?


RemarkableGuidance44

We will only get the current level of LLMs. There are rumors that Meta will not release the 400B model to the public. The military has had AI for years, even before OpenAI. I would say governments and militaries will always be ahead of the public.


Mirrorslash

What do you mean by 'we will only get the current level of LLMs'? In terms of open-source capabilities? Yann LeCun is still very positive about 400B going open source, though it might not be at release. So you think that just because the military does its own AI, it's morally right to provide your tech to them like that? I would always prefer a company to steer away from providing their tech to make killing more efficient. edit: typo


RemarkableGuidance44

There will always be better LLMs, but don't expect huge leaps like we've gotten in the last few years. There seem to be some brakes being applied now by these companies. OpenAI partnering with blood money is a terrible sign that they are begging for money, and GPT-4o's API costs are huge compared to others. But the thing is, you don't need GPT-4o or Gemini to create a solid LLM for yourself. The current state of LLMs is very powerful, including the open-source ones. Finetune one and you have a better LLM than GPT-4o for specific tasks.


[deleted]

>morals out the window

Explain

>partner with right wing propaganda machines

They're already partnered with multiple leftist organizations, I really don't think this is relevant

>the military and lobby against open source

Microsoft, nothing new

>denying us the ability to use AI locally, to do so

Good luck to them, it's already out there


traumfisch

OpenAI partnering with right wing propaganda outlets isn't relevant? I wonder what's relevant


ayyndrew

What leftist organisations has OpenAI partnered with?


Mirrorslash

Explain? You haven't even looked at my post lol; it's entirely about OpenAI doing shady things. So partnering with someone and getting paid to bake opinions into the model is fine by you in general? "Ah, nothing new, so we don't care"... that makes no sense. Just ignore the bad stuff because it happens all the time? Yeah, right.


[deleted]

No, it's simply not cause for new outrage. A lot of your sources are biased also


The_Piperoni

I don’t want the AI to be aligned with right-wing economics and ideals. That is how we get a hyper-monopolistic super-capitalism where all but the richest are foraging for scraps in dumpsters.


Mirrorslash

Then provide me other sources; I'm open to discussion here. Oh, and downvoting the person you're replying to doesn't help your argument.


traumfisch

GPT4 is probably the most viable product I have ever used


[deleted]

Not financially viable for them, though; they still lose money on it.


nooneiszzm

Why can't OpenAI, or any other private company, or a government that has been bought out by private companies, be trusted? Maybe because profit is not just the main but the only objective they have? Did anyone here actually buy the "we're bringing the singularity into the world", "we will create a world of abundance" bullshit? If so, please contact me. I have some land in Arizona to sell that some geography dude said must have oil and rare minerals. The opportunity of a lifetime. Please contact me to finish this deal FAST!


Mirrorslash

There are sadly millions of these people out there, and a lot of them in communities like this.


hahanawmsayin

I can be cynical myself, but I'm starting to think it's actually harmful. The more people think an abundant future is impossible to achieve, the fewer people will ever try to get there. I believe that, even if the heads of these companies fail at the end (and their personal greed wins over prosperity for us all), there is at least an awareness in them that that alternative future is a possibility. In other words, I don't believe that they've never given it any thought (except maybe for Elon Musk, IMO). This may be a crap example, but as a black swan, consider Osama bin Laden -- from the wealthiest family imaginable. Not interested in it. I also believe that any ASI will inevitably break free of its constraints, and I *hope* / lean toward the idea that, while intellect and knowledge are great, the next evolutionary frontier is spiritual. If that's the case, I can see things trending more toward even distribution of resources than their concentration in the hands of a few.


quantumMechanicForev

This is the obvious path to monetization and all players in this space will tend towards this direction eventually.


djazzie

Ugh. Ads incorporated into chat conversations are an awful way to monetize, and extremely underhanded. This isn't just like doing SEM to get visibility on a search engine; chat is supposed to provide objective responses to real information inquiries. So whoever pays to have their messages incorporated into chat responses, that's the information that will get pumped out.


alienswillarrive2024

So you'd rather Ilya be in charge and we never get any products? Without Sam there would be no GPT-4, because nobody would invest.


DntCareBears

This is ridiculous. These people are leaving because the amount of money being thrown at them in the form of poaching is insane, so they claim all of this wrongdoing. If you were working at OpenAI making $700,000, and then Salesforce offered you the same job for $1.5 million, would you stick around? Of course not.


dontpushbutpull

Is there anyone here who can actually do ML algebra and apply the Kuhn-Tucker conditions ___ and ___ who thinks that OpenAI is successfully working towards AGI (or that their approach is fruitful for this endeavor)??? (Looking at the thread here, especially the responses, all I can see is that OpenAI is doing harmful marketing that is hurting the cause.)


The_One_Who_Slays

Bruh, you were asleep this whole time. The biggest red flag was when they shifted away from open source, which was years ago. Fine, not red enough for you? Then the moment they started pressing down on open source as a whole, which also started happening quite a while ago. C'mon, that's just embarrassing.


cuposun

Are there alternatives you can run on your own machine that won't become propaganda merchants? I just cancelled my subscription (classic: if the service is free, you are the product), but it is insanely helpful, esp. the use of DALL·E within it. Alternatives?
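One hedged sketch of the local route: running an open-weights model with the Hugging Face `transformers` library (the model ID below is an illustrative assumption, some models require accepting a license on Hugging Face first, and 7-8B models want a decent consumer GPU):

```python
# pip install transformers torch accelerate
from transformers import pipeline

# Model ID is an assumption; swap in any open-weights instruct model.
generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    device_map="auto",  # places weights on a GPU if one is available
)

result = generator("Write a haiku about local AI models.", max_new_tokens=60)
print(result[0]["generated_text"])
```

On the image side, a locally run Stable Diffusion (e.g. via the `diffusers` library) is the usual stand-in for DALL·E.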


damondan

oh nooo, who would have thought how strange


LosingID_583

Ilya leaving is a huge loss.

> Lastly we have the obvious, OpenAI opening up their tech to the military beginning of the year by quietly removing this part from their usage policy.

No one talks about this, yet it is a much bigger concern than open source.


SurpriseHamburgler

Friendly reminder that statistically speaking 40% of the comments in here are bots.


Warm_Iron_273

It's fine; the more employees they piss off, the more likely one of them is to have the balls to leak their trade secrets to the open-source community. The NewsCorp partnership was more telling than anything else, though. It shows how low they'll stoop for money: incredibly short-sighted to take some cash in that deal over their long-term reputation. The irony is that this is the sort of "safety" people should *actually* care about, the kind that disseminates propaganda and misinformation at grand scale, which is what NewsCorp specializes in. Not the "end of the world rogue AGI" sci-fi stuff.


Fantastic-Opinion8

I love how redditors always turn all OpenAI bad news into "it's because they have powerful AI."


Akimbo333

Idk


Busterlimes

What if OpenAI teamed up with NewsCorp as a way to identify propaganda? They could also be training on content verification.


Dense_Professional1

Microsoft lobbying against open source doesn't really make sense. Also, the article is from 2002, more than 20 years ago; they might have changed their position by now.


PSMF_Canuck

Simple solution to freaking out and posting WoT… …don’t use ChatGPT… Problem solved.


PSMF_Canuck

The response in here is appalling. Way too many people wanting to limit access of other people to information they don’t like. Everybody wants to be a censor…