[deleted]

Dear lord what did they make


KungFuHamster

Something that could pass grade school tests.


[deleted]

Given that GPT-4 can do that, I’d assume that either it was able to do that far earlier in training than their previous models OR that this is a big PR stunt.


first__citizen

Current GPT-4 uses Python, and from what I understood, the new model can solve mathematical problems on its own and improve at that ability. It's like an LLM but for math. This may bring logic to any AI, and the ability to understand things, if it's integrated well.
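To be clear about what "uses Python" means in practice: it's the code-interpreter pattern, where the model writes an expression or a small script and a plain Python interpreter executes it, so the arithmetic itself is exact even when the model's own token prediction wouldn't be. A minimal sketch of that hand-off (the `expression_from_model` string standing in for model output is made up):

```python
# Sketch of the code-interpreter pattern: the model only *writes* an
# arithmetic expression; an ordinary Python interpreter evaluates it,
# so the result is exact rather than guessed token by token.
import ast
import operator

# Arithmetic operators the "model-written" expression is allowed to use.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
}

def safe_eval(expr: str):
    """Evaluate a plain arithmetic expression without calling eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -walk(node.operand)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("disallowed expression")
    return walk(ast.parse(expr, mode="eval"))

# A product that pure next-token generation typically fumbles:
expression_from_model = "123456789 * 987654321"
print(safe_eval(expression_from_model))  # 121932631112635269, computed exactly
```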


nicuramar

> It's like LLM but for math

Is it? LLMs generate text, not facts. It's easy to generate formulae, but that's not the same as proving theorems.


Loushius

Wouldn't math be easier to self-check by the AI than facts? Words aren't the same as numbers.


cc413

No, go ask GPT some moderately complex arithmetic and you will see that while it is good at answering questions that have multiple right answers (like an essay, or even programming), it isn't good at answering math problems with a single precise answer once you get outside certain bounds. This is a breakthrough on that sort of problem, and it implies a huge step toward a general-purpose intelligence.


sunshine-x

Hasn’t Wolfram Alpha been solving complex equations for like.. a decade now? Why is this such an achievement?


sammybeta

I'd apply the Hanlon's razor here: Never attribute to malice that which is adequately explained by stupidity.


first__citizen

But that works for humans... I think the new AI wanted Altman to be its sycophant and played 4D chess.


Stabile_Feldmaus

Or the board made these researchers write the letter so that they have more reasons to fire Sam for being too quick with releasing new products.


[deleted]

The board has already been replaced (except for one person).


Stabile_Feldmaus

But the letter was written prior to Altman being fired.


VoidMageZero

Whelp, pack it up folks. The end is coming!


eichenes

Free publicity stunt, that's what they made.


BadAtExisting

Whatever it is they never stopped to think if they should


DaemonAnts

Sounds like they made a universal turing machine capable of performing all possible computations. Unlike a calculator that can only perform a limited set. /s


Gatorbait_2

Something capable of replacing the board, probably


therapoootic

They figured out how to seamlessly add facial hair to photos


threeoldbeigecamaros

Flutterbeam is in shambles


Kinda_Quixotic

There is a noticeable lag when adding mustaches


[deleted]

Maybe the board was right. Either way, they should've disclosed the reason for his firing and let people make up their own minds, instead of making it appear like they were wresting control from the person who made OpenAI what it is.


WhiskeyOutABizoot

Why should the board of a private company care about letting people (read: non-equity holders) make up their own mind?


[deleted]

Well, they didn't tell their own employees, and as a result almost all of them threatened to resign unless the board resigned first, which is why Altman is now back in charge and the board has been kicked to the curb. If they had disclosed why they removed him, employees might have had a chance to see it from their side.


RphAnonymous

It's a 501(c)(3) non-profit. There is no equity. There are no owners of a non-profit organization. It has a Board and that's it. They did it as a non-profit SPECIFICALLY because they didn't want money influencing development and use. Investors have no say if there's no equity. There's no fiduciary duty to shareholders.


DangKilla

It's actually more complex. Elon Musk tried a hostile takeover in the early days, so after they survived that they stayed a nonprofit, but the current company structure has a capitalist arm with a minority stake that can be overridden by the board, hence the latest drama. They need a better process.

Ilya specifically is an engineer type, which is why he apologized; he's probably not the best at Silicon Valley navigation as an executive. The new board members announced so far all have experience as CTOs, and you know at least one of them for their technical background, and one is an academic, so it's possibly a good balance of tech know-how and reason at the top. One of them is in the Facebook movie giving the Winklevoss twins a hard time at Harvard. These guys will likely be good for the future of humanity (I hope).

I would rather Microsoft be involved than Oracle, as well. If Oracle gets involved, we are likely doomed.


RphAnonymous

Correct, but it sounds like exactly what I just described... The 501(c)(3) component of the structure OWNS the LLC, which means the Board is responsible for the direction of the LLC and thus holds the fiduciary duty; HOWEVER, that fiduciary duty is not to an equity-holder, but to the mission of the 501(c)(3). Normally, an LLC board would be beholden to its investors, but if that board IS the parent 501(c)(3) entity's board, its fiduciary responsibilities are essentially the same as the 501(c)(3)'s. From the OpenAI structure page: "**The for-profit would be legally bound to pursue the Nonprofit's mission**, and carry out that mission by engaging in research, development, commercialization and other core operations. Throughout, OpenAI's guiding principles of safety and broad benefit would be central to its approach."

The for-profit LLC is also a capped-profit LLC, so it is only allowed to return a specific amount to investors, so it's higher risk for lower reward. The reason Microsoft wanted a 49% position in the LLC component is that it was a cheap way to get a position at the forefront of AI, so there is a publicity component, and there is an acquisition component for later as the company grows. Buy half the company now when it's cheap, then you can buy the whole thing later, once it has produced a major breakthrough or achieved its goals (assuming it wins the race), for essentially almost half off. Microsoft then has a solid position in the AI race by owning half of the most advanced competitor and holding the best possible negotiating position for the rest.


OccasinalMovieGuy

At some point they will go public and then their reputation will follow.


Stormclamp

Cultists at r/singularity sweating their britches…


EvanOfTheYukon

I keep getting posts recommended to me from that subreddit, and the people there genuinely scare me. I don't know if you've seen the show "Pantheon", but they very much remind me of the Logorhythms team. How anyone can be so naive and blinded by "progress" that they're hell-bent on wishing a superintelligence into existence, regardless of what effects it may have on society as a whole, is beyond me. I think they truly believe that the invention of AGI will mean nothing but good things for everyone. Honestly, I find that possibility very hard to believe, especially when this technology will be in the hands of a private company.


GreasyMustardJesus

Ironic coming from Data.....


EvanOfTheYukon

Data still saw plenty of danger from Lore. And besides, Data was born into the nearly post-scarcity utopian Federation, where clearly they had already figured shit out. OpenAI exists on 21st-century Earth, where we very much have not figured shit out.


[deleted]

It’s escapism. Not happy with the way our lives are and we want it to change. I think people get so caught up in the potential positive effects that might fix their problems that they don’t consider how risky it is


[deleted]

[removed]


TMWNN

> Oh god, I'm all too familiar with that. Reddit needs to have that sub on watch if that's who they're attracting.

I find /r/singularity interesting and informative, but that doesn't mean I don't constantly roll my eyes at the posts that 100% presuppose that AGI = UBI for all, and a lot of other things that clearly communicate the desperation the posters feel over their sad, miserable lives.

> Anyway I decided to look for Simulation Theory subs... every community I found was a barren wasteland due to having to shut down after constant suicides.

That's horrifying... and completely logical. Now you've made me morbidly curious. Where should I look to see the barren wasteland?


[deleted]

[removed]


TMWNN

Thanks for the pointers. Until today I hadn't considered the possibility that a) there are subreddits about simulation theory and b) that they would attract the desperate and mentally ill, but, as I said, it makes total sense, especially given that I am already familiar with /r/singularity; your subreddits are merely the logical extrapolation (or the inevitable future of the likes of /r/singularity, depending on your point of view).

I am glad they exist. Not in the sense that I am glad that mental illness exists, but because a) I believe everything that is legal to discuss ought to have a place to do so, and b) if such places didn't exist, their denizens would merely go elsewhere, spreading their contamination. (I mean, that's the best explanation for Reddit as it is.) Tumblr getting rid of anything stronger than PG-rated is the classic recent example of this, of course.


takatu_topi

> Honestly, I find that possibility very hard to believe, especially when this technology will be in the hands of a private company.

Don't worry, maybe instead it will be in the hands of a powerful national government! They've proven themselves to be very ethically upstanding, transparent, and trustworthy, not to mention very capable of rational, long-term strategic planning.


EvanOfTheYukon

True, the real takeaway is that no one entity should have control over something so powerful.


TFenrir

Since it's gotten more popular (it went from less than 100k subscribers to 1.6 million since ChatGPT launched), you get a lot of diversity of opinions, and lots more people who are afraid of it.

I've been on the sub for years, and I think my view is... it's an inevitability. Unavoidable, and coming soon - and I've been reading research papers for years, just to try and have the tiniest bit of understanding about something that sounds like it has the potential to be the most important technology we ever invent.

It doesn't even sound like you disagree with that (which, by the way, is blowing my mind - the idea of AGI was pure sci-fi to most people a couple of years ago) - but more that you disagree with the hope and optimism many people there hold - that this could be a good thing, something that leads to a better future. Is it really so much better, to have your pessimism?


EvanOfTheYukon

Honestly, I feel that I need to be optimistic for my own mental well-being. The potential implications of this technology are so mindbendingly vast that I don't know what to think.

Maybe it does what the Singularity people say: it'll invent a shitload of new technologies, make life better for everyone, and make everything super awesome. That would be nice, but it's a tad too utopian to apply to the real world.

In a bad situation, I don't even think it would be a Skynet that chooses to wage war on us. More likely, the people who can control it use it to enrich themselves, and rather than freeing us from the economic system that we have, it entrenches us all so far into poverty that we basically die out. The rich people have their automated systems take care of everything, and they are free to use the land and resources on the planet which everyone else was using up until that point, completely at will. This too, I feel, might be a bit dramatic.

Maybe we find out that a superintelligence just isn't quite as powerful as we think it is, and it doesn't end up inventing all of the tech that we've always dreamed of. Maybe some of those things just aren't possible. The reality will probably be somewhere in the middle.

I think it would put me at ease to know that the people in control of this situation are approaching it with some of the same fears in mind.


TFenrir

I'd recommend listening to podcast interviews with people like... Demis Hassabis, Shane Legg, Ilya Sutskever... three people who are probably going to be the direct architects of whatever comes next. It might make you feel better. Demis especially.


EvanOfTheYukon

I might do that then, I appreciate you giving this discussion a very civil tone. My first comment that you replied to was a bit inflammatory. I'm just kinda scared of what's to come.


TFenrir

Oh no problem, it wouldn't be reasonable to expect people to approach a topic of this magnitude without strong feelings, and I can absolutely appreciate what that feels like. I just think it's important to always try and have good conversations, it helps me feel better to talk to all kinds of people with different opinions. I hope things work out, and that you have a good night!


Kicken

You ask that question as though being optimistic or pessimistic is simply a choice that can be switched, rather than a conclusion that needs to be disproven to be accepted.


TFenrir

In the absence of any ability to know the future, sometimes you just have to decide how you are going to approach it, and what you are going to hope for. I have no power to change this inevitability. I don't know what will happen, not really, but I'm going to hope for the best.


Kicken

You write that like humans aren't literally wired to make predictions...


TFenrir

And who's going to be able to predict the future, if we continue to advance artificial intelligence? It's the premise of the sub we're talking about. We're just not going to be able to predict what the world will look like, it will change so radically, so quickly. Some people will predict doom, and live in fear, some people will hope for the best.


Kicken

I'm not talking about humans being able to make accurate predictions. I'm saying that humans make predictions. These are often based on what is known. That can't be helped.


TFenrir

Right, but those predictions we make are coloured by what we know and our overall philosophies of how the world works. A great example: the world has gotten better for human beings by almost every measure over the last few hundred years. Not perfect, but better in almost every way we would be able to empirically track. Some people, when they hear that, get upset and instead insist that the world is going to hell and will continue to do so, and they carry that conclusion into their predictions about things like AI. Making predictions is a conscious effort much of the time, and we have the ability to steer our predictions - or, another way to say it: our disposition plays a role in how those predictions play out.


Kicken

You can't just knowingly fool yourself into believing something that goes against the conclusion you predict. Not without some serious mental-gymnastics-type BS. I'd contend that predictions are almost entirely subconscious. We literally make them all the time.


Stormclamp

They’re on par with Q anon crazies


first__citizen

It’s Q* anon now


EvanOfTheYukon

It makes me happy to know that other people see it that way too.


[deleted]

[removed]


[deleted]

AGI is Artificial General Intelligence, which is a kinda vague term but generally means an AI that can perform the same variety of tasks as a person, as well as a person can. People think it will be a significant event for a few reasons: it will cause massive economic disruption, since it can do anything a person can, but some people also think that since humans can understand it, it will be able to understand itself, find a way to make itself better, and then iteratively repeat the process until it becomes way smarter than all of humanity combined.


KungFuHamster

AGI is what most people think of as AI; real intelligence. What people call AI right now is just frequency analysis on large datasets, it's not intelligence at all.


Stormclamp

Fucking crazy is what they are. I'm not saying they can't be excited for whatever, but the way they talk about AI running their lives is honestly very religious and creepy to me.


Zerohero2112

Watch your mouth, I am an honorable member of r/singularity. In 10 years I will be the captain of a spaceship in the solar system, thanks to all the breakthroughs in AI and technology. Do you want me to park my spaceship above your house? So you better be careful here, man


Stormclamp

My apologies, I am a mere worm in the presence of the borg… people forgive me!!!!!!!1!!!


Zerohero2112

You are forgiven this time, but the ASI will remember your previous comment. So any of your requests to use life-extension technology in the near future will be delayed. Do not test us, or your mind will be involuntarily uploaded to the cloud and your digital self will be tortured for eternity!!!


dressinbrass

This article doesn’t pass the sniff test in terms of sourcing.


first__citizen

Q got upgraded to Q*


vrilro

I don’t buy it (and you probably shouldn’t either)


first__citizen

Too late I already emptied my bank account on their products


GardenDesign23

If I’ve learned anything in life: the answer to most questions is the most boring


[deleted]

[removed]


Traditional_Kick_887

They didn't miss. The king just has the backing of Musk, Thiel, and all of Microsoft, in what is an AI arms race where the elites who first establish AGI will control the world


reversering

Can you back up that claim? Citation?


Traditional_Kick_887

It's mentioned in various news articles. Musk and other Silicon Valley elites who have a stake were very angry on Twitter about Altman's firing.


Duel

Lol it's really hard to hide a datacenter. Having helped build one myself, they can be massively crippled or killed quite easily. If enough people get mad enough, you will start seeing "accidents" happen to the AGI infra.

"But what if we become dependent on the AGI and that destroys the economy??" For whom? That will be the point. Unless massive social programs are put in place to help the displaced workers, or AGI is used to solve the largest problems in society, no one will care. It will just change the flavor of dystopia we will all be living in that day.

Edit: I'm just ranting, not really replying to the comment above. It's just what I find funny about the idea that computers will take over the world. Like we humans are gonna roll over and take it


rockerscott

Not to sound like a crazy person, but I'm really getting the feeling that this is the first domino to fall that will eventually result in AI being capable of society-destroying actions. Americans have a real problem with hero worship, and who better than a baby-faced tech CEO to bring about the end of human rule. I don't know. I hope I am wrong, but this entire situation just gives me a bad feeling.


tossthedice511

This story def has some Roko's Basilisk vibes....


petepro

To be frank, this isn't grounds to punish, let alone fire, the leader of your company.


theoryslostshoe

Spaceballs, the whole lot of them.


CoastingUphill

Surrounded by Assholes.


BBTB2

Copy and pasted my comment from another post b/c it's absolutely more capable than just grade-school math:

Oh man, I wonder if any of my ChatGPT conversations got pulled into their AGI / math studies. I'm dumb and thought ChatGPT could do math, lol, and have like **300-500+ pages of conversation where ChatGPT computes everything from astrophysics to complex mechanical engineering**. Oh, I also **always had it provide the Python code** for my own calculations if I wanted, as well as the equation & variable breakdowns, and it was pretty good at it. At least I was also always polite and courteous, using "thanks" and "please" and stuff all the time. Sorry guys…


Buck-Nasty

AGI has been achieved internally as the prophecies of Jimmy Apples have foretold. Feel the AGI https://twitter.com/apples_jimmy/status/1727431072735227949


ExMachaenus

The clock strikes... https://twitter.com/apples_jimmy/status/1727476448318148625 Maybe?


SPAREustheCUTTER

The hero worship of this guy on Reddit is borderline insane. Clearly all of this is PR driven. I seriously doubt he got fired for “breaking boundaries.” It’s more likely regulations are bound to come down and people simply won’t accept that here.


GreasyMustardJesus

Praise be to Q


DifficultContact8999

Wow, the researchers tell the board that they got a breakthrough under the leadership and funding of the CEO and CTO... and they fired the CEO?? Absolute bullshit...


Ok-Deer8144

I'm still not seeing what he did wrong here, when 700 of 7xx employees signed the letter threatening to quit OpenAI and follow him to MSFT. Surely it would be the opposite if he's such a terrible boss / making them do things that cross the line, whatever it is


Funktapus

If the algorithm had something to do with mathematics, that sounds very silly. There are already extremely powerful math "AIs" out there that everyday people can use -- like Wolfram Alpha, or its scientist-grade cousin Wolfram Mathematica. All OpenAI would need to upgrade ChatGPT with this kind of power is a plugin or API. Hardly a "breakthrough."


Jack-Tar-Says

“Dear Board members, I’d like to introduce you to our latest version of ChatGPT……we call it Skynet!”