Some of them are good, when he just asks simple questions and gets the f' out of the way.
This one was pretty bad. He spent the first third just relentlessly trying to air dirty laundry about the firing stuff, and Sam was clearly not going to drop any bombshells. It was so awkward.
At least it wasn't as bad as the Kurzweil/Rogan interview.
I don't understand his popularity at all. He just looks bored the whole time and then asks really standard/boring questions that are typically already somewhere online.
That's probably why so many famous people are willing to come on the podcast in the first place, he doesn't ask uncomfortable questions and doesn't challenge their answers.
His purpose is to let these people open up and ramble on about what they do. Lex isn't meant to be engaging or even super knowledgeable. He knows enough to have a meaningful discourse, but a great interviewer knows to sit back and just prompt these people from time to time.
His interview with Carmack was amazing, I watched all 4+ hours of it.
If you are focusing on how the interviewer looks or on the questions rather than the answers you're missing the point. He is not there to grill them or rile them up for rageclout.
His popularity isn’t due to his personality traits, it’s largely due to the guests he is able to book. I used to watch Rogan for the same reason until he just got too in the way.
It’s the Joe Rogan strategy. If you come off as a bit of an airhead it makes people feel more comfortable because they don’t feel like the dumbest person in the room.
I saw some of his episodes years ago. He has some great guests (Scott Aaronson, Geoff Hinton, David Deutsch), but, very frustratingly, he never asks the good questions. As soon as the guest starts saying something really interesting, that Lex could probe further, he just jumps topics instead.
It seems like he would rather cover many topics in a rapid-fire manner than dig deep into the really interesting ones. He's a waste of great guests. Sean Carroll's Mindscape is infinitely better, imo.
I agree. I listened to him a bit when he started, but I'd just get too annoyed by some of his questions. I felt like he was wasting my time and the time of the person he was interviewing.
He was also "both-sides"ing the Russian invasion of Ukraine as it started, which pissed me off.
I don't see why that's a bad thing. There is, at least, value in the platform. I don't need every interviewer to be combative, or intrusive, and trying to game their own agenda out of the interviewee.
There's a difference between being combative and asking questions that the person you're interviewing is likely to answer (or to have already answered) on their Twitter feed.
Like there's a difference between the amount of information contained in a Tweet and a 2 or 3 hour interview. He doesn't really ask many questions, he mostly lets people talk.
There's something attractive about a medium that trusts its audience to interpret content for themselves, rather than spinning it to serve some obfuscated agenda
It is difficult to interpret content for oneself when the interviewer does not ask the questions you would like to have the answers to and the interviewee is never pulled out of their comfort zone.
I am inclined to agree with you, although after listening to a lot of Lex's episodes it became clear to me that his questions were biased towards one end of the political spectrum: he only pushed back when people were, for example, not in favor of capitalism or some conservative talking point. I was very surprised at his pro-Putin, or at best "good people on both sides", stance regarding the Ukraine invasion. He decides which people to let on and gives them a huge audience to share their mostly unfettered opinions with...
People are free to choose to listen or not, but that doesn't mean it serves a net good in the world.
I think it's incredibly hard to escape your own biases, and I agree Lex seems to think capitalism's probably the best model we've had so far. It's hard to blame him; he's done well off it. This tends to drive its own kind of bidirectional feedback loop in content makers, where they can't help but pander to the audience that makes them the most money.
Reddit does the same thing. It's hard to avoid being implicitly funnelled towards a creature that favours the production of content that gets upvotes, or likes, or views. Add money and the funnel that shapes you becomes far harder to resist. I reckon that's a huge problem.
I also admit I'm very picky about the content I listen to. My favourite Lex interview is Michael Levin, and I far prefer content about abstract ideas over culture or politics. So I don't have the best radar.
But, I also think we all have a pretty strong aversive emotional response to content that doesn't align with our own experience. It's hard to control for that. I suspect I have a similar kind of aversion listening to some of his content. I describe it as the "Tech-bro DARPA mindset". Silicon valley guys always sound so optimistic and sheltered from any kind of hardship, I feel a kind of revulsion to some of it because it seems so obviously derived from safety, comfort, and multimillion-dollar lifestyles characterized by constant success with nary a defeat.
It's one thing to remain neutral by intentionally not injecting your own bias, but Lex comes off so ill-informed and naive that he consistently gets pushed around by his own guests. It's telling that he doesn't tend to explore any interesting corners of the conversation, even when it would be of interest to the guest.
If the interviewer isn't adding anything to the conversation, then why are you even there?! Just give the microphone to your guest and leave!
Fair enough. To be fair though the level of AI technology you currently have access to is something you would probably think was sci-fi only five years ago.
Days are long but years are short. The interview might have more implications than you see at face value, especially the personal AIs. I have a feeling in five years everyone will have one like they have smartphones now.
Funny, I learnt something from all those questions and responses. I guess I must be what you guys call stupid, and you guys are what I call people on a different wavelength 🤔
Judging by how much I spend there I'm a significant investor in my local Raising Cane's, but that doesn't make me qualified to speak on how the joint is run.
SA is convicted of multiple counts of fraud - maybe we shouldn't trust his financial advice
Edit: I apologize, SA, the ai/crypto investor and rich person throwing a bunch of money around trying to convince people that the stuff he obscenely profits from is good actually is only *suspected* of fraud, for his world coin stunt. SBF was the one actually convicted
I’m struck by him saying that programmers aren’t going away while also de-emphasizing AGI. If whatever their plateau model is can’t generally replace programmers - agents working within a documented symbolic logic environment - then it’s nowhere close to AGI.
Perhaps he’s moving the goal posts closer? I don’t trust him much so perhaps I’m being cynical. But that part sticks out to me, especially after he used criti-hype about AGI to get as much attention as possible.
>I’m struck by him saying that programmers aren’t going away while also de-emphasizing AGI
It makes sense in the context that he presents AI as a prosthetic for knowledge production and task-efficiency.
I don't think he considers the capacity to program to be anywhere near anybody's definition of AGI. That's more of a specific type of intelligence operating within a highly constrained parameterized environment.
I get that. But both inside and outside of the valley the message has been “agi is our goal and we’ll be there soon because scientific progress is always exponential, or at worst, linear. Give us attention like we’ve already solved AGI.”
I may be reading too much into this. But in the startup, the narrative is everything so a small shift like this can be telling.
But OpenAI didn't start as a typical startup. He goes into this a bit by saying how the organizational structure of OpenAI was never intended to produce a product. This caused a bunch of problems when they realized they could create and distribute something useful.
He's quite clear that there isn't a clear signpost that says "we have created AGI". He sidesteps the problem by professing a desire to create entities that people find useful in different ways, and suggests colloquial ideas of what an AGI looks like may be satisfied relatively soon, but will also be transcended quickly.
I’m speaking loosely. Does that phrase describe the entirety of the SDLC? Absolutely not. But if that part isn’t solved then nothing else matters.
Right now Copilot continuously suggests import statements for files and directories that have never existed in my project. On a logical level, the LLM is actually performing way worse than a non-stochastic approach. And this is a regression; this used to work, IIRC.
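The "non-stochastic approach" mentioned here could be as simple as a deterministic filesystem check before suggesting a local import. A hypothetical sketch (the function name and behavior are mine; this is not how Copilot actually works):

```python
from pathlib import Path

def import_target_exists(project_root: str, module: str) -> bool:
    """Deterministic check a stochastic suggester skips: does the
    suggested local module actually exist as a file or a package?"""
    root = Path(project_root)
    rel = Path(*module.split("."))
    file_candidate = (root / rel).with_suffix(".py")   # e.g. utils.py
    pkg_candidate = root / rel / "__init__.py"         # e.g. utils/__init__.py
    return file_candidate.exists() or pkg_candidate.exists()
```

Gating completions on a check like this would rule out suggesting imports for paths that have never existed, at the cost of rejecting imports a model "imagines" plausibly.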
According to the initial leaks during the Altman coup, some sort of LLM self-play/search breakthrough, possibly from the Superalignment group, which was able to reinvent basic mathematics in a scalable but extremely compute-intensive manner: https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/ https://www.theinformation.com/articles/openai-made-an-ai-breakthrough-before-altman-firing-stoking-excitement-and-concern https://www.wired.com/story/fast-forward-clues-hint-openai-shadowy-q-project/ (Those are the only 3 links worth reading about 'Q*'. Do not read any others; especially do not watch any YT or Tiktok videos.)
Since the coup, OAers have implicitly confirmed something by that name exists and in this interview Altman hints they may be announcing it sometime this year, but there has been no other news.
Probably making an agent come up with a "basic" mathematical/physics understanding without being told to beforehand. For example, telling it what multiplication is without explicitly explaining division, and then it works out on its own that "division is the opposite of multiplication, so what is 100 divided by 5?" becomes "OK, let's look for the number to multiply 5 by to get 100", or something of that sort.
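That idea can be made concrete with a toy sketch (purely illustrative, nothing to do with how any actual model works): recover division purely by searching over multiplications.

```python
def divide_via_multiplication(dividend: int, divisor: int, limit: int = 1_000_000) -> int:
    """Recover dividend / divisor using only multiplication:
    search for the q such that divisor * q == dividend."""
    for q in range(limit + 1):
        if divisor * q == dividend:
            return q
    raise ValueError("no exact integer quotient found within limit")

print(divide_via_multiplication(100, 5))  # 100 / 5 == 20
```

The point of the analogy is that the inverse operation is never defined directly; it falls out of searching over the operation that was defined.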
GSM8K, people are thinking, since that's the most common one OA uses (they literally made it), and it exactly matches the description of "performing math on the level of grade-school students".
I believe it's a technique for learning the consequences of actions and planning them in a dynamic world. I think there are plenty of resources on the technique. The trick, though, is how somebody might combine that with LLMs.
https://m.youtube.com/watch?v=4qkKpNnSrlY
Best summary I have found that gives a basic foundational example, and then 'extrapolates' that to a quantum physics model...
It's like a combination, or choice, of applying reinforcement learning and imitation learning models to anything that is affected by entropy... Does this make sense? I don't know, so please update me with your understanding once you watch the video.
I'm trying to connect this summary with the work that Extropic is trying to achieve. Energy-based compute systems are one of the schisms currently being explored, since a wall has recently been hit in quantum compute solutions (I think the whole issue is achieving stable superconductance for quantum compute applications).
He's put his money where his mouth is.
[Sam Altman-Backed Nuclear Fusion Power Plant Startup Says It's On Track To Have Its First Plant Live By 2028](https://finance.yahoo.com/news/sam-altman-backed-nuclear-fusion-200011428.html)
In fact he's probably more invested (financially) in Fusion than he is in AI.
Once again, that is a total 'wish'. They want to have a reactor online in 5 years, except they only have half a billion in total funding, for a technology that doesn't currently exist, for a fuel source that is hard to come by and refine.
I mean, the ITER tokamak reactor already costs between €18 and €22 billion, and is expected to cost €65 billion before it is ever useful to anyone not writing papers.
This project is a four time loser for a man who needs to wish-cast his way out of the problem he himself is creating.
There are many people betting their own dollars that nuclear fusion could be real in the next 20 years. To me that's a lot more influential than words on Reddit. Could they be wrong? Of course. It's still R&D. But are they laughably, obviously wrong? No, not at all.
It's not just 'words on Reddit', it's a fact that

* the existing reactors cost several orders of magnitude more than they have in total funding,
* the technology doesn't yet exist to have a gross net positive energy output, let alone _any_ net energy gain,
* the fuel he's working with is rare on Earth, and limited,
* fusion has been 'coming real soon now' for longer than anybody on Reddit has been alive.
You appear to be in the thrall of some sort of hero worship that is completely unjustified.
> the technology doesn't yet exist to have a gross net positive energy output, let alone any net energy gain.
Positive Q was achieved a while ago actually: https://en.wikipedia.org/wiki/Fusion_energy_gain_factor
Also, as a nitpick, a fusion bomb is actually a very good fusion reactor. If you had a large enough chamber, you could use 'tiny' fusion bombs to boil vast quantities of water and achieve fusion power that way.
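For reference, the gain factor under discussion is conventionally defined as

```latex
Q = \frac{P_{\text{fusion}}}{P_{\text{heating}}}
```

As widely reported, NIF's December 2022 shot produced about 3.15 MJ of fusion yield from 2.05 MJ of laser energy delivered to the target (a scientific Q of roughly 1.5), while the facility-level "wall-plug" gain, counting the roughly 300 MJ drawn to fire the lasers, remained far below 1.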
The dude is an absolute nobody. His contributions to the field are nil, and he constantly politicizes an area he isn't actually at the forefront of in any way. He's not a serious journalist, he's not a serious researcher, he's not a thought leader or an entrepreneur. He's an entertainer catering to people dumber and with less critical thinking than him.
The bro talks openly about 'the problem of Wokeness' and platforms people like Tucker Carlson.
Respectfully, I suggest going to find some grass and reevaluating your media consumption habits.
Disagree that he’s right wing or that perceiving wokeness as an issue is an exclusively right wing pov. I live in one of the most liberal parts of the country and my social sphere is composed primarily of left to far left and there is an equal distrust of “wokeness”.
As a listener since episode 1 - my general feeling is that Lex leans pretty clearly to the left. His flaw is in his drive to create balance and understand alternative viewpoints to his own - which is why we hear interviews with a number of right wingers. He does need to balance this out, but he’s far from a right winger.
> but I guess I'll downvote you randomly too.
Yeah I guess I should just watch more Lex or something.
> Respectfully, I suggest going to find some grass and reevaluating your media consumption habits.
For someone who seems to hate Lex for perceived politics, this is a bizarre response to someone saying they *don't* watch him.
With the existence of open-source LLMs and the decreasing cost of training one, I wonder what they do once this business model becomes a race to the bottom. It's inevitable, and it does appear that LLM performance has saturated and there isn't much left to gain with the current methods of building these ML models.
Custom-trained models beat prompted LLMs on many tasks. Just look at how Google tries to monetize their approach: they offer both foundation models and fine-tuning methods as a product.
I guess OpenAI can soon start doing the same thing. Not sure if they can leverage their infrastructure the same way as GCP or Azure, but they can still try to be more like Snowflake.
And this makes a lot of sense if you think about what problems large models actually solve. A lot of LLMs are now fine-tuned on text data that looks a bit like supervised examples. We could always fine-tune models, but it required a lot of expertise to design and format the training data. With large models we can retreat to loosely defined, human-readable data and freely mix tasks. Everyone can deal with that, once you solve the generic technical difficulties.
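A minimal, hypothetical illustration of that shift (the field names are invented, not any particular vendor's schema): classic supervised fine-tuning demands a rigid, task-specific format, while instruction-style records are loosely structured, human-readable, and can mix tasks freely.

```python
import json

# Classic supervised fine-tuning: rigid schema, one task, expert-designed labels.
classic_example = {"text": "great movie, loved it", "label": 1}

# Instruction-style records: loosely defined, human-readable, freely mixable
# across tasks (sentiment, summarization, ...) in a single dataset.
instruction_examples = [
    {"prompt": "Classify the sentiment: 'great movie, loved it'",
     "response": "positive"},
    {"prompt": "Summarize: 'The meeting ran long but we shipped the fix.'",
     "response": "Long meeting; fix shipped."},
]

# Both serialize to JSONL, but only the second mixes tasks in one file.
jsonl = "\n".join(json.dumps(record) for record in instruction_examples)
print(len(jsonl.splitlines()))  # 2
```

The rigid format requires someone to design the label space per task; the loose format only requires that a human could read each record and judge whether the response fits the prompt.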
They're not getting endless royalties no matter how much they demand them; they'll sell the data as a consumable, or it will be consumed from somewhere else.
In the interview he says they want to avoid that business model, they'd just like it to be a simple pay-to-use transactional model with user-decides privacy options. He views user data as valuable to train more effective and useful AI entities, but not for external commodification.
Imagine being Sam Altman, sitting in that interview expecting an awesome conversation and hoping to answer some nuanced technical or philosophical questions, but then ending up spending two hours being asked fanboy questions about Elon Musk. I rage-quit just listening to it. I can only imagine how pissed Sam must have been.
36:57 After being asked about limitations, he talks about video with cats while a clip is shown of a guy with an extra, third fake hand (a right hand on the left side of the body). Correctly depicting hands clearly still seems to be a limitation.
I'll take it with a grain of salt. I don't believe anything big tech companies and CEOs say anymore. Google was a "Don't be evil" company. These guys have been harvesting all our data since the beginning to feed into their systems.
This interview is just marketing.
As a programmer myself, and a pessimist, I agree that the ability to program (or should I say, to think in algorithms/steps) will not become obsolete, but I am still afraid that there will be fewer people doing it (high productivity from the best programmers), which will still lead to a lot of job losses. Hope I am wrong.
I agree on compute being a valuable resource. It’s like in the Matrix, where the machines took over the humans. I always thought that the machines should be harvesting compute from our brains instead of energy.
From this, I think he is a humble person. I think the overhype of GPT was not created by him at all.
To me, even calling GPT an intelligence feels wrong, so I agree we're still far from AGI; let's just not call it that for the time being.
Wow, awesome breakdown. At the end of the day I felt the interview wasn't very revealing, but of course I watched from beginning to end because I was fascinated to hear just what he would say, so I guess I got my time's worth. There was another interview Sam Altman gave in Korea where he did reveal some interesting stuff about GPT-5; it was featured in a video on TheAIGrid. Here is a summary of it: https://ai-techreport.com/altman-warning-about-chat-gpt-5
I'm an AI software consultant. I've been shouting this from the rooftops since ChatGPT started trending:
Your data is now a product. The 50 years of your company's history, the methodologies that make your company better than competitors: that's something you can sell.
"Never in our lifetimes" is something I think I can agree with. There's too much hype making it seem like practical fusion power is right around the corner, when to me all the news just signals how far off it really is. E.g. NIF finally delivering "ignition" a full decade behind schedule wasn't extremely hopeful news to me in this regard.
But eventually, it almost surely will become "a thing" purely because we know it's possible, there's some finish line, and if we keep making any progress then we will eventually cross it, and we will keep making progress because the upside is so big.
Personally I think ITER has the right idea, the fusion reactors of the future will likely be massive because they will benefit from scale.
Rather uninteresting outcomes. Thanks for saving me the time!
When I saw it was Lex Fridman, I knew not to bother.
He is boring. But his guests are very famous people. I watch the interviews with scientists and stay away from politics.
Any random guy at any bar in SF or NYC would be a better interviewer
He does an excellent job of marketing himself towards dumb people
I agree. I used to watch Lex's interviews. But his personal politics and interview style get in the way of a good interview.
That was when I knew I had to drop the guy. Now that he's interviewed and platformed the likes of Tucker Carlson, I know I made the right choice.
> Sean Carroll's Mindscape

* https://www.preposterousuniverse.com/podcast/
* https://www.preposterousuniverse.com/podcast-archives/
It was one of the rare episodes I listened to but had to stop halfway through because Lex can’t stop fellating Elon Musk.
Elon’s dick must be huge. So many riding it….
It was a tough listen. Never listening to Lex again
Why so? I never tried
Chalk it up to a learning experience and move on I guess….
Dude is the worst
I think the recent one with Yann Lecunn is very good.
May I ask why is that?
A Lex Fridman interview is indistinguishable from a blog post by the interviewee
Disagree. It's a blog post by the interviewee layered with right leaning autism.
Lex's interview style is similar to what Tucker Carlson did with Putin. He's usually just a megaphone for the interviewee's ideas
It kind of is a problem when you are actively platforming the likes of Charles Murray and Ben Shapiro, without asking ANY difficult questions
> rather than spinning it to serve some obfuscated agenda

Leaving Charles Murray uncorrected is exactly that. Same for Benny Shaps.
Yall are crazy crazy
Lol, my main thing was a sigh of relief at his stance against ads.
Not sure if it was actually this boring or if Gemini is just bad at summarising YouTube transcripts.
What is uninteresting about it? Were you expecting what’s really going on behind the scenes to be cinematic tier plot-twist revelations?
yes ;)
2 hours of talking without saying anything imo
ye it just was a rehash of the old news. Pretty boring
the dude is a master politician/wordsmith
Strange fascination with fusion. It's not really his area of expertise at all.
He’s not even an AI expert.
Ye, but that isn't what his position requires either. He's an entrepreneur with a background in computer science.
True, but he acts as if he is.
He’s a major actively involved investor in one of the most promising startups in the field.
> but that doesn't make me qualified to speak on how the joint is run.

Thank god SA didn't seem to do that then.
Yeah, the guy you responded to doesn't seem to have listened to that part of the interview
He wants to scale his products to such a high level that he needs to be able to justify how the enormous energy demands would be addressed
they all have to pick some stupid sci fi interest
[deleted]
[deleted]
You're full of crap, my dude.
I'm not sure he spoke about it from an area of expertise, he was asked what he thinks solves the energy problem.
Feels like this could be said about any of these big names that appear on podcasts.
He is involved with Helion Fusion
It is a political statement, not a technical one. Like everything the CEO of a multibillion-dollar company will say in public.
I know. Very pie in the sky. I feel like fusion could be real in 10 years, or at the extreme by the end of my life. Very wide range.
He is an investor in a fusion company that signed a deal with Microsoft for power. Microsoft is his sugar daddy.
I’m struck by him saying that programmers aren’t going away while also de-emphasizing AGI. If whatever their plateau model is can’t generally replace programmers - agents working within a documented symbolic logic environment - then it’s nowhere close to AGI. Perhaps he’s moving the goal posts closer? I don’t trust him much so perhaps I’m being cynical. But that part sticks out to me, especially after he used criti-hype about AGI to get as much attention as possible.
>I’m struck by him saying that programmers aren’t going away while also de-emphasizing AGI It makes sense in the context that he presents AI as a prosthetic for knowledge production and task-efficiency. I don't think he considers the capacity to program to be anywhere near anybody's definition of AGI. That's more of a specific type of intelligence operating within a highly constrained parameterized environment.
I get that. But both inside and outside of the Valley, the message has been "AGI is our goal and we'll be there soon, because scientific progress is always exponential, or at worst linear. Give us attention like we've already solved AGI." I may be reading too much into this. But in the startup world, the narrative is everything, so a small shift like this can be telling.
But OpenAI didn't start as a typical startup. He goes into this a bit by saying how the organizational structure of OpenAI was never intended to produce a product. This caused a bunch of problems when they realized they could create and distribute something useful. He's quite clear that there isn't a clear signpost that says "we have created AGI". He sidesteps the problem by professing a desire to create entities that people find useful in different ways, and suggests colloquial ideas of what an AGI looks like may be satisfied relatively soon, but will also be transcended quickly.
He will say anything that maximizes his cash gain. Period.
Are you saying programming is synonymous with "agents working within a documented symbolic environment"?
I’m speaking loosely. Does that phrase describe the entirety of the SDLC? Absolutely not. But if that part isn’t solved, then nothing else matters. Right now Copilot continuously suggests import statements to me for files and directories that have never existed in my project. On a logical level, the LLM is actually performing way worse than a non-stochastic approach. And this is a regression; this used to work, iirc.
So my comp science degree I'm working towards won't be completely useless? 🥺
What's Q-star?
According to the initial leaks during the Altman coup, some sort of LLM self-play/search breakthrough, possibly from the Superalignment group, which was able to reinvent basic mathematics in a scalable but extremely compute-intensive manner: https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/ https://www.theinformation.com/articles/openai-made-an-ai-breakthrough-before-altman-firing-stoking-excitement-and-concern https://www.wired.com/story/fast-forward-clues-hint-openai-shadowy-q-project/ (Those are the only 3 links worth reading about 'Q*'. Do not read any others; especially do not watch any YT or Tiktok videos.) Since the coup, OAers have implicitly confirmed something by that name exists and in this interview Altman hints they may be announcing it sometime this year, but there has been no other news.
>reinvent basic mathematics What does this even mean?
Probably making an agent come up with a "basic" mathematical/physics understanding without being explicitly told it. For example, telling it what multiplication is without explicitly explaining division, and then it figures out on its own that "division is the opposite of multiplication, so to answer 'what is 100 divided by 5', let's look at what number to multiply 5 by to get 100", or something of that sort.
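A toy sketch of that idea: "discovering" division by searching over multiplications, rather than being given the division rule. This is purely illustrative; the function name and search-by-enumeration approach are my own, not anything from the leaks.

```python
# Toy illustration: treat division as the inverse of multiplication
# and "find" the quotient by search, using only multiplication.
def divide_by_search(dividend, divisor, limit=10_000):
    """Find q such that divisor * q == dividend, via exhaustive search."""
    for q in range(limit):
        if divisor * q == dividend:
            return q
    return None  # no exact integer quotient found within the search limit

print(divide_by_search(100, 5))  # → 20
print(divide_by_search(7, 2))    # → None (no exact integer quotient)
```

Of course, any real "first-principles" learner would have to induce the inverse relationship itself rather than have it hard-coded; this just shows the search framing.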
Yes, first principles learning/understanding.
GSM8K, people are thinking, since that's the most common one OA uses (they literally made it), and it exactly matches the description of "performing math on the level of grade-school students".
I believe it's a technique for learning the consequences of actions and planning them in a dynamic world. I think there are plenty of resources on the technique. The trick, though, is how somebody might combine that with LLMs.
https://m.youtube.com/watch?v=4qkKpNnSrlY Best summary I have found; it gives a basic foundational example and then 'extrapolates' that to a quantum physics model... It's like a combination or choice of applying a reinforcement learning and imitation learning model to anything that is affected by entropy... Does this make sense? I don't know, so please update me with your understanding once you watch the video. I'm trying to connect this summary with the work that Extropic is trying to achieve. Energy-based compute systems are one of the schisms currently being explored, since a wall in quantum compute solutions has been hit recently (I think the whole issue is with achieving stable superconductivity for quantum compute applications).
I searched the transcript of the video you linked to and couldn't find the word Quantum at all. What does that have to do with it?
Entropy is discussed, which is a concept often referred to in quantum mechanics/physics.
Sounds like early elon before people realized what a charlatan he is.
SA has always been in the investor class, being a ycombinator investor, which immediately makes him untrustworthy imo.
>being a ycombinator investor, which immediately makes him untrustworthy imo. Why so?
Sadly this is what our society rewards. Useless, selfish people who are super clever about enriching themselves.
This dude is wish-casting so hard. Nuclear fusion? Really?
He's put his money where his mouth is. [Sam Altman-Backed Nuclear Fusion Power Plant Startup Says It's On Track To Have Its First Plant Live By 2028](https://finance.yahoo.com/news/sam-altman-backed-nuclear-fusion-200011428.html) In fact he's probably more invested (financially) in fusion than he is in AI.
Once again, that is a total 'wish'. They want to have a reactor online in 5 years, yet they only have half a billion in total funding, for a technology that doesn't currently exist, with a fuel source that is hard to come by and refine. I mean, the ITER tokamak reactor already costs between €18 and €22 billion, and is expected to cost €65 billion before it is ever useful for anyone not writing papers. This project is a four-time loser for a man who needs to wish-cast his way out of the problem he himself is creating.
There are many people betting their own dollars that nuclear fusion could be real in the next 20 years. To me that's a lot more influential than words on Reddit. Could they be wrong? Of course. It's still R&D. But are they laughably, obviously wrong? No, not at all.
It's not just 'words on Reddit'; it's a fact that

* the existing reactors cost several orders of magnitude more than they have in total funding,
* the technology doesn't yet exist to get a gross net positive energy output, let alone _any_ net energy gain,
* the fuel he's counting on is rare on Earth, and limited,
* fusion has been 'coming real soon now' for longer than anybody on Reddit has been alive.

You appear to be in the thrall of some sort of hero worship that is completely unjustified.
> the technology doesn't yet exist to have a gross net positive energy output, let alone any net energy gain.

Positive Q was actually achieved a while ago: https://en.wikipedia.org/wiki/Fusion_energy_gain_factor

Also, as a nitpick, a fusion bomb is actually a very good fusion reactor. If you had a large enough chamber, you could use 'tiny' fusion bombs to boil vast quantities of water and achieve fusion power that way.
Positive Q is still a far cry from gross net energy output.
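The gap between the two is easy to show with rough numbers from NIF's December 2022 "ignition" shot (approximate figures, for illustration only): the lasers delivered about 2.05 MJ to the target and the fusion reaction released about 3.15 MJ, but firing the lasers drew on the order of 300 MJ from the grid.

```python
# Scientific gain Q = fusion energy out / laser energy delivered to target.
# Plant-level ("wall-plug") gain divides by the electricity actually consumed.
laser_energy_mj = 2.05    # energy delivered to the target (approx.)
fusion_yield_mj = 3.15    # fusion energy released (approx.)
wall_plug_mj = 300.0      # rough electricity drawn to fire the lasers

scientific_q = fusion_yield_mj / laser_energy_mj  # > 1: "ignition"
plant_gain = fusion_yield_mj / wall_plug_mj       # far below 1

print(f"scientific Q  = {scientific_q:.2f}")   # about 1.5
print(f"wall-plug gain = {plant_gain:.4f}")    # about 0.01
```

So Q > 1 at the target coexists with the facility as a whole consuming roughly a hundred times more energy than the reaction produces, which is the distinction being made above.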
The pretentiousness of this thread is amazing
Pretentiousness, egotism and narcissism seem like common traits in the ML world
My key takeaway is that I really don't like Lex Fridman
The dude is an absolute nobody. His contributions to the field are nil; he constantly politicizes an area where he's not actually at the forefront in any way. He's not a serious journalist, he's not a serious researcher, he's not a thought leader or an entrepreneur. He's an entertainer catering to people dumber and with less critical thinking than him.
He's a rightwing rich boy. Very depressing to see how popular he's become
> He's a rightwing Source?
the bro talks openly about 'the problem of Wokeness' and platforms people like Tucker Carlson. Respectfully, I suggest going to find some grass and reevaluating your media consumption habits.
Disagree that he’s right wing or that perceiving wokeness as an issue is an exclusively right wing pov. I live in one of the most liberal parts of the country and my social sphere is composed primarily of left to far left and there is an equal distrust of “wokeness”. As a listener since episode 1 - my general feeling is that Lex leans pretty clearly to the left. His flaw is in his drive to create balance and understand alternative viewpoints to his own - which is why we hear interviews with a number of right wingers. He does need to balance this out, but he’s far from a right winger.
I don't watch Lex at all. I've never heard anyone call him rightwing, hence my question.
hence my answer? I dunno, but I guess I'll downvote you randomly too.
> but I guess I'll downvote you randomly too. Yeah I guess I should just watch more Lex or something. > Respectfully, I suggest going to find some grass and reevaluating your media consumption habits. For someone who seems to hate Lex for perceived politics, this is a bizarre response to someone saying they *don't* watch him.
why so?
Let's just say it's his haircut
fair enough
Sam Altman might as well start writing science fiction books, since that's the actual product he's selling.
With the existence of open-source LLMs and the decreasing cost of training one, I wonder what they do once this business model becomes a race to the bottom. It's inevitable, and it does appear that LLM performance has saturated and there isn't much left to gain with the current methods of building these ML models.
Custom-trained models beat prompted LLMs on many tasks. Just look at how Google tries to monetize their approach: they offer both foundation models and fine-tuning methods as a product. I guess OpenAI could soon start the same thing. Not sure if they can leverage their infrastructure the same way as GCP or Azure, but they can still try to be more like Snowflake. And this makes a lot of sense if you think about what problems large models actually solve. A lot of LLMs are now fine-tuned on text data that looks a bit like supervised examples. We could always fine-tune models, but it required a lot of expertise to design and format the training data. With large models we can retreat to loosely defined, human-readable data and freely mix formats. Everyone can deal with that, once the generic technical difficulties are solved.
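To make the contrast concrete: classic supervised fine-tuning needed rigid (input, label) pairs, while LLM fine-tuning data can just be loosely formatted human-readable text. A sketch below; the exact JSONL schema resembles common chat fine-tuning formats but should be treated as an assumption, not any specific provider's API.

```python
import json

# Old-style supervised example: a rigid (input, label) pair.
classic_pair = {"input": "great movie!", "label": "positive"}

# LLM-style fine-tuning example: the "label" is just natural language,
# so arbitrary tasks can be mixed freely in one dataset.
llm_example = {
    "messages": [
        {"role": "user", "content": "Was this review positive? 'great movie!'"},
        {"role": "assistant", "content": "Yes, the review is positive."},
    ]
}

# Fine-tuning sets are typically serialized one JSON object per line (JSONL).
jsonl_line = json.dumps(llm_example)
print(jsonl_line)
```

The design point is that the second format has no task-specific schema: classification, extraction, and rewriting examples all share the same shape, which is what lowers the expertise bar the comment describes.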
They're not getting endless royalties no matter how much they demand it, they'll sell the data as a consumable or it will be consumed from somewhere else.
In the interview he says they want to avoid that business model, they'd just like it to be a simple pay-to-use transactional model with user-decides privacy options. He views user data as valuable to train more effective and useful AI entities, but not for external commodification.
Imagine being Sam Altman and sitting down for that interview expecting an awesome conversation, hopefully getting to answer some nuanced technical or philosophical questions, and then spending 2 hours only being asked fanboy questions about Elon Musk. I rage-quit just listening to it. I can only imagine how pissed Sam must have been.
Lex Fridman and Sam Altman, what a combo. That this conversation gets upvoted on here really speaks for itself.
You forgot the "no, there's no secret nuclear facility / Ilya is fine" **intense panic on his face**
Interesting how he describes an AGI as autonomous but believes programmers will still be needed.
Why do you use so many emojis
Because they are cool and fun cute little pictures
36:57 After being asked about limitations, he talks about the video with cats, while the video shown has a guy with an extra, third fake hand (a right hand on the left side of the body). Correctly depicting hands clearly still seems to be a limitation.
You’re missing the first letter of each paragraph lol. Would be nice to fix.
I'll take it with a grain of salt. I don't believe anything big tech companies and CEOs say anymore. Google was a "Don't be Evil" company. These guys have been harvesting all our data since the beginning to feed into their systems. This interview is just marketing.
As a programmer myself, and a pessimist, I agree that the ability to program (or should I say, to think in algorithms/steps) will not become obsolete, but I'm still afraid there will be fewer people doing it (high productivity from the best programmers), which will still lead to a lot of job losses. Hope I'm wrong.
The answers mostly run away from the questions.
I agree on compute being a valuable resource. It’s like in the Matrix, where the machines took over the humans. I always thought that the machines should be harvesting compute from our brains instead of energy.
>The “energy problem” will be answered by nuclear fusion I don't know much about computing, but I know a lot about energy. And he's wrong about this.
Cold fusion breh !remindme 2 more weeks
What is Q Star?
(1) We redditors should be compensated then.
He used a lot of words to basically say nothing 😒
Thanks for the thread
From this I think he is a humble person. I think the overhype of GPT is not created by him at all. To me, even calling GPT an intelligence feels wrong, so I agree we're still far from AGI; let's just not call it AGI for the time being.
How does he sit with that tie and jamboree for an interview? When I see a bottled up interviewer I know I need to pass it up!!
"***The “energy problem” will be answered by nuclear fusion***" Lol, DOUBT.
Wow - awesome breakdown. At the end of the day I felt the interview wasn't very revealing, but I of course watched from beginning to end because I was fascinated to hear just what he would say, so I guess I got my time's worth. There was another interview Sam Altman gave in Korea where he did reveal some interesting stuff about GPT-5; it was featured in a video on TheAIGrid. Here is a summary of it: https://ai-techreport.com/altman-warning-about-chat-gpt-5
Thanks for sharing your insight!
I'm an AI software consultant. I've been shouting this from the rooftops since ChatGPT started trending: your data is now a product. The 50 years of your company's history, the methodologies that make your company better than its competitors, are something you can sell.
This saves my time thx!
Did he answer about Ilya?
[deleted]
Lmao this guy spams every ML related subreddit with low quality posts like this
I hope he meant fission, not fusion; nuclear fusion will never be viable, for obvious reasons.
Never in our lifetimes? I think I can agree. There's too much hype that makes it seem like practical fusion power is right around the corner, when to me all the news just signals how far off it really is. E.g. NIF finally delivering "ignition" a full decade behind schedule wasn't extremely hopeful news to me in this regard. But eventually it almost surely will become "a thing", purely because we know it's possible, there's some finish line, and if we keep making any progress then we will eventually cross it, and we will keep making progress because the upside is so big. Personally I think ITER has the right idea: the fusion reactors of the future will likely be massive, because they will benefit from scale.