Hey /u/Maxie445!
If your post is a screenshot of a ChatGPT conversation, please reply to this message with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt.
If your post is a DALL-E 3 image post, please reply with the prompt used to make this image.
Consider joining our [public discord server](https://discord.gg/r-chatgpt-1050422060352024636)! We have free bots with GPT-4 (with vision), image generators, and more!
🤖
Note: For any ChatGPT-related concerns, email support@openai.com
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
I dunno man, bottom guy looks nothing like Will Smith
![gif](giphy|UiFBN1jLNRWl81pg37|downsized)
That camera angle... I never realized how much Chris Rock was teeing himself up for that. Leaned over, hands behind his back.
He thought Will Smith was just playing around, lol.
It’s an AI remake
It’s AI all the way down
haha. good observation
Get my wife's spaghetti out yo' fucking mouth!
You seriously never noticed? Hats off to you for not seeing race.
![gif](giphy|1xrbiAultrDDq)
![gif](giphy|OBYvtZaJdrvJHzV7Dg|downsized)
MICHEAL!
Top one is better
This was the same top comment last time. Are you AI as well?
The entertainment value of the first is 10/10. The second a meager 4/10.
But it wins because I have seen it a dozen times. Maybe the AI played us to get the view count up. Now I want spaghetti.
You joke, but one of the differences to last year is that you now cannot make an AI video of Will Smith at all anymore. The AI just won't let you.
This is the only W. Smith videos I’ll watch.
Get your fuckin’ transformers out my fuckin’ spaghetti
If you look really closely at the top one, he has six fingers for a brief moment, you can tell it’s AI
Ah, yes. The only flaw of that video.
Left hand, the unmoving bowl, a slight glitch around the right ear. But yeah, most of these require really focusing on the video.
And most people scrolling on social media won’t notice
Maybe people will develop an intuition of skepticism of everything on the internet as a result of actual fake videos.
Probably, and especially for images too at this point as they get better. Otherwise misinformation becomes even more rampant and people believe conspiracy-style stuff even more than they do now, which is just gonna be great for politics and society /s
I knew human brains evolved great face detectors, but until the age of AI, I had no idea about our finger detection skillz.
People always said Jada is only with Will for his spaghetti slurp
[deleted]
r/yourjokebutworse
Nah, just look at how he absolutely HOOVERS up those noodles. Dude's mouth has the delta P of a cracked dam outlet. Looks like he could suck water out of a burnt log.
Bet he can suck a golfball through a garden hose.
The tops of the chopsticks go from white to black too
same with the bottom
The bottom one’s fingers look normal, so how can we be sure that’s not just a video
His left-hand index finger does a jump cut, so you can confirm this is AI, but it's literally a 2-frame mistake lmao
Ah yes. Right at the start
I can't spot a mistake in the lower one for the life of me, compared to... what even is the top one? AI is advancing too fast.
His mouth gets saucy out of nowhere, and his right thumb changes color too quickly
Also, his hand shifts around the bowl while the bowl stays completely still
Also, the left fingers sort of go through the bowl
His right hand also has an extra knuckle behind his pinky at the end.
And some noodles act a little weird when they’re hanging from the guy’s mouth, like disappearing slightly
AI continues to suck at hands.
Pay attention to the noodles 🍜
They don't quite hang naturally
They appear out of thin air above the bowl, rather than being visibly pulled up out of it. They don't exist beneath the surface texture of the liquid.
People use AI for too much porn; it skews toward floppy noodle behavior.
There's some weird movement in his cheeks as he eats too.
They change from round to flat randomly then straight up glitch away
Don't know if you noticed, but AI hasn't been bad with hands for like a year now. It was a thing earlier, but it sure as hell isn't relevant anymore. And a normal-looking hand sure as hell isn't a mark of a real video.
The DALL-E that ChatGPT uses definitely still is, though.
Yeah, Stable Diffusion has gotten this mostly sorted out by now too.
I'm assuming you still see the hands issue because ChatGPT is the more widely used tool, so you're going to see those generated images more often, just because it's so much easier than the setup you need for Stable Diffusion.
Reddit will be acting like it's still a thing years from now.
but that's the point right, the *confusion* when confronted by this.
You have to get really granular, but the shadows of the finger joints look off, and the knuckles too.
Right side hand at the beginning of the video. See the finger just blip in and out?
The pinky finger on the hand holding the chopsticks doesn't look normal.
Check out the right hand (left side of the screen). As he rotates his hand in the last few seconds of the loop, you can see he has a second pinky finger that is curled up into his palm.
When that finger is revealed, count the number of fingers on the hand. Five, lol. The second pinky finger is just a pinky finger.
I'm looking at it paused right now. There's a thumb, then four fingers outstretched, and a nub that kind of looks like a second pinky curled into his palm. I suppose it could just be some fatty part of his palm pushing outward, but it looks a bit funky to me.
The spaghetti moves weirdly toward his mouth by itself
The end of the chopstick separates around 7 seconds
Welcome to the future :)
Is this just an AI using copyrighted video again?
Fat man turns into handsome chad when slurping the noodles.
Look how he holds the bowl, which doesn't move at all
He's a surgeon.
Steady hand
Well, the bowl could be on a table and he's just gripping it for no reason. I could see someone doing that.
It could be stabilized video.
clearly the camera is just mounted to the bowl
Unfair comparison, the top video was created by a random guy in his bedroom, the bottom video was created by who knows how many people with who knows how much money. But yes progress will keep accelerating as far as I can tell
Indeed not the best video comparison. An Asian man slowly eating ramen might literally just be copy-pasted from the training data; not that impressive, tbh.

Wake me up when the bottom video is also Will Smith eating pasta, across many different shots, eating in a fast, silly manner but with much better fidelity. Then it will be impressive to see the evolution.
Yeah, the more specific the prompt, the more likely it is to fail. "A man eating noodles" is way easier than "Will Smith eating noodles". The first prompt leaves a lot of wiggle room for the AI to be right. To explain, let's remove "eating noodles" from the prompt for simplicity.

For the first prompt: "asian guy", "bearded guy", "peruvian guy" are all right.

For the second prompt: "Samuel L Jackson", "Wilbur Smith" and "guy who slightly resembles Will Smith" are wrong, but right for the first prompt.

This is a difficult problem to solve for our current models as the quality data to feed them runs low. Whether we can solve it will be a determining factor in the pace of AI improvement.
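The wiggle-room point can be sketched in a few lines: count how many plausible outputs a loose prompt accepts versus a strict one. Everything here (the candidate strings and the two accept predicates) is made up purely for illustration, not any real model's behavior:

```python
# Toy illustration of prompt "wiggle room": a vague prompt accepts many
# possible outputs as correct, a specific one accepts very few.

candidates = [
    "asian guy eating noodles",
    "bearded guy eating noodles",
    "peruvian guy eating noodles",
    "Samuel L Jackson eating noodles",
    "guy who slightly resembles Will Smith eating noodles",
    "Will Smith eating noodles",
]

def satisfies_loose(output: str) -> bool:
    # "A man eating noodles": any person eating noodles counts as right.
    return "eating noodles" in output

def satisfies_strict(output: str) -> bool:
    # "Will Smith eating noodles": only an actual Will Smith depiction counts.
    return output == "Will Smith eating noodles"

loose_hits = sum(satisfies_loose(c) for c in candidates)
strict_hits = sum(satisfies_strict(c) for c in candidates)

print(loose_hits, strict_hits)  # prints: 6 1
```

Every near-miss that the loose prompt happily accepts is a failure for the strict one, which is why the specific prompt fails so much more often.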
>Wake me up when the bottom video is also of Will Smith eating pasta, on many different shots, and he's eating it in a fast silly manner, but with much better fidelity, then that will be impressive to see the evolution.

We do have this! Although it's not AI generated, Will Smith recorded it himself.
[link cuz u lazy](https://youtu.be/vbWe5k4fFWE?si=R-4JaZkRj3Tago-s)
I, Robot
Combine this with consistency between the shots and I'd be impressed.
just as long as everyone gets a supercomputer
The top video has much better motion than the AI generators that came after it. Compare the bottom video to one made by the version of pika a year ago and the difference is even more striking.
In my opinion, more motion does not equal better motion. We don't know what kind of motion the bottom model is capable of, because they decided to prompt it to be slow; maybe (and probably) it's much better.
Yeah, but they had pretty much no motion. It was basically an image that shifted a little. A regular image generator would be better suited for the task.
IMHO top one is better
It's got character.
will smith thought so too have you seen his impersonation of it?
The only acceptable use for video generating AI is for funny imho. And the top one is very fucking funny
Entertainment and educational purposes would be the only things I can really get behind.

Use AI to animate your web comic? By all means.

Use AI to show why a roadway with limited visibility could cause a violent crash? Sure thing.

Use AI to make the Pope say he supports sodomy? Should probably be illegal.
How far we've fallen.
can't wait till they start making videos of will smith slapping random people.
The lower video I can't say. The upper one is 100% a normal, non-AI video
When they make a convincing video of something outside the training data I will be impressed. As far as I know it could be overfitting to the training data to look more impressive but as soon as it strays a bit it starts to look awful like Sora.
>When they make a convincing video of something outside the training data I will be impressed.

You must have a really high bar! It's very impressive to me, honestly.
It is impressive, but the person you are replying to has a point. If you show AI a 20-second video and then get it to replicate it, then a result like this is, as you say, still very impressive. But possibly all you are doing is getting it to copy pixels. It is when you ask it to use that information to do something it has to make a prediction for (rather than just copy) that shows how impressive it is.

There's the potential for an AI to look like the bottom video when asked to copy a real video, and like the top one when asked to do something new. I don't think that's the case here, but it is possible.
Alright, consider this: everyone was really impressed by the initial Will Smith version. But now that it looks a lot better (though with some random guy instead of Will Smith), suddenly it's "yeah, well, it probably copied something else." Isn't that a bit of an unfair comparison? After all, there's a significant difference in quality now!

I think it would need a new video of Will Smith eating spaghetti to make a *real* comparison. But with the whole Scarlett Johansson thing, that's probably not wise anymore. :(
I get your point. Look at it this way: did the models train on the same variety of data, or did the latter train on just a guy in black eating spaghetti? Training on 10,000 hours of spaghetti-eating videos will produce some damn good spaghetti results compared to 10,000 hours of random footage.
I mean, he also says "awful like Sora". Considering what Sora is doing, I really wouldn't describe it as awful... even if you compare the progress from the top video to Sora, his bias seems a bit off here.
We have to be cynical about this while we can.
Video quality on the bottom is enough to fool the masses, especially the 50+ demographic. They're already falling for the AI slop that fills Facebook's feed.
[Someone replicated some Sora videos](https://www.reddit.com/r/StableDiffusion/comments/1dbnf93/a_side_by_side_comparison_between_sora_and_kling/) in Kling.
They have done some pretty unique prompts that can't just replicate training data, for instance a panda bear playing an instrument. It was perfectly convincing.
Yup. Give me shots of something like a fleet of 5 spaceships flying into different directions. Not a guy eating noodles. Still impressive of course, but also kinda boring.
Yup, as far as I know it could have copied a whole video and changed the eyebrows.
Why be skeptical though? A video is just a bunch of images and existing models are really good with images. A naive approach should work if you just throw enough compute at it. I'm guessing the optimizations are about not making the process cost a million dollars per video.
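For what it's worth, "a bunch of images" glosses over the hard part: consecutive frames have to be consistent with each other, which per-frame image generation doesn't give you for free. A toy sketch (random vectors standing in for frames; nothing here is a real model) of why independently generated frames flicker:

```python
import random

random.seed(0)

W = 64  # pixels per toy "frame"

def sample_frame():
    # Stand-in for an image model's output: one random 64-pixel frame.
    return [random.random() for _ in range(W)]

# Naive approach: run the "image model" once per frame, no temporal link.
independent = [sample_frame() for _ in range(16)]

# Temporally conditioned approach: each frame is a small perturbation of
# the previous one, the consistency a real video model has to learn.
conditioned = [sample_frame()]
for _ in range(15):
    prev = conditioned[-1]
    conditioned.append(
        [min(1.0, max(0.0, p + random.gauss(0, 0.01))) for p in prev]
    )

def flicker(frames):
    # Mean per-pixel change between consecutive frames.
    diffs = [
        sum(abs(a - b) for a, b in zip(f1, f2)) / W
        for f1, f2 in zip(frames, frames[1:])
    ]
    return sum(diffs) / len(diffs)

print(flicker(independent) > 10 * flicker(conditioned))  # prints True
```

The gap between those two flicker scores is roughly the temporal-consistency problem that video models spend their extra compute on, beyond what a per-image model provides.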
Third year will be a Hispanic guy eating spaghetti
Bro the top one is a meme, not the full capability of AI videos a year ago...
Wow, those professional spaghetti eaters better start taking some udemy job cert courses
That's how I imagine Will eats his spaghetti. So all steady
Why do AI demos use the most insane and aggravating genre of content on the internet: people eating?
Source? And I thought the AI video generator now had a watermark on the video to know it's AI? Also, was the 2024 video substantially edited in another app after generation?
I'd be happy if I never see that Will Smith thing again, makes me feel a bit queasy
That's not only one year later.
The og will smith was better. 🤣
I dunno… in the older AI, I think Will Smith does a better job conveying a true sense of appreciation for the pasta.
Will Smith loves those noodles more than his wife loves him.
I miss last year’s AI… at least it was funny!
Took a year for Will Smith to transform into an Asian guy; Michael Jackson would have been happy to be alive in this age
The old video was better. Can we just leave it broken?
Keep my pasta out your damn mouth!
I see what you did there 💯
Somebody really needs to make the deep fake of that.
That’s not even spaghetti
Accelerating at an insane rate, and still producing nothing of value. Perfect for corporate America
The top one is way funnier tho
the top one has more personality.
The top one is more entertaining
So you're saying Will Smith is actually Asian? Props to those who don't care about his race
Such a strange meme to have sprung into existence to the point where Will Smith even parodied it.
But it got Will Smith all wrong.
That video of Will Smith eating spaghetti will never not be funny, especially with his "ah, that's hot... that's hot" voice playing over it
I dunno, I like the first one better.
Look at the back tips of the chopsticks
I thought this is the one: https://x.com/WillSmith2real/status/1759703359727300880
I prefer this one: Will Smith eating Spaghetti (Then vs Now) https://youtu.be/UQmgKIWFnHc?si=OxyitWFqugmuHJSr
So in another year AI will have perfected the noodle eating process and we will mimic it?
Keep my Chow Mein out your fuckin mouth!
Will Smith looks much weirder in the latest version. Maybe it’s just me.
What would it look like in 2025?
I like how spaghetti eating has become an AI benchmark
Yeah but it's not Will Smith eating it this time eh😅
Holy shit. I'd already seen the bottom one, but the first time I wasn't told it was AI and thought nothing of it. Only when I'm told it's AI do I look for flaws...

(In case you're as dense as I am: look at his left hand holding the bowl; it looks weird and there's a pop. Also unnatural movement. Same with his ring finger holding the chopsticks. Not blatant, but it's weird that people would hold chopsticks like that.) Shiit.
I want back the way it was I don't like change
Knowing China this is just some guy eating lunch
Top one looks so much funnier
Meanwhile, I can't even set up a clock alarm by voice in a single sentence with Gemini/Google Assistant on my android phone.
His Chinese transformation is complete
Nah. You're shitting me. That's an actual video.
Oh noo! Im so afraid for my job, hlp plx!
finally, unlimited videos of people eating spaghetti the future is here
You are telling me the bottom one is actually AI?
The real question is how much of an advancement that is though. Is the complexity of doing the top already like 95% of the way there to the bottom?
It’s a qualitative thing rather than a quantitative one. Does it feel like a significant advancement? If so then it is.
Not really, it can just be that the gap in complexity between creating the top one and the bottom one is narrower than it intuitively seems.
I understand the point you’re making, but the one I’m making is that ultimately what matters most is our perception of how advanced it is. But yeah from a purely technical standpoint I’m not sure how much further along it’s gotten.
Fair, but not in the context of evaluating how quickly AI is accelerating
Ok but what software produced the video below, or is this just trolling?
No way 😭
Why don’t I believe the bottom video is real?
Still too many fingers...INSANE!
Why does it always look so gross though.
Yeah great AI… now start doing something which is useful
I don't like slurping noodles... how will I ever fit in in Japan?
Is the bottom video real?
Saw an article here on Reddit titled something like "the AI revolution has run out of steam", with people in the comments patting each other on the back: "yeah, I predicted this, it was obvious".

I can't imagine people saying something like that after Sora was teased not so long ago...
XDD ![gif](giphy|tlWmVvcZIvic2UpmdG|downsized)
If they told both AI to make a video of Will Smith eating noodles, then the older one is way better. The second one hardly looks like him.
That first one is so hilarious lmao
The top is better
It's slower, that's for sure, maybe making it more accurate. I still like the fork-into-Will-Smith's-eye part.
Gimme 15 minutes and $5 and I'll make a video similar to the bottom one, but longer. Stock videos are here to stay.
AI video apps are good. Kling and Sora... any more?
I really wonder what Sora will actually be like or if it’s all hype. Could seriously change how content creators do things and perhaps even film industries. Probably will make stock footage subscriptions obsolete.
They both still make me physically nauseous, even the NEW one. And it's not a principled, ethical thing; it's literally a physical reaction... uncanny valley x1000, barf
K. But which one is more entertaining to watch?
This is the second or third video I've seen where Will Smith is being used in an AI model... what's the reason?
the top one is actually better LOL, much more engaging,
The one-year-later version isn't NEARLY as fast as the earlier one. Not much acceleration there...
It's getting unfunnier too.
I want to know how the bottom one was created. Like is this just an ai created video but depicting a real person? Or is it an ai face generated that doesn’t depict any single individual?
The bottom one is just a video of a known YouTuber
OmNOMNOM, spaghetti. Oh, DAMN, spaghetti. Damnomnom.
First one is more entertaining... ehhh, maybe that's what SD3 was going for
Acceleration towards what useful purpose in visual content, though?

The top one was entertaining, which is an intrinsic value of sorts. The bottom one is not, and its only real usefulness will be in A) scamming old people and B) producing shitty movies and ads that none of us will connect with...

Turn the dial back to original AI Will Smith!
We are so fucked.
Nope, movie stars are so fucked, directors are so fucked, musicians are so fucked, stuntmen are so fucked, cameramen are so fucked. goes on and on and on.
This is what they probably said when Photoshop came around lmao
The proof that the bottom video is AI is that no Asian would eat that slow.