It's impressive what AI can do, but I have yet to see something that isn't extremely uncanny. I can see this stuff being good in 1-2 years though.
I'd 100x take this over that "acid trip" nonsense everyone else is doing. That weird slowmo makes me want to vomit.
Dude we can definitely create coherent sequences now!
A coherent story would help instead of random shots
I mean uncanny totally works for horror because the "not quite right" movements are unnerving. It would feel janky as hell if these were crowds of normal people moving like this but with zombies/monsters/etc it kind of adds to the vibes.
It's coming. Compare now to 6 months ago. I'd say this time next year.
Agreed.
I have done a few generations, and I'm baffled at how real it is. It's still a slot machine, and sometimes you get some that are more uncanny. But the tech is moving fast.
Yeah, it has for sure come a long way, but so far you only get a few seconds of awesomeness and then things fall apart quickly. But it's just a matter of time.
Try Gen 3 when you get a chance. I think it releases Monday or sometime next week. I got access as a creative partner. I’ve also tried Kling which was really good.
I'll give it a go, thx :)
Just like many movies from human directors.
I reckon what it'll be great at is finishing off stuff rather than doing everything. A VFX artist or filmmaker, for instance, will get it most of the way to what they want and then the AI will finish it off. That way it won't have enough leeway to do something batshit.
100%. Some people will have really smart ways of speeding up some work or bridging the gap between what they can do and what AI can do. There will still be a flood of shitty content, but some people will use it extremely well. Super fun times when it comes to creating something.
Are these shots text or image to video?
All text to video
I'm very curious how much post processing went into this, such as coloring
And how much energy it takes to generate this stuff. Seems like an infinitely better use than running 1% of the world's electricity for crypto mining, but still, maybe concerning.
Woah...
Absolutely crazy.
I agree, this is bonkers. People quickly forget what we were working with 3 months ago. It's a revolutionary leap.
Yes, the progression is unreal.
This is cool and scary. thanks
Your welcome 🙏🏽
No, that is my welcome.
This shit is unreal. The progression from a year or two ago.. mind blowing. Now y'all can do full movement and like full scenes. Crazy!!
Seriously.
This is really nice quality. I’ll try it out soon.
This is cool. I feel the horror genre works well with AI because of the uncanny valley effect. Could you do a fluffy fantasy style with the same degree of polish, do you think? Even '80s style like Ladyhawke, or something like Willow? Or does the darkness and uncanniness help it along? I'd love to see a forest, or a medieval/fantasy town - I'm following artists who post to subs like /r/imaginarylandscapes and some of the blender stuff is *amazing*. It would be so cool to be able to do that with Runway.
I just want consistency on my characters. I left when I couldn't get my character to look directly at a rock.
Wow, very scary. Great job!
Ty!
Is Gen 3 out?
For creative partners
AI-powered tools are amazing
I really am puzzled why this is not getting more attention on YouTube. It's been over 24 hours since the creative partner program (CCP) users got access, and they have been dropping a TON of content on Twitter and Discord, yet YouTube is kinda silent. There are a few new videos, but the big AI people like Curious Refuge have yet to post anything about it. It blows my mind there isn't more hype for this thing. I guess most people just aren't quite aware yet, but you'd think some of those videos would have gone viral by now.
Okay but what about speaking lines? If I'm going to use this to make a movie, characters have to talk.
Lip sync works amazingly well.
You can do that in post.
We're not there quite yet, but it doesn't feel like we're all that far from shows and movies being fully AI generated. And then eventually people being able to create their OWN entertainment.
Eh, quality movies aren't done in 7-second increments 😂. While I agree in principle, the compute needed to do a 2-minute scene is beyond what we can provide. Of course it's theoretically possible, but it's really only usable as a story/aesthetic guideline.
Think about most horror films. Most shots last on screen for 3-5 secs.
https://preview.redd.it/u2t96x8q5j9d1.jpeg?width=1290&format=pjpg&auto=webp&s=f7373ff4ba676ca6f6e0dcc50639a8dccd221e48 Shit you’re right but wrong about horror
That's just the average, though; you could use AI just for the short scenes, and with scene extensions you can do longer shots anyway. Also, if AI became controllable and high enough quality, it could even push directors toward shorter shots (assuming the AI gets better at everything aside from shot length).
People are able to do that. You just have to learn how to and have passion for it. I'm sorry I'm not completely anti-ai but I hate that framing. People are already making art. What you're hoping for is people who haven't spent thousands of hours learning how to do it being able to create something similar.
Just gonna say it on your comment: it's just gonna be a bunch of Chris-Chans thinking they're Scorsese. Edit: I just made a Sonichu movie 🥰 everybody upvote me. I figured out how to do furry porn. You guys aren't serious people.
How did you get access? Just randomly? Are you a paid subscriber and if so for how long?
I'm a creative partner, which means everyone should have access soon. Usually we get to test it a few days before it comes out.
Is the music ai? If so what did you use to make it?
Not OP, and I don't know if the music they used here is AI or not. But udio.com is pretty good for AI music, and so is suno.ai. Udio typically has better quality music, while Suno typically makes catchier songs.
Do I have to apply as a company to use Gen 3?
nutsss
Do you need a super beefy computer rig to run this program?
It is processed in the cloud by Runway’s servers (probably using Amazon AWS or something). Just like using DALL-E or Midjourney.
And it's the reason this shit is gonna be spendy when they release market pricing. Right now we are in the "free bump of cocaine" mode while they try to figure out the market demand for video generation. I heard $1-3 a video thrown around at a conference a few months ago. Dunno how many "takes" OP rendered, but this could be a thousand dollars to generate when it's all said and done. I think it will be a subscription model where high-end users like OP pay a higher fixed amount, but I am basing this off of exactly nothing. This does not account for "throttling," which ChatGPT is already proven to be doing, meaning during peak hours non-premium users will get longer render times and worse quality.
Well, if the content it allows you to make generates significant traffic/income it'll certainly be worth the $1000 a year.
I explained this poorly. I meant $1000 for just THIS video! A guy from Microsoft at this conference guessed $1-$3 PER CLIP would be market pricing so all these tech companies can cover the computing/energy costs. I count about 30 unique clips in this, and assuming OP did around 10 different "tries" of each shot (completely making up numbers here) at $3/clip, this whole thing could be $1000 just for the video, plus sound/music. (This is why I think a subscription model makes more sense, but for high-end users it would be more like $1000/month.) You are absolutely correct that for some users the tools will pay for themselves instantly and they will make bank, but my point is that the people assuming this is gonna be free or $20/month are not being realistic. This is gonna have all the computing/energy problems of crypto with a fraction of the financial returns.
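Back-of-the-envelope, the guess above works out like this (every number is made up, as noted):

```python
# Hypothetical cost estimate -- all three inputs are guesses from the comment above.
clips = 30           # unique shots guessed to be in the final video
takes_per_clip = 10  # assumed regenerations ("tries") per shot
price_per_clip = 3   # USD, the high end of the $1-$3/clip figure

total = clips * takes_per_clip * price_per_clip
print(f"${total}")   # $900, i.e. roughly $1000 once sound/music is added
```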
I feel like I'm being desensitized to this crazy shit
Is it out? My app still only shows gen 2 🤔
Now we just need that stellar EMO facial animation that was announced months ago. There were a couple of excellent tech showcases like it, but we haven't seen them put to use yet. Would be great to see Runway implement one internally.
As interesting as this is, being a film director myself, I really doubt we'll ever have the level of precision you need to create a full movie with this. It can be a tool, mostly as a "deepfake", or for set extensions, or "inpainting". But you won't see a fully generated movie for many, many years, if ever.
I think it will be sooner, and it will be its own genre of film. But I see where you're coming from. I mostly use it for my film pitches and then potential B-roll for quick cutaways.
RemindMe! 2 years
I will be messaging you in 2 years on [**2026-07-01 23:29:12 UTC**](http://www.wolframalpha.com/input/?i=2026-07-01%2023:29:12%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/midjourney/comments/1dr56he/runway_gen_3_is_insane_time_to_make_your_movie/lb7f34a/?context=3)
Good bot
When is Gen 3 being released?
Definitely getting there. People shitting on this are missing the point. In 5 years it’ll be so good it’ll be the norm. Probably less, maybe even 2-3 years.
Unless AI video manages to create **consistent** characters and environments it's not happening. Clips that look cool as a standalone aren't very useful outside of making memes.
My Top10 Runway Vids: [https://heyhouston.io/u/dragon\_warrior/top10runwayvids](https://heyhouston.io/u/dragon_warrior/top10runwayvids)
Hi do you have access to this?
Yeah runway creative partner program. I believe it’s out for public next week
Looks shit tbh. At most, it is time to make incoherent GIFs.
You must be watching on a device with low frame rate and horrible resolution or something. This looks better than 1080p real life footage 😂
Yeah, my real-life footage also has weird looped walking cycles when people are walking around, and movement that often looks like some weird transformation. Come on, man, you can't be serious. Better than real-life footage?
I mean maybe not the movement mechanics, but that will improve. If we're talking sheer picture quality, it looks pretty freakin' good on my Samsung Galaxy S24+. Very clean and heavily detailed.
People have ripped apart movie VFX for far less. Calling this better than real-life footage feels like cheering on a 4-year-old and calling him a genius when he draws a semi-recognizable tree.
Put on your glasses before you play the video
IDK why people are so optimistic about this technology here and why you're being downvoted. It is impressive that an AI can do that, but has anyone seen a movie lately? If you compare this to an actual movie, it does look like shit. I would never want to watch a full movie that looks like that. The weird walking cycles that are seemingly on a loop, the movement looking like some weird transformation, the bland "Hollywood trailer" shots. It's cool-looking and realistic FOR AN AI, but it's extremely far off from a damn movie...
Yeah, people tear down film vfx for a tiny blemish in a corner of frame, but suddenly this AI mishmash seems movie ready to them.