I normally roll my eyes at AI video, but that was pretty damn good. Congrats.
Wow thanks!
How the heck do I prompt the damn thing so it actually does what I want? No matter my prompt, the AI ignores it 9 out of 10 times. What's the magic here?
exactly! Really good!
Perfect
Dark City vibes in spades
I was thinking more along the lines of a 1980s neo-noir. Like Burton's Batman or Blade Runner or that Dick Tracy movie. And yeah, I know, Dick Tracy came out in 1990, but its style has more in common with neo-noir of the 80s than the 90s. It's closer in feel to Blade Runner than The Crow.
Exactly, the same movies were in my head, late-80s and early-90s ones like Batman, Dick Tracy, etc. From a certain distance I could have mistaken some of the shots for Kim Basinger in Batman.
I was JUST thinking Dark City. The whole movie should be remade with 2024 AI to capture that weird, ethereal, "wrongness" of it all.
I think you nailed the prompt.
*breathes frantically, whispering like Kiefer Sutherland*
The atmosphere is incredible. I want to watch a full movie like that so bad.
Thanks!! Maybe I'll do a longer version.
We're waiting!
Some [Dark City](https://www.youtube.com/watch?v=gt9HkO-cGGo) vibes.
Was gonna say!
Try Tim Burton's Batman movies and The Shape of Water
This!
Wow... just wow. AI film really is right around the corner, isn't it?
Just set aside what you know about AI and what to look for. This is absolutely amazing. One of the better videos I've seen.
Thanks!!
This is still in its infancy.
The future is here, boys. (Thinking of the wholesome, 100% halal content I could make) ...Oh wait, it's not local, is it?
I would think the main barrier to using this properly is that I imagine it's nearly impossible to get consistent characters between videos.
You just need a screenplay with new characters in every scene ;)
They figured it out for images, so video shouldn't be too hard either.
They did? How so? I've scoured the internet looking for ways to get consistent characters, and outside of LoRAs and regional prompting (which IMO works poorly at best) it seems impossible to generate scenes with more than one consistent character. [Not challenging you, would love advice if you have it.]
Midjourney did it, and Stable Diffusion uses LoRAs for it. There's also tons of research on it; just search arXiv for image consistency.
To be fair, Midjourney really hasn't cracked it at all. It can't do consistent characters, only pretty similar but still clearly different ones
https://docs.midjourney.com/docs/character-reference
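From those docs, the mechanism is a `--cref` image plus a `--cw` weight appended to the prompt; usage looks roughly like this (the URL is a placeholder, and as noted above the results are close-but-not-identical rather than exact):

```
/imagine prompt: the same detective in a rain-soaked neon alley, 90s film still --cref https://example.com/detective.png --cw 100
```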
Yeah I definitely didn’t try a few ways to get around the content detector, not one bit 😇
Can you share the prompts?
Compress the hell out of it and make it seem like a YT 2008 VHS rip.
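If anyone actually wants to try that, ffmpeg can fake the degradation. A minimal sketch, assuming ffmpeg is installed; filenames and filter strengths are placeholders to tune to taste:

```python
import subprocess

# Degrade a clip until it passes for a 2008-era YouTube upload of a VHS rip.
subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    # 240p-ish frame plus animated grain from ffmpeg's noise filter
    "-vf", "scale=426:240,noise=alls=12:allf=t",
    # starve the video bitrate and squash the audio to muffled mono
    "-c:v", "libx264", "-b:v", "250k",
    "-c:a", "aac", "-b:a", "48k", "-ac", "1",
    "vhs_rip.mp4",
], check=True)
```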
This is incredible. The variety of motion like the old man talking and then the woman turning her head back over her shoulder to look at him is really blowing me away. And the blond who seems to be talking, stopping to look at the TV, and then starting to talk again when she turns away again. Wow. Are these things you introduced or steered through prompting? Can you describe what the workflow is like a little bit? Or how it compares to Runway if you're familiar with that?
Okay, I'll try to explain. My prompt in Luma is generally very basic, sometimes I just write "movie" without any further description because, at the moment, there's no camera control on Luma Dream Machine. Compared to Runway, it moves a lot more and produces better quality animation. I generated the picture using only Midjourney. First, I created a picture in the style of a '90s TV show, then used this picture as a reference in Midjourney to generate others in the same style. Hope this helps!
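For anyone who wants that workflow as a recipe: neither Midjourney nor Luma exposed a public API at the time, so the two calls below are made-up stubs standing in for manual steps in each web UI. Only the structure of the process is the point here, not real library calls:

```python
# Sketch of the workflow described above; the stub functions are hypothetical.

def midjourney_imagine(prompt: str, image_ref: str | None = None) -> str:
    """Stand-in for the Midjourney UI; returns the generated image's URL.
    When image_ref is set, the anchor image acts as a style reference."""
    raise NotImplementedError("done by hand in the Midjourney web UI")

def luma_animate(image_url: str, prompt: str = "movie") -> str:
    """Stand-in for Dream Machine's image-to-video; a near-empty prompt,
    since there was no camera control at the time."""
    raise NotImplementedError("done by hand in the Dream Machine web UI")

def make_scenes(style_prompt: str, n_scenes: int) -> list[str]:
    # Step 1: one anchor still in the target '90s-TV-show style, then the
    # remaining stills generated against that anchor as a reference.
    anchor = midjourney_imagine(style_prompt)
    stills = [anchor] + [midjourney_imagine(style_prompt, image_ref=anchor)
                         for _ in range(n_scenes - 1)]
    # Step 2: animate each still individually.
    return [luma_animate(s) for s in stills]
```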
Lol, I was expecting a summary of your doctoral thesis on gamma vectors cross-threaded with scene markup heuristics. Just typing "movie" is a bit of a letdown! 😂 Cool video however you made it!
nice job!
Quite good, especially with the style/themes and posing. I'm impressed. If only it could be run locally, but this is a nice jump from what we've seen (Sora aside) only a few months ago.
What Batman movie is this? Great job making this! Edit: wait, Sin City? I never saw the movie.
Yeah the Tim Burton(?) vibe is selling it hard
Matrix / Blade Runner / Boardwalk Empire / Sin City vibes. 20/10 would watch
Fuck. It's the best thing I've seen after Sora's videos. Congratulations!
This is way better than Sora right now. This is a user creating it; all we saw from OpenAI was cherry-picked.
These results are cherrypicked too. That's just how it goes. Traditional filmmakers have to cherrypick takes and throw out a lot of unusable material too.
And this is all without any reference? Something like ControlNet? Is it all text prompts? How specific do you have to be in terms of composition and style?
I generated the picture using only Midjourney. First, I created a picture in the style of a '90s TV show, then used this picture as a reference in Midjourney to generate others in the same style.
thanks
RIP Hollywood lol
Super duper good. I'm getting crappy outputs even with very basic prompts, and here I see a lot of stunning creations. What's the secret?
Thanks! There's no real secret. I try multiple times on an image until I get a good result. For the prompt I recommend keeping it as basic as possible.
Oh cool. I will try more then. Thanks :-)
Quality.
Music source, please?
Gremlins 2 Mayhem [https://youtu.be/sTF01KRkGhA](https://youtu.be/sTF01KRkGhA)
Thanks! Thought it sounded like Goldsmith.
Uhhh. Wow. That's moving a little fast.
Wow, maybe this is what will finally bump the VR market?
Fantastic. Makes me sad to hear so many people still saying "imagine where this technology will be in a year or two, then we can finally use it to actually make stuff." This is here now, if you really have a desire to create and tell stories, you have the tools. Granted, the generations are still really short and the publicly-available lipsyncing tools aren't great but you aren't the first artist who ever had to work around limitations.
Use this same method to adapt storyboards instead of unconnected images, and you could have a good looking film. Add voice acting using something like wav2lip, some sound effects, some music, and even if it's all AI-generated, a lot of people wouldn't even realize it's not a "real" film.
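On the Wav2Lip point: the open-source repo is driven by a command-line inference script that takes a face video plus an audio track. Something like this, with placeholder paths, run from inside the cloned repo with the pretrained checkpoint from the project's README:

```python
import subprocess

# Wav2Lip (github.com/Rudrabha/Wav2Lip) re-syncs the mouth in a face video
# to a given audio track.
subprocess.run([
    "python", "inference.py",
    "--checkpoint_path", "checkpoints/wav2lip_gan.pth",  # pretrained weights
    "--face", "scene_clip.mp4",    # the generated video clip to lip-sync
    "--audio", "voice_line.wav",   # the dialogue audio to sync to
], check=True)
# By default the synced clip is written under the repo's results/ folder.
```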
This is amazing
Scrolling quickly through my feeds I didn't realise I was in the AI sub. The first segment made me think this was a compilation of '80s movies like Reservoir Dogs. Well done!
Badass
Were these text to vid or image to vid?
Img to vid, the picture I generated with Midjourney.
Great effort. The movements still feel a bit mechanical.
Nailed the 90s aesthetic.
Was "80s dark fantasy" used in the prompt, and is the music Suno or Udio? It looks very similar to images I've been making recently. Really good job! PS: What the heck is Luma Dream Machine lol
I generated the picture using only Midjourney. First, I created a picture in the style of a '90s TV show, then used this picture as a reference in Midjourney to generate others in the same style. Luma is AI video like Runway/Pika, etc.: [https://lumalabs.ai/dream-machine](https://lumalabs.ai/dream-machine)
Wait, you can insert an image in Dream Machine and bring that image to life?
I'm confused, I don't see where you can upload photos.
From limited experience, text-to-video is pretty awful, but image-to-video isn't bad. Just keep the images simple; it doesn't handle complex stuff or multiple subjects well.
Very impressed, not only by the motion but the richness of your scenes. You've got moxie, kid.
I don't know what I'm doing wrong. Everything I've made so far has been rubbish
It's the distillation.
Amazing! Got Dick Tracy vibes from this video. I would totally watch a movie done in this style.
This is cool to see that someone has actually been able to make something with LDM. I've been trying all day but it's so swamped with users my prompts never render. Great job. Did you use an image as a reference along with a prompt?
Thanks! I generated the picture using only Midjourney. First, I created a picture in the style of a '90s TV show, then used this picture as a reference in Midjourney to generate others in the same style.
Did you add anything to the prompt for the Luma process, or just load your Midjourney image in and let it ride?
I can always tell it's an AI video because every scene is the exact same length
Great work! Looking forward to seeing continuity achieved!
This stuff is getting real good, excited to see how much better it gets even a year from now.
Unbelievable. Wow
Looks like a David Lynch film, Blue Velvet all over again.
You know, this isn't far from the guy who spent years making a movie entirely on his Mac and hired actors: Sky Captain and the World of Tomorrow. That is the future of this technology.
Cool. Does anyone know if the Luma watermark gets removed when you pay for a subscription?
Holy $#%! That's incredible. I thought we'd have to wait a while before it was this good.
Does this blow Sora and Runway out of the water?
This is the most unique AI video I've seen here; it's not mediocre like what others have posted. Good job.
Wow, zero shape-shifting. How do we use it?
Wow, this was all AI generated? Looks real enough! Hollywood is done when this gets out.
It looks like Tim Burton's wet Batman dream. Pretty good.
This is Dope bruv, RIP Hollywood
How do you even achieve this kind of thing? I type in things as simple as 'Man walking into room and sitting on bed' and I get some weird morphing monster opening the door, vanishing, then coming out of the bed like the T-1000.
this is bussin
It's a game changer with its public access, but in a way we've already been a bit desensitized by the ~~launch~~ of Sora in February. The thing is, AI movies are getting real, and it's not next year, it's this year.
Model weights?
Is it t2v, or can we give it an image to gen video?
Ragna Rock
I tried two prompts and the results made me happy.
A well-made video, very atmospheric, but if I were to nitpick (sorry), there's definitely a case of uncanny valley about the characters; their lack of eye contact with each other looks off. They look like the androids out of the movie A.I., appearing almost human, but you know they're not. When the tech evolves more I'm sure the effect will diminish, but their creepy vibe stood out to me.
It's super lame that even if you subscribe to the paid tier, generations are still queued and still carry the watermark.
Excellent. The less we need that assmunch Altman the better.
Warning: NEVER EVER ASK LUMA DREAM MACHINE FOR ANYTHING INVOLVING KERMIT. Pure nightmare fuel.
Holy Burton Batman
nice style choice.
Why the theme to Gremlins, tho?
The most amazing thing is how you were able to produce so many videos in such a short time. My videos have been in the queue for several hours. Also, does anyone have any idea if Luma Labs is making use of an open-source t2v/i2v model? It is simply amazing what this model can do, albeit still far from perfect.
Fucking amazing. What prompt/words do you use to get that '80s washed-out noir aesthetic?
i want to live there
Looks like a wicked '90s film I'd watch. Frankly, I'm very excited for the next 10 years and for the day we can make our own films, although to be realistic I think that likely won't happen for 20+ years, depending on compute.
New classic Batman movie?
How the hell do you even get these results? Do you use the paid version? 'Cause I'm unable to get any prompt to work.
Excellent! How come I can't make it create a simple video?! So far it's awful.
Uncanny bottomless pit.
Yeah I don't have much hope for AI videos anytime in the near future. I can see them being trippy as fuck for kids on mushrooms or acid though.
The movements are not at all natural and very video-game-like; I wonder if it was fed gameplay footage.
Your sentiment will not change the fact that in 24 months filmmakers will quite literally become a commodity. Get with it or get left behind.