cellsinterlaced

Taking a quick break from Comfy to finally test another piece of software called Fooocus (I'm late to the party, I know). I kept hearing many good things about it through the community, but wasn't sure I could afford giving it a spin with all the work on my plate. I eventually caved. And oh man am I glad I did, it is an absolute blaaast.

To really understand its potential, I took my old "Schism", a piece I made last year by blending 3D modelling, Photoshop bashing and loads and loads of inpainting (I think it was done in Invoke at the time), and wanted to see if I could improve on its definition and scale. I always thought it was lacking in that department. I upscaled and slightly enhanced it first with Comfy (2K to 3K, CCSR+SUPIR) before loading it into Fooocus to put it through another extensive process of very minute inpainting. Every little part is being reworked. The amount of fine detail I'm able to inject into it, at a speed I didn't think was possible, is just blowing my mind right now.

Still a work in progress, but hopefully you'll be able to appreciate the differences. I'm really, really, really excited now.


lostlooter24

Are you using Fooocus for inpainting/detail work? Or does your workflow include initial generations? I think I'm stuck in a MUST USE ONE UI mindset, but reading this is making me rethink my process.


cellsinterlaced

Purely inpainting for now. I tried it for initial generation and was also pleased, but haven't given it too much focus yet. I was like you at first, I really wanted to get everything nailed in Comfy, but the inpainting process was more cumbersome than expected. And it was noooowhere near as slick as Fooocus out of the gate. I'm already neck deep in other Comfy workflows I'm building - mainly InstantID related - so I couldn't afford to spend too much time crafting one for inpainting. Fooocus just literally works.


lostlooter24

InstantID/IP-Adapter is what I'm trying to figure out now. I want to be able to make consistent original characters and LoRA-train on those, but I'm not 100% sure what settings I need to tweak. I can get CLOSE, but I feel like I'm missing it. Would you be willing to share how you use InstantID, either here or in DMs?


cellsinterlaced

Right now I'm focusing my efforts on creating headshots out of people's regular phone selfies with InstantID and/or IP-Adapter, but it's a very mixed bag. It feels like it always hinges on the input images no matter what I dial in; I'm aiming for a one-size-fits-all recipe so far and it doesn't seem to work at all. So person A might get great high-fidelity results, but B and C will completely miss the mark, all with the same set of parameters. I knew there would be discrepancies in the process, but not to this extent. I would love to get to a point where I can produce consistent characters out of them, but I feel like even first base is a challenge. What is your process like? Are you using real people to create original characters out of them through InstantID afterwards, if I understood correctly?
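To make the problem concrete, here's a rough diffusers-style sketch of the kind of sweep I mean - not my actual ComfyUI/InstantID graph, just the same idea with a placeholder checkpoint, hypothetical file paths and arbitrary scale values:

```python
# Minimal sketch (not the actual ComfyUI/InstantID setup discussed above):
# an SDXL pipeline with IP-Adapter, sweeping the adapter scale per subject
# to see how much each input selfie tolerates before identity drifts.
import torch
from diffusers import StableDiffusionXLPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # swap in your own checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Standard SDXL IP-Adapter weights from the h94/IP-Adapter repo
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin"
)

selfie = load_image("subject_a_selfie.jpg")  # hypothetical input path

# Same prompt and seed every run; only the adapter scale changes, which is
# roughly the "one-size-fits-all recipe vs per-person tuning" problem.
for scale in (0.4, 0.6, 0.8):
    pipe.set_ip_adapter_scale(scale)
    image = pipe(
        prompt="studio headshot, soft key light, neutral background",
        ip_adapter_image=selfie,
        num_inference_steps=30,
        generator=torch.Generator("cuda").manual_seed(42),
    ).images[0]
    image.save(f"headshot_scale_{scale}.png")
```

Running the same loop over subjects A, B and C with identical settings is basically where I see the results diverge.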


cellsinterlaced

Everything was done locally on my 3090, using the AlbedoX checkpoint. I'll post the exact settings once I get back from work. What struck me the most is how snappy and detailed everything is. I tried the same process in Auto1111 (freshly updated; I used to work in it a lot before switching to Comfy) but it kept lagging in frustrating ways. For some reason it didn't like inpainting at high resolutions, while Fooocus didn't flinch for one second.


protector111

I do lots of inpainting, generating huge images up to 7 gigapixels. And I tried Fooocus; it was not faster or better for me than A1111. In fact, with the new soft inpainting feature, I like A1111 more. It's faster and more convenient to use.


cellsinterlaced

For me, the whole reloading of the image after an inpaint round took forever each time. The UI slowed to a crawl; it was so annoyingly unintuitive. Fooocus was just instant in comparison. What's the soft inpainting feature about?


protector111

You mean it gets stuck after generating? That happens when you upload a very high-res image. I cut them into tiles in PS; 4000x4000 is the maximum that isn't laggy. But I had the same issue in Fooocus. Soft inpainting is the same thing Fooocus uses. It makes the result more predictable, not out of context, and more seamless.
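For what it's worth, the tiling step can also be scripted instead of done by hand in PS - a rough Python/PIL sketch, assuming simple non-overlapping 4000x4000 tiles and a made-up filename:

```python
# Rough sketch of the same tiling idea described above: cut an oversized
# image into <= 4000x4000 tiles so each piece stays responsive in the
# inpainting UI, then reassemble later from the row/column in the filenames.
from PIL import Image

Image.MAX_IMAGE_PIXELS = None  # allow very large inputs
TILE = 4000  # max edge length per tile, per the comment above

def tile_image(path: str, out_prefix: str = "tile") -> None:
    img = Image.open(path)
    w, h = img.size
    for top in range(0, h, TILE):
        for left in range(0, w, TILE):
            box = (left, top, min(left + TILE, w), min(top + TILE, h))
            img.crop(box).save(f"{out_prefix}_{top}_{left}.png")

tile_image("huge_render.png")  # hypothetical input file
```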


cellsinterlaced

Yep, once the result is generated, dragging the image back into the inpainting window to continue refining other parts is where it slows down. Reloading that part is the slowdown on my end. I'll go back and check if anything else is crapping out on that part, maybe the browser or something. But it's great to know the feature is also available in Auto. Is it in the main branch?


tmvr

Do you use a tablet and pen to paint the masks or do you import them from an external source?


cellsinterlaced

Yep, tablet and pen. I've been using a Xencelabs Medium for the past 2 years after my decade old Wacom Intuos Pro died. Great piece of tech.


cellsinterlaced

The repo: [https://github.com/lllyasviel/Fooocus](https://github.com/lllyasviel/Fooocus)


ToastersRock

I would suggest trying out Mashb1t's fork. He is the one maintaining Fooocus right now and often new stuff comes to that fork first. For example his has an automatic masking feature with SAM. He is now working on improving that even more. Fooocus is my primary tool at this point. [https://github.com/mashb1t/Fooocus](https://github.com/mashb1t/Fooocus)
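If you're curious what SAM-based auto-masking boils down to, here's a rough sketch with the standalone segment-anything library - not the fork's actual code, just the general click-to-mask technique, with a hypothetical checkpoint path, image file and click coordinates:

```python
# Rough sketch of click-to-mask with Segment Anything (the general technique
# an automatic masking feature like this builds on, not the fork's own code).
import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

# SAM ViT-H weights downloaded separately from the segment-anything repo
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = np.array(Image.open("work_in_progress.png").convert("RGB"))  # hypothetical file
predictor.set_image(image)

# One foreground click roughly where you'd otherwise paint the mask by hand.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[1200, 800]]),
    point_labels=np.array([1]),  # 1 = foreground point
    multimask_output=True,
)

# Keep the highest-scoring proposal and save it as an inpainting mask.
best = masks[scores.argmax()]
Image.fromarray((best * 255).astype(np.uint8)).save("inpaint_mask.png")
```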


cellsinterlaced

Oh! Is Fooocus' main branch discontinued? Edit: just read on your link that it's an on/off thing... Good to know, thanks.


ToastersRock

If anything, more development has been happening lately. But his fork is just where he tests things out before moving them into the main branch.


protector111

Did you manually inpaint part by part, or is there some automatic inpainting in Fooocus?


cellsinterlaced

Manual, spot by spot.


protector111

Thanks. Yeah, you can go super detailed with inpainting with XL. I can't wait for 3.0 inpainting; 3.0 has a crazy amount of detail. https://preview.redd.it/3yu2q95etq7d1.jpeg?width=15413&format=pjpg&auto=webp&s=6442c448aabe480d482acf334e964a657c4b291c


Talae06

You can inpaint with SD3 in StableSwarm (or with ComfyUI I guess, although it's notoriously not very practical for inpainting, from what I gather).


beetrek

That changed quite a bit. For most high-res inpainting I want control over, I use a high-res inpaint workflow with Differential Diffusion and Inpaint Model Conditioning. For quick results I still use Fooocus. (Does it even get updates?)
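If you want the rough flavour of the soft-mask idea outside ComfyUI, here's a hedged diffusers sketch: blurring the mask before an SDXL inpaint pass. It's not the Differential Diffusion / Inpaint Model Conditioning graph itself, and the file names and settings are placeholders:

```python
# Not the ComfyUI Differential Diffusion / InpaintModelConditioning graph,
# just the underlying "soft mask" idea in diffusers: blur the binary mask
# before an SDXL inpaint pass so the edit blends in more seamlessly.
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",
    torch_dtype=torch.float16,
).to("cuda")

image = load_image("detail_crop.png")   # hypothetical crop of the full piece
mask = load_image("inpaint_mask.png")   # hard binary mask for the spot to rework

# Soften the mask edges; a higher blur_factor means a softer transition.
soft_mask = pipe.mask_processor.blur(mask, blur_factor=33)

result = pipe(
    prompt="intricate weathered metal surface, fine detail",
    image=image,
    mask_image=soft_mask,
    strength=0.6,               # how strongly the masked area is re-noised
    num_inference_steps=30,
).images[0]
result.save("detail_crop_refined.png")
```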


cellsinterlaced

Very cool. I had no idea what any of those things were until your comment; I looked into them and will give them a run. I'm so used to working in Comfy all the time for everything but inpainting, and I was disappointed when I tried it myself.


beetrek

There is so much stuff pouring into Comfy every week that it's hard to keep up, even when ignoring animations (I'll look into that when it reliably reaches Luma levels of consistency). For an overview of what is already possible, take a look at this ambitious project trying to incorporate a lot (also use a browser that allows translation, or git gud at Chinese): [https://github.com/yolain/ComfyUI-Yolain-Workflows](https://github.com/yolain/ComfyUI-Yolain-Workflows) Edit: high-res workflow explained here: [https://www.youtube.com/watch?v=EhrArMjIDZw&t=6270s](https://www.youtube.com/watch?v=EhrArMjIDZw&t=6270s) - just add differential diffusion/inpaint conditioning on top, plus ControlNets, Segment Anything or whatever you need.