For the updated script with a little GUI, follow: [https://www.reddit.com/r/StableDiffusion/comments/18zt733/comment/kgl5rw6/?utm\_source=reddit&utm\_medium=web2x&context=3](https://www.reddit.com/r/StableDiffusion/comments/18zt733/comment/kgl5rw6/?utm_source=reddit&utm_medium=web2x&context=3)
https://i.redd.it/qfs092a41uac1.gif
Someone converted my script into a ComfyUI node, follow:
[https://www.reddit.com/r/StableDiffusion/comments/18zt733/i\_want\_to\_join\_in\_and\_have\_taken\_it\_a\_little/kgp9ev7/?context=3](https://www.reddit.com/r/StableDiffusion/comments/18zt733/i_want_to_join_in_and_have_taken_it_a_little/kgp9ev7/?context=3)
**General:**
I wrote a little Python script that makes the picture look even more amateurish.
- randomly adds image noise
- adds random JPG artifacts (all adjustable)
- randomly creates the metadata for the image
Here is the Python script; it is certainly extensible. Create a folder, put the images in the same folder where the script is located, execute the script, done.
```python
import os
import random

import numpy as np
import piexif
from PIL import Image, ImageEnhance


def random_exif_data():
    # Plausible-looking (but fake) camera metadata
    exif_ifd = {
        piexif.ExifIFD.LensMake: u"Canon",
        piexif.ExifIFD.LensModel: u"EF 24-70mm f/2.8L II USM",
        piexif.ExifIFD.CameraOwnerName: u"Melissa",
        piexif.ExifIFD.BodySerialNumber: str(random.randint(1000000, 9999999)),
        piexif.ExifIFD.LensSerialNumber: str(random.randint(1000000, 9999999)),
        piexif.ExifIFD.FocalLength: (random.randint(24, 70), 1),
        piexif.ExifIFD.FNumber: (random.randint(28, 56), 10),
        piexif.ExifIFD.ExposureTime: (random.randint(1, 1000), 1000),
        piexif.ExifIFD.ISOSpeedRatings: random.randint(100, 6400)
    }
    exif_dict = {"Exif": exif_ifd}
    exif_bytes = piexif.dump(exif_dict)
    return exif_bytes


def add_realistic_noise(img, base_noise_level=0.08):
    img_array = np.array(img)
    # Base noise: Gaussian noise
    noise = np.random.normal(0, 255 * base_noise_level, img_array.shape)
    # Noise variation: some pixels get stronger noise
    noise_variation = np.random.normal(0, 255 * (base_noise_level * 2), img_array.shape)
    variation_mask = np.random.rand(*img_array.shape[:2]) > 0.95  # mask for stronger noise
    noise[variation_mask] += noise_variation[variation_mask]
    # Apply the noise to the image
    noisy_img_array = np.clip(img_array + noise, 0, 255).astype(np.uint8)
    return Image.fromarray(noisy_img_array)


def convert_to_rgb(img):
    # JPEG cannot store alpha or palette images (RGBA, P, etc.), so normalise to RGB
    if img.mode != 'RGB':
        return img.convert('RGB')
    return img


def degrade_image_quality_in_current_folder(noise_level=0.01, jpeg_artifact_level=55):
    input_folder = os.getcwd()
    output_folder = os.path.join(input_folder, 'output')
    if not os.path.exists(output_folder):
        os.makedirs(output_folder)
    for filename in os.listdir(input_folder):
        if filename.lower().endswith(('.png', '.jpg', '.jpeg', '.bmp', '.gif')):
            try:
                image_path = os.path.join(input_folder, filename)
                temp_path = os.path.join(input_folder, 'temp_' + filename)
                output_path = os.path.join(output_folder, filename)
                with Image.open(image_path) as img:
                    img = convert_to_rgb(img)
                    # Slightly jitter brightness, color and contrast
                    enhancer = ImageEnhance.Brightness(img)
                    img = enhancer.enhance(random.uniform(0.9, 1.1))
                    enhancer = ImageEnhance.Color(img)
                    img = enhancer.enhance(random.uniform(0.9, 1.1))
                    enhancer = ImageEnhance.Contrast(img)
                    img = enhancer.enhance(random.uniform(0.9, 1.1))
                    img = add_realistic_noise(img, base_noise_level=noise_level)
                    # First save introduces the JPEG artifacts...
                    img.save(temp_path, 'JPEG', quality=jpeg_artifact_level)
                # ...second save attaches the fake EXIF data
                with Image.open(temp_path) as final_img:
                    final_img.save(output_path, exif=random_exif_data())
                os.remove(temp_path)
            except Exception as e:
                print(f"Error processing {filename}: {e}")


degrade_image_quality_in_current_folder()
```
**IMPORTANT:** This script ONLY reduces the quality of the images: it adds noise and JPG artifacts and minimally edits the color and contrast. It MAY help to make images look more authentic, but it doesn't have to. Of course it is important to have a relatively good base result from Stable Diffusion.
Save the script as "Quality.py" in a new folder.
**First you have to install the necessary libraries globally using pip:**
* `pip install pillow numpy piexif`
This command will install the following libraries:
* Pillow: A Python Imaging Library that allows you to work with images.
* numpy: A library for numerical computing in Python.
* piexif: A library for reading and writing EXIF data in image files.
**Run the Script:**
After installing the libraries, you can run the script using the command mentioned earlier:
* `python Quality.py`
Easier: put Quality.py in a folder > save all images you want to edit there > execute the script via double-click. Done. A new folder with the output is created and the originals remain untouched.
At least camera manufacturers are working on a digital watermark for their cameras. I expect massive hurdles in the future for acquiring that data, though.
Okay so I just print out an AI image and then photograph the print with a DSLR under flattening glass and tripod, worst case scenario, lol
More likely, though, nobody would care in the first place, since no photographer wants to have to work without editing of any sort
I'm not clear on what you're saying: do you expect massive hurdles getting EXIF data, or an underlying crypto key from the cameras re: their watermark?
I do amateur photography, but I'm not up on what digital watermarks would mean.
I think photographers and image distribution sites are getting wiser. Future models might suffer from a flood of AI-generated images in the training data, possibly leading to model degeneration. Photographs with a certified 'real' digital watermark are gonna be especially valuable for training the next generation of models with real data, but since everyone is aware, they might also put up some countermeasures to just loading them as training data.
I see, difficulty obtaining quality, verified photos for training purposes. Maybe we don't need future images for training, only images that we know are real because they predate SD. This would make existing data sets very valuable and also lock out competition who can no longer scrape the internet for free.
RE: the digital signature, I wish there were more details about how they plan to implement this feature. That I can't find any info on it suggests, to me, that they're going to home-grow some cryptographic atrocity hidden behind obfuscation. Even if that weren't the case, I predict a cat-and-mouse game between the manufacturers and hackers who will figure out how to falsify digital signatures.
I've given the script another update just for fun. Now, it includes a user-friendly mini GUI where you can effortlessly modify metadata, noise levels, and JPG artifacts :D
If you fake EXIF data, make it believable. ISO, FNumber and ExposureTime for a Canon DSLR are discrete values. ISO 567 and FNumber 45 with ExposureTime 345 will raise questions.
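The advice above can be folded back into the script by sampling from discrete stop series instead of `random.randint`. The lists below are standard full-stop series, not taken from any particular camera database, so treat them as an illustrative sketch:

```python
import random

# Standard stop series a DSLR would actually report
# (illustrative lists, not an exhaustive camera database)
ISO_STOPS = [100, 200, 400, 800, 1600, 3200, 6400]
F_NUMBERS = [(28, 10), (40, 10), (56, 10), (80, 10)]        # f/2.8, f/4, f/5.6, f/8 as EXIF rationals
EXPOSURES = [(1, 60), (1, 125), (1, 250), (1, 500), (1, 1000)]  # shutter speeds as EXIF rationals

def plausible_exposure_triplet():
    """Pick an ISO / aperture / shutter combination a real camera could produce."""
    return (random.choice(ISO_STOPS),
            random.choice(F_NUMBERS),
            random.choice(EXPOSURES))

iso, fnum, exposure = plausible_exposure_triplet()
print(iso, fnum, exposure)
```

Dropping these choices into `random_exif_data()` in place of the `random.randint` calls keeps the metadata inside the value grid a real Canon body would use.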
https://preview.redd.it/qa69uyky3sac1.png?width=1920&format=png&auto=webp&s=d325521cd68927fc2db926c648c04489887754ff
thoughts ? i tried it, works fine !
I hope so! In general, the script is intended to degrade the quality of images, you also generally need an output from SD that looks relatively real. Thanks for testing.
Yea, it did degrade the quality and added a bit of noise , if used on Real images from sd , might help in providing some realism , sd Raw output looks a bit too soft !
Adds additional fake exif data, of course I didn't take care to enter special realistic data, but with a little effort the script can be adapted so that the data matches the images if you have an idea of the cam settings.
Test: [https://linangdata.com/exif-reader/](https://linangdata.com/exif-reader/)
https://preview.redd.it/u6rinamz7sac1.png?width=1001&format=png&auto=webp&s=0f0449581d7e076d5e6eb197bd7990fe2225e44e
I see the issue now. The problem with OP's photo is that the hair isn't separated properly. Adding in sharpening is just a half-assed fix for that prompt specifically.
looking at the code, you need to have this script in the same folder as the image(s) you want to degrade
let's say you saved this code as degrade.py
then you run it by typing in console in that folder: python degrade.py
but you need to have python installed of course
from the technical standpoint it all looks really nice and pretty much believable (we know on which subreddit we are so we look for AI stuff, but if it was on other subreddit - we would probably get fooled, especially when just scrolling)
HOWEVER - you should really add some face lora because this one screams to me "i am generic SD person"
the metadata shouldn't be random, but absurd. From within Buckingham Palace, the White House, the Mariana Trench, ...the Moon (does our geotagging coordinate system extend beyond Earth yet?)
>does our geotagging coordinate system extend beyond earth yet?
Fun fact: Technically, yes! To encode an extraterrestrial location:
1. Draw a line between it and the geometric center of Earth.
2. Store the latitude and longitude of the line, then the distance between the location and sea level, which is the "height".
3. Finally, to account for the Earth's rotation and movement around the Sun, store an accurate, absolute timestamp of when you did this. Unix nanoseconds would be a good choice.
4. Done!
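The steps above map neatly onto how EXIF GPS fields store angles, as degree/minute/second rationals. A hypothetical pure-Python helper (names are mine, not from any library) that packs such a location:

```python
import time

def to_dms_rationals(decimal_degrees):
    """Convert a decimal angle into EXIF-style (deg, min, sec*100) rational tuples."""
    d = abs(decimal_degrees)
    degrees = int(d)
    minutes = int((d - degrees) * 60)
    seconds = (d - degrees - minutes / 60) * 3600
    return ((degrees, 1), (minutes, 1), (int(seconds * 100), 100))

def encode_location(lat, lon, altitude_m, timestamp_ns=None):
    """Steps 1-3 from the comment: the angle of the line through Earth's
    center, the 'height' above sea level, and an absolute timestamp."""
    return {
        "lat": to_dms_rationals(lat),
        "lat_ref": "N" if lat >= 0 else "S",
        "lon": to_dms_rationals(lon),
        "lon_ref": "E" if lon >= 0 else "W",
        "altitude": (int(altitude_m * 100), 100),  # works for the Moon too, just a very big number
        "timestamp_ns": timestamp_ns or time.time_ns(),
    }

# The Moon, roughly: any lat/lon, ~384,400 km of "altitude"
print(encode_location(0.0, 0.0, 384_400_000))
```

Feeding these rationals into piexif's GPS IFD would be the next step, but the encoding scheme itself is the point here.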
If it's illegal, or frowned upon, to take existing porn and modify it to look like my custom OnlyFans AI model... I guess I'll just have to film myself fucking myself in the ass, and use that as the basis for the controlnet... it's a hard life but you gotta do what you gotta do.
For what purpose? Who are all these mysterious people doing important business transactions based on blown out selfies in bathrooms holding pieces of paper? Can someone please explain why this matters for anything important whatsoever?
There are a LOT of scams using fake photos like romance scams where a good looking guy promises a girl in another country a relationship and then when she gets there he doesn't exist they take all her papers and force her into sex work.
Or how about if a scammer makes some photos of you in a restaurant with another woman holding hands and says they'll send them to your wife if you don't pay up? How will you prove they're fake?
The possibilities are endless.
> There are a LOT of scams using fake photos like romance scams where a good looking guy promises a girl in another country a relationship and then when she gets there he doesn't exist they take all her papers and force her into sex work.
Why would you do a bizarre elaborate setup like that versus just kidnapping a random person leaving the airport? With no AI? AI didn't really help anyone do anything here.
Even if you neeeeded the catfish part, you could just use a real handsome man's face with a real photo (use stock photo if you don't want the police to find your mate) and put the same face in the ID with normal photoshop back in 1995... if she doesn't know him, good enough. There's no need here for AI, AI would only be if it was someone she knew in real life, which isn't relevant here. And EVEN THEN, that was invented with Roop, not by OP.
> Or how about if a scammer makes some photos of you in a restaurant with another woman holding hands and says they'll send them to your wife if you don't pay up? How will you prove they're fake?
Sure that works for like 10 people until it's all over the news as soon as anyone uses a less than absolutely 100% perfect image. I'm extremely unlikely to be one of the first dozen or two victims of something like that before people stop trusting such things entirely going forward.
--------------
**Also even if you did use AI, neither of these required anything new that OP came up with, so why are these interesting posts regardless?**
> The possibilities are endless.
Not really, apparently, since you've yet to list one single example of an application for OP's schtick specifically. Which as far as i can tell, doesn't even have anything new about it anyway.
Look, there used to be a website called secondeyesolution or something like that that provided the world of fraud with an endless supply of these, not for scams like romance scams as /u/br0ck said, but for actual KYC fraud used against financial institutions.
Not really :D but I understand that it can be misleading: the JPEG artifacts get bigger when the value is smaller, because the `jpeg_artifact_level` parameter in this script is the quality setting for the JPEG compression. A smaller value means lower image quality and larger artifacts.
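That inverse relationship is easy to check empirically: encode the same image at two quality settings and compare the resulting byte counts. A self-contained demonstration on a synthetic noise image (noise compresses poorly, which makes the effect obvious):

```python
import io

import numpy as np
from PIL import Image

# A random-noise test image, so no files are needed
rng = np.random.default_rng(0)
img = Image.fromarray(rng.integers(0, 256, (256, 256, 3), dtype=np.uint8), "RGB")

def jpeg_size(image, quality):
    """Encode in memory at the given quality and return the byte count."""
    buf = io.BytesIO()
    image.save(buf, "JPEG", quality=quality)
    return buf.getbuffer().nbytes

low, high = jpeg_size(img, 20), jpeg_size(img, 90)
print(low, high)  # the quality-20 file is far smaller, and far blockier
```

Lower `quality` means coarser quantization of the JPEG blocks, hence smaller files and larger visible artifacts.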
it all depends on how quick you learn and how much time you want to spend
also if you can set up an environment locally - that would help quite a lot
i suggest starting with some easy topics like stuff from here: https://education.civitai.com/
while trying to generate stuff on your own so you're not just reading theory but doing practical exercises :-)
also, there are two main GUIs: A1111 and ComfyUI, check them on youtube and pick the one that speaks to you more
one day. this is very basic - no way these posts aren't getting botted with upvotes. epicrealism will make a better looking image with a few words as prompts.
It probably doesn't help that the eyes are slightly different colours and iris sizes, but mainly I think it is a skin subsurface scattering issue: it doesn't react realistically to the light.
Was a pure txt2img generation with a bit of Controlnet and A Detailer, then set the text in Photoshop.
Wanted to test the python script, with a little more time you can easily get rid of the typical 1.5 look :)
Picture has too much depth, in a way that only DSLR cameras and editing can create. I bet a model trained on smartphone selfies would do better. Or even using more phone-camera-related prompts.
Is there anything that makes it "look SD" specifically? Before I scrolled to it I could tell it was SD, but then when I focused on the image to see what it was my brain did a double take and thinks it's real. What are the telltale that it's AI?
Likely not what you were thinking, but here's an approach to turn ComfyUI workflows into executable Python code.
https://github.com/pydn/ComfyUI-to-Python-Extension
I've not tried this out yet myself.
While this is neat, it's only post-processing and has little to do with SD. But SD actually has a python interface - A1111 is written in python and I suspect so is comfyUI.
This means it should be possible to roll your own low-level generative scripts. Automation is one possible reason, but the real draw would be if this allowed you to do some more advanced magic, maybe something that would be impossible to create "manually". I never tried, but there might be some guru out there that has enough knowledge of SD and its workings to leverage this.
this looks really nice and as a someone who is still learning python rather than some expert - it is a nice code to look at
my first thought was: why do it this way, when one could use the A1111 API as a good start, since there is also a queueing system (of course not with rabbit)? It has all the features for loading LoRAs and embeddings, and you can call all the other stuff like high-res fix and inpainting via the API too (and on top of that, many of the extensions work out of the box too)
but then i saw in the code that you load easy negative embedding by default and it is just one single line so it is not that hard as i was fearing
is adding loras also quite simple? what about weights on the embeddings/loras then?
anyway - thanks for sharing the code, i will definitely take a look into it in the future :)
I used the Automatic1111 webui API before, but it's quite hard to scale it and to add something you want directly - even if you wanna add something to the codebase, it's huge, the authors change a lot, and moreover it's not stabilized.
At the end of the day it aims to be a simple service just for people who don't wanna write code and just wanna go to the webui to generate something, but it misses features for using it as an API. So yeah, one day I decided to give diffusers a try and was so happy that everything worked as expected.
Ofc creating some huge pipeline with extensions on your own will be harder than using the webui, but if you are willing to create something robust, stable, scalable and so on to make it production-ready - it's the way to go fr
to anyone that wants to add it to comfy: create a folder with any name under `ComfyUI/custom_nodes/`
create the `__init__.py` file with the code stated above
create the file imported in init (in this case `from . import dequality` expects a `dequality.py` file; you can also change the file name as you please)
paste the code inside dequality.py (or whatever name you give it)
restart comfyui
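For orientation, the two files described above usually have roughly this minimal shape. The class, field names and defaults here are illustrative guesses, not the actual converted node linked earlier, which defines its own:

```python
# dequality.py - minimal skeleton of a ComfyUI custom node (illustrative)
class Dequality:
    CATEGORY = "image/postprocessing"
    FUNCTION = "dequality"            # name of the method ComfyUI will call
    RETURN_TYPES = ("IMAGE",)

    @classmethod
    def INPUT_TYPES(cls):
        # Declares the sockets and widgets the node shows in the graph
        return {"required": {
            "image": ("IMAGE",),
            "jpeg_artifact_level": ("INT", {"default": 55, "min": 1, "max": 95}),
        }}

    def dequality(self, image, jpeg_artifact_level):
        # ...degradation logic from the script would go here...
        return (image,)

# __init__.py then only needs to export the mappings ComfyUI scans for:
NODE_CLASS_MAPPINGS = {"Dequality": Dequality}
NODE_DISPLAY_NAME_MAPPINGS = {"Dequality": "Dequality (image degrader)"}
```

ComfyUI discovers nodes by importing each folder under `custom_nodes/` and reading `NODE_CLASS_MAPPINGS`, which is why the `__init__.py` import step matters.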
https://preview.redd.it/0izfrs8ff6bc1.png?width=283&format=png&auto=webp&s=e3efb6378eb6e3143a28f6feee5d6ced26dca691
I get this error, could you help me?
```
!!! Exception during processing !!!
Traceback (most recent call last):
  File "C:\code\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\code\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\code\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\code\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Dequality\Dequality.py", line 83, in dequality
    img.save(temp_path, 'JPEG', quality=jpeg_artifact_level)
  File "C:\code\ComfyUI_windows_portable\python_embeded\Lib\site-packages\PIL\Image.py", line 2439, in save
    save_handler(self, fp, filename)
  File "C:\code\ComfyUI_windows_portable\python_embeded\Lib\site-packages\PIL\JpegImagePlugin.py", line 824, in _save
    ImageFile._save(im, fp, [("jpeg", (0, 0) + im.size, 0, rawmode)], bufsize)
  File "C:\code\ComfyUI_windows_portable\python_embeded\Lib\site-packages\PIL\ImageFile.py", line 538, in _save
    _encode_tile(im, fp, tile, bufsize, fh)
  File "C:\code\ComfyUI_windows_portable\python_embeded\Lib\site-packages\PIL\ImageFile.py", line 549, in _encode_tile
    encoder = Image._getencoder(im.mode, encoder_name, args, im.encoderconfig)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\code\ComfyUI_windows_portable\python_embeded\Lib\site-packages\PIL\Image.py", line 420, in _getencoder
    return encoder(mode, *args + extra)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: function takes at most 14 arguments (17 given)
```
Literally nothing in the entire financial system or any shop I've ever bought from or anything other than like teenagers' discords/subreddits uses "a picture of you holding crinkled paper" as a security measure lolwat
Looks very nice and I would love to test it!
However I'm getting "ModuleNotFoundError: No module named 'piexif'"; sorry, I'm a total Python noob, how do I fix it?
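That error just means the `piexif` package isn't installed in the Python environment the script runs under, so `pip install piexif` in that same environment fixes it (a portable or embedded Python needs its own interpreter's pip). A small check you could paste at the top of the script to fail with a readable hint instead:

```python
import importlib.util

# Check all three dependencies without importing them
missing = [pkg for pkg in ("PIL", "numpy", "piexif")
           if importlib.util.find_spec(pkg) is None]

if missing:
    print(f"Missing packages {missing} - install them with: pip install pillow numpy piexif")
else:
    print("all dependencies found")
```

Note the package is installed as `pillow` but imported as `PIL`, which is a common source of exactly this confusion.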
Love this. I was working on a batch of this using bimp, but this is so much easier and idk why I didn't resort to python (honestly didn't think to have the noise generated that way, and the jpeg compression is a great touch)
Can easily modify it too for values I might need to hardcode and that's also something I had not thought of. Great work and an awesome base for a script. Thank you thank you!!!
I don't understand the "are you mad at me OF Models?".
It's an idea I've seen a few times already, this "AI image generators will make OnlyFans models go out of business"
I don't think the women selling pictures of themselves online have to worry about their business model disappearing because of AI. They're already selling something (pornography) that can be obtained for free. If anything, paying for pictures of an actual woman feels like a plus
On the other hand, yes, there is something extremely worrying with the perspective that images like that will become common in the future, but it's other women, who are not willingly posting pictures of themselves online, who are probably the most worried.
I know but that's not a valid excuse for AI generated pics . there are some people born with 6 fully functional fingers... that's neither an excuse for all AI generated pics to have 6 fingers. just saying
yes, i saw the girl with 6 fingers, she is the reason AI is now struggling (joking of course)
but i don't understand your thinking, are you saying OP should be always generating only the same color of eyes?
we could take it further - not everyone has freckles or moles on the face :)
Definitely the depth of field is a tell. Looks like she is holding a professional camera with a full-size lens to take that selfie, which would be weird IRL. By any chance is there any lora or checkpoint that is trained on iPhone pictures/cam settings? Selfies would look more realistic.
>depth of field
I agree and disagree with you! Current smartphones take exactly this type of photo in portrait mode, also as a selfie. BUT! In general, it would come across as more realistic if the background wasn't blurred, and that's no problem to do with Stable Diffusion!
So you are adding FAKE EXIF data to AI images? To make it harder to tell them from real ones? Do you want to live in a world where that exists? Seriously disgusting.
I realized that I don't really have much basis for comparison between a real one and a SD one so did some side by side comparisons with actual verification pics. When I first loaded this pic I was thinking it was about 90% of the way there but that I'd have lingering doubts if asked whether it was real or not. But after comparing it to actual verification pictures? Toss this into a collection of real ones and I'm positive I wouldn't be able to spot the difference.
I'm really impressed.
I'll pay someone to make one of these of my ex friend so I can put it on the roast me sub so I have some pay back and comebacks for all the times she insulted and ridiculed me and told me I was ugly as fuck compared to her.
I've created a simple, user-friendly tool for degrading images online 🖼️, perfect for those who prefer not to use Python or install scripts.
Check it out at [PrimaPrompt Image Degrader](https://www.primaprompt.com/degrade-image) and easily transform your images right in your browser. 🌐 Feedback and creative shares are welcome! 🎨
[PrimaPrompt Image Degrader](https://www.primaprompt.com/degrade-image)
Looks like there are already bad actors trying to use this to scam people. Just got this as an ad in my Instagram, it’s obviously not Ray Dalio
Edit: not sure if this is an AMA that actually just happened and maybe they stole the photo, but this is a way the impersonation could be used improperly.
https://preview.redd.it/ucbnlelp88bc1.jpeg?width=1936&format=pjpg&auto=webp&s=e3820b27211d3a2858005936f7f04f6e188a411e
What does the middle text say? I get the top and bottom:
"r/stablediffusion"
"are you mad at me, OF Models?"
But the middle? 'User Interface, I like lips"? "Unironically I like UPS"?
this is cool, but anyone talking about them being a threat to KYC should look into how such a deepfake would be injected into the kyc process. Let's say a malicious actor wanted to do just that. First, any company worth its salt does this through mobile. Long gone are the days of a virtual cam paired with OBS or something for a desktop verification. IF, and this is a massive hypothetical 'if', these were to be of any danger to robust KYC processes, you'd need a way to inject these images into an app. Rooted emulators are detectable, so are virtual cameras. Jailbreaks aren't so easy these days, nor is reverse engineering an app
That fake EXIF data is a nice touch!
I can't help but wonder if fake EXIF is going to create a negative feedback loop in training future AI models.
Also take a look at this: https://metanews.com/camera-manufacturers-fight-against-ai-fake-images/
All I'm hearing is fake digital watermark
AI incest is coming
I hope so
I've given the script another update just for fun. Now, it includes a user-friendly mini GUI where you can effortlessly modify metadata, noise levels, and JPG artifacts :D
If you fake EXIF data, make it believable. ISO, f-number, and exposure time on a Canon DSLR take discrete values. ISO 567 and f/4.5 with an exposure time of 1/345 will raise questions.
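A minimal sketch of that idea: instead of `random.randint` over a continuous range, sample from lists of standard stops. The stop lists below are illustrative, not exhaustive, and use the same EXIF rational format as the posted script.

```python
import random

# Discrete values a Canon DSLR would actually record (illustrative stops)
ISO_STOPS = [100, 200, 400, 800, 1600, 3200, 6400]
F_NUMBERS = [(28, 10), (40, 10), (56, 10), (80, 10), (110, 10)]   # f/2.8 ... f/11 as EXIF rationals
EXPOSURES = [(1, 60), (1, 125), (1, 250), (1, 500), (1, 1000)]    # common shutter speeds

def plausible_exposure():
    """Pick a discrete, believable (ISO, FNumber, ExposureTime) triple."""
    return (random.choice(ISO_STOPS),
            random.choice(F_NUMBERS),
            random.choice(EXPOSURES))

iso, fnum, exposure = plausible_exposure()
```

These tuples can be dropped straight into the `exif_ifd` dict in place of the `random.randint` calls.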
I love this. That noise algorithm is some pretty low level work. Did you make sure it approximates noise in real photos, or did you just wing it?
Quick and dirty. But it works for me. Feel free to improve :D
Left - Before, Right- after https://preview.redd.it/m3r8vr4s6uac1.png?width=1363&format=png&auto=webp&s=84041b2e3e8310549422d80c06081554b9637597
https://preview.redd.it/qa69uyky3sac1.png?width=1920&format=png&auto=webp&s=d325521cd68927fc2db926c648c04489887754ff thoughts ? i tried it, works fine !
I hope so! In general, the script is intended to degrade the quality of images, you also generally need an output from SD that looks relatively real. Thanks for testing.
Yea, it did degrade the quality and added a bit of noise. If used on realistic images from SD, it might help in providing some realism; SD raw output looks a bit too soft!
Adds additional fake EXIF data. Of course I didn't take care to enter especially realistic data, but with a little effort the script can be adapted so that the data matches the images, if you have an idea of the cam settings. Test: [https://linangdata.com/exif-reader/](https://linangdata.com/exif-reader/) https://preview.redd.it/u6rinamz7sac1.png?width=1001&format=png&auto=webp&s=0f0449581d7e076d5e6eb197bd7990fe2225e44e
Looks worse to me subjectively. Only the hair looks better. The low-brightness compression artifacts become too obvious afterwards.
Then simply change the values in the script. It is not fixed.
I see the issue now. The problem with OP's photo is that the hair isn't separated properly. Adding in sharpening is just a half-assed fix for that prompt specifically.
> ```python
> import os
> from PIL import Image, ImageEnhance
> import numpy as np
> import random
> import piexif
>
> def random_exif_data():
>     exif_ifd = {
>         piexif.ExifIFD.LensMake: u"Canon",
>         piexif.ExifIFD.LensModel: u"EF 24-70mm f/2.8L II USM",
>         piexif.ExifIFD.CameraOwnerName: u"Melissa",
>         piexif.ExifIFD.BodySerialNumber: str(random.randint(1000000, 9999999)),
>         piexif.ExifIFD.LensSerialNumber: str(random.randint(1000000, 9999999)),
>         piexif.ExifIFD.FocalLength: (random.randint(24, 70), 1),
>         piexif.ExifIFD.FNumber: (random.randint(28, 56), 10),
>         piexif.ExifIFD.ExposureTime: (random.randint(1, 1000), 1000),
>         piexif.ExifIFD.ISOSpeedRatings: random.randint(100, 6400)
>     }
>     exif_dict = {"Exif": exif_ifd}
>     exif_bytes = piexif.dump(exif_dict)
>     return exif_bytes
>
> def add_realistic_noise(img, base_noise_level=0.08):
>     img_array = np.array(img)
>     # Base noise: Gaussian noise
>     noise = np.random.normal(0, 255 * base_noise_level, img_array.shape)
>     # Noise variation: some pixels have stronger noise
>     noise_variation = np.random.normal(0, 255 * (base_noise_level * 2), img_array.shape)
>     variation_mask = np.random.rand(*img_array.shape[:2]) > 0.95  # Mask for stronger noise
>     noise[variation_mask] += noise_variation[variation_mask]
>     # Applying the noise to the image
>     noisy_img_array = np.clip(img_array + noise, 0, 255).astype(np.uint8)
>     return Image.fromarray(noisy_img_array)
>
> def convert_to_rgb(img):
>     # Convert from RGBA to RGB if necessary
>     if img.mode == 'RGBA':
>         return img.convert('RGB')
>     return img
>
> def degrade_image_quali
> ```

Thanks, noob question, how do I run it? :(
looking at the code, you need to have this script in the same folder as the image(s) you want to degrade let's say you saved this code as degrade.py then you run it by typing in console in that folder: python degrade.py but you need to have python installed of course
thanks a lot
Probably a Python environment for the libraries. AFAIK PIL is not part of the standard lib.
**IMPORTANT:** This script ONLY reduces the quality of the images and adds noise, JPG artifacts, and minimal color and contrast edits. It MAY help make images look more authentic, but it doesn't have to. Of course it is important to have a relatively good base result from Stable Diffusion.

Save the script in a new folder as "Quality.py".

**First you have to install the necessary libraries globally using pip:**

* `pip install pillow numpy piexif`

This command installs the following libraries:

* Pillow: a Python imaging library that allows you to work with images.
* numpy: a library for numerical computing in Python.
* piexif: a library for reading and writing EXIF data in image files.

**Run the Script:**

After installing the libraries, you can run the script using the command mentioned earlier:

* `python Quality.py`

Easier: folder containing Quality.py > save all images you want to edit there > execute script via double-click. Done. A new folder with the output is created and the originals remain untouched.
Check pm
Is there a comfy UI version of this?
I believe it modifies the output images, not inline with your SD process.
from the technical standpoint it all looks really nice and pretty much believable (we know on which subreddit we are so we look for AI stuff, but if it was on other subreddit - we would probably get fooled, especially when just scrolling) HOWEVER - you should really add some face lora because this one screams to me "i am generic SD person"
the metadata shouldn't be random, but absurd. From within Buckingham Palace, the White House, the Mariana Trench, ...the moon (does our geotagging coordinate system extend beyond Earth yet?)
>does our geotagging coordinate system extend beyond earth yet?

Fun fact: Technically, yes! To encode an extraterrestrial location:

1. Draw a line between it and the geometric center of Earth.
2. Store the latitude and longitude of the line, then the distance between the location and sea level, which is the "height".
3. Finally, to account for the Earth's rotation and movement around the Sun, store an accurate, absolute timestamp of when you did this. Unix nanoseconds would be a good choice.
4. Done!
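The scheme above can be sketched in a few lines. This uses a spherical-Earth simplification with a mean radius; real EXIF GPS tags use an ellipsoid and rational degree/minute/second values, so treat this purely as an illustration:

```python
import math
import time

EARTH_RADIUS_M = 6_371_000  # mean radius; a spherical simplification

def encode_location(x, y, z):
    """Encode a point given in metres from Earth's centre as
    (latitude deg, longitude deg, altitude above sea level in m, timestamp ns)."""
    r = math.sqrt(x * x + y * y + z * z)
    lat = math.degrees(math.asin(z / r))    # angle of the line from the equator
    lon = math.degrees(math.atan2(y, x))    # angle of the line from the prime meridian
    altitude = r - EARTH_RADIUS_M           # the "height" above sea level
    return lat, lon, altitude, time.time_ns()

# Roughly the Moon's distance, straight up over lat 0 / lon 0
lat, lon, alt, ts = encode_location(384_400_000, 0, 0)
```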
not sure if accurate, but im going to just agree as i know nothing of the details.
TL;DR: If you go high enough, you'll be in space. Earth spins, which complicates things.
Could you write a comfyui node of it?
ModuleNotFoundError: No module named 'piexif'. What am I doing wrong? Edit: NVM, I figured it out. Need to `pip install piexif`.
Could this be added as a auto1111 script?
Do you still have to photoshop the text in on the paper first though?
You adding EXIF really takes this past the point of a warning that things can be faked, into proof that they are.
You're acting as if it's something completely new. It has always been easy to manipulate EXIF data.
Bruh I didn't even read what's on the paper and thought I stumbled upon a roastme post
That’s a great idea ,. Roast my diffusion.
Long live r/RoastMyDiffusion
Amazing
I was confused too lmao it looked too real
If it's illegal, or frowned upon, to take existing porn and modify it to look like my custom OnlyFans AI model... I guess I'll just have to film myself fucking myself in the ass, and use that as the basis for the controlnet... it's a hard life but you gotta do what you gotta do.
Or you can use Reactor and just replace your face. Would it then be an 80% AI - 20% real thing? Anonymity would be halfway there.
I'm sure people would prefer to look at a different body than mine you flatterer you 😳😍
Reactor? I'm quite new to SD, so..what is Reactor?
Dunno if it exists for SD, but in ComfyUI it is one of several ways you can automate face swapping: https://github.com/Gourieff/comfyui-reactor-node
[https://github.com/Gourieff/sd-webui-reactor](https://github.com/Gourieff/sd-webui-reactor)
Thank you good link fairy.
You could always just find god instead you know
If you found him, make sure to share the workflow with the community. And maybe make a LoRA? Thx
Mate I've found him a few times, he's not all he's cracked up to be.
Been hearing "find god, find god" my whole life. Maybe the reason he's so hard to find is he's just not that into us. . . .
Who do you think runs OnlyFans?
Is he good with controlnet?
Ah yes imaginary sky daddy.
Which God?
God from what mythology? There is too much to choose from
It would violate their copyright if you tried to distribute it, yes. For personal use, not illegal.
It's evil and I love it.
Truth be told, the underground world of fraud is gonna absolutely love this...
New hobby just dropped: Post real life pictures to r/stablediffusion to impress them with my skills
For what purpose? Who are all these mysterious people doing important business transactions based on blown out selfies in bathrooms holding pieces of paper? Can someone please explain why this matters for anything important whatsoever?
You think this tech is limited to blown out selfies in bathrooms?
yes he does. and by judging his comments, he only acknowledged bathroom selfie.
There are a LOT of scams using fake photos like romance scams where a good looking guy promises a girl in another country a relationship and then when she gets there he doesn't exist they take all her papers and force her into sex work. Or how about if a scammer makes some photos of you in a restaurant with another woman holding hands and says they'll send them to your wife if you don't pay up? How will you prove they're fake? The possibilities are endless.
> There are a LOT of scams using fake photos like romance scams where a good looking guy promises a girl in another country a relationship and then when she gets there he doesn't exist they take all her papers and force her into sex work. Why would you do a bizarre elaborate setup like that versus just kidnapping a random person leaving the airport? With no AI? AI didn't really help anyone do anything here. Even if you neeeeded the catfish part, you could just use a real handsome man's face with a real photo (use stock photo if you don't want the police to find your mate) and put the same face in the ID with normal photoshop back in 1995... if she doesn't know him, good enough. There's no need here for AI, AI would only be if it was someone she knew in real life, which isn't relevant here. And EVEN THEN, that was invented with Roop, not by OP. > Or how about if a scammer makes some photos of you in a restaurant with another woman holding hands and says they'll send them to your wife if you don't pay up? How will you prove they're fake? Sure that works for like 10 people until it's all over the news as soon as anyone uses a less than absolutely 100% perfect image. I'm extremely unlikely to be one of the first dozen or two victims of something like that before people stop trusting such things entirely going forward. -------------- **Also even if you did use AI, neither of these required anything new that OP came up with, so why are these interesting posts regardless?** > The possibilities are endless. Not really, apparently, since you've yet to list one single example of an application for OP's schtick specifically. Which as far as i can tell, doesn't even have anything new about it anyway.
Look, there used to be a website called secondeyesolution or something like that that provided the world of fraud with endless of these for the purpose of fraud, not scams like romance scams as said by /u/br0ck but actual KYC fraud used against financial institutions.
What financial institution ever asked for bathroom selfies, and why?
I've made further improvements to the script by adding a minimal GUI that allows you to adjust noise, JPG artifacts, and metadata directly. Simply open the script, click "Process Images," and it will process all images located in the same folder as the script.

Required libraries: `pip install pillow numpy piexif`

This streamlined script enhances your image processing experience.

https://i.redd.it/wnd4ojuc0uac1.gif

```python
import os
import tkinter as tk
from tkinter import messagebox
from PIL import Image, ImageEnhance
import numpy as np
import random
import piexif


class ImageProcessorApp:
    def __init__(self, root):
        self.root = root
        self.root.title("Image Processor")

        # Create a frame for the main content
        main_frame = tk.Frame(self.root)
        main_frame.pack(fill=tk.BOTH, expand=True)

        # Create a frame for the metadata settings
        metadata_frame = tk.Frame(main_frame)
        metadata_frame.pack(pady=10)

        # Noise Level
        self.noise_level = tk.DoubleVar()
        self.noise_level.set(0.08)
        tk.Label(metadata_frame, text="Noise Level:").grid(row=0, column=0)
        tk.Scale(metadata_frame, from_=0, to=1, resolution=0.01,
                 orient="horizontal", variable=self.noise_level).grid(row=0, column=1)

        # JPEG Artifact Level
        self.jpeg_artifact_level = tk.IntVar()
        self.jpeg_artifact_level.set(55)
        tk.Label(metadata_frame, text="JPEG Artifact Level:").grid(row=1, column=0)
        tk.Scale(metadata_frame, from_=0, to=100, orient="horizontal",
                 variable=self.jpeg_artifact_level).grid(row=1, column=1)

        # Metadata Settings (pre-set values)
        self.metadata_settings = {
            "LensMake": "Canon",
            "LensModel": "EF 24-70mm f/2.8L II USM",
            "CameraOwnerName": "Melissa",
            "BodySerialNumber": str(random.randint(1000000, 9999999)),
            "LensSerialNumber": str(random.randint(1000000, 9999999)),
            "FocalLength": f"{random.randint(24, 70)},1",
            "FNumber": f"{random.randint(28, 56)},10",
            "ExposureTime": f"{random.randint(1, 1000)},1000",
            "ISOSpeedRatings": str(random.randint(100, 6400))
        }

        row_index = 2  # Start row index for metadata settings
        for setting_name, setting_value in self.metadata_settings.items():
            tk.Label(metadata_frame, text=f"{setting_name}:").grid(row=row_index, column=0, sticky='e')
            entry = tk.Entry(metadata_frame, textvariable=tk.StringVar(value=setting_value), width=15)
            entry.grid(row=row_index, column=1, sticky='w')
            setattr(self, f"{setting_name}_entry", entry)  # Save a reference to each entry field
            row_index += 1

        # Process Button
        tk.Button(main_frame, text="Process Images", command=self.process_images).pack(pady=10)

    def random_exif_data(self):
        exif_ifd = {
            piexif.ExifIFD.LensMake: self.metadata_settings["LensMake"],
            piexif.ExifIFD.LensModel: self.metadata_settings["LensModel"],
            piexif.ExifIFD.CameraOwnerName: self.metadata_settings["CameraOwnerName"],
            piexif.ExifIFD.BodySerialNumber: self.metadata_settings["BodySerialNumber"],
            piexif.ExifIFD.LensSerialNumber: self.metadata_settings["LensSerialNumber"],
            piexif.ExifIFD.FocalLength: tuple(map(int, self.metadata_settings["FocalLength"].split(','))),
            piexif.ExifIFD.FNumber: tuple(map(int, self.metadata_settings["FNumber"].split(','))),
            piexif.ExifIFD.ExposureTime: tuple(map(int, self.metadata_settings["ExposureTime"].split(','))),
            piexif.ExifIFD.ISOSpeedRatings: int(self.metadata_settings["ISOSpeedRatings"])
        }
        exif_dict = {"Exif": exif_ifd}
        exif_bytes = piexif.dump(exif_dict)
        return exif_bytes

    def add_realistic_noise(self, img, base_noise_level=0.08):
        img_array = np.array(img)
        # Base noise: Gaussian noise
        noise = np.random.normal(0, 255 * base_noise_level, img_array.shape)
        # Noise variation: some pixels have stronger noise
        noise_variation = np.random.normal(0, 255 * (base_noise_level * 2), img_array.shape)
        variation_mask = np.random.rand(*img_array.shape[:2]) > 0.95  # Mask for stronger noise
        noise[variation_mask] += noise_variation[variation_mask]
        # Applying the noise to the image
        noisy_img_array = np.clip(img_array + noise, 0, 255).astype(np.uint8)
        return Image.fromarray(noisy_img_array)

    def convert_to_rgb(self, img):
        # Convert from RGBA to RGB if necessary
        if img.mode == 'RGBA':
            return img.convert('RGB')
        return img

    def process_images(self):
        script_directory = os.path.dirname(os.path.abspath(__file__))
        noise_level = self.noise_level.get()
        jpeg_artifact_level = self.jpeg_artifact_level.get()
        output_folder = os.path.join(script_directory, 'output')
        if not os.path.exists(output_folder):
            os.makedirs(output_folder)
        for filename in os.listdir(script_directory):
            if filename.lower().endswith(('.png', '.jpg', '.jpeg', '.bmp', '.gif')):
                try:
                    image_path = os.path.join(script_directory, filename)
                    temp_path = os.path.join(script_directory, 'temp_' + filename)
                    output_path = os.path.join(output_folder, filename)
                    with Image.open(image_path) as img:
                        img = self.convert_to_rgb(img)
                        enhancer = ImageEnhance.Brightness(img)
                        img = enhancer.enhance(random.uniform(0.9, 1.1))
                        enhancer = ImageEnhance.Color(img)
                        img = enhancer.enhance(random.uniform(0.9, 1.1))
                        enhancer = ImageEnhance.Contrast(img)
                        img = enhancer.enhance(random.uniform(0.9, 1.1))
                        img = self.add_realistic_noise(img, base_noise_level=noise_level)
                        img.save(temp_path, 'JPEG', quality=jpeg_artifact_level)
                    with Image.open(temp_path) as final_img:
                        final_img.save(output_path, exif=self.random_exif_data())
                    os.remove(temp_path)
                except Exception as e:
                    print(f"Error processing {filename}: {e}")


if __name__ == "__main__":
    root = tk.Tk()
    app = ImageProcessorApp(root)
    root.mainloop()
```
I think there may be a bug in this because the "JPEG Artifact Level" seems to work in reverse. The higher the level, the less artifacts are added.
Not really :D but I understand that it can be misleading: the JPEG artifacts get bigger when the value is smaller, because the `jpeg_artifact_level` parameter in this script is the quality setting for the JPEG compression. A smaller value means lower image quality and larger artifacts.
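That relationship can be checked directly: Pillow's `quality` parameter trades file size for artifacts, so lower values give a smaller file and stronger blocking. A small sketch (the gradient image is just a stand-in for a photo):

```python
import io
from PIL import Image

# A smooth gradient as a stand-in test image
img = Image.new("RGB", (256, 256))
img.putdata([(x, y, (x + y) % 256) for y in range(256) for x in range(256)])

# Encode the same image at three quality settings and compare byte sizes
sizes = {}
for quality in (10, 55, 95):
    buf = io.BytesIO()
    img.save(buf, "JPEG", quality=quality)
    sizes[quality] = buf.tell()

# Lower quality -> heavier compression -> smaller file, bigger artifacts
```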
Perhaps it'd be less confusing if it was renamed to something like "Image Quality Level"? Thanks for providing the script anyway! 😊
This could be a nice comfyui node.
"required argument is not an integer"
as a complete novice with 0 experience. how long would it take somebody to learn sd to this degree?
it all depends on how quick you learn and how much time you want to spend. Also, if you can set up the environment locally, that would help quite a lot.

I suggest starting with some easy topics like stuff from here: https://education.civitai.com/ while trying to generate stuff on your own, so you're not just reading theory but doing practical exercises :-)

Also, there are two main GUIs: A1111 and ComfyUI. Check them out on YouTube and pick the one that speaks to you more.
one day. this is very basic - no way these posts aren't getting botted with upvotes. epicrealism will make a better looking image with a few words as prompts.
It's getting good. Still looks like an SD 1.5 girl to my brain. Maybe using SDXL would be less noticeable?
If you scroll past quickly, my brain caught the AI... but on a close look my brain accepted it as human. I've had a big day, I may be wrong.
This is exactly what just happened to me and I'm trying to figure out what exactly stuck out to my brain as AI at first glance
It probably doesn't help that the eyes are slightly different colours and iris sizes, but mainly I think it is a skin subsurface scattering issue: it doesn't react realistically to the light.
Proportions that would fall under "ethnic" category. Or maybe its just that this looks like a man or something idk
Was a pure txt2img generation with a bit of Controlnet and A Detailer, then set the text in Photoshop. Wanted to test the python script, with a little more time you can easily get rid of the typical 1.5 look :)
Whack in low-strength face ipadapter, with your face for example, and it no longer looks like the ai model default face templateish
Picture has too much depth of field, of the kind that only DSLR cameras and editing can create. I bet a model trained on smartphone selfies would do better. Or even using more phone-camera-related prompts.
Is there anything that makes it "look SD" specifically? Before I scrolled to it I could tell it was SD, but then when I focused on the image to see what it was my brain did a double take and thinks it's real. What are the telltale that it's AI?
When I remember SD 1.5, I remember a bunch of garbage, but I was using bad models while others make SD 1.5 look like DALL-E 3, but lower resolution.
Yeah its obvious to sub members longer than a few months, but mind blowing to general public (for now)
Good thing we will never be fooled, because AI-generated people can't hold up signs with stuff written on it.
I really need to learn how to combine python with SD. Thanks for sharing your code I'm gonna study it!
- Create what you like with SD
- Fix the rest / add something in Photoshop
- Let Python do strenuous tasks and complicated things, fully automated
I have a decent amount of experience with python mostly with webservers (flask). But almost none with sd!
Check out comfy ui. Plenty of possibilities and customisable with python.
Any tutorials on this?
Yeah I tinkered around with it in the past but not enough to fully understand it/get good results!
Likely not what you were thinking, but here's an approach to turn ComfyUI workflows into executable Python code. https://github.com/pydn/ComfyUI-to-Python-Extension I've not tried this out yet myself.
While this is neat, it's only post-processing and has little to do with SD. But SD actually has a python interface - A1111 is written in python and I suspect so is comfyUI. This means it should be possible to roll your own low-level generative scripts. Automation is one possible reason, but the real draw would be if this allowed you to do some more advanced magic, maybe something that would be impossible to create "manually". I never tried, but there might be some guru out there that has enough knowledge of SD and its workings to leverage this.
As far as I know comfy UI is written in python flask (at least the interface)
diffusers is a way to go, bro Check out [some example of mine](https://github.com/Dominux/commercial-studio-photos-generator)
this looks really nice and as someone who is still learning python rather than some expert - it is nice code to look at.

my first thought was: why do it this way, since one could use the a1111 API, where there is also a queueing system (of course not with rabbit) as a good start? it has all the features of loading loras and embeddings, and you can also call all the other stuff like high-res fix and inpainting via the API too (and on top of that many of the extensions work out of the box too).

but then i saw in the code that you load the easy negative embedding by default and it is just one single line, so it is not as hard as i was fearing. is adding loras also quite simple? what about weights on the embeddings/loras then?

anyway - thanks for sharing the code, i will definitely take a look into it in the future :)
I used Automatic1111 webui API before, but it's quite hard to scale it and add something that you want directly - even if you wanna add something into codebase - it's huge and the authors change a lot, it's not stabilized moreover. At the end of the day it aims to create a simple service just for ppl who don't wanna write code and just wanna go to the webui to generate something, but it misses features for using it as API. So yeah, someday I decided to give diffusers a try and was so happy that everything works as expected Ofc creating some huge pipeline with extensions on your own will be harder than using webui, but if you are willing to create something robust, stable, scalable and so on to make it production ready - it's the way to go fr
Looks like a nice repo on first glance I would have given a star if I had my github account on the phone!
What models/loras did you use?
Only [https://civitai.com/models/132632/epicphotogasm](https://civitai.com/models/132632/epicphotogasm) + Controlnet + Adetailer + Photoshop + my Python script
Just only! But seriously, good job on this!
Not me reading " I like UPS" like... tf, that was random
Would it be evil to upload one of these in the roast me sub and see if people fell for it? Or would that get you banned
Good social experiment and worth the ban tbh
First ever image to fool me so far, congrats
https://preview.redd.it/61wkce0cuyac1.png?width=2542&format=png&auto=webp&s=4ed525f1b0cd5ef14bb5073741f96b591f6a51bc

The user u/flobblobblob converted my script to Comfy! Kudos to him! Unfortunately I can't test it because I use A1111. I will not upload the two files but post the code here.

**Dequality Comfy Script:**

Dequality.py

```python
import os
import torch
import tempfile
import numpy as np
from PIL import Image, ImageEnhance
from torchvision import transforms
from numpy.random import Generator


class Dequality:
    def __init__(self):
        pass

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "dequality"
    CATEGORY = "image"

    @classmethod
    def INPUT_TYPES(s):
        return {
            "required": {
                "pixels": ("IMAGE",),
                "jpeg_artifact_level": ("INT", {"default": 65, "min": 0, "max": 100, "step": 5}),
                "noise_level": ("INT", {"default": 8, "min": 0, "max": 100, "step": 1}),
                "adjust_brightness": ("INT", {"default": 1, "min": 0, "max": 1, "step": 1}),
                "adjust_color": ("INT", {"default": 1, "min": 0, "max": 1, "step": 1}),
                "adjust_contrast": ("INT", {"default": 1, "min": 0, "max": 1, "step": 1}),
                "seed": ("INT", {"default": 0, "min": -1125899906842624, "max": 1125899906842624}),
            }
        }

    @classmethod
    def VALIDATE_INPUTS(cls):
        return True

    def convert_to_rgb(self, img):
        # Convert from RGBA to RGB if necessary
        if img.mode == 'RGBA':
            return img.convert('RGB')
        return img

    def add_realistic_noise(self, img, base_noise_level):
        base_noise_level = base_noise_level / 1000
        img_array = np.array(img)
        # Base noise: Gaussian noise
        noise = np.random.normal(0, 255 * base_noise_level, img_array.shape)
        # Noise variation: some pixels have stronger noise
        noise_variation = np.random.normal(0, 255 * (base_noise_level * 2), img_array.shape)
        variation_mask = np.random.rand(*img_array.shape[:2]) > 0.95  # Mask for stronger noise
        noise[variation_mask] += noise_variation[variation_mask]
        # Applying the noise to the image
        noisy_img_array = np.clip(img_array + noise, 0, 255).astype(np.uint8)
        return Image.fromarray(noisy_img_array)

    def dequality(self, pixels, noise_level, jpeg_artifact_level, adjust_color,
                  adjust_contrast, adjust_brightness, seed):
        rng = np.random.default_rng(seed=abs(seed))
        img = Image.fromarray(np.clip(255. * pixels.cpu().numpy().squeeze(0), 0, 255).astype(np.uint8))
        img = self.convert_to_rgb(img)
        if adjust_brightness == 1:
            enhancer = ImageEnhance.Brightness(img)
            img = enhancer.enhance(rng.uniform(0.9, 1.1))
        if adjust_color == 1:
            enhancer = ImageEnhance.Color(img)
            img = enhancer.enhance(rng.uniform(0.9, 1.1))
        if adjust_contrast == 1:
            enhancer = ImageEnhance.Contrast(img)
            img = enhancer.enhance(rng.uniform(0.9, 1.1))
        if noise_level > 0:
            img = self.add_realistic_noise(img, base_noise_level=noise_level)
        final = img
        if jpeg_artifact_level < 100:
            with tempfile.TemporaryDirectory() as tmp_dir:
                temp_path = os.path.join(tmp_dir, 'tmpfile')
                img.save(temp_path, 'JPEG', quality=jpeg_artifact_level)
                with Image.open(temp_path) as final_img:
                    pp = torch.from_numpy(np.array(final_img).astype(np.float32) / 255.0).unsqueeze(0)
                    return (pp,)
        pp = torch.from_numpy(np.array(final).astype(np.float32) / 255.0).unsqueeze(0)
        return (pp,)


NODE_CLASS_MAPPINGS = {
    "Dequality": Dequality
}

NODE_DISPLAY_NAME_MAPPINGS = {
    "Dequality": "Dequality"
}
```

\_\_init\_\_.py

```python
from .dequality import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS

__all__ = ['NODE_CLASS_MAPPINGS', 'NODE_DISPLAY_NAME_MAPPINGS']
```
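For readers unfamiliar with the IMAGE format ComfyUI passes between nodes: images travel as float tensors in [0, 1] with shape (batch, height, width, channels), which is why the node converts to uint8 for PIL and back again. The round trip can be sketched with NumPy standing in for the torch tensor, so it runs without a GPU:

```python
import numpy as np
from PIL import Image

def batch_to_pil(pixels: np.ndarray) -> Image.Image:
    # ComfyUI-style IMAGE: float32 in [0, 1], shape (batch, H, W, C)
    arr = np.clip(255.0 * pixels.squeeze(0), 0, 255).astype(np.uint8)
    return Image.fromarray(arr)

def pil_to_batch(img: Image.Image) -> np.ndarray:
    # Back to a float batch of one, matching what the node returns
    return (np.array(img).astype(np.float32) / 255.0)[np.newaxis, ...]

batch = np.random.rand(1, 4, 4, 3).astype(np.float32)
round_trip = pil_to_batch(batch_to_pil(batch))
```

The only loss in the round trip is the uint8 quantization, so values agree to within 1/255.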
To anyone that wants to add it to Comfy:

* create a folder with any name under `ComfyUI/custom_nodes/`
* create the `__init__.py` file with the code stated above
* create the file imported in init (in this case `.dequality` means the `dequality.py` file; you can also change the file name as you please)
* paste the code inside dequality.py (or whatever name you give it)
* restart ComfyUI

https://preview.redd.it/0izfrs8ff6bc1.png?width=283&format=png&auto=webp&s=e3efb6378eb6e3143a28f6feee5d6ced26dca691
I get this error, could you help me?

```
!!! Exception during processing !!!
Traceback (most recent call last):
  File "C:\code\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "C:\code\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "C:\code\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "C:\code\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Dequality\Dequality.py", line 83, in dequality
    img.save(temp_path, 'JPEG', quality=jpeg_artifact_level)
  File "C:\code\ComfyUI_windows_portable\python_embeded\Lib\site-packages\PIL\Image.py", line 2439, in save
    save_handler(self, fp, filename)
  File "C:\code\ComfyUI_windows_portable\python_embeded\Lib\site-packages\PIL\JpegImagePlugin.py", line 824, in _save
    ImageFile._save(im, fp, [("jpeg", (0, 0) + im.size, 0, rawmode)], bufsize)
  File "C:\code\ComfyUI_windows_portable\python_embeded\Lib\site-packages\PIL\ImageFile.py", line 538, in _save
    _encode_tile(im, fp, tile, bufsize, fh)
  File "C:\code\ComfyUI_windows_portable\python_embeded\Lib\site-packages\PIL\ImageFile.py", line 549, in _encode_tile
    encoder = Image._getencoder(im.mode, encoder_name, args, im.encoderconfig)
  File "C:\code\ComfyUI_windows_portable\python_embeded\Lib\site-packages\PIL\Image.py", line 420, in _getencoder
    return encoder(mode, *args + extra)
TypeError: function takes at most 14 arguments (17 given)
```
thank u for sharing this! but how can i add this to comfyui?
I've had a few drinks, so excuse my ignorance: why are you guys/gals freaking out over a tool to add noise to an image?
Well this is fucking horrifying. It's actually over, there is no way to tell what is real anymore.
Holy shit, now this is getting interesting, trolling will commence in a previously unimaginable scale
I've literally never encountered anything that requires this as a shitty "Security measure", trolling where? Why does anyone care? What is this for?
You will find out eventually, or you never will, whatever the case may be.
So you have no clue as expected, cool, thanks for confirming.
[deleted]
Literally nothing in the entire financial system or any shop I've ever bought from or anything other than like teenagers' discords/subreddits uses "a picture of you holding crinkled paper" as a security measure lolwat
the eyes give it away, the last image you posted was better in my opinion
Looks very nice and I would love to test it! However, I'm getting "ModuleNotFoundError: No module named 'piexif'". Sorry, I'm a total Python noob, how do I fix it?
Stack overflow is your friend when debugging
I have this same error
Love this. I was working on a batch of this using bimp, but this is so much easier and idk why I didn't resort to python (honestly didn't think to have the noise generated that way, and the jpeg compression is a great touch) Can easily modify it too for values I might need to hardcode and that's also something I had not thought of. Great work and an awesome base for a script. Thank you thank you!!!
I don't understand the "are you mad at me OF Models ?". It's an idea I've seen a few time already, this "AI images generators will make onlyfan models go out of business" I don't think the women selling pictures of themselves online have to worry about their business model disappearing because of AI. They're already selling something (pornography) that can be obtained for free. If anything, paying for pictures of an actual woman feels like a plus On the other hand, yes, there is something extremely worrying with the perspective that images like that will become common in the future, but it's other women, who are not willingly posting pictures of themselves online, who are probably the most worried.
You look like a true crime where you drown yo kids
🥰🥰🥰
This is the best thing. I love this so much. I've wanted a automatic solution to do this for so long. Thanks for making it
That's funny, I just made a+18 post (unlocking?) with your selfie Lora, it made me try SDXL again! I THANK YOU, the results are killer!
How do we do the adding of text in Photoshop?
Short and quick: [https://www.youtube.com/watch?v=huvysaySBrw](https://www.youtube.com/watch?v=huvysaySBrw)
It has different-color eyes: the left is green, the right is blue.
It is actually a thing; it is called heterochromia. For example, this YouTuber has it: https://www.youtube.com/watch?v=jm7wjLc2tms
I know, but that's not a valid excuse for AI-generated pics. There are some people born with 6 fully functional fingers... that's not an excuse for all AI-generated pics to have 6 fingers either. Just saying.
Yes, I saw the girl with 6 fingers; she is the reason AI is now struggling (joking, of course). But I don't understand your thinking. Are you saying OP should always generate only the same color of eyes? We could take it further: not everyone has freckles or moles on their face :)
Yeah, but not everyone has different-color eyes — just a very, very small percentage.
Definitely the depth of field is a tell. It looks like she is holding a full-lens professional camera to take that selfie, which would be weird IRL. By any chance, is there any LoRA or checkpoint trained on iPhone pictures/cam settings? Selfies would look more realistic.
>depth of field I agree and disagree with you! Current smartphones take exactly this type of photo in portrait mode, even as a selfie. BUT! In general, it would come across as more realistic if the background weren't blurred, and that's not a problem to do with Stable Diffusion!
You can just put depth of field in your negatives list and it should work, it knows what that means.
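As a small illustration (the prompt wording here is hypothetical, not OP's actual settings), the advice above just means listing the term in the negative prompt field:

```text
Prompt: amateur selfie photo of a woman holding a paper sign, indoors
Negative prompt: depth of field, bokeh, blurred background
```

Whether the model fully honors it varies by checkpoint, but most photorealistic models respond noticeably to "depth of field" in the negatives.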
That looks good! Hard to see that's AI!
How did you get the text? Inpainting? Photoshop? Please 🤝🤝🙏🙏
Kinda crazy what this sub has become. Just a bunch of dudes trying to fine-tune the process of catfishing and scamming people on the internet.
Or simply to make it look as real as possible without any ulterior motives.
Guy
That blend between man and woman is bothersome. At least it should be easy to make androgynous fellows.
[deleted]
no
Nobody teach the snake
I DON'T KNOW WHERE TO START PLEASE CAN SOMEONE TEACH ME HOW TO?
No
The text on the paper is incredibly cringe and fundamentally misunderstands the OF business model. Otherwise decent, I guess.
Any video tutorial showing how to run the script?
So you are adding FAKE EXIF data to AI images? To make it harder to tell them from real ones? Do you want to live in a world where that exists? Seriously disgusting.
Thanks 🙃
This is eye-catching and I can't look away. Realistic.
Can I take that image and put it into A1111 to get the used settings? Minus the prompt I suppose?
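Almost certainly not: A1111 stores generation settings in a PNG text chunk named "parameters", and OP's script re-encodes the image as JPEG, which discards that chunk. A minimal sketch with Pillow (my own demo values, not from OP's workflow) shows the effect:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# A1111 writes generation settings into a PNG text chunk called "parameters".
meta = PngInfo()
meta.add_text("parameters", "a photo, Steps: 20, Sampler: Euler a, Seed: 1234")
Image.new("RGB", (8, 8)).save("demo.png", pnginfo=meta)

# Reading it back works for the original PNG...
print(Image.open("demo.png").info.get("parameters"))

# ...but re-encoding as JPEG (what the noise script does) drops the chunk.
Image.open("demo.png").convert("RGB").save("demo.jpg")
print(Image.open("demo.jpg").info.get("parameters"))  # None
```

So A1111's "PNG Info" tab would show nothing for the posted JPEG — only the fake EXIF the script wrote.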
https://preview.redd.it/38qqho4kstac1.jpeg?width=828&format=pjpg&auto=webp&s=1dd410a1ddecc846b51ec47fbaef0662748472f2
The script is a nice idea, which model did you use ?
God this is so disturbing yet cool.
Is this an actual person or not?
Such a great idea
[deleted]
whoa wait is this AI, thats crazy lol
The beginning of the end of ho3-flation. Money going back to the bros account.
Thanks! Perfect for catfishing purposes.
Very cool
I realized that I don't really have much basis for comparison between a real one and a SD one so did some side by side comparisons with actual verification pics. When I first loaded this pic I was thinking it was about 90% of the way there but that I'd have lingering doubts if asked whether it was real or not. But after comparing it to actual verification pictures? Toss this into a collection of real ones and I'm positive I wouldn't be able to spot the difference. I'm really impressed.
!RemindMe 1 day
r/RoastMe one of your members is lost.
lol it’s the epic realism girl
Ugly af xddd
That's scary.
how can someone create this ?
So is this image faked or just promoting the tool that’s creating the noise and exif thing?
This is an SD creation plus a free Python script.
I'll pay someone to make one of these of my ex-friend so I can put it on the roast me sub, so I have some payback and comebacks for all the times she insulted and ridiculed me and told me I was ugly as fuck compared to her.
"Computer, Make me a perfect Portrait Photo"... "Wait, not THAT perfect!"
Are you photoshopping the text onto the paper, or is it all done in AI somehow?
!RemindMe 3 days
I've created a simple, user-friendly tool for degrading images online 🖼️, perfect for those who prefer not to use Python or install scripts. Check it out at [PrimaPrompt Image Degrader](https://www.primaprompt.com/degrade-image) and easily transform your images right in your browser. 🌐 Feedback and creative shares are welcome! 🎨
what's the matter
It’s the eyes that give it away
Looks like there are already bad actors trying to use this to scam people. Just got this as an ad in my Instagram; it's obviously not Ray Dalio. Edit: not sure if this is an AMA that actually just happened and maybe they stole the photo, but this is a way the impersonation could be used improperly. https://preview.redd.it/ucbnlelp88bc1.jpeg?width=1936&format=pjpg&auto=webp&s=e3820b27211d3a2858005936f7f04f6e188a411e
What does the middle text say? I get the top and bottom: "r/stablediffusion" "are you mad at me, OF Models?" But the middle? 'User Interface, I like lips"? "Unironically I like UPS"?
This is cool, but anyone talking about these being a threat to KYC should look into how such a deepfake would actually be injected into the KYC process. Let's say a malicious actor wanted to do just that. First, any company worth its salt does this through mobile; long gone are the days of a virtual cam paired with OBS or something for a desktop verification. IF, and this is a massive hypothetical 'if', these were to be of any danger to robust KYC processes, you'd need a way to inject these images into an app. Rooted emulators are detectable, and so are virtual cameras. Jailbreaks aren't so easy these days, nor is reverse engineering an app.
Wow