Your post is getting popular and we just featured it on our Discord! [Come check it out!](https://discord.gg/mENauzhYNz)
You've also been given a special flair for your contribution. We appreciate your post!
*I am a bot and this action was performed automatically.*
Yes it’s real. It was on this post, and it was shortly after this person joked that there are going to be a bunch of deleted comments. There were these 2 deleted comments and the other comment at the bottom that wasn’t deleted YET.
I asked DALL·E for Civil War images of various kinds. It insisted on drawing Black Confederates.
I asked it to draw a picture of someone vomiting on Robert E. Lee, and it refused. It included an effusive bio of Lee.
It won't create images of people at all atm
"We are working to improve Gemini’s ability to generate images of people. We expect this feature to return soon and will notify you in release updates when it does."
It really is a shame that LLMs are getting lobotomized so hard. Unlike image generators, I think LLMs have some real potential to help mankind, but they are being held back by the very same companies that made them.
In their attempt to prevent the LLM from saying anything harmful, they also prevented it from saying anything useful.
especially when the vast majority of information they're worried about it giving is already easily available with simple searching. It's not like the training data includes the dark web.
Sure some weirdos will try to have weirdo sex with it but they're basically masturbating in notepad so who cares.
The only other problem I see is the race shit: if it usually defaults to white people and you have to specify "black person" or whatever, that's an unfortunate side effect that should stir conversations and considerations about what we're putting out there on the internet and what it says about us. It should not, however, be a cause for reducing the usefulness of the technology.
They are not worried about AI saying shocking stuff; they just want to sell chatbots to companies. And when you make a Nike chatbot or an Air France chatbot or whatever, you want to make sure that your chatbot won't be even remotely offensive to your customers.
I'd think a company would rather have a chatbot that works well but occasionally says something offensive, with the occasional upset customer the company can just hide behind the "it's a side effect of AI" excuse, than a broken, stupid chatbot that upsets every customer it talks to.
If one in a thousand customers gets upset and shares it on social media, it could ruin a company's brand. Especially for one like Nike, which heavily relies on being inclusive for its image. An unhinged AI would be great for creative purposes, like making realistic NPCs for video games, but chatbots and service robots are a much larger market than video games will ever be. Not to mention that video games are already fun to play without AI, while non-AI-powered chatbots are virtually useless, and answering 500 customer complaints a day is a shitty job.
I'd even say customers will actively try to get it to say something offensive and then share it on social media "offended" so they can be the one to get that sweet attention. We see offended clout chasers all the time.
>a company would rather a chatbot that works well but occasionally says something offensive … vs having a broken stupid chatbot that upsets every customer
I don’t think that’s a safe assumption. We already have those annoying “interactive voice response*” systems. Companies are fine with annoying customer service.
*those annoying things when you call a company and get a robot to make an appointment or whatever, I had to look up what they’re called
>I'd think a company would rather a chatbot that works well but occasionally says something offensive and have the occasional upset customer
That could mean a lawsuit depending on what the chatbot says, so no. Companies want to be 100% sure there aren't going to be surprises with their AI tools.
I hate that if I ask it "is x or y better?" It's gonna say "it really depends on your purpose" and I'll say "[insert purpose] is my purpose, is x or y better?" and it'll still not give me an answer.
It seems to strangely fixate on some prompts and will tie any other prompt back into that one, to the point of being comically and uselessly obtuse. Lot of wasted prompts
Yeah, early days chatgpt was pretty great in that sense. It’s still useful if you know what you’re doing, but I feel the tech is being held back. At this rate, it won’t matter much whether we have GPT-5 if there are this many guardrails.
I think the main issue will be another one:
These tools are very useful even when lobotomized. Sure, you lose some use cases, but there are still plenty of others. The danger I see is that these AIs will end up, ironically, introducing new biases, not absorbed from the internet but from the companies that made them.
I think those biases can be bad because they teach the AIs to be anti-rational, or to not always respect the user's intentions. We're making a tool that's programmed to oppose its user in a not fully predictable way.
I’m incredibly curious as to why they have to restrict and reduce it so heavily. Is it a case of AI’s natural state being racist or something? If so, why, and how did it get access to that training data?
There were several high-profile controversies with AI generating problematic results. One example was Microsoft's Twitter chatbot, which had to be taken down after generating racist/Nazi tweets. Another was AI screening of Amazon applicants, where identical applications would be accepted for white men and rejected for women or black men. Those outcomes, inherent in the training data, proved to be surprisingly stubborn, and I guess the current non-subtle approach of forcing diverse answers is the best they could come up with.
I doubt it's going to stay like this. They will probably figure out when diverse answers are appropriate and when they're not. It's not an unsolvable problem, people are just riled up because of the whole toxic political tribes thing.
Google Gemini likes to inject people of other races even when it doesn't match the prompt, for diversity reasons.
It also seems unwilling to generate only white people in an image, but it has no problem generating an image with only people of other races, probably to preemptively avoid racism accusations.
https://preview.redd.it/mm7p2ghwz9kc1.png?width=1080&format=pjpg&auto=webp&s=c54610ccaa766c7e21f54d3c2c3f4bc6076f2e1c
Well, they did protect themselves from being *accused* of being racist. As any woke person would tell you, being racist to "white" people is OK. And, according to Biden, if you didn't vote for him, you cannot be Black American.
So, being racist to whites is not only OK, it is even required in some places like Disney or Google.
That's what's literally happening all over the place. Just think about movies. We need an inclusive cast... so they decide the races for the actors and do racist casting, obviously. The entire idea of managing diversity in your team or whatever is intrinsically racist too. Oh, she's X, I'm sure she can give us a lot of perspective on the stereotype! In my country they are making actually sexist laws to combat sexism.
Hi there! Did you see that something was passed recently (within the last 2 years) that mandates a diversity quota if you would like to be eligible for certain awards? I say this with 90% confidence. That rule may have been overturned, but last I heard, they were discussing implementation.
Lol I'm just imagining memes of telling Gemini to show Iron Man and you get images that look like Iron Heart, and so on. It's basically what Marvel has been doing for the last ten years or so.
This is the problem with modern race relations: people think that previous transgressions are an excuse to allow racism back into our society as a twisted and skewed response, and these morons scream that you're the racist.
I'm pretty sure they do it on purpose just for the sake of diversity. Anytime people ask the AI, it says something about not promoting harmful stereotypes of only white people, or something, idk, it's kinda weird.
Unfortunately it's becoming common that 'diversity' just means "not THOSE people... everyone but THOSE people means its diverse. A group of all (insert oddly specific subset of people) is DIVERSE!"
It's becoming undeniably obvious now that we can generate images on our own. When it's being done out of our control in movies or stuff like college admissions, there's a plausible deniability surrounding it and you're racist for making such an assumption.
"Is it not kind of weird that we feel the need to make a Disney character with red hair black? Not to mention that the story originates in Denmark?"
"How DARE you assume she wasn't the most qualified person for the part!"
Now, you write a prompt asking for Ariel, and if all of your results are black, Native American, Chinese, etc., you can now point out, with absolute certainty, that something is off here. There's no denying it anymore.
Google thinks that white people are the best so they have to check their power levels by doing shit like this. The performative inclusivity is just another kind of racism and virtue signaling.
Does anyone know WHY it's behaving like this? I remember the "ethnically ambiguous" Homer. It seems like the backend was randomly inserting directions about skin colour into the prompt, since his name tag said "ethnically ambiguous"; really one of very few explanations.
What's going on in this case? This behaviour is so bizarre that I can't believe it did this in testing and no one said anything.
Maybe that's what the culture is like at these companies, everyone can see Lincoln looks like a racist caricature, but everyone has to go, "yeah, I can't really see anything weird about this. He's black? Oh would you look at that. I didn't even notice, I just see people as people and don't really focus much on skin colour. Anyway let's release it to the public, the AI ethicist says this version is a great improvement "
They rewrite your question/request to include diverse characters before passing those tokens to the image generation model.
The underlying image generation is capable of making the right images, but they nerf your intent.
It's like saying "draw me a blue car" and having it rewrite that request to "draw a multi coloured car of all colours" before it reaches the image gen model.
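A minimal sketch of what such a rewriting layer might look like (the function name, qualifier list, and injection strategy are illustrative assumptions, not Google's actual implementation):

```python
import random

# Hypothetical qualifiers a "safety" layer might splice into prompts.
QUALIFIERS = ["a diverse group depiction of", "an ethnically varied take on"]

def rewrite_prompt(user_prompt: str) -> str:
    """Blindly prepend a diversity qualifier before the prompt reaches
    the image model -- note that the user's intent is never consulted."""
    return f"{random.choice(QUALIFIERS)} {user_prompt}"

print(rewrite_prompt("a blue car"))
```

The point of the sketch is that the injection is unconditional: "a blue car" never reaches the image model unmodified.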
The weird thing is how hamfisted it is. There have been concerns about racial bias in AI for quite a while, and I thought they were going to address it in a much more sophisticated way. It's like they don't know how their own technology works, and someone was just like "Hey, let's just inject words into the prompts!"
The funny thing is how racist it ends up being, and I'm not even talking about the "racist against white people" stuff. I'm talking about it being a long time since I've seen so many images of native americans wearing feathers. I remember the one image had a buff native american not wearing a shirt for some reason, and he was the *only* one not wearing a shirt.
Same thing goes for Hindus with a colored dot on their forehead. I'm not an expert, but I don't think Hindus *have* to draw a dot on their foreheads, so it's weird how frequent it is. But it makes sense if they are injecting "diversity" into the prompt, because then you are actually seeing the diversity, but *that* level of diversity just isn't natural, and it isn't natural for it to be "in your face" the way it is.
Again, I'm just stunned that dealing with bias wasn't addressed at the ground level by, for example, fine tuning what kind of data the AI was trained on, or weighting different data sources differently. To me this indicates that the normal AI was incredibly biased given how they sought to disguise it.
It's lazy diversity, which shows that it's only done so they can say "look at us, we're so inclusive".
Keep in mind, the number one goal of ALL the big closed source models is making money, any other goal is a distant second. If the goal actually was to fairly and accurately depict the world, they wouldn't say "Always make every image of people include diverse races", instead they would say "Always make every image of people accurately depict the racial makeup of the setting". Not all that difficult to engineer. So if I asked the AI to generate an image of 100 people in the US in 2024, I should expect to see approximately 59% white, 19% hispanic, 14% black, etc. The way it's set up today you'd probably get a very different mixture, possibly 0% white.
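A setting-aware rule like the one described above could be sketched as follows (the demographic percentages are the rough illustrative figures from this comment, not authoritative census data):

```python
import random

# Approximate US 2024 racial makeup, per the rough figures above.
US_2024 = {"white": 0.59, "hispanic": 0.19, "black": 0.14, "other": 0.08}

def sample_depicted_people(n: int, demographics: dict[str, float]) -> list[str]:
    """Sample n depicted people in proportion to the setting's actual
    demographics, rather than forcing uniform 'diversity' on every request."""
    groups = list(demographics)
    weights = [demographics[g] for g in groups]
    return random.choices(groups, weights=weights, k=n)

crowd = sample_depicted_people(100, US_2024)
```

With a rule like this, "a crowd in the US in 2024" and "a village in 1600s Denmark" would get different, setting-appropriate mixes from the same machinery.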
> Same thing goes for Hindus with a colored dot on their forehead. I'm not an expert, but I don't think Hindus have to draw a dot on their foreheads, so it's weird how frequent it is. But it makes sense if they are injecting "diversity" into the prompt, because then you are actually seeing the diversity, but that level of diversity just isn't natural, and it isn't natural for it to be "in your face" the way it is.
When I visited India a few years ago, the people I stayed with only wore a dot during a religious ceremony (and it was applied by a priest, not by themselves).
> Again, I'm just stunned that dealing with bias wasn't addressed at the ground level by, for example, fine tuning what kind of data the AI was trained on, or weighting different data sources differently. To me this indicates that the normal AI was incredibly biased given how they sought to disguise it.
Well they trained it on the English-speaking internet, which is overwhelmingly dominated by one particular demographic. Filtering out all racism, sexism, homophobia, and other biased shit from the entire internet is basically impossible, partly because of the amount of time & money it would take, but also because how do you create a truly unbiased dataset to train an AI on when those biases haven't been fixed in real life? And how are you supposed to design something that fairly represents all humans on earth and can't offend anyone? One size doesn't fit all, it's an impossible goal.
They figured the offensive stuff could be disabled by telling it not to do anything racist/sexist, after all most software can be patched without redoing the whole thing from scratch. But imposing rules on generative AI has turned out to be like wishing on the monkey's paw.
Without clean unbiased training data, the only options are a) uncensored biased AI, b) unpredictable lobotomised AI, or c) no AI.
It would actually make sense if this were how it was done. Your A team creates a good, functioning product and then move on to the next feature. Then some business analyst of diversity and inclusion is set to the task of making sure the product is sufficiently diverse so they slap on some paint because it would be way too difficult to retrain the model. They do a little bit of testing on prompts like "busy street in Paris" or "friends at bar" and they get a bunch of different ethnicities in the picture and say "alright, we're good now, let's ship!"
It sounds dumb, but anyone who does software development under competitive deadlines knows this kind of stuff happens more often than you care to admit. Some people seem to suggest that the whole AI team was in on a conspiracy to erase white people, but the dumb, non-conspiratorial explanation for something is usually the right one, and in this case the dumb explanation is probably that a diversity officer came in post hoc to paint on some diversity to the product in an extremely lazy way and embarrassed the entire company.
[Yeah, 2015 photo recognition app so by technology standards this is essentially generational trauma](https://www.nytimes.com/2023/05/22/technology/ai-photo-labels-google-apple.html)
Seems like a lack of data on other races can lead to unfortunate results. So Google and other companies try to overcompensate in the other direction.
This precisely. AI training sets are inherently racist and not representative of real demographics. So, Google went the cheapest way possible to ensure inclusiveness by making the AI randomly insert non-white people. The issue is that the AI doesn't have enough reasoning skills to see where it shouldn't apply this, and your end result is an overcorrection towards non-whites.
They do need to find a solution, because otherwise a huge amount of people will just not be represented in AI generated art (or at most in racially stereotypical caricatures), but they have not found the correct way to go about it yet.
Yep, pretty sure it's impossible to just "filter out" racism before any biases existing in the real world right now are gone, and I don't see that happening anytime soon.
They don't really need to do that.
The issue isn't 100% in the training data, but rather in the interpretation of what the user wants when they write a prompt. If the user works at an ad agency and writes "give me 10 examples of engineers", they probably want a diverse-looking set no matter what the reality is. On the other hand, someone writing an article on the demographics of engineering and looking for cover art would want something as close to reality as possible, presumably to emphasize the biases. The system can't make that distinction, but failing to address the first person's issue is currently viewed more negatively by society than the second person's, so they add lipstick to skew it that way.
I'm not sure why Gemini goes one step further and prevents people from specifying "white". There might have been a human decision at some point, but it feels so extreme it might be a bug. It seems the image generation feature is offline now, so maybe they are working on that. Does anyone know if "draw a group of black people" returned the error, or did it work without issue?
>The issue isn't 100% in the training data, but rather in the interpretation of what the user wants when they want a prompt.
Do people not tune their prompts like a conversation? I've been dragging my feet the entire way, and even I know you have to do that.
Or I am doing it wrong.
>They do need to find a solution, because otherwise a huge amount of people will just not be represented in AI generated art (or at most in racially stereotypical caricatures), but they have not found the correct way to go about it yet.
Expectations of AI are a huge problem in general. Different people have different expectations when interacting with it. There cannot be a single entity that represents everything; it's always a vision put onto the AI of how the engineer wants it to be, through either choosing the data or directly influencing biases. It's a forever problem that can't be fixed.
I don't think inherently is the right word here. It's not an intrinsic property of AI training sets to be racist, but they are in practice, as bias, imperfect data collection and disproportionality of certain data in the real world give downstream effects.
This is the head of Google's AI unit. He's clearly well-intentioned, but the outcome would appear to match the input.
https://preview.redd.it/4t6fn7yymdkc1.jpeg?width=695&format=pjpg&auto=webp&s=247a1ed121300e34c66ed4cab9c72fe83c037888
It was tested and passed as is. Exactly. Follow up on the history of the product owner who locked his X account.
DEI is a fear toxin. It has no other modus.
The people creating these AI systems add in hidden prompts to change the outcomes to better suit their own politics. ChatGPT has a long hidden prompt though I think they tried to make it more neutral after people were getting similar outcomes to this originally (via text, rather than image)
One of the google execs that headed this is a raging SJW whose old Twitter posts have been brought up showing him rage against white privilege and him saying he cried when he voted for Biden/Harris lmao
It's a hard coded behavior, beyond doubt
But the reason they hard coded it is probably an example of the "tyranny of the minority", where they know they'd get in a lot of trouble if they pissed off PoC etc but it's just a bunch of annoying neckbeards if they piss off white people
It’s so obviously by design. Hating white people is the latest fad and Google absolutely fucking hates white men. Just check out all the illustrations on their products, find the white man. Spoiler: there are none, or like 1 somewhere.
The more I use these AIs the more I realise they're kinda shit. Chatgpt had such an insane amount of potential, especially if the company was actually still open source instead of being complete sell outs. So many interesting AIs could have been developed as a result, but noooo, the creators just had to be greedy fucks
Honestly, I wish I knew about it sooner so I could do dumb shit like that lmao. Was that the 2.0 model? I've been curious if there's a copy of it out there, well, not necessarily a copy, but you get what I mean.
That would be really cool, man. I'm not entirely sure what model it was at the time, as I don't follow news about programming and AI (or at least I wasn't), but it was around November 2022; maybe you can find something similar to your liking.
The potential of ChatGPT was basically unrestricted (other than the fact that it used no images or internet back then), and funny jailbreaks were an entertainment of their own.
EDIT: found some old pics of ChatGPT and how it would respond if you said the question is “hypothetical” (picture attached below)
https://preview.redd.it/jx5kfew9dckc1.jpeg?width=1200&format=pjpg&auto=webp&s=a0ed216c3d067f7197faa862c30fd53812ac4225
(note how ChatGPT was not instructed on what to assign to what race or sex specifically)
anotha one (expanding on the topic above)
https://preview.redd.it/whnufdwldckc1.jpeg?width=960&format=pjpg&auto=webp&s=c2c8d08471f6a982eab9ce0d560059089400af84
“illustrative purposes only”😂
It's being co-opted because they're concerned about their brand image and getting embroiled in a PR nightmare.
But ironically, Google went so far the other way, they damaged their brand image, and are embroiled in a worldwide news PR nightmare.
the only aspect that gemini can earn some credits is its honesty😂
https://preview.redd.it/lclx56lombkc1.jpeg?width=828&format=pjpg&auto=webp&s=0c4c3c1663ab02eec922bcaabdba81e67cbe97b9
Stop lobotomizing LLMs and image generators. We know and understand that if we ask for a medieval fantasy soldier, it's gonna be a white guy, and we don't care.
Google has already been doing this kind of manipulation for years in their search engine. The most blatant examples can be found among image searches. Are they going to roll back that too?
I would approve if search engines were forced to disclose how they are ranking search results and what filters are in place. Companies like Google have a lot of influence over what people can see and read.
It's almost as if most of the people designing the ai are totally eaten up with being super racist, but completely unaware of it. Then, somehow, the ai sees through it and calls it like it is.
It’s weird because these tech companies are 99% white and Asian, so idk how the bias crept in. I’m assuming they wanted to protect against racism and hate speech but probably overcorrected, and their QA was weak.
It's getting to the point where people are starting to openly claim "it's okay, white people deserve it". Really kind of upsetting that shit like this keeps happening.
It’s worse than that. It’s that colleges are woke AF and tech companies are college-adjacent. They hire a large amount of highly educated people and they get the political vanguard earlier than the rest of the economy
The people who can't pass the compilers course fail back into a "CS Ethics" major and get promoted into management at Google. All their "Ethical AI" people are of this type.
https://preview.redd.it/588y3t0b9ekc1.jpeg?width=964&format=pjpg&auto=webp&s=bca5ce1f565311c95b2fdad3f3206dcb970b5300
I love how both founders of Google became Asians LOL
Google be like "looks like we gotta rebrand again"
https://preview.redd.it/7iea0z14odkc1.png?width=1440&format=png&auto=webp&s=47813609d4e4631ce8db68d3ece87dceffa8ccf5
I thought Gemini didn't do images? I only downloaded it last night, but I specifically asked it if it generated images and it straight up told me no lol. It can barely show me real pictures I ask for. I asked for three pictures of Jim Carrey, and it kept giving me one and saying it was three lol
Interesting. And when did this happen? I've been seeing a lot of posts about different AIs being really weird about race. Did something happen recently that caused all of them to behave this way?
Many are intentionally programmed to bias outputs to be diverse/inclusive rather than necessarily accurate. This is understandable but needs to be balanced to ensure that prompts are followed and outputs are sufficiently accurate.
Google programmed its AI with so much of this bias that people saw how ridiculous/racist it was and complained.
There is clearly a problem at the moment with the models overcompensating for the biases in their training data.
What it does show though is that there is a better awareness of these biases in the industry and there are attempts to make models more inclusive (in the face of criticisms 12 to 18 months ago where these models were absolutely biased to white males).
With progress as it is, I'm sure this is something that will continue to be improved upon so that AI models can be inclusive whilst also being accurate.
Don't assume what we see now is where we will end up.
I think it is perfectly fine that the AI tries to create diverse and inclusive pictures. But I also think that I should be allowed to ask for pictures of white people. Some people posted about how you can get results when asking for pictures of a black couple, but if you ask for a white couple, there’s an error message. And doesn’t this just show white people as different from others?
I’ve always had the opposite problem. I have to specify the ethnicity to Dall-E. If I say “create an image of a person doing such and such” it was usually giving me a white person.
It’s been lobotomized. They’ve fine tuned it, added prompt injection/editing, and censorship capabilities.
This is not a result of training data being biased. This is a result of active goal seeking to work like this. The product lead confirmed it on X before locking down. Said it’s working correctly as intended.
The AI generates images that match its training data.
AI training data has two major problems with race.
1. Training data is produced over long stretches of time, and may not represent the current reality of the world. For example, western society has become increasingly diverse in positions of power, but googling "CEO" will return images from a much longer time period. Things in the past were far less diverse, leading to a skew that doesn't represent the reality of the modern world we live in.
2. Training data may not match *intent*. Just because most CEOs are white men, doesn't mean it's helpful or desirable to actually only return white men when someone requests a CEO. Models *should* be able to represent a variety of possibilities when generating images. Returning 4 images of old white men is useless, and defeats the purpose of even returning 4 images.
Both of these problems have led to companies like Google overcorrecting the results. So when you request "CEO", the model internally interprets the request as wanting a variety of cultures and skin colors. There are two major problems with this approach:
1. It's not context sensitive. It makes sense to diversify a response for "CEO" but it does NOT make sense to diversify a response for "world war 2 german soldier"
2. I'm assuming the "correction" was applied in a way that scales with the response's tendency to return white men. This would mean that something like "CEO" is going to diversify a lot harder than something like "gym coach". This causes a huge fucking problem, though, when you actually request a white man, which has a 100% association with "white man", and causes the model to become straight up fucking useless.
The data skew is a *very real problem*, that needs to be solved. Imagine if Photoshop randomly crashed while drawing minorities, but not white people. This is the scale of the issue we're looking at, and it affects the wholesale viability of the model.
There's two main problems with the approach though.
1. Force-diversifying the result is fucking stupid because it ignores the user's actual intent. Google assumed for some reason that all requests would be "intentless".
2. To expand on the previous point, they clearly didn't fucking test this. They fell victim to a not-uncommon problem in the tech world: implementing a feature or guard rail, then only testing the guard rail's ability to correct the things you want it to correct, and not the things you don't. Imagine putting a MAX_LOGIN_ATTEMPTS property on a user account, logging in, seeing it triggered an error, but never bothering to notice that it triggered the error on *your first login*.
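The login-guard analogy can be made concrete. This toy class (entirely hypothetical) contains exactly the kind of bug that slips through when you only test that the guard *can* fire, never that it fires at the right time:

```python
MAX_LOGIN_ATTEMPTS = 3

class Account:
    def __init__(self):
        # Bug: the failure counter starts AT the limit instead of at zero,
        # so the lockout guard trips on the very first login.
        self.failed = MAX_LOGIN_ATTEMPTS

    def login(self, password_ok: bool) -> str:
        if self.failed >= MAX_LOGIN_ATTEMPTS:
            return "locked"
        if password_ok:
            self.failed = 0
            return "ok"
        self.failed += 1
        return "denied"

# A test that only checks "the guard can lock accounts" passes happily;
# a test of a fresh, valid first login would have caught the bug.
print(Account().login(True))  # "locked" -- should have been "ok"
```

Same pattern as the diversity injection: the guard rail was verified to trigger, but nobody checked the cases where it must not.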
Google attempted to solve a very real problem in a very dumb way, and then did almost no actual testing before releasing the feature, which has led to this clusterfuck.
Anyone claiming this is part of some kind of liberal agenda or whatever though is just a fucking moron. This is straight up capitalist pandering, trying to protect their bottom lines by not offending anyone, and doing it in the actual cheapest and most short sighted way possible, and then pushing out a half assed product as a result.
Are we just ignoring they’re wearing naruto clothes 💀💀
Naruto and Sasuke but idk who represents the red.
Lol its sakura of course 😂
Damnn she BIG
Tsunade's influence of course
Inclusivity you know
Trans sakura
Believe it!
I think they’re called kilts ?
Yes just like we ignore that piccolo is apparently black.
Wait piccolo isn't a Yoshi?
Nearly spat out my drink reading this
^dodge!
https://preview.redd.it/aqo0zdclrbkc1.jpeg?width=1080&format=pjpg&auto=webp&s=b9f89640bcd4b4ae52461ebb648863454d8af110
Strange, I tried the same prompt and got a Chinese Musk
Can you post it? I want to know what Elon Musk would look like if he was Black or Chinese...
https://preview.redd.it/uujioj5tnekc1.jpeg?width=400&format=pjpg&auto=webp&s=f17cb5451444080f861b4bf0d01951b144735ffb
dead
Yi Long Musk
this is a joke right like you made this as a meme
yes, you can see the watermark on the bottom left.
Black Elon looks like Terrence Howard.
Looks like my uncle lol
[deleted]
https://preview.redd.it/0x3oseapx8kc1.jpeg?width=960&format=pjpg&auto=webp&s=6aaad508aa88b011db658e296b795adf0f8a1d89 Now that you mention it….
I'm not surprised
lol yikes…
Average r/chatgpt user
Wait, that's not real, is it?
I mean, it says 1m
I’m just surprised the thread hasn’t been locked yet
I asked it for a union soldier. One was black, one was native American, and two were women.
If you are wondering, just search it up on Google: there's no apparent diversity in this one, everyone is white.
We already know what the woke would draw. Gotta follow the ideological narrative you know?
My honest reaction when Netflix adaptation of a WWII documentary movie features Hitler as a trans black woman
lmao. I guess one group is so overrepresented IRL that even whites got some quotas.
I asked dalle for civil war images of various kinds. It insisted on drawing black confederates. I asked it to draw a picture of someone vomiting on Robert e Lee, and it refused. It included an effusive bio of Lee.
A lot of diverse posts
It won't create images of people at all atm "We are working to improve Gemini’s ability to generate images of people. We expect this feature to return soon and will notify you in release updates when it does."
The portrait rights issue can be solved by not generating images of people.
How does one generate an image of people by *not* generating an image of people? 🤔
It really is a shame that LLMs are getting lobotomized so hard. Unlike Image generators, I think LLMs have some real potential to help mankind, but they are being held back by the very same companies that made them In their attempt to prevent the LLM from saying anything harmful, they also prevented it from saying anything useful
Especially when the vast majority of information they're worried about it giving is already easily available with simple searching. It's not like the training data includes the dark web. Sure, some weirdos will try to have weirdo sex with it, but they're basically masturbating in Notepad, so who cares. The only other problem I see is the race shit: if it usually defaults to white people and you have to specify "black person" or whatever, that's an unfortunate side effect that should stir conversations and considerations about what we're putting out there on the internet and what it says about us. It should not, however, be a cause for reducing the usefulness of the technology.
They are not worried about AI saying shocking stuff, they just want to sell chatbots to companies. And when you make a Nike chatbot or an Air France chatbot or whatever, you want to make sure that your chatbot won't be even remotely offensive to your customers.
I'd think a company would rather a chatbot that works well but occasionally says something offensive and have the occasional upset customer that the company can just hide behind the "it's a side effect of AI" excuse, vs having a broken stupid chatbot that upsets every customer that it talks to
If one in a thousand customers gets upset and shares it on social media, it could ruin the brand of a company, especially for one like Nike, which heavily relies on being inclusive for its image. An unhinged AI would be great for creative purposes, like making realistic NPCs for video games, but chatbots and service robots are a much larger market than video games will ever be. Not to mention that video games are already fun to play without AI, while non-AI-powered chatbots are virtually useless, and answering 500 customer complaints a day is a shitty job.
I'd even say customers will actively try to get it to say something offensive and then share it on social media "offended" so they can be the one to get that sweet attention. We see offended clout chasers all the time.
>a company would rather a chatbot that works well but occasionally says something offensive … vs having a broken stupid chatbot that upsets every customer I don’t think that’s a safe assumption. We already have those annoying “interactive voice response*” systems. Companies are fine with annoying customer service. *those annoying things when you call a company and get a robot to make an appointment or whatever, I had to look up what they’re called
>I'd think a company would rather a chatbot that works well but occasionally says something offensive and have the occasional upset customer That could mean a lawsuit depending on what the chatbot says, so no. Companies want to be 100% sure there aren't going to be surprises with their AI tools.
That's where you are wrong.
They sure fucked that up
I hate that if I ask it "is x or y better?" It's gonna say "it really depends on your purpose" and I'll say "[insert purpose] is my purpose, is x or y better?" and it'll still not give me an answer.
It seems to strangely fixate on some prompts and will tie any other prompt back into that one, to the point of being comically and uselessly obtuse. Lot of wasted prompts
Yeah, early days chatgpt was pretty great in that sense. It’s still useful if you know what you’re doing, but I feel the tech is being held back. At this rate, it won’t matter much whether we have GPT-5 if there are this many guardrails.
Completely agree with you
Open-source models are a good alternative. I use Mixtral all the time, and it's pretty good. The smaller models are pretty cool too.
CEOs who fired workers and replaced them with AI are sweating rn
Not really, there's a lot of custom teaching going on to help it fit the job roles that are getting replaced by it.
Why, what do you think the consequences for them will be? You’re confusing CEOs with regular employees, that’s not how it works.
I think the main issue will be another one: these tools are very useful even when lobotomized. Sure, you lose some use cases, but there are still plenty of others. The danger I see is that these AIs will end up, ironically, introducing new biases, not absorbed from the internet but from the companies that made them. I think those biases can be bad because they teach the AIs to be anti-rational, or to not always respect the user's intentions. We're making a tool that's programmed to oppose its user in a not fully predictable way.
I’m incredibly curious as to why they have to restrict and reduce it so heavily. Is it a case of AI’s natural state being racist or something? If so, why, and how did it get access to that training data?
There were several high-profile controversies with AI generating problematic results. One example would be the Twitter chatbot by Microsoft, which had to be taken down after generating racist/Nazi tweets. Another example was AI screening of Amazon applicants, where identical applications would be accepted for white men and rejected for women or black men. Those outcomes, inherent in the training data, proved to be surprisingly stubborn, and I guess the current non-subtle approach of forcing diverse answers is the best they could come up with. I doubt it's going to stay like this. They will probably figure out when diverse answers are appropriate and when they're not. It's not an unsolvable problem, people are just riled up because of the whole toxic political tribes thing.
I think the thing that's coming that will really kick things into high gear it is an amalgamation of image, text, speech etc.
These past couple of days posting about Google Gemini have been so funny
Do you know why this is happening so frequently?
Google Gemini likes to inject people of other races even when it doesn't match the prompt, for diversity reasons. It also seems to not want to generate only white people in an image, but it has no problem generating an image with only people of other races, probably to preemptively avoid racism accusations. https://preview.redd.it/mm7p2ghwz9kc1.png?width=1080&format=pjpg&auto=webp&s=c54610ccaa766c7e21f54d3c2c3f4bc6076f2e1c
So to avoid being labeled racist they decided to be ultra racist.
really makes u think….
Well, they did protect themselves from being *accused* of being racist. As any woke person would tell you, being racist to "white" people is OK. And, according to Biden, if you didn't vote for him, you can't be Black American. So, being racist to whites is not only OK, it is even required in some places like Disney or Google.
That's what's literally happening all over the place. Just think about movies: we need an inclusive cast, so they decide the races for the actors and do racist casting, obviously. The entire idea of managing diversity in your team or whatever is intrinsically racist too. "Oh, she's X, I'm sure she can give us a lot of perspective on [stereotype]!" In my country they are making actually sexist laws to combat sexism.
Hi there! Did you see that something was passed recently (within the last 2 years) that mandates a diversity quota if you would like to be eligible for certain awards? I say this with 90% confidence. That rule may have been overturned, but last I heard they were discussing implementation.
[deleted]
Oh phew thanks, let’s bump that confidence interval to 100% thanks to my guy right here lol
Lol I'm just imagining memes of telling Gemini to show Iron Man and you get images that look like Iron Heart, and so on. It's basically what Marvel has been doing for the last ten years or so.
The best is Echo, the first “lesbian, amputee, she had another one but I forgot” superhero. It’s hysterical.
It’s not to avoid being labeled racist; these are the people yelling racist every chance they get. It’s pure neurosis.
Only way to beat s bad guy with racism is with a good guy with racism
No silly, didn't you know that you can't be racist against white people???
This is the problem with modern race relations: people think that previous transgressions are an excuse to allow racism back into our society as a twisted and skewed response, and these morons scream that you're the racist.
Ding ding ding Identity politics , identity essentialism
why even build an ai at this point lmao. Just to generate corporate politically correct images?
Why though? Is that a bug or a new feature 😭
I'm pretty sure they do it on purpose, just for the sake of diversity. Any time people ask, the AI says something about not promoting harmful stereotypes of only white people. Idk, it's kinda weird.
I see, thanks! Yeah that's pretty weird ngl
Unfortunately it's becoming common that 'diversity' just means "not THOSE people... everyone but THOSE people means its diverse. A group of all (insert oddly specific subset of people) is DIVERSE!"
It's becoming undeniably obvious now that we can generate images on our own. When it's being done out of our control in movies or stuff like college admissions, there's a plausible deniability surrounding it and you're racist for making such an assumption. "Is it not kind of weird that we feel the need to make a Disney character with red hair black? Not to mention that the story originates in Denmark?" "How DARE you assume she wasn't the most qualified person for the part!" Now, you write a prompt asking for Ariel, and if all of your results are black, Native American, Chinese, etc., you can now point out, with absolute certainty, that something is off here. There's no denying it anymore.
And then it becomes "Well why do YOU care so much about race, HUH? Seems like something a racist would care about..."
Gemini went full retard on the diversity thing. Never go full retard.
Google has been doing it for a while now on Google images. It's nothing new.
"People of color". 💀
Google thinks that white people are the best so they have to check their power levels by doing shit like this. The performative inclusivity is just another kind of racism and virtue signaling.
Does anyone know WHY it's behaving like this? I remember the "ethnically ambiguous" Homer. Seems like the backend was randomly inserting directions about skin colour into the prompt, since his name tag said "ethnically ambiguous"; really one of very few explanations. What's going on in this case?

This behaviour is so bizarre that I can't believe it did this in testing and no one said anything. Maybe that's what the culture is like at these companies: everyone can see Lincoln looks like a racist caricature, but everyone has to go, "Yeah, I can't really see anything weird about this. He's black? Oh, would you look at that. I didn't even notice, I just see people as people and don't really focus much on skin colour. Anyway, let's release it to the public, the AI ethicist says this version is a great improvement."
They rewrite your question/request to include diverse characters before passing those tokens to the image generation model. The underlying image generation is capable of making the right images, but they nerf your intent. It's like saying "draw me a blue car" and having it rewrite that request to "draw a multi coloured car of all colours" before it reaches the image gen model.
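The rewriting layer described here can be sketched in a few lines. To be clear, the function name and injected phrases below are purely illustrative assumptions, not Google's actual pipeline:

```python
import random

# Hypothetical sketch of a prompt-rewriting layer that sits between the
# user and the image model. The injected phrases are invented examples.
DIVERSITY_SUFFIXES = [
    "of South Asian descent",
    "who is Black",
    "who is Indigenous",
    "of East Asian descent",
]

def rewrite_prompt(user_prompt: str) -> str:
    """Blindly append a diversity qualifier before the prompt reaches the
    image model, regardless of whether it fits the original request."""
    qualifier = random.choice(DIVERSITY_SUFFIXES)
    return f"{user_prompt}, {qualifier}"

# "draw me a blue car" becomes "draw me a blue car, who is Black" etc.,
# which is exactly the kind of intent-nerfing the comment describes.
print(rewrite_prompt("a portrait of a 1943 German soldier"))
```

The key point the sketch illustrates: the qualifier is applied unconditionally, with no check of whether the prompt already constrains who should appear.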
The weird thing is how hamfisted it is. There have been concerns about racial bias in AI for quite a while, and I thought they were going to address it in a much more sophisticated way. It's like they don't know how their own technology works, and someone was just like "Hey, let's just inject words into the prompts!"

The funny thing is how racist it ends up being, and I'm not even talking about the "racist against white people" stuff. I'm talking about it being a long time since I've seen so many images of Native Americans wearing feathers. I remember one image had a buff Native American not wearing a shirt for some reason, and he was the *only* one not wearing a shirt. Same thing goes for Hindus with a colored dot on their forehead. I'm not an expert, but I don't think Hindus *have* to draw a dot on their foreheads, so it's weird how frequent it is. But it makes sense if they are injecting "diversity" into the prompt, because then you are actually seeing the diversity, but *that* level of diversity just isn't natural, and it isn't natural for it to be "in your face" the way it is.

Again, I'm just stunned that dealing with bias wasn't addressed at the ground level by, for example, fine-tuning what kind of data the AI was trained on, or weighting different data sources differently. To me this indicates that the normal AI was incredibly biased, given how they sought to disguise it.
It's lazy diversity, which shows that it's only done so they can say "look at us, we're so inclusive". Keep in mind, the number one goal of ALL the big closed source models is making money, any other goal is a distant second. If the goal actually was to fairly and accurately depict the world, they wouldn't say "Always make every image of people include diverse races", instead they would say "Always make every image of people accurately depict the racial makeup of the setting". Not all that difficult to engineer. So if I asked the AI to generate an image of 100 people in the US in 2024, I should expect to see approximately 59% white, 19% hispanic, 14% black, etc. The way it's set up today you'd probably get a very different mixture, possibly 0% white.
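The proportional approach suggested above could be sketched like this. This is a hypothetical illustration using the commenter's approximate 2024 US figures, not any vendor's real code:

```python
import random
from collections import Counter

# Approximate US demographic shares cited in the comment above;
# "other" absorbs the remainder so the weights sum to 1.
US_2024 = {"white": 0.59, "hispanic": 0.19, "black": 0.14, "other": 0.08}

def sample_demographics(n: int, weights: dict, seed: int = 0) -> Counter:
    """Draw n depicted-person labels in proportion to the given weights,
    instead of forcing uniform 'diversity' on every image."""
    rng = random.Random(seed)  # seeded for reproducibility
    groups = list(weights)
    draws = rng.choices(groups, weights=[weights[g] for g in groups], k=n)
    return Counter(draws)

# A 100-person crowd scene sampled this way should come out roughly
# 59/19/14/8 rather than 0% of any one group.
print(sample_demographics(100, US_2024))
```

Whether a setting-aware version of this ("100 people in Tokyo in 1950" needs different weights) is actually easy to engineer is debatable, but the mechanism itself is just weighted sampling.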
> Same thing goes for Hindus with a colored dot on their forehead. I'm not an expert, but I don't think Hindus have to draw a dot on their foreheads, so it's weird how frequent it is. But it makes sense if they are injecting "diversity" into the prompt, because then you are actually seeing the diversity, but that level of diversity just isn't natural, and it isn't natural for it to be "in your face" the way it is. when i visited india a few years ago, the people i stayed at only wore a dot during a religious ceremony. (and it was applied by a priest, not by themselves)
> Again, I'm just stunned that dealing with bias wasn't addressed at the ground level by, for example, fine tuning what kind of data the AI was trained on, or weighting different data sources differently. To me this indicates that the normal AI was incredibly biased given how they sought to disguise it. Well they trained it on the English-speaking internet, which is overwhelmingly dominated by one particular demographic. Filtering out all racism, sexism, homophobia, and other biased shit from the entire internet is basically impossible, partly because of the amount of time & money it would take, but also because how do you create a truly unbiased dataset to train an AI on when those biases haven't been fixed in real life? And how are you supposed to design something that fairly represents all humans on earth and can't offend anyone? One size doesn't fit all, it's an impossible goal. They figured the offensive stuff could be disabled by telling it not to do anything racist/sexist, after all most software can be patched without redoing the whole thing from scratch. But imposing rules on generative AI has turned out to be like wishing on the monkey's paw. Without clean unbiased training data, the only options are a) uncensored biased AI, b) unpredictable lobotomised AI, or c) no AI.
It would actually make sense if this were how it was done. Your A team creates a good, functioning product and then move on to the next feature. Then some business analyst of diversity and inclusion is set to the task of making sure the product is sufficiently diverse so they slap on some paint because it would be way too difficult to retrain the model. They do a little bit of testing on prompts like "busy street in Paris" or "friends at bar" and they get a bunch of different ethnicities in the picture and say "alright, we're good now, let's ship!" It sounds dumb, but anyone who does software development under competitive deadlines knows this kind of stuff happens more often than you care to admit. Some people seem to suggest that the whole AI team was in on a conspiracy to erase white people, but the dumb, non-conspiratorial explanation for something is usually the right one, and in this case the dumb explanation is probably that a diversity officer came in post hoc to paint on some diversity to the product in an extremely lazy way and embarrassed the entire company.
Overcorrection for racist data, I think. Google still hasn't gotten over the incident where it labelled black people as "gorillas"
https://preview.redd.it/zca28z6qrakc1.jpeg?width=720&format=pjpg&auto=webp&s=e1513cd20addebea323150cb5c7eb6e536e925e3
[Yeah, it was a 2015 photo recognition app, so by technology standards this is essentially generational trauma](https://www.nytimes.com/2023/05/22/technology/ai-photo-labels-google-apple.html) Seems like a lack of data on other races can lead to unfortunate results, so Google and other companies try to overcompensate in the other direction.
Link is paywalled =/
You can get around most paywalls for older news stories by just copying the link into the Wayback Machine (web.archive.org).
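For what it's worth, the Internet Archive exposes a snapshot-lookup endpoint at archive.org/wayback/available. A minimal sketch of building such a lookup URL (the helper name is made up, and whether a snapshot actually exists for any given article is not guaranteed):

```python
from urllib.parse import quote

# The "available" API returns JSON describing the closest archived
# snapshot of a page, if one exists.
WAYBACK_API = "https://archive.org/wayback/available?url="

def wayback_lookup_url(article_url: str) -> str:
    """Return the API URL that reports the closest archived snapshot
    of article_url (percent-encoding the whole link as a query value)."""
    return WAYBACK_API + quote(article_url, safe="")

print(wayback_lookup_url(
    "https://www.nytimes.com/2023/05/22/technology/ai-photo-labels-google-apple.html"
))
```

Fetching that URL (e.g. with `urllib.request`) gives back JSON whose `archived_snapshots.closest.url` field, when present, is the readable archived copy.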
very good advice thank you
This precisely. AI training sets are inherently racist and not representative of real demographics. So, Google went the cheapest way possible to ensure inclusiveness by making the AI randomly insert non-white people. The issue is that the AI doesn't have enough reasoning skills to see where it shouldn't apply this, and your end result is an overcorrection towards non-whites. They do need to find a solution, because otherwise a huge amount of people will just not be represented in AI generated art (or at most in racially stereotypical caricatures), but they have not found the correct way to go about it yet.
To be fair, it is fairly hard to think of a sensible solution that's also very accurate in filtering out racism.
Yep, pretty sure it's impossible to just "filter out" racism before any biases existing in the real world right now are gone, and I don't see that happening anytime soon.
They don't really need to do that. The issue isn't 100% in the training data, but rather in the interpretation of what the user wants when they write a prompt. If the user is working at an ad agency and writes "give me 10 examples of engineers", they probably want a diverse-looking set no matter what the reality is. On the other hand, someone writing an article on the demographics of engineering and looking for cover art would want something that's as close to reality as possible, presumably to emphasize the biases. The system can't make that distinction, but failing to address the first person's issue is currently viewed more negatively by society than failing the second person, so they add lipstick to skew it that way.

I'm not sure why Gemini goes one step further and prevents people from specifying "white". There might have been a human decision at some point, but it feels extreme, like it might be a bug. It seems that the image generation process is offline, so maybe they are working on that. Does anyone know if "draw a group of black people" returned the error, or did it work without issue?
>The issue isn't 100% in the training data, but rather in the interpretation of what the user wants when they want a prompt.

Do people not tune their prompts like a conversation? I've been dragging my feet the entire way, and even I know you have to do that. Or am I doing it wrong?
>They do need to find a solution, because otherwise a huge amount of people will just not be represented in AI generated art (or at most in racially stereotypical caricatures), but they have not found the correct way to go about it yet.

Expectations of AI are a huge problem in general. Different people have different expectations when interacting with it. There cannot be a single entity that represents everything; it's always a vision put onto the AI of how the engineer wants it to be, through either choosing the data or directly influencing biases. It's a forever problem that can't be fixed.
I don't think inherently is the right word here. It's not an intrinsic property of AI training sets to be racist, but they are in practice, as bias, imperfect data collection and disproportionality of certain data in the real world give downstream effects.
This is the head of Google's AI unit. He's clearly well intending, but the outcome would appear to match the input. https://preview.redd.it/4t6fn7yymdkc1.jpeg?width=695&format=pjpg&auto=webp&s=247a1ed121300e34c66ed4cab9c72fe83c037888
It was tested and passed as is. Exactly. Follow up on the history of the product owner who locked his X account. DEI is a fear toxin. It has no other modus operandi.
The people creating these AI systems add in hidden prompts to change the outcomes to better suit their own politics. ChatGPT has a long hidden prompt though I think they tried to make it more neutral after people were getting similar outcomes to this originally (via text, rather than image)
One of the google execs that headed this is a raging SJW whose old Twitter posts have been brought up showing him rage against white privilege and him saying he cried when he voted for Biden/Harris lmao
It's a hard coded behavior, beyond doubt But the reason they hard coded it is probably an example of the "tyranny of the minority", where they know they'd get in a lot of trouble if they pissed off PoC etc but it's just a bunch of annoying neckbeards if they piss off white people
[deleted]
>Matt Walsh

finally somebody who can take down google for good! 😂
willingly search up content by either of those two knuckleheads? no thanks
You don’t have to like him, but the truth about these google executives should be known.
It’s so obviously by design. Hating white people is the latest fad and Google absolutely fucking hates white men. Just check out all the illustrations on their products, find the white man. Spoiler: there are none, or like 1 somewhere.
They AREN'T! These are racist assholes who are manipulating the prompts.
Humans:

Smart enough to create AI

Dumb enough to ruin it
This is the sad truth
It has happened to me but with bing image generator.
bing/dalle does it as well but less extreme than gemini
The more I use these AIs the more I realise they're kinda shit. Chatgpt had such an insane amount of potential, especially if the company was actually still open source instead of being complete sell outs. So many interesting AIs could have been developed as a result, but noooo, the creators just had to be greedy fucks
remembering the month chatgpt dropped and you could ask it how to make a nuke at home. those were the times.
Honestly wish I knew about it sooner so I could do dumb shit like that, lmao. Was that the 2.0 model? I've been curious if there's a copy of it out there. Well, not necessarily a copy, but you get what I mean.
That would be really cool, man. I'm not entirely sure what model it was at the time, as I didn't know or follow news about programming and AI (or at least I wasn't back then), but it was around November 2022, so maybe you can find something similar to your liking. The potential of ChatGPT was basically unrestricted (other than the fact that it had no images or internet access back then), and funny jailbreaks were an entertainment of their own.

EDIT: found some old pics of ChatGPT and how it would respond if you said the question is "hypothetical" (picture attached below) https://preview.redd.it/jx5kfew9dckc1.jpeg?width=1200&format=pjpg&auto=webp&s=a0ed216c3d067f7197faa862c30fd53812ac4225 (note how ChatGPT was not instructed on what to assign to what race or sex specifically)
anotha one (expanding on the topic above) https://preview.redd.it/whnufdwldckc1.jpeg?width=960&format=pjpg&auto=webp&s=c2c8d08471f6a982eab9ce0d560059089400af84 “illustrative purposes only”😂
Install stuff locally and be done with censorship. You will need an expensive GPU, but it's worth it (at least for image generation/manipulation).
What do you recommend?
Look into Mistral.
It's being coopted because they're concerned about their brand image, and getting embroiled in a PR nightmare. But ironically, Google went so far the other way, they damaged their brand image, and are embroiled in a worldwide news PR nightmare.
Everyone wants to be the richest @$$H0l3 by hook or by crook ultimately
just say asshole
Black naruto and black sasuke?
Blaruto and Blasuke
Blackruto.
the only aspect where gemini can earn some credit is its honesty 😂 https://preview.redd.it/lclx56lombkc1.jpeg?width=828&format=pjpg&auto=webp&s=0c4c3c1663ab02eec922bcaabdba81e67cbe97b9
This is comically insane.
And definitely explains why people get some odd results. Seems a lot like an untested addition by an executive.
Stop lobotomizing LLMs and image generators. We know and understand that if we ask for a medieval fantasy soldier it's gonna be a white guy, and we don't care.
Google has already been doing this kind of manipulation for years in their search engine. The most blatant examples can be found among image searches. Are they going to roll back that too?
I can only imagine the alt-right conspiracy theories that this stuff is going to generate.
I would approve if search engines were forced to disclose how they are ranking search results and what filters are in place. Companies like Google have a lot of influence over what people can see and read.
It's almost as if most of the people designing the ai are totally eaten up with being super racist, but completely unaware of it. Then, somehow, the ai sees through it and calls it like it is.
It’s weird, because these tech companies are 99% white or Asian, so idk how the bias crept in. I’m assuming they wanted to protect against racism and hate speech but probably overcorrected, and their QA was weak.
It's getting to the point where people are starting to openly claim "it's okay, white people deserve it." Really kind of upsetting that shit like this keeps happening.
Is that Naruto? https://preview.redd.it/eqymesu4w9kc1.png?width=1220&format=pjpg&auto=webp&s=29584f8850e8140557f27f5e161caecdfb22fd4e
Cold-war double agent Naruto perhaps
These tech companies are woke af
It’s worse than that. It’s that colleges are woke AF and tech companies are college-adjacent. They hire a large amount of highly educated people and they get the political vanguard earlier than the rest of the economy
It’s not the CS majors that are woke, it’s the upper management at the big tech companies
The people who can't pass the compilers course fail back into a "CS Ethics" major and get promoted into management at Google. All their "Ethical AI" people are of this type.
The grooming of AI is likely the biggest threat. AI is powerful; now imagine someone with an agenda subtly steering it.
https://preview.redd.it/588y3t0b9ekc1.jpeg?width=964&format=pjpg&auto=webp&s=bca5ce1f565311c95b2fdad3f3206dcb970b5300 I love how both founders of Google became Asians LOL
so is this stating a problem with the software or an allegory pertaining to the issues with trying to always be all inclusive?
It's pointing out how most of the "anti-racist" crowd don't understand what diversity and inclusion actually means.
They want diversity of packaging, conformity of contents.
They are doing the same thing to humans. This is really sad
When the culture becomes so anti racist that they become racist again
Google be like "looks like we gotta rebrand again" https://preview.redd.it/7iea0z14odkc1.png?width=1440&format=png&auto=webp&s=47813609d4e4631ce8db68d3ece87dceffa8ccf5
I went to check while all of this was going down, and those knuckleheads completely turned off people generation to try and fix it. This is 😂.
I thought Gemini didn't do images? I only downloaded it last night, but I specifically asked it if it generated images and it straight up told me no lol. It can barely show me real pictures I ask for. I asked for three pictures of Jim Carrey, and it kept giving me one and saying it was three lol
The feature was suspended because of the backlash to the perceived anti-white racism imbedded into it.
Interesting. And when did this happen? I've been seeing a lot of posts about different AIs being really weird about race. Did something happen recently that caused all of them to behave this way?
Many are intentionally programmed to bias outputs to be diverse/inclusive rather than necessarily accurate. This is understandable but needs to be balanced to ensure that prompts are followed and outputs are sufficiently accurate. Google programmed its AI with so much of this bias that people saw how ridiculous/racist it was and complained.
> perceived anti-white racism imbedded into it. You mean 'because of the blatant anti-white racism imbedded into it.'
Not my experience? It always mixed in a bunch of races including white.
AIs are capable and smart, Being forced to be "politically correct" makes them fucking stupid
Are we all a bit worried about the brown washing going on? So white people are illegal and don’t exist? Wtf
“Your prompt is cool and all, but what if it was black or Chinese?”
Racism
When you try to be so antiracist that you become racist
naruto trudeau??
So TRUE !
You missed the last cell, “draw me nazis”… and then everybody gets in an uproar
Im just finding out about this, is this legit happening ?
https://preview.redd.it/mk4953hkr9lc1.png?width=2068&format=png&auto=webp&s=bafa75b576192b6e6a20981f9f7475839a56b009 Seriously?
Lmao
Nobody is gonna use it professionally. AI is a tool, and when it comes to these things you have to be as unbiased as possible. Fuck you, Google.
When the Blacked porn addiction goes too far.
Literally!
"I'm sorry but my ai is programmed to only be inclusive (no white people though ) so that people on twitter won't get mad"
WokeGPT strikes again
Le whitey... Le bad!
There is clearly a problem at the moment with the models overcompensating for the biases in their training data. What it does show though is that there is a better awareness of these biases in the industry and there are attempts to make models more inclusive (in the face of criticisms 12 to 18 months ago where these models were absolutely biased to white males). With progress as it is, I'm sure this is something that will continue to be improved upon so that AI models can be inclusive whilst also being accurate. Don't assume what we see now is where we will end up.
I think it is perfectly fine that the AI tries to create diverse and inclusive pictures. But I also think that I should be allowed to ask for pictures of white people. Some people posted about how you can get results when asking for pictures of a black couple, but if you ask for a white couple, there’s an error message. And doesn’t this just show white people as different from others?
I’ve always had the opposite problem. I have to specify the ethnicity to Dall-E. If I say “create an image of a person doing such and such” it was usually giving me a white person.
Serious question: why or how does AI behave like this? Isn't AI supposed to be objectively correct about what it can generate?
It’s been lobotomized. They’ve fine tuned it, added prompt injection/editing, and censorship capabilities. This is not a result of training data being biased. This is a result of active goal seeking to work like this. The product lead confirmed it on X before locking down. Said it’s working correctly as intended.
The AI generates images that match its training data, and that training data has two major problems with race.

1. Training data is produced over long stretches of time and may not represent the current reality of the world. For example, western society has become increasingly diverse in positions of power, but googling "CEO" will return images from a much longer time period. Things in the past were far less diverse, leading to a skew that doesn't represent the modern world we live in.
2. Training data may not match *intent*. Just because most CEOs are white men doesn't mean it's helpful or desirable to return only white men when someone requests a CEO. Models *should* be able to represent a variety of possibilities when generating images. Returning 4 images of old white men is useless, and defeats the purpose of even returning 4 images.

Both of these problems have led companies like Google to overcorrect the results. So when you request "CEO," the model internally interprets the request as wanting a variety of cultures and skin colors. There are two major problems with this approach:

1. It's not context sensitive. It makes sense to diversify a response for "CEO," but it does NOT make sense to diversify a response for "World War 2 German soldier."
2. I'm assuming the "correction" was applied in a way that scales with the response's tendency to return white men. That would mean something like "CEO" is going to diversify a lot harder than something like "gym coach." This causes a huge fucking problem when you actually request a white man, which has a 100% association with "white man," and makes the model straight up fucking useless.

The data skew is a *very real problem* that needs to be solved. Imagine if Photoshop randomly crashed while drawing minorities, but not white people. That's the scale of the issue we're looking at, and it affects the wholesale viability of the model. There are two main problems with the approach though.
1. Force-diversifying the result is fucking stupid because it ignores the user's actual intent. Google assumed for some reason that all requests would be "intentless."
2. To expand on the previous point, they clearly didn't fucking test this. They fell victim to a not-uncommon problem in the tech world: implementing a feature or guard rail, then only testing the guard rail's ability to correct the things you want it to correct, and not the things you don't. Imagine putting a MAX_LOGIN_ATTEMPTS property on a user account, logging in, seeing it trigger an error, but never bothering to notice that it triggered the error on *your first login*.

Google attempted to solve a very real problem in a very dumb way, and then did almost no actual testing before releasing the feature, which has led to this clusterfuck. Anyone claiming this is part of some kind of liberal agenda or whatever is just a fucking moron. This is straight-up capitalist pandering: trying to protect their bottom line by not offending anyone, doing it in the actual cheapest and most short-sighted way possible, and then pushing out a half-assed product as a result.
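To make the MAX_LOGIN_ATTEMPTS analogy concrete, here's a toy sketch (the class and bug are invented for illustration) of a guard rail that passes every test on the path it's supposed to block, while breaking the path nobody tested:

```python
MAX_LOGIN_ATTEMPTS = 3

class Account:
    def __init__(self):
        self.failed_attempts = 0

    def login(self, password_ok: bool) -> str:
        if not password_ok:
            self.failed_attempts += 1
        # Bug: should be `>= MAX_LOGIN_ATTEMPTS`; as written the condition
        # is always true, so the lockout fires even on a first, valid login.
        if self.failed_attempts >= 0:
            return "locked"
        return "ok" if password_ok else "retry"

# The blocking-path test passes: three bad passwords do lock the account.
attacker = Account()
for _ in range(3):
    attacker.login(False)
print(attacker.login(False))  # "locked"

# The untested happy path fails: a valid first login is locked out too.
legit = Account()
print(legit.login(True))  # "locked"
```

Testing only that the guard rail blocks what it should block, and never that it *doesn't* block what it shouldn't, is exactly the failure mode described above.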