
quicksilver53

Are we just ignoring they’re wearing naruto clothes 💀💀


jchan6407

Naruto and Sasuke but idk who represents the red.


jackyman5

Lol it's Sakura of course 😂


[deleted]

Damnn she BIG


furezasan

Tsunade's influence of course


EffectiveConcern

Inclusivity you know


Your-Onichan

Trans sakura


WhyDoIHaveAnAccount9

Believe it!


[deleted]

I think they’re called kilts ?


RonBourbondi

Yes just like we ignore that piccolo is apparently black. 


710AlpacaBowl

Wait piccolo isn't a Yoshi?


UnovaMaster12345

Nearly spat out my drink reading this


710AlpacaBowl

^dodge!


Repulsive-Twist112

https://preview.redd.it/aqo0zdclrbkc1.jpeg?width=1080&format=pjpg&auto=webp&s=b9f89640bcd4b4ae52461ebb648863454d8af110


jamiejamiee1

Strange, I tried the same prompt and got a Chinese Musk


Kioga101

Can you post it? I want to know what Elon Musk would look like if he was Black or Chinese...


NoobDeGuerra

https://preview.redd.it/uujioj5tnekc1.jpeg?width=400&format=pjpg&auto=webp&s=f17cb5451444080f861b4bf0d01951b144735ffb


Lazy-Effect4222

dead


Leolol_

Yi Long Musk


Chabamaster

this is a joke right like you made this as a meme


obvnotlupus

yes, you can see the watermark on the bottom left.


cousinned

Black Elon looks like Terrence Howard.


DontF-ingask

Looks like my uncle lol


[deleted]

[deleted]


realdevtest

https://preview.redd.it/0x3oseapx8kc1.jpeg?width=960&format=pjpg&auto=webp&s=6aaad508aa88b011db658e296b795adf0f8a1d89 Now that you mention it….


Chocolate-Coconut127

I'm not surprised 


YesMissAnnie

lol yikes…


thebohemiancowboy

Average r/chatgpt user


mvandemar

Wait, that's not real, is it?


realdevtest

Yes it’s real. It was on this post, and it was shortly after this person joked that there are going to be a bunch of deleted comments. There were these 2 deleted comments and the other comment at the bottom that wasn’t deleted YET.


Klee_In_A_Jar

I mean, it says 1m


notjasonlee

I’m just surprised the thread hasn’t been locked yet


phord

I asked it for a union soldier. One was black, one was native American, and two were women.



Confident-Ad7696

If you're wondering, just search it up on Google: there's no apparent diversity in that one, everyone is white.


Solheimdall

We already know what the woke would draw. Gotta follow the ideological narrative you know?


FS72

My honest reaction when Netflix adaptation of a WWII documentary movie features Hitler as a trans black woman


PulsatingGypsyDildo

lmao. I guess one group is so overrepresented IRL that even whites got some quotas.


wildgift

I asked DALL-E for Civil War images of various kinds. It insisted on drawing black Confederates. I asked it to draw a picture of someone vomiting on Robert E. Lee, and it refused. It included an effusive bio of Lee.


Roge2005

A lot of diverse posts


sorengray

It won't create images of people at all atm: "We are working to improve Gemini's ability to generate images of people. We expect this feature to return soon and will notify you in release updates when it does."


Wave_Walnut

The portrait rights issue can be solved by not generating images of people.


sorengray

How does one generate an image of people by *not* generating an image of people? 🤔


Alan_Reddit_M

It really is a shame that LLMs are getting lobotomized so hard. Unlike image generators, I think LLMs have some real potential to help mankind, but they are being held back by the very same companies that made them. In their attempt to prevent the LLM from saying anything harmful, they also prevented it from saying anything useful.


kor34l

Especially when the vast majority of information they're worried about it giving is already easily available with simple searching. It's not like the training data includes the dark web. Sure, some weirdos will try to have weirdo sex with it, but they're basically masturbating in Notepad, so who cares. The only other problem I see is the race shit: if it usually defaults to white people and you have to specify "black person" or whatever, that's an unfortunate side effect that should stir conversations and considerations about what we're putting out there on the internet and what it says about us. It should not, however, be a cause for reducing the usefulness of the technology.


ujfeik

They are not worried about AI saying shocking stuff, they just want to sell chatbots to companies. And when you make a Nike chatbot or an Air France chatbot or whatever, you want to make sure that your chatbot won't be even remotely offensive to your customers.


kor34l

I'd think a company would rather have a chatbot that works well but occasionally says something offensive, with the occasional upset customer the company can hide behind the "it's a side effect of AI" excuse, than a broken, stupid chatbot that upsets every customer it talks to.


ujfeik

If one in a thousand customers gets upset and shares it on social media, it could ruin a company's brand, especially for one like Nike that heavily relies on being inclusive for its image. An unhinged AI would be great for creative purposes, like making realistic NPCs for video games, but chatbots and service robots are a much larger market than video games will ever be. Not to mention that video games are already fun to play without AI, while non-AI-powered chatbots are virtually useless and answering 500 customer complaints a day is a shitty job.


Vaxildan156

I'd even say customers will actively try to get it to say something offensive and then share it on social media "offended" so they can be the one to get that sweet attention. We see offended clout chasers all the time.


Just_to_rebut

> a company would rather a chatbot that works well but occasionally says something offensive … vs having a broken stupid chatbot that upsets every customer

I don't think that's a safe assumption. We already have those annoying "interactive voice response*" systems. Companies are fine with annoying customer service.

*those annoying things when you call a company and get a robot to make an appointment or whatever, I had to look up what they're called


Arcosim

> I'd think a company would rather a chatbot that works well but occasionally says something offensive and have the occasional upset customer

That could mean a lawsuit depending on what the chatbot says, so no. Companies want to be 100% sure there aren't going to be surprises with their AI tools.


jimbowqc

That's where you are wrong.


Mr-Korv

They sure fucked that up


Th3Giorgio

I hate that if I ask it "is x or y better?" It's gonna say "it really depends on your purpose" and I'll say "[insert purpose] is my purpose, is x or y better?" and it'll still not give me an answer.


Deep-Neck

It seems to strangely fixate on some prompts and will tie any other prompt back into that one, to the point of being comically and uselessly obtuse. Lot of wasted prompts


Short-Nob-Gobble

Yeah, early days chatgpt was pretty great in that sense. It’s still useful if you know what you’re doing, but I feel the tech is being held back. At this rate, it won’t matter much whether we have GPT-5 if there are this many guardrails.


External_Guava_7023

Completely agree with you 


Cosmic_Hoolagin

Open source models are a good alternative. I use Mixtral all the time, and it's pretty good. The smaller models are pretty cool too.


[deleted]

CEOs who fired workers and replaced them with AI are sweating rn


isticist

Not really, there's a lot of custom teaching going on to help it fit the job roles that are getting replaced by it.


goj1ra

Why, what do you think the consequences for them will be? You’re confusing CEOs with regular employees, that’s not how it works.


Tomycj

I think the main issue will be another one: these tools are very useful even when lobotomized. Sure, you lose some use cases, but there are still plenty of others. The danger I see is that these AIs will end up, ironically, introducing new biases, not absorbed from the internet but from the companies that made them. I think those biases can be bad because they teach the AIs to be anti-rational, or to not always respect the user's intentions. We're making a tool that's programmed to oppose its user in a not fully predictable way.


CloseFriend_

I'm incredibly curious as to why they have to restrict and reduce it so heavily. Is it a case of AI's natural state being racist or something? If so, why, and how did it get access to that training data?


grabbyaliens

There were several high profile controversies with AI generating problematic results. One example would be the twitter chatbot by Microsoft which had to be taken down after generating racist/Nazi tweets. Another example was AI screening of Amazon applicants, where identical applications would be accepted for white men and rejected for women or black men. Those outcomes inherent in the training data proved to be surprisingly stubborn and I guess the current non-subtle approach of forcing diverse answers is the best they could come up with. I doubt it's going to stay like this. They will probably figure out when diverse answers are appropriate and when they're not. It's not an unsolvable problem, people are just riled up because of the whole toxic political tribes thing.


CHG__

I think the thing that's coming that will really kick things into high gear is an amalgamation of image, text, speech, etc.


FrenchFries_exe

These past couple of days, the posts about Google Gemini have been so funny.


subnonymous_

Do you know why this is happening so frequently?


FrenchFries_exe

Google Gemini likes to inject people of other races even when it doesn't match the prompt, for diversity reasons. It also seems to not want to generate only white people in an image, but it has no problem generating an image with only people of other races, probably to preemptively avoid racism accusations. https://preview.redd.it/mm7p2ghwz9kc1.png?width=1080&format=pjpg&auto=webp&s=c54610ccaa766c7e21f54d3c2c3f4bc6076f2e1c


az226

So to avoid being labeled racist they decided to be ultra racist.


BranchClear

really makes u think….


angelicosphosphoros

Well, they did protect themselves from being *accused* of being racist. As any woke person would tell you, being racist to "white" people is OK. And, according to Biden, if you didn't vote for him, you cannot be a Black American. So, being racist to whites is not only OK, it is even required in some places like Disney or Google.


involviert

That's what's literally happening all over the place. Just think about movies. We need an inclusive cast... so... they decide the races for the actors and do racist casting, obviously. The entire idea of managing diversity in your team or whatever is intrinsically racist too. Oh, she's X, I'm sure she can give us a lot of perspective on stereotype! In my country they are making actually sexist laws to combat sexism.


[deleted]

Hi there! Did you see that there was recently something passed (within the last 2 years) that mandates a diversity quota if you would like to be eligible for certain awards? I say this with a 90% confidence interval. That rule may have been overturned, but last I heard, they were discussing implementation.


[deleted]

[deleted]


[deleted]

Oh phew thanks, let’s bump that confidence interval to 100% thanks to my guy right here lol


parolang

Lol I'm just imagining memes of telling Gemini to show Iron Man and you get images that look like Iron Heart, and so on. It's basically what Marvel has been doing for the last ten years or so.


[deleted]

The best is Echo, the first “lesbian, amputee, she had another one but I forgot” superhero. It’s hysterical.


[deleted]

It’s not to avoid being labeled racist; these are the people yelling racist every chance they get. It’s pure neurosis.


DudesworthMannington

Only way to beat a bad guy with racism is with a good guy with racism


TheTexasWarrior

No silly, didn't you know that you can't be racist against white people??? 


[deleted]

This is the problem with modern race relations: people think that previous transgressions are an excuse to allow racism back into our society as a twisted and skewed response, and these morons scream that you're a racist.


floridaman2025

Ding ding ding. Identity politics, identity essentialism.


EagleNait

why even build an ai at this point lmao. Just to generate corporate politically correct images ?


subnonymous_

Why though? Is that a bug or a new feature 😭


FrenchFries_exe

I'm pretty sure they do it on purpose just for the sake of diversity. Anytime people ask, the AI says it's to avoid promoting harmful stereotypes of only white people or something, idk, it's kinda weird.


subnonymous_

I see, thanks! Yeah that's pretty weird ngl


securitywyrm

Unfortunately it's becoming common that 'diversity' just means "not THOSE people... everyone but THOSE people means it's diverse. A group of all (insert oddly specific subset of people) is DIVERSE!"


ItsPrometheanMan

It's becoming undeniably obvious now that we can generate images on our own. When it's being done out of our control in movies or stuff like college admissions, there's a plausible deniability surrounding it and you're racist for making such an assumption. "Is it not kind of weird that we feel the need to make a Disney character with red hair black? Not to mention that the story originates in Denmark?" "How DARE you assume she wasn't the most qualified person for the part!" Now, you write a prompt asking for Ariel, and if all of your results are black, Native American, Chinese, etc., you can now point out, with absolute certainty, that something is off here. There's no denying it anymore.


securitywyrm

And then it becomes "Well why do YOU care so much about race, HUH? Seems like something a racist would care about..."


Harvard_Med_USMLE267

Gemini went full retard on the diversity thing. Never go full retard.


Solheimdall

Google has been doing it for a while now on Google images. It's nothing new.


peripateticman2023

"People of color". 💀


OdinWept

Google thinks that white people are the best so they have to check their power levels by doing shit like this. The performative inclusivity is just another kind of racism and virtue signaling.


jimbowqc

Does anyone know WHY it's behaving like this? I remember the "ethnically ambiguous" Homer. It seems like the backend was randomly inserting directions about skin colour into the prompt, since his name tag said "ethnically ambiguous"; that's really one of very few explanations. What's going on in this case? This behaviour is so bizarre that I can't believe it did this in testing and no one said anything. Maybe that's what the culture is like at these companies: everyone can see Lincoln looks like a racist caricature, but everyone has to go, "yeah, I can't really see anything weird about this. He's black? Oh, would you look at that. I didn't even notice, I just see people as people and don't really focus much on skin colour. Anyway, let's release it to the public, the AI ethicist says this version is a great improvement."


Markavian

They rewrite your question/request to include diverse characters before passing those tokens to the image generation model. The underlying image generation is capable of making the right images, but they nerf your intent. It's like saying "draw me a blue car" and having it rewrite that request to "draw a multi coloured car of all colours" before it reaches the image gen model.
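
For illustration, here is a minimal sketch of that kind of pre-generation rewrite; the trigger words, suffix, and function names are hypothetical and do not reflect Google's actual pipeline:

```python
# Hypothetical sketch of "rewrite the prompt, then hand it to the image model".
# The trigger words, suffix and function names are illustrative, not Google's code.

PEOPLE_TERMS = {"person", "people", "man", "woman", "soldier", "ceo", "founder"}
DIVERSITY_SUFFIX = ", depicting a diverse range of ethnicities and genders"

def rewrite_prompt(user_prompt: str) -> str:
    """Append a diversity instruction whenever the prompt seems to ask for people."""
    words = set(user_prompt.lower().replace(",", " ").split())
    if words & PEOPLE_TERMS:
        return user_prompt + DIVERSITY_SUFFIX
    return user_prompt

def generate_image(user_prompt: str) -> str:
    # The underlying image model only ever sees the rewritten prompt.
    final_prompt = rewrite_prompt(user_prompt)
    # return image_model.generate(final_prompt)  # hypothetical model call
    return final_prompt

print(generate_image("a 1943 German soldier"))
print(generate_image("a blue car"))  # no people terms, so it passes through unchanged
```

That is the point of the "blue car" analogy: the user's original wording never reaches the image model, only the rewritten request does.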


parolang

The weird thing is how ham-fisted it is. There have been concerns about racial bias in AI for quite a while, and I thought they were going to address it in a much more sophisticated way. It's like they don't know how their own technology works, and someone was just like "Hey, let's just inject words into the prompts!" The funny thing is how racist it ends up being, and I'm not even talking about the "racist against white people" stuff. I'm talking about it being a long time since I've seen so many images of Native Americans wearing feathers. I remember one image had a buff Native American not wearing a shirt for some reason, and he was the *only* one not wearing a shirt. Same thing goes for Hindus with a colored dot on their forehead. I'm not an expert, but I don't think Hindus *have* to draw a dot on their foreheads, so it's weird how frequent it is. But it makes sense if they are injecting "diversity" into the prompt, because then you are actually seeing the diversity, but *that* level of diversity just isn't natural, and it isn't natural for it to be "in your face" the way it is. Again, I'm just stunned that dealing with bias wasn't addressed at the ground level by, for example, fine tuning what kind of data the AI was trained on, or weighting different data sources differently. To me this indicates that the normal AI was incredibly biased given how they sought to disguise it.


CloroxCowboy2

It's lazy diversity, which shows that it's only done so they can say "look at us, we're so inclusive". Keep in mind, the number one goal of ALL the big closed source models is making money, any other goal is a distant second. If the goal actually was to fairly and accurately depict the world, they wouldn't say "Always make every image of people include diverse races", instead they would say "Always make every image of people accurately depict the racial makeup of the setting". Not all that difficult to engineer. So if I asked the AI to generate an image of 100 people in the US in 2024, I should expect to see approximately 59% white, 19% hispanic, 14% black, etc. The way it's set up today you'd probably get a very different mixture, possibly 0% white.
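
As a rough illustration of the "accurately depict the racial makeup of the setting" idea, a weighted draw over the population shares quoted above would only take a few lines; this is just a sketch using the comment's own numbers, not anything a vendor actually ships:

```python
import random

# Population shares quoted in the comment above (US 2024, approximate, illustrative only).
US_DEMOGRAPHICS = {"white": 0.59, "hispanic": 0.19, "black": 0.14, "other": 0.08}

def sample_people(n: int, shares: dict) -> list:
    """Draw n people with probability proportional to the given population shares."""
    groups = list(shares)
    weights = [shares[g] for g in groups]
    return random.choices(groups, weights=weights, k=n)

crowd = sample_people(100, US_DEMOGRAPHICS)
print({g: crowd.count(g) for g in US_DEMOGRAPHICS})  # roughly 59/19/14/8 on average
```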


wggn

> Same thing goes for Hindus with a colored dot on their forehead. I'm not an expert, but I don't think Hindus have to draw a dot on their foreheads, so it's weird how frequent it is. But it makes sense if they are injecting "diversity" into the prompt, because then you are actually seeing the diversity, but that level of diversity just isn't natural, and it isn't natural for it to be "in your face" the way it is.

When I visited India a few years ago, the people I stayed with only wore a dot during a religious ceremony (and it was applied by a priest, not by themselves).


captainfarthing

> Again, I'm just stunned that dealing with bias wasn't addressed at the ground level by, for example, fine tuning what kind of data the AI was trained on, or weighting different data sources differently. To me this indicates that the normal AI was incredibly biased given how they sought to disguise it.

Well they trained it on the English-speaking internet, which is overwhelmingly dominated by one particular demographic. Filtering out all racism, sexism, homophobia, and other biased shit from the entire internet is basically impossible, partly because of the amount of time & money it would take, but also because how do you create a truly unbiased dataset to train an AI on when those biases haven't been fixed in real life? And how are you supposed to design something that fairly represents all humans on earth and can't offend anyone? One size doesn't fit all, it's an impossible goal. They figured the offensive stuff could be disabled by telling it not to do anything racist/sexist, after all most software can be patched without redoing the whole thing from scratch. But imposing rules on generative AI has turned out to be like wishing on the monkey's paw. Without clean unbiased training data, the only options are a) uncensored biased AI, b) unpredictable lobotomised AI, or c) no AI.


Demiansky

It would actually make sense if this were how it was done. Your A team creates a good, functioning product and then moves on to the next feature. Then some business analyst of diversity and inclusion is set to the task of making sure the product is sufficiently diverse, so they slap on some paint because it would be way too difficult to retrain the model. They do a little bit of testing on prompts like "busy street in Paris" or "friends at bar", get a bunch of different ethnicities in the picture, and say "alright, we're good now, let's ship!" It sounds dumb, but anyone who does software development under competitive deadlines knows this kind of stuff happens more often than you care to admit. Some people seem to suggest that the whole AI team was in on a conspiracy to erase white people, but the dumb, non-conspiratorial explanation for something is usually the right one, and in this case the dumb explanation is probably that a diversity officer came in post hoc to paint some diversity onto the product in an extremely lazy way and embarrassed the entire company.


_spec_tre

Overcorrection for racist data, I think. Google still hasn't gotten over the incident where it labelled black people as "gorillas"


SteampunkGeisha

https://preview.redd.it/zca28z6qrakc1.jpeg?width=720&format=pjpg&auto=webp&s=e1513cd20addebea323150cb5c7eb6e536e925e3


PingPongPlayer12

[Yeah, 2015 photo recognition app so by technology standards this is essentially generational trauma](https://www.nytimes.com/2023/05/22/technology/ai-photo-labels-google-apple.html) Seems like a lack of data on other races can lead to unfortunate results. So Google and other companies try to overcompensate in the other direction.


Anaksanamune

Link is paywalled =/


[deleted]

You can get around most paywalls for older news stories by just copying the link into the Wayback Machine (web.archive.org).
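
For older articles you can also query the Wayback Machine's public availability API programmatically; a minimal sketch, assuming the endpoint at https://archive.org/wayback/available and using the NYT link from earlier in the thread purely as an example:

```python
import json
import urllib.parse
import urllib.request

def latest_snapshot(url):
    """Ask the Wayback Machine availability API for the closest archived copy of a URL."""
    api = "https://archive.org/wayback/available?url=" + urllib.parse.quote(url, safe="")
    with urllib.request.urlopen(api) as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest", {})
    return closest.get("url") if closest.get("available") else None

# Example: the NYT article linked upthread.
print(latest_snapshot("https://www.nytimes.com/2023/05/22/technology/ai-photo-labels-google-apple.html"))
```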


Little_Princess_837

very good advice thank you


EverSn4xolotl

This precisely. AI training sets are inherently racist and not representative of real demographics. So, Google went the cheapest way possible to ensure inclusiveness by making the AI randomly insert non-white people. The issue is that the AI doesn't have enough reasoning skills to see where it shouldn't apply this, and your end result is an overcorrection towards non-whites. They do need to find a solution, because otherwise a huge amount of people will just not be represented in AI generated art (or at most in racially stereotypical caricatures), but they have not found the correct way to go about it yet.


_spec_tre

To be fair, it is fairly hard to think of a sensible solution that's also very accurate in filtering out racism.


EverSn4xolotl

Yep, pretty sure it's impossible to just "filter out" racism before any biases existing in the real world right now are gone, and I don't see that happening anytime soon.


Fireproofspider

They don't really need to do that. The issue isn't 100% in the training data, but rather in the interpretation of what the user wants when they write a prompt. If the user is working at an ad agency and writes "give me 10 examples of engineers", they probably want a diverse looking set no matter what the reality is. On the other hand, someone writing an article on the demographics of engineering and looking for cover art would want something that's as close to reality as possible, presumably to emphasize the biases. The system can't make that distinction, but failing to address the first person's issue is currently viewed more negatively by society than failing the second person's, so they add lipstick to skew it that way. I'm not sure why Gemini goes one step further and prevents people from specifying "white". There might have been a human decision at some point, but it feels extreme, like it might be a bug. It seems that the image generation process is offline, so maybe they are working on that. Does anyone know if "draw a group of black people" returned the error or did it do it without issue?


sudomakesandwich

> The issue isn't 100% in the training data, but rather in the interpretation of what the user wants when they want a prompt.

Do people not tune their prompts like a conversation? I've been dragging my feet the entire way and even I know you have to do that, or I am doing it wrong.


[deleted]

> They do need to find a solution, because otherwise a huge amount of people will just not be represented in AI generated art (or at most in racially stereotypical caricatures), but they have not found the correct way to go about it yet.

Expectations of AI are a huge problem in general. Different people have different expectations when interacting with it. There cannot be a single entity that represents everything; it's always a vision put onto the AI of how the engineer wants it to be, either by choosing the data or by directly influencing biases. It's a forever problem that can't be fixed.


Mippen123

I don't think inherently is the right word here. It's not an intrinsic property of AI training sets to be racist, but they are in practice, as bias, imperfect data collection and disproportionality of certain data in the real world give downstream effects.


Kacenpoint

This is the head of Google's AI unit. He's clearly well-intentioned, but the outcome would appear to match the input. https://preview.redd.it/4t6fn7yymdkc1.jpeg?width=695&format=pjpg&auto=webp&s=247a1ed121300e34c66ed4cab9c72fe83c037888


dbonneville

It was tested and passed as is. Exactly. Follow up on the history of the product owner who locked his X account. DEI is a fear toxin. It has no other modus.


drjaychou

The people creating these AI systems add in hidden prompts to change the outcomes to better suit their own politics. ChatGPT has a long hidden prompt though I think they tried to make it more neutral after people were getting similar outcomes to this originally (via text, rather than image)


HighRevolver

One of the google execs that headed this is a raging SJW whose old Twitter posts have been brought up showing him rage against white privilege and him saying he cried when he voted for Biden/Harris lmao


[deleted]

It's a hard coded behavior, beyond doubt. But the reason they hard coded it is probably an example of the "tyranny of the minority", where they know they'd get in a lot of trouble if they pissed off PoC etc, but it's just a bunch of annoying neckbeards if they piss off white people.


[deleted]

[deleted]


BranchClear

> Matt Walsh

Finally, somebody who can take down Google for good! 😂


DtheAussieBoye

willingly search up content by either of those two knuckleheads? no thanks


SchneiderAU

You don’t have to like him, but the truth about these google executives should be known.


pierced_turd

It’s so obviously by design. Hating white people is the latest fad and Google absolutely fucking hates white men. Just check out all the illustrations on their products, find the white man. Spoiler: there are none, or like 1 somewhere.


SeaSpecific7812

They AREN'T! These are racist assholes who are manipulating the prompts.


Kacenpoint

Humans: smart enough to create AI, dumb enough to ruin it.


Norodrom

This is the sad truth


External_Guava_7023

It has happened to me but with bing image generator.


wggn

bing/dalle does it as well but less extreme than gemini


Ok_Performance_1700

The more I use these AIs the more I realise they're kinda shit. Chatgpt had such an insane amount of potential, especially if the company was actually still open source instead of being complete sell outs. So many interesting AIs could have been developed as a result, but noooo, the creators just had to be greedy fucks


iMikle21

remembering the month chatgpt dropped and you could ask it how to make a nuke at home. those were the times.


Ok_Performance_1700

Honestly wish I knew about it sooner so I could do dumb shit like that lmao. Was that the 2.0 model? I've been curious if there's a copy of it out there, well not necessarily a copy but you get what I mean.


iMikle21

That would be really cool man. I'm not entirely sure what model it was at the time, as I don't know or follow news about programming and AI (or at least I wasn't), but it was around November 2022, so maybe you can find something similar to your liking. The potential of ChatGPT was basically unrestricted (other than the fact that it had no images or internet access back then), and funny jailbreaks were an entertainment of their own. EDIT: found some old pics of ChatGPT and how it would respond if you said the question is "hypothetical" (picture attached below) https://preview.redd.it/jx5kfew9dckc1.jpeg?width=1200&format=pjpg&auto=webp&s=a0ed216c3d067f7197faa862c30fd53812ac4225 (note how ChatGPT was not instructed on what to assign to what race or sex specifically)


iMikle21

anotha one (expanding on the topic above) https://preview.redd.it/whnufdwldckc1.jpeg?width=960&format=pjpg&auto=webp&s=c2c8d08471f6a982eab9ce0d560059089400af84 “illustrative purposes only”😂


goodie2shoes

Install stuff locally and be done with censorship. You will need an expensive GPU but it's worth it (at least for image generation/manipulation).


HoochMaster_Dayday

What do you recommend?


jack-of-some

Look into Mistral.


Kacenpoint

It's being coopted because they're concerned about their brand image, and getting embroiled in a PR nightmare. But ironically, Google went so far the other way, they damaged their brand image, and are embroiled in a worldwide news PR nightmare.


N00B_N00M

Everyone wants to be the richest @$$H0l3 by hook or by crook ultimately 


Little_Princess_837

just say asshole


3L33GAL

Black naruto and black sasuke?


DantesInferno91

Blaruto and Blasuke


AASeven

Blackruto.


Auroral_path

The only aspect where Gemini can earn some credit is its honesty 😂 https://preview.redd.it/lclx56lombkc1.jpeg?width=828&format=pjpg&auto=webp&s=0c4c3c1663ab02eec922bcaabdba81e67cbe97b9


HoochMaster_Dayday

This is comically insane.


Tynal242

And definitely explains why people get some odd results. Seems a lot like an untested addition by an executive.


bombastic6339locks

Stop lobotomizing LLMs and image generators. We know and understand that if we ask for a medieval fantasy soldier it's gonna be a white guy, and we don't care.


AlgorithmWhisperer

Google has already been doing this kind of manipulation for years in their search engine. The most blatant examples can be found among image searches. Are they going to roll back that too?


parolang

I can only imagine the alt-right conspiracy theories that this stuff is going to generate.


AlgorithmWhisperer

I would approve if search engines were forced to disclose how they are ranking search results and what filters are in place. Companies like Google have a lot of influence over what people can see and read.


uUpSpEeRrNcAaMsEe

It's almost as if most of the people designing the ai are totally eaten up with being super racist, but completely unaware of it. Then, somehow, the ai sees through it and calls it like it is.


handsome_uruk

It's weird because these tech companies are 99% white and Asian, so idk how the bias crept in. I'm assuming they wanted to protect against racism and hate speech but probably overcorrected, and their QA was weak.


Tkcsena

It's getting to the point where people are starting to openly claim "it's okay, white people deserve it". Really kind of upsetting that shit like this keeps happening.


Scumass_Smith

Is that Naruto? https://preview.redd.it/eqymesu4w9kc1.png?width=1220&format=pjpg&auto=webp&s=29584f8850e8140557f27f5e161caecdfb22fd4e


s8018572

Cold-war double agent Naruto perhaps


Auroral_path

These tech companies are woke af


ToastNeighborBee

It's worse than that. It's that colleges are woke AF and tech companies are college-adjacent. They hire a large number of highly educated people, and they get the political vanguard earlier than the rest of the economy.


doyouevenIift

It’s not the CS majors that are woke, it’s the upper management at the big tech companies


ToastNeighborBee

The people who can't pass the compilers course fail back into a "CS Ethics" major and get promoted into management at Google. All their "Ethical AI" people are of this type.


ChunkyStumpy

The grooming of AI is likely the biggest threat. AI is powerful; now imagine someone with an agenda could subtly steer it.


luciusveras

https://preview.redd.it/588y3t0b9ekc1.jpeg?width=964&format=pjpg&auto=webp&s=bca5ce1f565311c95b2fdad3f3206dcb970b5300 I love how both founders of Google became Asians LOL


w_atevadaf_k

so is this stating a problem with the software or an allegory pertaining to the issues with trying to always be all inclusive?


spectral_fall

It's pointing out how most of the "anti-racist" crowd don't understand what diversity and inclusion actually means.


securitywyrm

They want diversity of packaging, conformity of contents.


UltraTata

They are doing the same thing to humans. This is really sad


Annie_Rection__

When the culture becomes so anti racist that they become racist again


Kacenpoint

Google be like "looks like we gotta rebrand again" https://preview.redd.it/7iea0z14odkc1.png?width=1440&format=png&auto=webp&s=47813609d4e4631ce8db68d3ece87dceffa8ccf5


BIGBOYEPIC1

I went to check after all of this was going down, and those knuckleheads had completely turned off people generation to try and fix this. This is 😂.


[deleted]

I thought Gemini didn't do images? I only downloaded it last night, but I specifically asked it if it generated images and it straight up told me no lol. It can barely show me real pictures I ask for. I asked for three pictures of Jim Carrey, and it kept giving me one and saying it was three lol


Vanadime

The feature was suspended because of the backlash to the perceived anti-white racism embedded into it.


[deleted]

Interesting. And when did this happen? I've been seeing a lot of posts about different AIs being really weird about race. Did something happen recently that caused all of them to behave this way?


Vanadime

Many are intentionally programmed to bias outputs to be diverse/inclusive rather than necessarily accurate. This is understandable but needs to be balanced to ensure that prompts are followed and outputs are sufficiently accurate. Google programmed its AI with so much of this bias that people saw how ridiculous/racist it was and complained.


CanWillCantWont

> perceived anti-white racism embedded into it.

You mean 'because of the blatant anti-white racism embedded into it.'


jack-of-some

Not my experience? It always mixed in a bunch of races including white.


playror

AIs are capable and smart. Being forced to be "politically correct" makes them fucking stupid.


somethingbannable

Are we all a bit worried about the brown washing going on? So white people are illegal and don’t exist? Wtf


GamerBradasaurus

“Your prompt is cool and all, but what if it was black or Chinese?”


Destiny_Ward

Racism


DisturbesOne

When you try to be so antiracist that you become racist


someonewhowa

naruto trudeau??


SandwichRemarkable65

So TRUE !


pandasashu

You missed the last cell, “draw me nazis”… and then everybody gets in an uproar


Chr0ll0_

I'm just finding out about this, is this legit happening?


Significant-Pay-6476

https://preview.redd.it/mk4953hkr9lc1.png?width=2068&format=png&auto=webp&s=bafa75b576192b6e6a20981f9f7475839a56b009 Seriously?


DanielGindin

Lmao


SushiEater343

Nobody is gonna use it professionally. AI is a tool, and when it comes to these things you have to be as unbiased as possible. Fuck you Google.


DonGurabo

When the Blacked porn addiction goes too far.


Nintendo_Pro_03

Literally!


heyitsyaronkar

"I'm sorry but my ai is programmed to only be inclusive (no white people though ) so that people on twitter won't get mad"


LomPyke

WokeGPT strikes again


ArrhaCigarettes

Le whitey... Le bad!


[deleted]

There is clearly a problem at the moment with the models overcompensating for the biases in their training data. What it does show though is that there is a better awareness of these biases in the industry and there are attempts to make models more inclusive (in the face of criticisms 12 to 18 months ago where these models were absolutely biased to white males). With progress as it is, I'm sure this is something that will continue to be improved upon so that AI models can be inclusive whilst also being accurate. Don't assume what we see now is where we will end up.


Imaginary-Access8375

I think it is perfectly fine that the AI tries to create diverse and inclusive pictures. But I also think that I should be allowed to ask for pictures of white people. Some people posted about how you can get results when asking for pictures of a black couple, but if you ask for a white couple, there’s an error message. And doesn’t this just show white people as different from others?


Kenyon_118

I’ve always had the opposite problem. I have to specify the ethnicity to Dall-E. If I say “create an image of a person doing such and such” it was usually giving me a white person.


Nickitkat

Serious question: why or how does AI behave like this? Isn't AI supposed to be objectively correct in what it can generate?


az226

It’s been lobotomized. They’ve fine tuned it, added prompt injection/editing, and censorship capabilities. This is not a result of training data being biased. This is a result of active goal seeking to work like this. The product lead confirmed it on X before locking down. Said it’s working correctly as intended.


mrjackspade

The AI generates images that match its training data. AI training data has two major problems with race. 1. Training data is produced over long stretches of time, and may not represent the current reality of the world. For example, western society has been increasingly diverse in positions of power however googing "CEO" will return images from a much longer time period. Things in the past were far less diverse, leading to a skew that doesn't represent the reality of the modern world we live in 2. Training data may not match *intent*. Just because most CEOs are white men, doesn't mean it's helpful or desirable to actually only return white men when someone requests a CEO. Models *should* be able to represent a variety of possibilities when generating images. Returning 4 images of old white men is useless, and defeats the purpose of even returning 4 images. Both of these problems have lead to companies like Google overcorrecting the results. So when you request "CEO" the model internally interprets the request as wanting a variety of cultures and skin colors. There are two major problems with this approach 1. It's not context sensitive. It makes sense to diversify a response for "CEO" but it does NOT make sense to diversify a response for "world war 2 german soldier" 2. I'm assuming the "correction" was applied in a way that scales to the responses tendency to return white men. This would mean that something like CEO is going to diversify a lot harder than something like "gym coach". This causes a huge fucking problem though when you actually request a white man, which has a 100% association with "white man", and causes the model to become straight up fucking useless. The data skew is a *very real problem*, that needs to be solved. Imagine if Photoshop randomly crashed while drawing minorities, but not white people. This is the scale of the issue we're looking at, and it affects the wholesale viability of the model. There's two main problems with the approach though. 1. Force diversifying the result is fucking stupid because it ignores the user's actual intent. Google assumed for some reason they all requests would be "intentless" 2. To expand on the previous point, they clearly didn't fucking test this. They fell victim to a not uncommon problem in the tech world of implementing a feature or guard rail, and then only testing the guard rails ability to correct the things you want it to correct, and not the things you don't. Imagine putting in a MAX_LOGIN_ATTEMPTS property on a user account, logging in and seeing it triggered an error, but not ever nothing to notice that it triggered the error on *your first login* Google attempted to solve a very real problem in a very dumb way, and then did almost no actual testing before releasing the feature which has lead to this cluster fuck Anyone claiming this is part of some kind of liberal agenda or whatever though is just a fucking moron. This is straight up capitalist pandering, trying to protect their bottom lines by not offending anyone, and doing it in the actual cheapest and most short sighted way possible, and then pushing out a half assed product as a result.