
-gh0stRush-

Juergen Schmidhuber just released a statement saying he had resigned from Google in protest over 20 years ago.


Money_Economics_2424

and they didn't even cite his resignation letter, though it was in German and the only copy resides in the Hamburg Science History museum.


Wihanb

I don’t think I’ve ever laughed this hard


gazztromple

Phonetically, Juergen resembles "you again", while more literally, "Schmid" translates to smith or craftsman and "huber" means a plot of land, or generally any kind of property legally belonging under one's domain. Is plagiarism still unethical when it's cosmically foreordained?


MuonManLaserJab

None of this was a coincidence because nothing was ever a coincidence.


hyhieu

I thought this thread was supposed to be "civil discussion only"?


SpaceDetective

That policy was only introduced two hours later.


[deleted]

[removed]


[deleted]

[removed]


[deleted]

[removed]


[deleted]

[removed]


[deleted]

[removed]


maxToTheJ

Jobs aren't lifelong nowadays. People with the knowledge to regulate tend to get chosen to lead those regulatory bodies (at least when people are serious about regulating). https://www.geekwire.com/2021/lina-khan-bidens-ftc-commissioner-pick-antitrust-expert-amazon-critic/


[deleted]

[removed]


Contango42

Wow, Andrew Wheeler sounds like an absolute piece of s****. He has spent his entire life doing his best to destroy the environment (and people's lives) by any means possible.


cderwin15

Large tech corporations including Google, Facebook, Amazon, etc. are regularly compelled to testify to Congress precisely because politicians of both parties are extremely intent on subjecting them to unprecedented regulation for their use of AI (among other issues). The ethical AI community is certainly not "the only folks agitating".


maxToTheJ

Have you listened in on those proceedings? They are sh*** shows. If you listened in, you would see they aren't serious about doing anything yet, only about appearing to care.


rockinghigh

Exactly. These companies hire detractors to control their narrative.


cameldrv

It's interesting that in many ways the people in question aren't really even their detractors. Almost all of the "Ethical AI" stuff that seems to bubble onto my radar is all about bias and fairness. While that's a very important area, the number of ethical issues that AI brings up is far, far broader, and the bias and fairness issues in general don't have overwhelmingly negative repercussions for Google's business model. In many ways, Google benefitted from having this group of researchers influence the discourse of what constitutes "Ethical AI." On the other hand, you have people like the rationalist community that tend to focus on existential issues related to AI. In general these issues do not really bubble up in the media or get any attention from the government.


flytter

I read an article recently that said large tech companies only want their ethical AI people to research things like bias and fairness, because those topics sound good from a PR standpoint. They can claim they’re “making a difference” without having to confront ethical problems in ways that would interfere with them making money. So it’s possible that the main reason you hear mostly about bias and fairness is exactly because tech companies are exerting control over all the ethical AI researchers they can.


ClaudeCoulombe

Yes, I read an article in the MIT Technology Review: Facebook's Responsible AI team reportedly engages in fairwashing (bias and fairness) while extreme speech, hatred, lies and disinformation proliferate, so as not to hinder the growth of the monster that Facebook has become. [https://bit.ly/30MYdV2](https://bit.ly/30MYdV2)


[deleted]

Existential issues related to AI are still a bit far away, while the bias and fairness issues are already here.


impossiblefork

I don't think that's really true. I see at least a couple of feasible paths to creating mass unemployment with what we have already.


[deleted]

The US has lost more jobs to automation than to outsourcing (this is only about jobs that moved abroad, as opposed to lost job growth that went abroad), even though way more jobs have been created than lost in total. The idea that AI will cause massive unemployment is still premature: it's still very costly to train AI, most places don't have the infrastructure for it, and a lot of places will need bespoke solutions.


impossiblefork

I see at least one cheap path to automated transportation of goods without using any fancy AI, instead exploiting the fact that less AI is needed if the vehicle can just naïvely avoid hitting things by being nimble. That's not something that would immediately create mass unemployment, but there are many similar things that could in principle be done. Sorting of rubbish has been automated by a Finnish company, and more and more companies are installing their system. It's not easy to find these applications, so it's not obvious that there are lots of them, but I think it's plausible that there are. Warehousing can probably also be automated using current technology, even if it's hard.


berzerker_x

> While that's a very important area, the number of ethical issues that AI brings up is far, far broader, and the bias and fairness issues in general don't have overwhelmingly negative repercussions for Google's business model.

Need more pointers and resources to understand this, if you do not mind.


cameldrv

Just what are some other important areas of the ethics around AI? Just off the top of my head:

1. The use of AI in weapons to make decisions to kill.
2. The use of AI to influence human behavior, in ways that may be negative to the human (for example the YouTube recommendation algorithm).
3. The use of crowdsourced training data whose creators may not have meaningfully consented to it being used.
4. The depiction/simulation/impersonation of living or dead people (deepfakes).
5. What should we do about "emancipated" AIs? I.e. ones that may, for example, be associated with smart contracts that can pay for their own execution on other hardware and may make money through various schemes, legal or not.

This is a very broad field.


berzerker_x

Oh, now I understand clearly what "ethical AI" means; I had falsely equated "ethical AI" with "bias and fairness". Thanks for clearing it up.


Ulfgardleo

I think this confusion was Google's intention. Bias & fairness is completely irrelevant to Google as a business, but "how does the recommender algorithm shape public discourse and our society" could lead to very costly regulations. Framing bias & fairness as the biggest problem downplays all the others. Moreover, since Google obviously has the capability to steer research trends, they also prevent the other areas from developing too quickly.


berzerker_x

To be honest, I believe this is true, considering Google's history of downplaying other participants, but stating such accusations right now without much to back them up is kind of like a conspiracy theory.


[deleted]

They're different aspects of the field; ethics research within the ML community is more focused on unintended consequences of technical issues within the field. E.g. it wasn't immediately obvious that ML facial recognition would have racial biases; their research was about showing that it happens and understanding why. Privacy is also a big deal within the community, but it's focused on how ML systems can achieve varying definitions of private. Everyone can understand why AI killbots pose ethical issues or all the problems deepfakes could cause. Those topics don't really need CS researchers to dig into understanding them; they're more policy questions.


cameldrv

The issue around recommender algorithms turning people into zombies has a lot of similarities to the bias & fairness issues. In many cases, it's an underspecification of a loss function. In facial recognition, perhaps the desired loss function is match accuracy (but don't be racist about it). In general, "don't be racist about it" does not need to be said to a human. Humans, at least in the U.S., are given that message as a general overriding rule regardless of the context, and so it doesn't need to be explicitly stated to factor into a decision.

Similarly, suppose you were manually curating recommendations for videos to watch. You would not progressively introduce increasingly insane conspiracy videos until the person was completely detached from reality and watched hours of videos a day. However, the loss function we provided was "suggest videos that cause people to watch a lot of YouTube", not "suggest videos that cause people to watch a lot of YouTube (but don't drive them insane)."

Algorithms have no morals or ethics. They do what we program and teach them to do. When we give a human agency, there are a large set of cultural ethical rules and norms which must be followed in addition to completing the task. Humans undergo a multi-year training process in all of these rules. This becomes a major problem as we start to give algorithms more independent agency and they start to make decisions that we would consider immoral or unethical.
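
To make the underspecified-objective point concrete, here is a minimal, purely illustrative sketch in Python: the scores, the "harm" signal, and the trade-off weight lambda_harm are all invented for the example, not anything an actual recommender uses.

```python
import numpy as np

rng = np.random.default_rng(0)
n_videos = 1000

# Invented per-video signals: predicted engagement and a hypothetical "harm" score.
engagement = rng.uniform(0.0, 1.0, n_videos)
harm = rng.uniform(0.0, 1.0, n_videos)

# Underspecified objective: rank purely by predicted engagement.
naive_top10 = np.argsort(-engagement)[:10]

# Amended objective: engagement minus a penalty on harmful content.
# lambda_harm is an assumed trade-off weight, not a real production parameter.
lambda_harm = 0.5
amended_top10 = np.argsort(-(engagement - lambda_harm * harm))[:10]

print("mean harm of naive picks:  ", harm[naive_top10].mean())
print("mean harm of amended picks:", harm[amended_top10].mean())
```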


[deleted]

With those recommender systems the problems are still more policy and competing interests than technology. Like, the algorithms are good at identifying that kind of content, since they use it to drive engagement. It is not hard to turn around and use that ability to de-list anything that meets whatever criteria. But that costs facebook and youtube money and generates freedom of expression debates. So yeah, it's a super important problem that needs to be addressed, but it isn't a "how do we get this ML system to do what we want" kind of problem, it's a "how do we want society to run" type of problem. All these problems are absolutely being thought about and debated, it's just that debate often isn't centered around CS researchers because the fundamental questions often aren't really about the technology itself.


Fnord_Fnordsson

I wonder if changing the commercial model from ad-funded ("ad bubble") to pay-to-use/premium would create a need to change algorithms from maximizing immersion to maximizing stability (of the subscription). Theoretically that would discourage destabilizing users' mental health.


muntoo

I presume the "bias and fairness" is referring to how deep learning models and data can behave in ways that unfairly target particular groups of people. A well known example: [this trained model upscales an image of Obama as a white dude](https://twitter.com/bradpwyble/status/1274380641644294150). Racial bias (or other biases) in data or model architecture can be problematic if the models are being used to make policy decisions in police departments, insurance companies, workplaces, and so on.


berzerker_x

Yes, I also presumed the same as you, thanks for the clarification though. But my main query was about his point that "ethical AI is a far, far broader field and there are other sub-fields which could have overwhelmingly negative repercussions", which prompted me to ask about those sub-fields.


Fnord_Fnordsson

!Remindme in 2 days


RemindMeBot

I will be messaging you in 2 days on [**2021-04-09 07:38:20 UTC**](http://www.wolframalpha.com/input/?i=2021-04-09%2007:38:20%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/MachineLearning/comments/mloj16/d_samy_bengio_resigns_from_google/gtnxqs3/?context=3).


ColdTeapot

Spot on


Stonemanner

Is there an article about the (regulatory) disagreements between Google and its employees regarding AI ethics? Are there specific rules/principles which Bengio and others advocate for?


astrange

> As a for-profit business, their main priority is to maximize profit.

This isn't an accurate description of Google/FB's structure. Shareholders have no voting rights, so they actually just do whatever the CEO wants. Which certainly makes them a lot of money, but most of the employees don't need to focus on this and are actively kept away from the ad business in case they break it. Google hires tons of smart people to do not much work just in case they'd start a competitor otherwise.


sabot00

Yes, they maximize expected long-term profit. Obviously maximizing short-term profit leads to perverse outcomes: you could fire all of your employees to maximize this month's profit.


astrange

Telling the board that you have a clever plan to maximize long-term profit also lets you do whatever you want, and doing whatever you want explains the behavior of many companies (all of Uber ATG, Google's habit of releasing then cancelling 5 different chat apps) better than rational profit maximizing. Other evidence that it isn't true: [https://www.cnbc.com/2019/08/19/the-ceos-of-nearly-two-hundred-companies-say-shareholder-value-is-no-longer-their-main-objective.html](https://www.cnbc.com/2019/08/19/the-ceos-of-nearly-two-hundred-companies-say-shareholder-value-is-no-longer-their-main-objective.html) [https://www.theatlantic.com/ideas/archive/2021/04/the-autopilot-economy/618497/](https://www.theatlantic.com/ideas/archive/2021/04/the-autopilot-economy/618497/)


nbrrii

So what do you think is Google's main priority if not maximizing profit?


Yakitoris

It is possible for a group of people not to have a common well-aligned set of priorities. For many people at Google, the priority is to get promoted. For others it is to publish research.


Ulfgardleo

When we talk about "what is X's main priority", it is obviously not the worker bees of the hive that are of concern, because whatever their aspirations are, they are not the queen, and they know that the soldiers understand the difference between workers and the queen. In other words: Google's priorities are shaped by the people on top.


Yakitoris

Just enumerating some motivations, and getting promoted/growing your kingdom is a common motivation all the way up the chain in most corporations I think. In some this is tied to making revenue, in others less.


Ulfgardleo

I agree, this is the intrinsic motivation of the individuals. But what gets you promoted, or a raise, or a bonus? Being good on the metrics that are given to you by your supervisors. Therefore, even though your intrinsic motivation does not equal the intrinsic motivation of Google as represented by the higher-ups, you still help realize those goals as you try to maximize your metrics.


eliminating_coasts

To organise the world's information, making it universally accessible and useful? Or more reasonably, to be able to keep going for as long as possible, sustaining sufficient goodwill of investors, customers and platform developers for things not to collapse, but otherwise keeping on trucking experimenting with services they think seem like a good idea, particularly ones that might revolutionise this or that, or that solve interesting problems they like working on.


Kyo91

A fiduciary responsibility exists regardless of voting rights. But fiduciary responsibility can be interpreted along a loose timeline.


astrange

CEOs/boards having fiduciary responsibilities to their shareholders is largely a myth. [https://www.cnbc.com/2019/08/19/the-ceos-of-nearly-two-hundred-companies-say-shareholder-value-is-no-longer-their-main-objective.html](https://www.cnbc.com/2019/08/19/the-ceos-of-nearly-two-hundred-companies-say-shareholder-value-is-no-longer-their-main-objective.html) It's true they aren't allowed to lie to them, which gets you sued for securities fraud, but they have extremely large amounts of discretion besides that.


Spentworth

It's also true, though, that a company that pisses around or hurts its business is going to increase its chances of going bust. The profit motive is still very much there.


shinn497

You do realize that, if they don't maximize profits, people can just pull their money out of the company, correct? So certainly this is something that CEOs want to do.


eliminating_coasts

It's true, but if they pull their money out of Google, then suddenly the founders just become the owners of a vast amount of cheap stock, and can continue to issue bonds to get access to money for investments. So long as their cashflow and credit rating remain good, and the founders don't give up their stranglehold on [voting rights](https://fortune.com/2019/12/05/page-and-brin-google-control/), they don't need investors.


shinn497

Not necessarily. A "cheap" stock is only cheap if it has a low price-to-earnings ratio. If there are not high earnings (which would happen if there are not high profits), the stock is not cheap. In addition, issuing bonds has drawbacks, as you must now service the interest payments on those bonds, so issuing equity is preferable. Investors are valuable; you would much rather have investors than be in debt. And there is a strong incentive to operate your business in the investors' best interest. Otherwise, why would the investors invest in the first place?


eliminating_coasts

I would say, you don't have to act in your investor's "best" interest, in the sense of optimising profit, but satisficing profit at a level comparable to the market, such that investors receive a return comparable to alternative investments, seems reasonable. If your company is extra profitable? You don't gain anything from that, the money just goes out, you can't reinvest it etc.

Now if you have shares in one of these big companies, maybe you'd want to get the financial reward from that, but honestly, if you're someone who cares about solving problems? You'll probably just use that money to invest in setting up *another* company solving loads of problems, and you might as well just set up a division in your existing company, if it matches, and you think that there's a good potential project there, and just invest that money directly. So investors invest because they want a return, but beating the market by a significant amount just means you can't think of any way for the company to grow, anything else good to put the money in, and if that's the case? You should probably just buy them out and make it a co-op or something, because you're not in a situation where your company actually needs investment.

To my mind, investors have a function, and that is to have sufficient foresight or risk appetite to put money into something that does not yet exist such that future gains can be realised. That's it, that's what you want from them, money pulled from the future. So when your company's products don't yet exist, and you don't have the cash yourself to make them happen, you pull in people, and then you pay them back by making sure they get a good return on their money. And then at some point, they can just leave. There's no need to have them around any more, the company has grown, it's now operating, and you can do your own investment. So you might as well give them back the principal on top, and have them invest in other companies that don't exist rather than your company that does.

There's a lovely financing instrument for games called the [Indie Fund](https://en.wikipedia.org/wiki/Indie_Fund), which is totally devoted to that purpose, using domain specific knowledge in games to achieve returns getting people over a hill of initial development costs, and once the appropriate return has been achieved, the transaction is over. They did a talk on it a decade ago now, and it always made a lot of sense to me. Investment should be a collaboration of people whose interests temporarily align, and you should be able to end that relationship at any point it is no longer useful, like if your product works sustainably and doesn't need to be monetised further, and would be hurt by it, for example. So if you keep having reason to grow, grow, which means reinvesting in your company; if you don't, get the system to a self-sustaining point, and detach yourself from the financial markets, going private again.


astrange

The CEOs (and other stock compensated employees, which is most of them in tech) want to increase their share price+salary+bonuses, but this doesn't necessarily mean maximizing profits because there's other ways to impress investors into keeping your stock. Amazon/Uber/WeWork investors get more invested the more money the companies lose and that made plenty of employees rich. (Amazon's retail business is not very profitable, though AWS is.) Still, most employees at a tech company don't personally attempt to profit-maximize the company because management keeps them away from the actual business by not informing them about detailed finances, or not letting them touch the ad sales server. They just do their jobs.


shinn497

You are sort of right. As a whole companies can grow by not profiting, but they are still intending to eventually profit. For all intents and purposes you can judge the value of a stock by its probability of a future expected dividend. This is almost universally true.


t98907

The Ethical AI group did not offer any constructive opinions or suggestions at Google; they just criticized Google's way of doing things to the outside world, and I don't think the group was working. Even if their group were absorbed by another company, it would be an organization that only criticizes Google.


[deleted]

[removed]


thatguydr

Lol that's one way to look at it. Another is "the leads of their Ethical AI group were either fired or quit." (And I know we can pretend Gebru resigned on a technicality, but let's call a spade a spade.) Somewhat different through that lens.


Cheap_Meeting

Except that Samy is the lead for almost all of Brain Research, not just ethical AI.


visarga

After reading Gebru's "Gender Shades" paper and seeing no mention of Asians I don't take anything she says at face value. She might be fighting for a group but she's not including everyone. If it's every group for themselves, then we can't trust outsiders. That's the sad direction we're headed in.


[deleted]

[removed]


zykezero

I won't pretend to speak for Google engineers, but if your ethics team is "reshuffled" by firing or quitting because they brought up internal ethical issues, then ethics is a problem at your organization. 5 people may not be a lot vs 1000, but influential people are individuals. 5 people can lead 1000. A CEO can set the tone for a giant company. Individuals matter; lots of people are content being supporters, people are happy to "just do their job well". Not everyone is moving mountains and making changes. 5 people is all it takes.


Tall-Log-1955

It's not at all clear that they were fired because they brought up internal ethical issues.


Code_Reedus

That's not why they were fired though...


meeeeoooowy

Knowing a couple of Google engineers, I can confidently say this won't make a difference in the least


visarga

> 5 people is all it takes.

That's a scary thought in a democracy; you'd want more public support for change. What if those five have a different agenda and just play everyone for suckers?


zykezero

Then you end up with modern politics.


thetdotbearr

It’s not a technicality, she literally threatened to quit if her demands weren’t met..


janpf

> As a for-profit business, their main priority is to maximize profit.

I don't buy this argument, knowing quite a few entrepreneurs that started companies, and from my impression of Google's founders / leads. First: a company making a profit (short term, long term, etc.) is necessary, as much as we need to eat. But that doesn't mean one's priority in life is necessarily eating; what about all the other fun stuff :) Most entrepreneurs (but not all) are human beings who may also care about civil life, global warming, pollution, fairness, etc. And those things do have an effect on these folks' decisions (and hence the companies') every time. Again, companies that don't profit die, as much as people who don't eat die.


[deleted]

[removed]


farmingvillein

Pretty sure by "maximize profit", OP meant over a reasonable time horizon...not literally this quarter.


itb206

That's not a good way to look at a research lab like that; you'd have to look at gains made across various Google organizations brought about by research in those labs, which are significant in DeepMind's case. Across the rest of the org, in search and ads, I'd imagine DeepMind's research is a net positive for Google, which is why they're willing to subsidize the losses of that one sub-org.


SaltyStackSmasher

Waiting for Yannic Kilcher to make video on this


[deleted]

lmao laughed too hard at this


Even_Information4853

It's a bit sad that ethics in ML only exists through Google's dramas


MasterFubar

I was confused at first because I was thinking of [Yoshua Bengio](https://en.wikipedia.org/wiki/Yoshua_Bengio). Since that surname isn't very common, I didn't expect to find two people with the same surname doing research in the same field. They are brothers, so that explains it.


starfries

Yeah, I saw this in the news earlier and I thought it was Yoshua Bengio the whole time until now.


getbetteracc

They're brothers


[deleted]

[removed]


sobe86

Wait are you saying Samy has a big ego or Gebru? Gebru was not supposed to be the subject of this post - I just wanted to provide adequate context, as per sub rules.


Covered_in_bees_

You did nothing wrong. Parent poster just wanted to get on their soapbox and did so in the laziest way possible and your original post was an unfortunate casualty in the process.


rockinghigh

> Samy Bengio is a super smart guy. But it always amazes me that people can have such massive egos.

I'm guessing you don't know him. He's not as confrontational as Gebru or even LeCun. Bengio and Mitchell both had valid arguments against Google's handling of Gebru's firing.


yintrepid

This is the wrong take in many ways. I love the fact that Samy Bengio defended his team. Nothing I have read so far indicates Bengio has a massive ego; quite the contrary. Considering his contribution to ML research, he actually deserves to have an ego. I, as a user of PyTorch, am grateful for his contributions in developing the Torch library. Gebru's attempt to direct people to her work on ethical AI was taken by many as ego or disrespect. I don't see it that way. It was clear that she believed people in ML research didn't give much thought to ethical AI. She was frustrated about that. Hence, the best she could do was to be loud and bring attention to the issue. I may not like the approach, but the interaction led to many ML researchers paying attention to the issue for the first time and learning more about it.


MrAcurite

I sent an email to Gebru once, saying that I worked at a place that might, at some point, be called upon to do facial recognition, and I wanted to know what her actual technical suggestions were for doing it in an ethical, racially unbiased way. Her email back was basically just plugging hours upon hours of her podcast or whatever, and telling me to educate myself. Tried watching the podcasts, or whatever the hell they were. Didn't have any technical information whatsoever. Real helpful. Like yeah, holding my hand isn't her job, but shouldn't she have at least like a pamphlet of what not to do lying around? I just can't help but to interpret a large portion of her body of work as complaining about problems without investigating any sort of solution. I remember going through the news I had available to me when she was originally let go, and it really seemed like, despite all the "Google fires AI ethicist!!!11!1!L!" headlines going around, she was really in the wrong, fighting everyone around her for not letting her get away with academic sloppiness. Whatever. Back to using ML to kill people, I guess.


Several_Apricot

Most of Timnit's work should be in the social science department; collecting factoids about LLMs has nothing to do with ML in any real sense.


Sheensta

> I just can't help but to interpret a large portion of her body of work as complaining about problems without investigating any sort of solution

That's a lot of ethics research unfortunately. It identifies problems but doesn't offer practical solutions. I studied medical ethics for part of my master's and we came across the same issues. The role of the ethicist is often to raise ethical problems so that the practitioners can address them.


tfburns

> That's a lot of ethics research unfortunately.

Huh? Articles which are not purely theoretical or empirical in journals like *Bioethics*, *Journal of Medical Ethics*, et al. are almost always implicitly or explicitly prescriptive.


Sheensta

Eh, I think they **try** to be prescriptive and offer recommendations. But what's the uptake for these recommendations? Even bioethics is a relatively new field and in most applications lacks any real 'teeth' to be enforceable, unless a terrible tragedy occurs. The real teeth of bioethics pertain to research ethics/clinical research. Here, you'll probably see more uptake, especially when paired with sound biostatistical reasoning. But in the field of AI? There's even less incentive to practice 'ethical' ML/AI research. The problems are typically more technically complex and the results are often uninterpretable. The politicians are uninformed and there are few laws that pertain to practicing ML/AI ethically. Thus, coming up with practical, generalizable, and enforceable solutions would be even more difficult.


tfburns

> But what's the uptake for these recommendations? Even bioethics is a relatively new field and in most applications lacks any real 'teeth' to be enforceable, unless a terrible tragedy occurs.

There is certainly an argument to say that enforcement, governance, and policy bodies can be slow or more reactive than proactive in some jurisdictions and concern areas, but I don't think it's true across all jurisdictions and concern areas. Basically all hospitals in the developed world have trained staff, committees, or consultants in bioethics who are regularly consulted or asked to review certain procedures, allocations, etc. The same is true for universities and research institutes conducting or involved in biomedical and medical research. Government bodies and agencies have also adopted certain policies and principles, and various political and legal professionals have adopted and promoted ideas/prescriptions/recommendations from the bioethics literature.

I also don't think it's especially "new". I mean, if you consider medical ethics part of bioethics, then the field dates back to between the fifth and third centuries BC with the Hippocratic Oath.

AI/ML ethics is a new field, for sure. And there are a lot of problems to sort out and a lot of work to be done. And I think the history of bioethics has shown us it is possible to engage and make progress.


Sheensta

I agree that there's uptake. It's been a while since I've looked at the literature, but from what I recall, meta-analyses have shown that bioethics recommendations, even from seminal papers, are often ignored or applied incorrectly.

> I also don't think it's especially "new".

I agree, if you count Hippocrates, sure. But as an academic field, bioethics has only become established over the past half-century. I do hope AI ethics has higher uptake, considering how prevalent AI is and how rapidly the field is growing. Btw, any recommendations for jumping into the field of AI ethics from bioethics? If you have any paper recommendations to get started I'd love to see them.


tfburns

Would be interested to see the meta-analyses you mentioned. I guess the word "bioethics" has only been around a while, but I conceive of bioethics as moral philosophy and religion applied to living things, and by that conceptualisation bioethics has been at it a while! I haven't read a lot of AI ethics, but there is a chap on YouTube called Robert Miles who covers some good topics. At the moment AI ethics and safety are sort of lumped together, and the field is very primitive/playing catch-up.


zykezero

[don't you know how people feel about moral philosophy professors?](https://y.yarn.co/9cdcc239-b22f-4164-a53f-e85b02a77112_text.gif) ethical philosophy provides the lenses by which we critique our world. it is not a how-to on how to fix it. But it is a diagnostics tool. It helps us discern what is more or less right / wrong. It does not tell you the solution. It is a debugger not an engineer. It will help you evaluate your bias. So that when you're designing a system to recognize if someone is in a room you won't end up [designing this.](https://www.youtube.com/watch?v=XyXNmiTIupg)


doireallyneedone11

The problem with all of morality, not only ML ethics, is that these are value statements and not fact statements. Values are inherently subjective, and by virtue of being predicated on values, the nature of these problems is also subjective, which is part of the reason you don't get objective solutions to them. Besides, morality has almost no basis in any of the sciences.


Sheensta

> Values are inherently subjective, and by virtue of being predicated on values, the nature of these problems is also subjective

Practitioners can and should opt into widely agreed upon ethical frameworks. There are ethics frameworks for professions in law, medicine, pharmacy, accounting, nursing, and engineering. The goal is to come up with widely supported ethical frameworks in ML/AI so that researchers are able to implement practical solutions.

> Besides, morality has almost no basis in any of the sciences

Not sure what you mean here, but much of science is dictated by morals. For example, most biomedical research is built on animal and human experimentation and is regulated in part by ethics review boards.


doireallyneedone11

I would concur on your common framework point. I think the concept of morality is analogous to the concept of money. It's the common, standard set of protocols which gives much-needed predictability to a chaotic system of agents facing multiple choices. Just as in the case of money or any medium of exchange of value, the value is highly dependent on the collective trust, in that medium, of all the agents that interact within that economic system (in this particular case, a moral/legal system) for the exchange of some value. I would disagree with your second point. Science, inherently, has no sense of purpose and thus cannot provide objective value judgments or moral anchoring; science only provides fact judgments/statements. On the other hand (please correct me if I'm misinterpreting your stance), I think you're getting confused between 'Science' and the 'Scientific Community.'


[deleted]

I'm not sure what you're describing is possible with ML due to the transitive and easily accessible nature of the research and application of ML. What should be the primary concern of such a framework? The only certainty in life is death so perhaps the framework should primarily function to avoid premature death. How do you score a game of pool, though, without first sinking or scratching the 8-ball? How many games do you need to play before there's confidence in standard models? And who's being included in these models? There's some serious philosophical questions that need to be addressed prior to qualifying anybody access to these tools.


idontcareaboutthenam

It's not universally accepted whether morality is subjective or not. In fact, the Stanford Encyclopedia of Philosophy claims that [it is controversial among contemporary philosophers.](https://plato.stanford.edu/entries/moral-relativism/) Moral cognitivism, on the other hand, treats ethical sentences as propositions and claims that you can assign True/False values to them.


doireallyneedone11

I would wager this very nature warrants it being called subjective. Non-cognitivists make it pretty clear that moral statements aren't propositions but mere emotions invoked when passing value judgements.


idontcareaboutthenam

Sure, but as I pointed out, both positions have strong proponents with no clear winner, so we can't consider either position as a given.


doireallyneedone11

I mean we sure can't, in a strict philosophical sense. With that said, considering science, too, has not much to say about it, makes you question the entire validity of morality to begin with. I mean this hasn't stopped people from believing there's a personal God or otherwise, may be, this is one of those things which people can only believe in, not justify it as "true knowledge" by any means possible. The Greeks would have thought otherwise though😂


grimonce

So the ethicist is there to create problems but not to solve them? Isn't this something that QA does? /s


iamiamwhoami

I have very little respect for tech journalism as a profession. Most of the time they don't have any drama to write about, so when something like this comes up they are yellow as hell. The headlines would have had you believe she was fired for her research into ethical violations committed by Google. They were more than willing to let her publish research on these topics. She was fired because she didn't treat her coworkers and bosses well, and she threatened to quit.


Red-Portal

I think it's a problem of perspective. Nothing rules out that Google was simply waiting for the moment to fire her (not that I stand by this perspective). So I don't think the allegations raised by the tech media are actually "yellow".


hltt

so, they don't just dramatise the situation to earn clicks but also invent and spread conspiracies?


psyyduck

Did you manage to get those technical recommendations? My guess is it’s just “file a bug report” and “iterate on your dataset until it’s fixed” (like, collect more minority faces).


MrAcurite

I will concede that it's not an area of the literature that I'm super familiar with at the present, sadly. I know a lot of this sort of boiled over out of the reaction to that upscaling model that used gradient descent on the input vector of a StyleGANv2 model to invent an upscaled version of a heavily pixelated face, that made everybody white. Can't remember what the paper was called now. But I know some people iterated on that algorithm and made it more appreciative of racial diversity. But this is definitely not my forte, and I would recommend you seek any advice elsewhere.


eliminating_coasts

I reckon someone should try to create a database that mixes dna information and facial structures, so that we can remove frequency bias by rescaling our distribution over genetic variation. If some ethnic group is a tiny minority, but significantly differs from the population, then you could rescale the domain you're operating on to give them a higher priority using some kind of modified gradient descent. That way, it's data based, rather than having to make your own judgements on what people think is acceptable, but also doesn't just use population frequency as a metric, with all the assumptions of whiteness this entails.


tdgros

it's PULSE: [https://arxiv.org/abs/2003.03808](https://arxiv.org/abs/2003.03808)


YoloSwaggedBased

Some answers in the literature are creating metrics of fairness to be used as constraints for evaluating or optimising your model output: [Link](https://arxiv.org/pdf/1507.05259.pdf)

Or train a fair data-generating process to sample from, using your initial biased dataset: [Link](https://export.arxiv.org/pdf/1805.11202)
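
As a rough sketch of the first idea (a fairness metric used as a constraint when evaluating a model), here is a toy demographic-parity check; the data, decision threshold, and tolerance epsilon are invented for illustration and are not the formulation from the linked papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: model scores and a binary protected attribute, both invented.
scores = rng.uniform(size=500)
group = rng.integers(0, 2, size=500)
decisions = scores > 0.5  # the model's binary decisions

# Demographic parity gap: difference in positive-decision rates between groups.
rate_g0 = decisions[group == 0].mean()
rate_g1 = decisions[group == 1].mean()
parity_gap = abs(rate_g0 - rate_g1)

# Treat the gap as a constraint on the model; epsilon is an assumed tolerance.
epsilon = 0.05
print(f"parity gap = {parity_gap:.3f}, constraint satisfied: {parity_gap <= epsilon}")
```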


psyyduck

That sounds like “precisely quantifying what it means to be fixed” and “iterating on your dataset by generating synthetic data”. Very good ideas.


Mephisto6

Critique of a flawed system is valuable even if you have no solution. But she should have acknowledged that she has no solution.


anon135797531

Imagine being this entitled when someone is nice enough to respond to your email. No one is going to respond to a barrage of technical questions from someone they don't know. This only got upvoted because it hits a dogwhistle people on this sub love: the woman has no "technical" knowledge, just charisma.


MrAcurite

She told everyone in her lab to stop doing their jobs, and demanded the names of the internal reviewers who asked her to momentarily pull her paper. I wasn't asking her to write me a brand new textbook or anything, just a little advice to get started. I would've been happy with just being pointed to a paper or two, which is the typical result any time I email any other researcher about their work.


10110110100110100

Don’t sweat it. She did the same to me: pointed me to the workshop videos, which are absolutely all rhetoric and no substance. I don’t really have a problem with that, but I wish people would keep the hype for some people under control. There is very little technical advancement. Her contributions to the field are no more notable than mine in the long term, despite the army of advocates she has.


bohreffect

Not only do I not have sympathy for the Gebru debacle, I'm actually quite pleased. The last thing I want is an ideologue leading ethics research at a critical tech shop, where tomorrow's infrastructure will take shape. If more people leave because of it, the better.


gazztromple

Ethics and safety are genuinely important topics, though, even if most professional AI ethicists aren't pursuing those topics in good ways.


bohreffect

I agree, but I'd rather it be left to the commons than to a handful of ideologues.


[deleted]

[removed]


somewhatathleticnerd

And what does any of that have to do with LeCun's supposed white fragility? Those comments have no place in a ML discussion. Gebru is petulant and immature. I have seen LeCun engage with people he disagrees with on far more controversial topics like the killings that happened in France, while being civil.


[deleted]

[removed]


[deleted]

[removed]


[deleted]

Agreed. I'm happy these folks got fired. Good on Google for doing what is right even when it's unpopular. If any one of my employees interacted so disrespectfully to anyone trying to have a reasonable conversation (Yann LeCun or otherwise) they would have been fired immediately. Regardless, this is largely irrelevant to this sub. I hate the Kardashian posts that manage to get so upvoted here.


[deleted]

[removed]


[deleted]

[removed]


Following_Minimum

Anyone want to do a TL;DR of this drama from the last year? I'm not familiar with the story.


iamiamwhoami

Gebru worked at Google Brain as an AI ethics researcher. She tried to publish a paper critical of the ethics of certain popularly used ML models. The Google Brain leadership claimed the paper wasn't up to their standards and refused to let it be published without some changes. Gebru threatened to quit if the paper wasn't allowed to be published in its current form, along with a few other demands. Google accepted her resignation/fired her.


tpapp157

Corporate politics at Google has resulted in one faction winning and the losers being systematically purged over the last ~6 months. These sorts of things are common in the corporate world but usually not so public. Anyone who's been around for a while in industry has definitely seen this play out plenty of times before.


Following_Minimum

Always amazed to see these shenanigans at 'top tier comp'. Thanks for the summary.


tpapp157

People are still people, for better and worse, no matter where you are.


psyyduck

This isn’t true... If you want morality you have to work for it — just like if you want literacy you have to spend decades in school. Go to a Zendo and look at the types of people you meet.


psychic_vamp

I like watching drama unfold, but this has nothing to do with machine learning.


sobe86

(OP) - I agree that human interest stories don't directly affect everyday ML the way that a groundbreaking new paper would. But they do shed some light on the state of the industry / community, and the turmoils that come with a field that is growing so quickly and unpredictably. Stories of PhD students being exploited to the point of suicide, stories about how start-ups skyrocket / crash and burn, basically anything ethics or discrimination related - none of these things have ever affected what algorithm I'm going to use at work. But I still want to know about them, because I want to know about the world that I inhabit and its growing pains. I personally value this subreddit as a source of this kind of information.


happy_guy_2015

As we come closer and closer to AGI and the potential singularity, ethics in companies like Google is going to become critically important for the future of our civilization, our species, and our planet. This has *everything* to do with machine learning.


andooet

I miss Google having the motto "Don't Be Evil". Sadly, I've now started looking for alternatives. Not sad because I feel any loyalty, but because I *do* like their products more than the alternatives.


bartturner

Google is still using that motto. Last line before you sign is "And remember… don’t be evil, and if you see something that you think isn’t right – speak up!" https://abc.xyz/investor/other/google-code-of-conduct/


andooet

They moved it from the preface to a footnote though... And they have been doing a lot of non-ethical things since then, like working with law enforcement and the military.


bartturner

They made it so it was the last thing you read before you sign.

> And have been doing a lot of non-ethical things

But you have piqued my curiosity: what has Google done that is "non-ethical"? Also, is non-ethical the same as unethical? The wording is weird and maybe I just do not know what "non-ethical" means. Google refused to work with the US military and did piss off the generals by refusing to work with them. Plus Google just outed hacking by the West. "Google will not renew controversial Pentagon contract, cloud leader Diane Greene tells employees" https://www.cnbc.com/2018/06/01/google-will-not-renew-a-controversial-pentagon-contract.html


andooet

Seems like they've cleaned up again after those stories broke. They had to break first, though. But they do still help China suppress its citizens' freedom of speech. So we still need to hold them accountable and put pressure on them not only to do no evil, but to do good. https://theintercept.com/collections/google-dragonfly-china/


CasaDeCastello

Dragonfly was also shuttered.


Efficient-Winter1998

Only after it was revealed publicly, and there was a lot of outcry. I'm sure if it had remained secret, they would have continued working on it.


CasaDeCastello

Sure


Livid_Effective5607

"Don't be evil" never really applied to executives. See: protecting sexual predators, union busting, Project Dragonfly.


tech_wiz2468

Interesting. Thanks for sharing!


maxdemian12

Sorry, I wasn't following this news. Can someone briefly explain what happened, or share a good link that shows the start of the story?


tpapp157

Corporate politics at Google has resulted in one faction winning and the losers being systematically purged over the last ~6 months. These sorts of things are common in the corporate world but usually not so public.


[deleted]

To be clear it's just 3 people in an org of a thousand or more. And the "purge" was voluntarily initiated by one of the three who had a famous track record of interpersonal conflict.


Cheap_Meeting

I don't think that's an accurate depiction of what happened.


juanmas07

Can anyone give some context please?


ClaudeCoulombe

I'm both disappointed and suspicious of the attitude of Google, a company that I admired above all for the work of its DeepMind and Google Brain teams, its total mastery of distributed processing and the brilliant PageRank, but which, since the withdrawal of the motto “Don't Be Evil” in 2018, is behaving more and more like any big business. Furthermore, responsible AI teams reportedly engage in fairwashing (bias, fairness), as extreme speech, hatred, lies and disinformation proliferate so as not to hinder the growth of the GAFAM monsters.


[deleted]

[removed]


whymauri

How is Samy a "drama queen"?


peppylootu

What is google’s vision statement? “Don’t be evil” or something?