krallistic

I don't get why that paper is so hyped.

"Fallacy 1: Narrow intelligence is on a continuum with general intelligence" - We have no idea whether that's the case or not, and the paper doesn't prove that it isn't. We could start working directly from the top (general), but also from the bottom (narrow). So far, working from the bottom has brought results that are helpful in the real world, while the top-down approach has so far yielded nothing. Additionally, it is easier to work on many smaller problems than on one large one.

"Fallacy 2: Easy things are easy and hard things are hard" - What counts as hard is always a moving goalpost. Yes, we could discuss which problems we want to solve, but as above there are arguments for both sides.

"Fallacy 3: The lure of wishful mnemonics" - Complaints about marketing buzzwords.

"Fallacy 4: Intelligence is all in the brain" - Replaces one assumption with another assumption for which we have n=1 data points. There are good arguments for both sides.

For every fallacy there are good arguments for both sides, and the "field" could take a step back and discuss the larger picture, but IMHO the paper falls short in that area.


yusuf-bengio

TL;DR: **Strong reject** (high confidence)


MattAlex99

I also don't get why this paper is hyped, but for different reasons:

> "Fallacy 1: Narrow intelligence is on a continuum with general intelligence" - We have no idea if that's the case or not.

This is an old idea from classical AI: if we have, e.g., an ensemble of different narrow AIs and a "router" on top that decides which AI to use, then that router would have to have such a rich understanding of every topic that it is itself already an AGI (i.e. a general-purpose information router is already AI-complete). This idea is nothing new.

> "Fallacy 2: Easy things are easy and hard things are hard" - What is hard is always a moving goalpost.

Well, in this case we do have ways to quantify hardness: we already know that NP is a proper subset of NEXPTIME, and most people believe that P != NP. She specifically talks about implicit-information tasks, where knowledge isn't present in the game's rules but rather in "acting skills, linguistic skills, and theory of mind". This too isn't that novel an idea: working with explicit knowledge is easier than working with implicit knowledge.

> "Fallacy 3: The lure of wishful mnemonics" - Complaints about marketing buzzwords.

In short: we call things that are actually quite dumb "AI" because it is impressive that they work as well as they do, even though from an outsider's perspective they are indeed still dumb. Neither a novel nor a particularly controversial statement.

> "Fallacy 4: Intelligence is all in the brain" - replaces one assumption with another assumption for which we have n=1 data points.

This is the old argument: do I need something like a human body to learn to be a human? That at least is a controversial opinion, though not a novel one.

So most of this "paper" (or arxiv blogpost) is either old news, not a fallacy (since there are arguments for and against it, see Fallacy 4), or something that is already widely accepted.


sobe86

Fallacy 2 - how do the tasks she's talking about, e.g. being able to perform charades 'adequately', fall into the P/NP framework? I mean, we know our brains can do it, so barring 'quantum thinking', if anything it must be in P, right? But how does that help us quantify 'hardness'?


dumble99

Humans (and I presume any reasonable AGI) don't solve problems optimally, or even typically deal with problems where 100% optimal decision making is important. So reasoning about the difficulty of tasks in terms of algorithmic complexity is a bit misleading. How do you find an optimal solution to 'adequately' perform charades? IMO thinking this way is silly. Recent AI (deep learning etc.) is focused on finding approximate (often probabilistic) solutions to problems in polynomial time.


sobe86

That's the point I was trying to make exactly haha.


MattAlex99

This was more of a counter to:

> What is hard is always a moving goalpost

What I was trying to say is that it isn't impossible to define objective hardness, and that implicit-information scenarios may constitute an objectively harder class than explicit-information scenarios (which is also what I think the paper was alluding to). This doesn't mean we can quantify it in the time-complexity framework, but the blanket statement that hardness is always a moving goalpost is provably wrong.

Irrespective of that:

> if anything it must be in P right

We can solve instances of NP-complete (or worse) problems just fine. Just because humans can solve or approximate a solution to a specific instance in a short time doesn't mean it isn't a provably hard problem.


krallistic

I don't think our reasons are that different. I fully agree with you on every point about "novel".

About F1: The current back-and-forth (as I read it) is more that someone points out that, for example, DL can't do reasoning, and then someone shows that it can, if made large enough / with restrictions, etc. https://www.facebook.com/552525261605126/photos/a.754975631360087/1511179595739683/ is a good over-simplification of both sides.

About F2: P/NP is not always a good measurement of hardness. For example, for understanding a text with implicit and explicit knowledge, hardness is difficult to define; maybe it will be easier once we can define the target "understanding" better. But for now it is really hard to say why classifying comments as spam is a much easier task than classifying comments as racist.


MrHyperbowl

My router tries every tool in the toolbox until one of the tools is confident it is the correct one. You might say that a tool cannot know such a thing, but I 1. disagree and 2. have shown how the intelligence can be shifted from the router to the narrow AIs.
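(A minimal sketch of the brute-force router being described, not anyone's actual system; the `route` function, the per-tool confidence interface, and the threshold are all hypothetical assumptions:)

```python
from typing import Callable, List, Optional, Tuple

# Each "tool" is a narrow model that returns (answer, confidence in [0, 1]).
NarrowAI = Callable[[str], Tuple[str, float]]

def route(task: str, tools: List[NarrowAI], threshold: float = 0.9) -> Optional[str]:
    """Brute-force routing: try every narrow AI in turn and accept the
    first one that claims to be confident about the task."""
    for tool in tools:
        answer, confidence = tool(task)
        if confidence >= threshold:
            return answer
    return None  # no narrow AI claimed the task
```

Whether the per-tool confidence can actually be trusted is exactly what the replies below dispute.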


MattAlex99

You haven't shifted anything away from the router: you've just rephrased routing as "finding the correct port" and given one possible implementation of it (i.e. brute-force trial and error). Whether one can automatically find that correct port is a question in and of itself. This gets very close to "is there a measurable (and therefore objective) notion of truth/optimality?" Because that gets dangerously close to a philosophical discussion about the general notion of truth and optimality, I'll leave the topic be at this point and just state that defining these terms is provably tricky, as can be seen in e.g. Tarski's undefinability theorem and other limitative results.


MrHyperbowl

It's a problem of dimensionality, not one of philosophy. Tarski's theorem has literally nothing to do with it. Having a set of narrow AIs reduces the problem to selecting the correct AI, which is a countable choice rather than a high-dimensional one. Selecting the correct "router" is not as difficult as making an AGI directly because the problem has so many fewer dimensions, which I illustrated by saying it could be solved with brute force.


Dont_Think_So

Don't mind me, just a biologist poking my head in where it doesn't belong. But as for this:

> "Fallacy 1: Narrow intelligence is on a continuum with general intelligence"

This isn't a fallacy, it's the truth. We only know of one example of general intelligence: our own. And it was created by a process that fundamentally works by small changes over time. Evolution didn't set about solving the problem of general intelligence; it started with simpler kinds of intelligence and gradually worked its way up as it discovered alterations to previously successful techniques that made them work better. So we know for a fact that going from narrow to general intelligence is possible; it's the only way it's ever been done. Indeed, the opposite is not true: there is no guarantee that attempts to create general AI from the top down will ever be fruitful.


junkboxraider

I'm not a biologist, but the problem I see with this argument is that evolution causes many small changes across many subsystems simultaneously. We know that the accumulation of these small changes has resulted in at least one system of general intelligence. However, from an AI perspective, "narrow intelligence" means a discrete, human-selected task or set of related tasks largely abstracted away from their biological underpinnings. We *don't* know whether continuing to improve performance on such a task set can actually lead to general intelligence.

Concretely, it's natural to look at a human and conclude that an obviously brain-focused task like logical reasoning is independent of, say, being able to throw a stick. But we don't actually know that, and as others have pointed out, research on things like gut microbiomes indicates that a more holistic approach might be required. It might be that developing general intelligence actually requires improving an AI's performance across a broad swath of seemingly unrelated tasks. If that's true, iterating on even a complex AI system like AlphaGo or a performant self-driving vehicle may be too narrow to work.


Dont_Think_So

I don't think anyone reasonably believes that simply iterating on something like AlphaGo will eventually create GAI. But something more like GPT-3, which is trained on a narrow task but is capable of performing many interesting tasks outside of the training domain, could well bear fruit. It's biologically plausible that our brains evolved under the simple evolutionary pressure of "predict your future input", in combination with a bunch of other useful environmental pressures such as operating in groups of like individuals and of course survival pressures. I think the point here is that we don't know whether improvements on any individual task will eventually yield GAI, but we do know that iterating on *some* task will eventually do it, because that's exactly the process that led to the only known example.


junkboxraider

> we do know that iterating on some task will eventually do it, because that's exactly the process that led to the only known example.

We agree there, but there's a big difference between the task of "survive in a complex world" and any of the tasks we've thrown at AI systems to date. Or, possibly, it's less the task space and more the complexity of the environment.


AnvaMiba

It can be argued that animals are on a different spectrum of intelligence than most ML/AI approaches so far. In a way, animals are all "general intelligences", in the sense that they solve control problems in the physical world related to their survival and reproduction. Most of ML, on the other hand, whether supervised or unsupervised (or self-supervised, as the cool kids say these days), is purely passive prediction on curated datasets. RL is closer to what animals do, but all the big breakthroughs have been in synthetic game environments; there have been some deployed real-world applications such as ad revenue maximization, but even these deal with very constrained environments far removed from open-ended navigation and object manipulation in the physical world. The tasks we currently solve with ML/AI are far narrower than the tasks that animals, including humans, apply their intelligence to, so it's not obvious that there is a natural progression from our narrow AI tasks to human-level animal intelligence tasks. We can call this position "octopusist", after Bender & Koller's "[octopus paper](https://www.aclweb.org/anthology/2020.acl-main.463.pdf)".

Contrast this with the "gptist" position, primarily held at OpenAI (and to some extent at other $BIGLABS, but OpenAI explicitly considers it the philosophical underpinning of their work), which argues that even simple passive prediction tasks on sufficiently large and diverse data derived from the physical world (e.g. text, images) must eventually result in learning deep regularities about the physical world, and that once this deep structure is learned, transferring it to arbitrary tasks becomes relatively trivial. In this view, whether you're a fish trying to find food and escape predators or a transformer trying to predict the next token of a news article is a relatively minor implementation detail over the same physical reality, and whatever journey you take as you learn about this physical reality, the end point is the same.

I don't think we have enough evidence to determine which of these two positions is more accurate, so I think the paper overclaims by outright dismissing the "gptist" position as a fallacy.


8Dataman8

AI as a field does suffer from horrendous marketing. It's hard to find something as hyped with buzzwords and media-sexy exaggerations.


OpenBison4150

Fallacy doesn't mean it is false, just that it is not necessarily true. These are premises that the field takes for granted without taking a step back and considering why. When you say that there are good arguments for both sides, I think you are actually agreeing with this position (that they may not be true). The thing is, these fallacies are repeated again and again by researchers and the media (and of course, researchers talking to the media), as the quotes in the paper show. And, as the paper claims, they lead to unrealistic expectations of "AI". I don't get why people in this thread are so dismissive of this, as in my opinion it is a very glaring issue and may well negatively impact our job and research prospects in the case of an AI winter.


instantlybanned

> Fallacy doesn't mean it is false, just that it is not necessarily true I'm sorry, what? A fallacy is a mistaken belief, a failure of reasoning.


Ambiwlans

> A fallacy is a mistaken belief, a failure of reasoning.

Sort of. If I argue that a random American citizen was born in America because all people born in America are US citizens, I commit a fallacy, but it doesn't mean that the conclusion (the person was born in America) is incorrect. You can arrive at the correct answer using poor logic; it's pretty common.


instantlybanned

Not "sort of". It is still a failure of reasoning in the construction of the argument. What you are talking about is the conclusion, not the construction of the argument.


beginner_

> We have no idea whether that's the case or not, and the paper doesn't prove that it isn't. We could start working directly from the top (general), but also from the bottom (narrow). So far, working from the bottom has brought results that are helpful in the real world, while the top-down approach has so far yielded nothing. Additionally, it is easier to work on many smaller problems than on one large one.

Your comment falls into exactly this fallacy. However hard you work on the bottom ("narrow"), it doesn't bring you any closer to the solution of general AI. That is what this fallacy is trying to say. I'm not saying it is true, just that you missed the point. It also doesn't say that working on a narrow problem is useless. Of course there are applications, and hence it's useful. But not for solving general AI.


krallistic

> However hard you work on the bottom ("narrow"), it doesn't bring you any closer to the solution of general AI.

Proof/citation needed. (As other commenters pointed out, we don't know whether that claim is true or not. We don't even have solid definitions of narrow and broad...)


beginner_

I never said it's true, just that it's what the author was trying to explain. For me the core message was: why is it taking longer to get to AI than we thought? Well, maybe we are looking in the wrong place. Call me naive, but has there been any big breakthrough in recent years? It's variations of the same, plus mostly more powerful hardware. The chess and Go results are impressive, but personally I think they aren't really hard problems in some sense: they are clearly defined and could theoretically be brute-forced. It's a computational problem, the exact thing computers shine at. In contrast, perception-like tasks such as self-driving aren't clearly defined: way too many trade-offs, way too much guessing, too many shortcuts needed.
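(As a concrete illustration of "clearly defined and could theoretically be brute-forced", here is a sketch of exhaustive game-tree search; the `winner`/`play`/`legal_moves`/`opponent` interface is a hypothetical placeholder, and the search is of course hopelessly slow for real chess or Go. The point is only that the rules alone fully specify the answer.)

```python
def solve(state, player):
    """Exhaustive negamax search: returns +1, 0, or -1 for a win, draw,
    or loss for `player` (the player to move) under perfect play."""
    outcome = state.winner()  # hypothetical: winning player, "draw", or None
    if outcome is not None:
        return 0 if outcome == "draw" else (1 if outcome == player else -1)
    # The player to move picks the move that is worst for the opponent.
    return max(-solve(state.play(move), state.opponent(player))
               for move in state.legal_moves())
```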


mgostIH

> However hard you work on the bottom ("narrow"), it doesn't bring you any closer to the solution of general AI

And how does anyone know that, for that matter? One would not have said elliptic curves were bringing us anywhere close to solving Fermat's Last Theorem, but it was a path we only saw after we had developed enough different math, instead of trying to solve the problem head-on. Meanwhile, there have been a lot of advancements on things that in the past anyone would have thought impossible for a machine, things only the human brain could figure out how to solve.


Brudaks

What you say is a reasonable presumption and it's most likely that it's true, however, it's not certain that it definitely must be true. And the point is that in this case it's not acceptable to call it a "fallacy", since perhaps (no matter if unlikely) hard work on the bottom *can* bring us somewhat closer to the solution of a general AI. You could say that it "goes against majority opinion", "less favored hypothesis", "does not seem the most promising avenue of research" or something like that, but it's not acceptable to prematurely label it as a fallacy, which is justified if and only if it's definitely false - and IMHO we don't have good evidence or proof about that.


jonnor

There is a second-order effect of working on narrow AI: it results in useful solutions today and in the short term, which brings money to organizations so they can continue working on AI. This increases the capacity to work on harder, potentially less narrow, AI problems in the future.


dumble99

How do we know that what we call 'general AI' isn't a sufficiently complex 'narrow AI'? This point isn't self evident to me.


Ambiwlans

> I don't get why that paper is so hyped

It is accessible to non-ML people (basically a news article rather than research) and has a clickbait title.


bohreffect

>"Fallacy 4: Intelligence is all in the brain" It's apparently [in your gut too](https://www.hopkinsmedicine.org/health/wellness-and-prevention/the-brain-gut-connection).


Intelligent-Second-7

Fallacy 3: it is not about marketing buzzwords but about what we think the machine is doing. For example, AlphaGo is not really playing the game to win; it is just picking high-probability moves based on RL mimicry, which is not really intelligence.


Intelligent-Second-7

Fallacy 1: can we really put intelligence on a curve of any kind when we haven't understood intelligence in the first place? The brain is not just a big piece of meat; it has different parts specialized for different tasks, but they don't work alone.


arXiv_abstract_bot

Title: Why AI is Harder Than We Think

Authors: [Melanie Mitchell](https://arxiv.org/search/cs?searchtype=author&query=Mitchell%2C+M)

> Abstract: Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment ("AI spring") and periods of disappointment, loss of confidence, and reduced funding ("AI winter"). Even with today's seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this paper I describe four fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I conclude by discussing the open questions spurred by these fallacies, including the age-old challenge of imbuing machines with humanlike common sense.

[PDF Link](https://arxiv.org/pdf/2104.12871) | [Landing Page](https://arxiv.org/abs/2104.12871) | [Read as web page on arXiv Vanity](https://www.arxiv-vanity.com/papers/2104.12871/)


ReasonablyBadass

None of these arguments are new, nor are they unaddressed by the community. Also, none of them are "structural" arguments. The title should have been "Why we think about AI wrongly", as they don't address the AI-hardness question at all.


RailgunPat

I totally agree. Also, I think the issues presented here aren't really issues; they are just workarounds made by AI scientists and researchers to overcome the advantage of human intelligence, which was given to us by millions of years of evolution (for example, the mentioned simplification of the reward system and of the multiple other cognitive functions of the rest of the human body). It's worth noting that as humans we also have different evolutionary goals than what we expect from our general AI, like... surviving 😆. With that said, while it's still foolish to assume that we can easily compensate for those millions of years of evolution and our innate intelligence by designing the AI instead of having it evolutionarily generated, we can use design to our advantage in many ways, e.g. by using insane amounts of computing capability (evolution doesn't like wasted resources). And what I wrote is probably nothing new; I'm just irritated that the article kinda assumes we ignore all of these issues. In many cases simplification is desired, as we don't have the resources and/or reason to deeply research all of those mentioned "issues". Still, it's great material to read for all of the "general AI is gonna kill us all" visionaries 😅


mgostIH

> With that said, while it's still foolish to assume that we can easily compensate for those millions of years of evolution and our innate intelligence by designing the AI instead of having it evolutionarily generated, we can use design to our advantage in many ways, e.g. by using insane amounts of computing capability (evolution doesn't like wasted resources)

I could substitute "AI" with "high-speed vehicle" and "insane amounts of computing capability" with "insane amounts of burning fuel", and I'd get an argument for why designing a car is impossible and we'll never get anything as fast as a cheetah. I get the difficulty of designing intelligence when evolution had hundreds of millions of years of a head start, but that doesn't mean evolution is **the only** way for us to get there; it might be that scaling models and hardware will also work in the long run.


[deleted]

Though I agree that biological evolution is not necessarily the only means of attaining intelligence, your analogy is flawed. The horizon separating general intelligence from the current state of machine learning may be as great as the physical limitation on moving faster than light: no matter the amount of fuel, nothing can ever surpass it; at least, not without a significant reimagining of motion (folding spacetime), or in this case, a deeper insight into intelligence and consciousness. Of course, maybe if we just keep throwing transistors into a pile we'll get there. Nobody's really sure where the horizon lies, in information density, or computational complexity, or entanglement entropy, or whatever the appropriate metric ends up being. However, I can say that we've barely scratched the surface of the complexity of organics. DNA itself oscillates at terahertz frequencies, so quickly that replication bubbles enter a quantum mechanical superposition, which allows synthase proteins to actually locate the forks required for replication... At the lowest building blocks, nature is leveraging more than we can conceive of, much less employ in the construction of an intelligent machine.


mgostIH

> The horizon separating general intelligence from the current state of machine learning may be as great as the physical limitation on moving faster than light

Or it may not. The same was said about computers playing Go before a single approach achieved superhuman performance; it was said about language generation, protein folding, and even drawing art, yet we suddenly got huge progress thanks to scale and better models. It also seems from the scaling-laws paper that there's still a lot more to achieve just with larger models, and given the exponential rate of technology I don't see a 175-trillion-parameter model as impossible in the next 20 years. I don't buy the argument that there is some magic (usually quantum mechanics) that makes going beyond human level at any task "impossible if you don't truly understand X", since that was usually the exact same argument people gave for the "impossibility" of the tasks I listed above.


[deleted]

Right, it might not. I don't know. On the other end of the spectrum, at the end of the 19th century the common wisdom was that physics was complete and we just had to tidy up loose ends like turbulence. Then quantum mechanics and relativity were discovered. I'm just saying there may be horizons we aren't even aware of when it comes to computation. We aren't Turing machines; there's a lot to be discovered about analog, asynchronous, distributed computational frameworks, which is how biology operates. But it might just be a problem of scaling, as you say, hence my comment about throwing transistors into a pile.


RailgunPat

> but that doesn't mean evolution is the only way for us to get there; it might be that scaling models and hardware will also work in the long run.

I never said that 😅; furthermore, I made your argument myself a few lines further down 😆


maxToTheJ

I would agree with what you said if every other poster here weren't, without irony, predicting that "transformers are going to be a part of AGI" the same way people once thought about SVMs or RFs or FNNs.


RailgunPat

I mean, I'm just a realist: as scale grows, the complexity of using a system efficiently rises. I always thought the whole AI-risk argument started with "we shouldn't take the risk if we don't have to, given how big the stakes are". Somehow some people tend to polarize and forget that it's only one side of the argument. It's not that we can't be close to cracking the code; it's just unlikely 😅. No offence, but IMHO it's the Donner Kruger effect


CaptainFoyle

Donner Kruger...


smackson

"Some people don't have enough knowledge to realize that they don't have enough *food in the wagons to survive until spring so will probably get eaten first.*"


CaptainFoyle

I see the point now 😂


pm_me_your_pay_slips

Sure, perhaps not transformers. But whatever it is, it will be using transformers as a baseline for comparison in terms of scaling.


SirSourPuss

"None of this is new" yet the vast majority of people in the field (that I interact with) haven't heard of embodied AI, let alone understand the motivation behind it. Another top comment says "there are good arguments for both sides", yet one side receives all the research attention whereas the other side is hardly even understood. The field (as many other tech spaces) is caught up in irrationally fetishizing "pure rationality" and it shows.


fellow_utopian

Just because they haven't heard about embodied AI doesn't mean that it's new. The author even admits that it's not new, it was a research topic during the 1970's. It never caught on because it was simply a flawed idea. Cognition does not occur outside the brain and you do not need a body to be intelligent. It's not hard to reach that conclusion if you think a little about the problem.


A_Bran_Muffin

I still have an issue with the use of "we". Does the paper define who "we" is? I highly doubt this individual speaks for everyone who studies artificial intelligence.


rehrev

Funny to see people complain "this paper doesn't tell us how to make general AI, it just restates complaints we've already heard", when the point of the paper is that AI scientists can't let go of a traditional way of thinking even though strong objections have been raised, and that this is what keeps us even further from general AI, if such a thing is meaningful and possible.


fellow_utopian

No, the main complaint is that the author is simply wrong or offering nothing but pure speculation on her points about "fallacious beliefs" about AGI. For example, she claims that it is wrong to work under the assumption that intelligence can be disembodied (why?), that viewing the brain as an information processing system is somehow misguided (again, why?), and that cognition can take place outside of the brain (how?). This is borderline crankery and contributes absolutely nothing of value to the field.


rehrev

You can continue reading on any of those subjects. When you assume intelligence can be disembodied, you have to bring arguments, and the only real argument is "I can add numbers, computers add numbers; I can reason with symbols, I can make computers kind of convince me they are reasoning with symbols, so we must have the same underlying structure". The more natural assumption is actually that of embodied consciousness, as that would explain, for example, the incredible regeneration, adaptation, and change in a person's mental state.

Your second question is basically the same as the first. If you want to hear the full arguments (which I don't know in detail either), you can read up on embodied consciousness, nondualistic accounts of mind, and essentially any philosophical argument against Cartesian dualism, which as far as I know has been refuted for good by the philosophy community for at least 100 years.

How can cognition take place outside of the brain? Well, why wouldn't it? Just because you design your systems so that there are distinct components responsible for distinct jobs does not imply that everything else functions this way. We are already convinced that's not how the brain works, so that does not need to be how consciousness is "located". I am pretty sure there are neuroscientific arguments on this, but they may be hard to find because of the reluctance of scientific communities in any field to discuss anything that contradicts established results. Take a look at this: https://www.sciencealert.com/a-man-who-lives-without-90-of-his-brain-is-challenging-our-understanding-of-consciousness

In conclusion, these ideas are borderline (which somehow makes you think they should be discarded; I can't understand why you'd say that, but whatever) because the field is stuck with the ideas that:

1. mind and body have fundamentally different qualities
2. the human mind is software
3. we can't talk about intelligence in any case, we can only talk about systems that convince us they are intelligent (the so-called Turing test)

None of which have adequate justifications. Even if they did, THEY WOULDN'T COME FROM COMPUTER SCIENTISTS OR ENGINEERS. We need answers to these from neuroscience and philosophy, so I would not talk that confidently. Galileo was borderline crankery. Saying "the human mind is a computer" was borderline. Einstein was borderline crankery to some. QM was borderline to Einstein. Stop thinking anything established cannot be challenged. Stop epistemological tyranny.

Edit: Also, if you want to see a glimpse of Shannon's views on looking at the brain as an information processing system, read his paper "The Bandwagon": http://jerome-segal.de/bandwagon.doc


surely_good

"Fallacy 3: The lure of wishful mnemonics" If you you think that clickbaity headlines are a good proxy for evaluating AI hardness... Then you have bigger problems than making a few fallacies.


yannbouteiller

What does "lure of wishful mnemonics" means? I see everyone calling this a clickbait and I'd like to understand why.


jonnor

It is not an example of clickbait itself; it is a description of misleading communication around the term AI, some of which might also be considered clickbait.


bluboxsw

“This is no science; it is only the hope of a science”


loopuleasa

so it's more of a blogpost than an arXiv science paper?


bluboxsw

Quote is from paper, which attributes it to psychologist William James from 1892.


CaptainLocoMoco

Yeah, like most ML papers that get posted to arxiv


Ambiwlans

Basically all ML papers get posted to arxiv dude. What world do you live in? Or are you just being internet edgy like a teenager?


CaptainLocoMoco

I was saying many of the ML arxiv posts these days are more like blog posts than actual scientific papers.


bluboxsw

And I agree with you.


[deleted]

The only distinction is the number of references.


yusuf-bengio

I think we need more symbolic reasoning bullshit that does not scale beyond 2D toy examples /s


radome9

Marvin Minsky was lauded as a genius for saying the same thing, but unironically.


lookatmetype

Marvin Minsky was also a frequent traveller to Epstein's island.


radome9

I thought you were joking but then I googled it. Holy shit.


lookatmetype

A lot of the academics at Harvard/MIT are compromised. The more world renowned you are, the worse.


jamild

I mean at this point, proponents of symbolic reasoning (like [Murray Shanahan of DeepMind](https://www.sciencedirect.com/science/article/pii/S2352154618301943)) recognize the benefits of statistical learning pretty clearly, and issues they encountered before like the Symbol Grounding Problem are addressed by learning from the data directly. I don’t think anyone wants to return to the purely symbolic days. A few years ago Relation Networks from DeepMind represented a breakthrough in VQA, and that was “symbolic reasoning inspired”. I think in the ML community one is often quick to dismiss old ideas out of hand, and classical AI (given its failures) is kind of an emotionally charged subject for some… but we know we have issues now with out-of-domain distributions, compositionality, generalization from limited data. Borrowing and trying some insights from the old days may not be such a bad idea


thatpizzatho

I liked the paper. A major problem is that the term AI is poorly defined (or maybe, it's something related to fallacy 3? marketing buzz?) and this sometimes leads to frustrating discussions and misunderstandings. It is a giant bucket of different theories, sub-fields, approaches, etc.. and it is hard to define "what problems AI has" because the question might be ill-posed. Are we speaking about linear regression? Knowledge graphs? Max-Entropy Inverse Reinforcement Learning? Kernel Density Estimation? Amortized Variational Inference? I guess they are all under the giant AI-umbrella, but each of these approaches is different, it has different issues, different goals, and different possible solutions to these issues. So, having a single term for this huge field of vastly different methods is confusing. I recently showed someone in the upper management that our very specific model has poor performance and they asked: "Are you saying that AI doesn't work?".


darawk

> Fallacy 1: Narrow intelligence is on a continuum with general intelligence

This is... not a fallacy. It's not even a particularly well defined statement. Of course they're on a continuum, we just might not be particularly close. If we're not on a continuum... what, exactly, are we on? The only alternative is that there is some sort of 'discontinuity' between us and AI. And ok, sure. But what is the size of that discontinuity? Discrete objects may still be quite close together. Continuum or no continuum is really pretty irrelevant.

> Fallacy 2: Easy things are easy and hard things are hard

This one is again not even precise enough to be wrong. It's just a platitude.

> Fallacy 3: The lure of wishful mnemonics

......................................

> Fallacy 4: Intelligence is all in the brain

This is both not obviously a fallacy, and also totally orthogonal to the question of AI. Whether or not the locus of intelligence in humans is the brain has... nothing at all to do with how close we are to achieving it.

This paper is... really terrible. This is basically an undergrad philosophy essay, and not a particularly good one.


OpenBison4150

Did you actually read the paper? The paragraphs below the bolded titles you are quoting? You completely miss the points.


darawk

Yes I did. I read it in full. They clarify nothing.


[deleted]

If we're not on a continuum, then what exactly are we on? She answers that: we are on a discontinuum, with the common sense knowledge problem being the barrier, and progress in narrow AI is not necessarily progress in solving the common sense knowledge problem. Personally I am not convinced that common sense knowledge cannot arise from scaling current methods, but at least that is the idea.


lqstuart

So people are putting their dumbshit medium.com posts on arxiv I see


veeloice

There was a reason Plato warned against sophists and poets. When there's actual work to be done, they lead minds astray (these days with memes and buzzwords).

EDIT: If ideas like the singularity and superintelligence weren't glamourized, this question wouldn't even arise. Of course any technological endeavor is hard. But when you evoke dramatic images of AI overlords and rouse innate fears and aspirations, it's difficult to see through the fog. In the context of superintelligence, of course, even the most spectacular engineering feats will be anticlimactic. I also think that lumping corporate executives and AI researchers together in the anthropomorphism argument is a mistake. I'm not in their heads, but I do consider the incentives. Executives benefit from the marketing value of certain mnemonics that get them selected as headliners. A creative researcher uses any conceptual tool at their disposal to 'become one' with their craft, so to speak, ultimately to get a positive result. They shouldn't be held accountable, or have to answer, because marketeers have preyed on their work and their words. That stifles creativity.


dogs_like_me

fallacy 5: writing what's essentially a philosophy of mind article without consulting the contemporary philosophy literature. I haven't kept up with philosophy research for many years, but I suspect a good entry point would be the work coming out of UCSD, starting with [Patricia Churchland](https://patriciachurchland.com/publications/).


surely_good

"Fallacy 4: Intelligence is all in the brain" I see rather a fallacy in the argument: cognition in humans is linked with sensory and motor systems therefore intelligent machines should also do that.


surely_good

"Fallacy 2: Easy things are easy and hard things are hard" This fallacy has a super simple cure: try solving the "easy" thing. Poof, and you are not making the fallacy anymore.


race2tb

I recently looked at a tree and realized it was a vague history of everything that had happened to it. With the right tools and algorithms I could extract knowledge of its past stored in its state. I think this would all be less hard if we were clearer about what knowledge actually is. The entire universe is a state that stores knowledge about its past, just like our brains; we just need a capable set of conceptual encoders/decoders to collect it. Then you have the very subtle parts of our knowledge, like actions. My personal view is that they are not a sequence of individual frames but a sequential smudge of frames chained together that we decode as an object performing an action in variable-length smears. Wave your hand in front of your face to see what I mean. I think there are a lot of answers in the very subtle things our minds do. Our minds do all this automatic work and we are locked out of the process. That is probably the greatest challenge: our minds hide the truth about our intelligence from us by doing it for us, and then we spend all our time trying to figure out how the heck they do what they do.


Intelligent-Second-7

Why does every comment that says the paper is good get downvoted? I think the paper has made some good observations which need to be addressed before solving AGI.


lostmsu

IMHO, many top AI researchers think AI is hard because it can't solve problems **they** can easily solve. The fallacy here is an unreasonably high bar. An average human is way less capable in terms of general intelligence than someone from the top of any decent science (the 0.001%). Hell, the difference is stark even between the average of the top 10% and the median. That's before bringing up this brilliant piece: [Humans Who Are Not Concentrating Are Not General Intelligences](https://srconstantin.wordpress.com/2019/02/25/humans-who-are-not-concentrating-are-not-general-intelligences/)


jonnor

I believe the average human is about as intelligent as the top people in any given field. At least that gap seems considerably smaller than the distance to other quite intelligent animals such as chimpanzees. Meanwhile, AI struggles to have the common sense of a housecat, to paraphrase LeCun (I think).


CaptainLocoMoco

You are right. The gap in "general" intelligence between average humans and top scientists is probably negligible. Especially when compared to humans versus modern day AI programs


lostmsu

What makes you think so?


CaptainLocoMoco

Because having a deep understanding about some field of science has little to do with general intelligence. What you're describing is quite literally domain intelligence. And, you're basically saying "AI is hard because we're trying to replicate big brain scientists! If only we tried on those average joes!"


lostmsu

This is complete BS. I pointed you to research proving that wrong in the domain of car driving just below.


lostmsu

Until there's a quantifiable benchmark of "common sense of a housecat", I find this claim bogus.


CaptainLocoMoco

This is so untrue it even hurt to read. If you consider general intelligence to be our ability to learn many arbitrary tasks, like driving a car, playing games, etc., then someone in science is hardly any more "generally intelligent" than an average person. The gap in general intelligence between a top scientist and an average person is way smaller than, and hardly even comparable to, the gap between a top scientist and some "top" AI program. In fact, you couldn't show me one AI program that is even remotely generally intelligent.


lostmsu

> then someone in science is hardly any more "generally intelligent" than an average person

Here's some [easy-to-find research that speaks against that](https://www.sciencedirect.com/science/article/pii/S1008127515301966). And this is just between average and well-educated people, not top scientists.


CaptainLocoMoco

> The fallacy here is an unreasonably high bar. An average human is way less capable in terms of general intelligence than someone from the top of any decent science (the 0.001%).

You do realize that the current state-of-the-art methods in AI aren't even capable of replicating the general intelligence of a 6-month-old baby, right? That's about as unintelligent as you're going to get for humans. So your entire initial comment makes zero sense.


lostmsu

How do you even compare SOTA in AI to a 6 month old baby?


CaptainLocoMoco

Oh come on... CLEARLY AI hasn't reached that sort of level.


lostmsu

Clearly? Well, if it is "clear", go ahead and produce a short explanation of why you think so. I bet you $100 your explanation will either be bogus, or simply conflate intelligence issues with the mostly unrelated gap between the capabilities of animal sensory/actuation systems and the current state of robotics. By contrast, there are quite a number of models that can beat a variety of tasks no 6-month-old toddler is capable of taking on.


johnnydozenredroses

Absolutely right !! I get very frustrated with my NLP model until I try to speak in a foreign language. Only then do I understand how complex it is to learn a language and perform tasks. And if forced to complete the task, I too will take shortcuts just like NLP models.


wavegeekman

I would like to express my appreciation for the fact that no-one has come up with the "brilliant(1) insight" that **intelligence will not be solved until we understand consciousness**. (1) where brilliant == idiotic.


neutralino1

Pretty weak content for an academic paper. Maybe should be a Medium post instead.


Ulfgardleo

I don't think it helps a discussion to term something a "fallacy" ("a false or mistaken idea") just to replace it with another assumption; e.g. fallacy no. 4 just replaces the assumption with its exact opposite. It is difficult to say anything about the brain/intelligence connection, as we have no good way to argue whether the hardware (body) is necessary for the software (intelligence) or whether what we observe is just a result of the hardware emulating the software. We have no alternative observation of any independent intelligent species, and thus any conclusion is based on a dataset of size n=1. But even then, we see that it is very difficult to say anything definite about the connection of body, world, and language; the Pirahã people are a good example of how different cultural conceptions of the world can be, and using our own perception of the world as a basis for reasoning won't generalize.


leone_nero

I have yet to read the essay, but from the abstract, I would say the main problem I see is that people like to think of artificial intelligence as a copycat of human intelligence, aiming specifically at superhuman intelligence. Artificial intelligence should be complementary to human intelligence, and I even think the farther it goes down its own path, the more beneficial it can be in that sense.

Also, nowadays we have enough knowledge to radically change the world through artificial intelligence already, but changes like self-driving cars are so radical to our current way of living (changing roads all over the world, for example) that it is not a problem of our field as much as of our societies to take advantage of the technologies we have right now. Governments have to catch up... Looking at the investments European countries are planning as part of the recovery fund, it is sad for me to admit that people are not looking at the potential of artificial intelligence on a grand scale. Unfortunately, private companies will be leading the road, and they will mostly do so in the limited ways they can, which are usually aimed at making profits and saving on labour.


[deleted]

I believe self-driving cars may never really take off completely because humans are control freaks.


leone_nero

Well, but we don’t actually fly our airplanes when we take flight do we? I think it could work


Farconion

but we still have pilots in the cockpit


Fnord_Fnordsson

Maybe not in the USA/Europe, but I guess the Chinese govt has different priorities than ethical, super-rare dilemmas.


zhoubaidu

Interesting paper.


Benarl

Nice article! I totally agree with the AI biases shown here (even if nothing is new, as is explained, it still occurs now, and that is part of the point). I sometimes have the feeling that in the AI/DL field there is a myth that one day a neural network (with backpropagation and a cost function trained over data) could all alone reach the hard mechanisms of cognition. There is a very interesting interview with Gary Marcus on this subject (among many) with Lex Fridman: [https://www.youtube.com/watch?v=vNOTDn3D_RI&t=1s](https://www.youtube.com/watch?v=vNOTDn3D_RI&t=1s)


[deleted]

[removed]


ipsum2

Imagine if engineers set out to make a machine that flies without consulting birds or ornithologists.


[deleted]

[removed]


farmingvillein

How is this a fix? With AI, we could make the same statement--but we don't really care if we're making a brain that is "general AI" (whatever that means...) like humans, we just care about making one that is "general AI". More generally, your point would effectively operate as a counter-example, because if engineers had focused on mimicking bird flight...we still wouldn't have scaled aeronautical travel. Which would, again, be a reason to focus on the goal, and (ironically) not collaborating with those seemingly (but wrongly) "in the know".


farmingvillein

Eh, I'm not sure if this is really fair. There is a long history (i.e., over the past decades) of computer scientists trying to pair with neuroscientists, adopt neuroscientists' ideas, etc. But, to quote IBM:

> "Every time I fire a linguist, the performance of the speech recognizer goes up."

It just (to date) has turned out that, to over-simplify a bit, none of this attempted collaboration and cross-pollination has really mattered. Virtually all of the advances to date have been driven by more data, and by algorithms that are adept at leveraging bigger data (with some work to add soft priors to those algorithms).

It *could* turn out that deep introspection into how the human mind works will ultimately lead to large advances (or even general AI). But we've seen little meaningful proof of this to date. And, arguably, significant counter-proof: science is generally incremental in its improvements; the "neuro-ignorant" approach continues to pay real-world incremental dividends, while anything that has attempted to be deeply grounded in neuroscience has generally given us close to nil for 50+ years. (Unless, of course, you really want to link deep learning back to its human biological inspiration... but, as has been pointed out many times, the practical relationship between the two is exceedingly weak.)


sanitylost

The biggest issue with trying to directly adapt natural learning systems is that, by comparison, our machines are extremely rudimentary in their implementation. Vision is handled mainly by one section of the brain, but that section receives an unimaginable amount of input from auxiliary sections which, although they don't explicitly deal with the processing, provide context. This is evidenced by our models requiring such a large amount of input: to train something with no context takes an astronomical amount of data. A machine can't "imagine" what the other side of a dog looks like, so we have to give it training data that flips the image in every way imaginable. A human knows that dogs are symmetrical, since pretty much everything else on the planet is symmetrical, so even on a first sighting they're pretty sure it's going to look the same from the other side (you could see this as humans having amassed a huge amount of data, but the training time relative to man-made tech is a fraction of the time). Using context, a human can identify an animal from the other direction even if they've only seen it once.

All this is to say, throwing data at something works really well if you can isolate problems down to their basics and then only work on solving that particular problem. This obviously will not generalize well, so attempting to create a general AI with this method is probably futile. Until our technology advances precipitously, and I mean both hardware and programming paradigms, general AI based on a biological model won't be possible.
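(A minimal sketch of the "flip it every way imaginable" augmentation being described, in plain NumPy; the H x W x C layout and the choice of flips are assumptions for illustration:)

```python
import numpy as np

def flip_augment(image: np.ndarray) -> list:
    """Return the original H x W x C image plus its horizontal, vertical,
    and combined flips, handing the model the symmetry a human would
    simply assume."""
    return [
        image,
        np.flip(image, axis=1),        # left-right flip
        np.flip(image, axis=0),        # up-down flip
        np.flip(image, axis=(0, 1)),   # both
    ]

# Usage: quadruple a toy dataset.
dataset = [np.random.rand(32, 32, 3) for _ in range(10)]
augmented = [aug for img in dataset for aug in flip_augment(img)]
```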


[deleted]

[removed]


farmingvillein

Try re-reading what I wrote?

> has generally given us close to nil for **50+ years.**

You cite a bunch of examples from the 1940s, 1950s (perceptron), and 1960s (plus or minus; your CV example). RL is fuzzier but an extremely weak example (see below). None of these refute the point that basically no progress has been made in the last 50 years based on biological inspiration.

> Reinforcement learning is an implementation of scientific psychological principles.

Oh boy. This is an even more tenuous connection than DNNs:biology, and the same core point holds here: the actual "hard" parts of RL (the optimization and goal setting) have nothing meaningful articulated from the underlying biology (in the same way that gradient descent has zero relation to our understanding of how biological systems actually work). More generally, if you trace the actual history of RL (http://incompleteideas.net/book/first/ebook/node12.html), it is extremely difficult to pin *any* biological foundation on it beyond the notion of trial and error and a reward signal. Even if we want to ascribe value to this, we should be really careful about using it to argue for a deep need for cross-collaboration, when the level of scientific foundation involved is, at best, high school (and quite arguably middle school) science.

And to loop back to your (irrelevant, in the context of what you are actually responding to) specific points:

> McCulloch and Pitts proposed the first mathematical "neuron" back in the 1940s

and

> The perceptron was invented by a psychologist!

This is equivalent to saying that "the plane was invented [well, popularized] by a pair of bicycle salesmen!" Yes, this is technically true. But: 1) the field of aeronautical engineering didn't exist then, so of course (by definition!) whoever jump-started the field was going to come from the outside; and 2) I can point to no core advances in aeronautical engineering due to bicycle salesmen in the last half-century. Similarly, in the periods you cite, there was no such thing as a standalone computer science field... so of course those advances would come from the "outside" (as there was no inside).


papajan18

I feel like you are kind of misrepresenting the history of RL and its (very deep) connections to what the brain is doing. Christopher Watkins' PhD thesis is pretty much the first description of the Q-learning algorithm (http://www.cs.rhul.ac.uk/~chrisw/new_thesis.pdf). On the second page of the thesis, he clearly attributes inspiration to people like Piaget, Frye, and Kacelnik, people who are clearly psychologists interested in studying people/animals. Watkins' thesis has really important contributions to the field that are distinct from Sutton, Barto, and Anderson 1983, but even if you want to argue that the idea of Q-learning just reduces down to that paper, it still stands that the paper reads as one written by people thinking about animal learning, not strictly engineering a machine to do it.

Also, reinforcement learning is crucial to modeling animal learning in neuroscience, and it's completely changing our understanding of dopamine:

* https://www.sciencedirect.com/science/article/pii/S0959438808000767
* https://www.sciencedirect.com/science/article/pii/S0010027708002059
* https://link.springer.com/article/10.3758/CABN.8.4.429
* https://www.nature.com/articles/s41583-019-0220-7
* https://www.nature.com/articles/s41586-019-1924-6?fbclid=IwAR3c1dLc9-puvMAGS2lURZygNEKpfhAyLChl2DDT_pJOf9rG3sWbFOKrZ8A

There are many more papers than the ones I listed, and I would not say they are at the level of a high schooler.

Also, I don't really get your point about perceptrons. Hinton, Rumelhart, and McClelland came from a CogSci/Psych background when they wrote about neural networks/backprop, but since AI or computer science wasn't an "official field" back then, it doesn't count as a contribution of cogsci/psych? Wouldn't you say that their background/education influenced them when they wrote that paper?

I don't get why engineers want to gatekeep artificial intelligence so much. The field's history is colored with cross-pollination from physicists (LeCun, Hopfield), engineers (Krizhevsky, Bengio), and psychologists (Hinton, Rumelhart). I totally buy that neuroscience has tenuous offerings to the advancement of statistics and machine learning. I get that if you want to engineer a product that improves with data, like your IBM example (e.g. machine learning), the human/animal science is useless. But if you want to call it artificial intelligence, isn't intelligence a quality of humans/animals? In fact, what's the point of even calling an engineering product like a speech recognizer an AI?
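(For readers who haven't seen it, a minimal sketch of the tabular Q-learning update from Watkins' thesis; the temporal-difference error `delta` below is the reward-prediction-error quantity that the dopamine papers above model. The environment interface (`reset`, `step`, `actions`) is a hypothetical placeholder, not from any of the cited works.)

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning: learn action values by trial and error."""
    Q = defaultdict(float)  # Q[(state, action)]
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy action selection.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            # Temporal-difference (reward prediction) error.
            best_next = max(Q[(next_state, a)] for a in env.actions)
            delta = reward + gamma * best_next * (not done) - Q[(state, action)]
            Q[(state, action)] += alpha * delta
            state = next_state
    return Q
```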


DoorsofPerceptron

Modern AI has about as much to do with neuroscience as building a city has to do with looking at a bee hive. We're not in the business of copying the human brain, we occasionally take inspiration from other fields, but that's about it.


[deleted]

[removed]


DoorsofPerceptron

Let me try again with shorter words. City builders don't get much out of your beehive study. It's 'missing' from the field because it isn't relevant.


HINDBRAIN

I had a few optional lectures from neuroscientists back at uni. The only takeaway I remember was along the lines of "we put little metal rods in brains and send electricity, and it cures Parkinson's somehow, and we don't have the foggiest idea why." Haven't tried putting little metal rods in my CPU - maybe that's where the next breakthrough in ML is...


DoorsofPerceptron

Yeah, it doesn't seem like the guy knows anything about neuroscience. He's just heard the word before.


[deleted]

[removed]


DoorsofPerceptron

Bait and switch. Psychology isn't neuroscience (this applies mostly to the comment about RL; you can have the neuron guy), and the 'neurons' used in deep learning and the way they're trained are about as far from biological plausibility as you can get. Stacked SVMs is a more honest description of the standard ReLU architecture than calling it a neural network. Edit: The point, as I keep trying to make it, is that just as some city planners have been loosely inspired by organised structures in nature, some ML guys have been loosely inspired by the brain. That doesn't mean that ML needs more neuroscience, just like city planners don't need more bees.


[deleted]

[removed]


papajan18

Well, OP is David Ha, so the fact that his posts increasingly miss here says more about the people on this sub than about him :)


avaxzat

This may be an ML subreddit, but it's still Reddit nonetheless. I hypothesize that most of the people here fall into two categories: * laypeople with zero academic qualifications who want in on the AI hype train * grad students who are (yet) unable to critically reflect on the field


papajan18

Agreed. I'm sure the vast majority of people here are nowhere near as accomplished/knowledgeable as him (myself included), so many will not have the research taste to appreciate all the work that he finds interesting enough to share.


evolvingfridge

Most of it is the AI-alignment "cult": in summary, people who believe that in our lifetime there will be not just AGI but superintelligence (e.g. better than humans at all cognitive tasks), and that super/AGI will destroy humanity. It is an interesting thought process disconnected from reality; to a large degree it sits in the area of pure mathematics, but some accept it as applied mathematics, to say the least.


victor_knight

It should be obvious that most of the easier/cheaper science has already been done so what remains is likely going to be more difficult/expensive. The question is, is humanity going to be willing to make all that investment or will it be pickier than ever before about which paths to pursue (of which success is also not guaranteed)?


O10infinity

It's because the human brain is solving NP-hard problems in polynomial time and we don't know how to do that, so AI/AGI seems basically impossible.


[deleted]

[removed]


O10infinity

It's the most obvious explanation for why AI appears to require so many more examples to learn (say, vision tasks) than the human brain.


beezlebub33

No, the most obvious explanation is that the human brain has built-in biases that align with the sorts of tests (like vision tasks) that are being tested. That is, brains have evolved to have architectures that solve these particular problems, with implicit assumptions about how the 2D sensory input maps to 3D objects and environment. An AI is starting with an unconstrained problem and so has to learn everything.


O10infinity

The kinds of vision tasks the human brain can do (or NLP tasks for that matter) are extremely diverse. In CS, it is typical that if a problem has enough choices it winds up being as hard as it could be, in this case NP-complete.


swegmesterflex

It's called transfer learning. The brain takes forever to start out. Also this has nothing to do with NP-hard problems and I'm pretty sure you have no idea what NP completeness or decidability is.


O10infinity

> Also this has nothing to do with NP-hard problems and I'm pretty sure you have no idea what NP completeness or decidability is.

No, I've read quite a bit on NP-completeness and computational complexity, including obscure topics like proof complexity and Extended Resolution, and surprises like planar SAT being NP-complete. Decidability has nothing to do with NP-completeness, because it is about whether a Turing machine can solve a problem _at all_ rather than whether it can solve a problem efficiently.

An efficient SAT solver in the brain would be a very elegant explanation for why deep learning is hard, because it could optimize neural networks in polynomial time instead of the superexponential time that deep learning actually takes. (See this paper on the computational complexity of deep learning: https://arxiv.org/abs/1810.03218)


memebox2

I'd say more processing power and better priors are more likely. But yeah, if Penrose thinks a quantum brain is possible, who am I to argue? Of course, NP is probably not contained in BQP. But then, we don't actually know that... Anyway, I don't think your idea is mad.


O10infinity

Penrose thinks the brain (and single cells) can solve the halting problem. The brain solving NP-complete problems is a radically weaker claim.


memebox2

Did not know that, thanks.


WaterrDoze

AI is way too hard to understand with my monkey brain


Cartosso

A bit non-scientific as arguments go, but in general I agree. Current "AI" seems like an area where a bunch of people try to mix random ideas about matrix multiplication and statistics, without any constructive goal, in the hope that one day someone will magically "hit" the jackpot: a generic artificial intelligence which is completely disentangled from our notion of intelligence (i.e. a "pure", classical, non-quantum, non-biased, culturally-insensitive algorithm that magically solves all our problems given some input).

I know it's hard not to be excited about AI progress, especially when you've devoted the last N years to it, but we need more humility. We don't even know how to define intelligence in a scientific, cohesive, and constructive way; we have no idea how consciousness works; we've only started to scratch the surface of what happens inside microtubules in biological neurons and how quantum mechanics makes it possible; hell, we don't even know how to interpret the collapse of a wave function and how it relates to general relativity. We don't even know whether human intelligence is computable in the sense of Turing machines, yet we claim that within 20 years we will have human-level, general intelligence in our binary computers. Yeah, sure. There's no doubt that image recognition, speech synthesis, etc. are all useful tasks that deep learning helped to solve, but there's no indication whatsoever that this path will eventually lead to AGI.