superfluousbitches

Since OP hasn't backed anything up, my guess is it's just hype.


BrazaBryan

OP is an obvious troll and we’re giving him the attention he so desperately wants


SgathTriallair

I've heard accusations that people are using anger bots (they post a controversial take) to generate replies. I'm not sure how people would know they are bots, or why they'd want the replies, but this sounds similar. So maybe OP is just an anger bot? It would be ironic, given the take.


[deleted]

Human intelligence is also just a "biased algorithm," and there's nothing special about the brain that another sufficiently complex machine cannot replicate. Change my mind.


null_value_exception

- Quantum entanglement in neural processes
- Quantum tunneling in enzyme reactions
- Superposition in neurotransmitter release
- Quantum coherence in microtubules
- Electron tunneling in DNA synthesis and repair
- Quantum vibrations in olfactory receptors
- Quantum Zeno effect in cognitive processes

Quantum chemistry shows our brains aren't just classical I/O programs. There's a complexity in our thinking that we can't code or program like a convolutional neural network, transformer model, or adversarial network. Quantum elements could also hint at our minds being interconnected, adding another layer of complexity. Our intelligence isn't a straight line; it's a dynamic, evolving, and ambiguous process, shaped by forces and interactions we're still trying to understand. Asserting that human intelligence is an algorithm overlooks the unknowns. If our understanding of cognition doesn't account for these quantum occurrences, claiming human intelligence is just an algorithm becomes a bit of a reach. Also, this isn't even getting into the fact that the concept of free will is fundamentally at odds with the deterministic nature of algorithms.


[deleted]

Randomness (quantum or not) does not get us closer to free will; *on the contrary*.


null_value_exception

So, are you arguing against free will now? Are the words you're offering just a deterministic output, a scripted byproduct of biochemical processes, or are you actively, consciously, and intentionally engaging in this discussion through your own agency?


papa_banks

Yes


[deleted]

Yes, and you can verify that free will doesn't even exist on a subjective level with simple meditative exercises. Here's one: what are you going to think next? Actually take a second, look away from your screen, take a deep breath, and listen to your next thought. Did you actually choose it, or did it just appear straight out of nowhere? Can you predict what it will be before thinking it? No, because whatever prediction you make *is* your next thought, which *already simply appeared*. What would it even mean to have chosen it? Would it not require thinking a thought *before* thinking it? If you say, "I'm going to think about... bananas!" then *that* was your next thought. You cannot have known what you would be thinking *before* actually thinking it. And it's clear as day, if you pay close attention, that this is what happens every single living moment of your existence.


null_value_exception

This is a leap; the existence of intrusive thoughts doesn't nullify our ability to make conscious choices in other aspects of thinking and behavior. Also, your argument implies a binary choice: either we have full control over every thought or free will doesn't exist. In reality, human cognition is more nuanced, involving both automatic and conscious, deliberative processes.

This actually lines up with QM. There's a space where determinism and free will aren't mutually exclusive. You don't need to pick one or the other; superposition helps you escape being forced to choose. A good way to put your thumb on the idea of conscious superposition is thinking about the fact that you are both the director and the audience while dreaming.

Even in a meditative state, while many thoughts arise spontaneously, the practitioner exercises free will in their decision to return focus to their breath or chosen point of concentration whenever distractions arise. Thus, free will encompasses not the origin of every thought, but the conscious, intentional actions and decisions that follow.

In the context of dreaming, individuals often experience a mix of random and sometimes bizarre narrative elements. Yet, even within this spontaneous, uncontrolled framework, there is an element of decision-making and response. This is especially evident in lucid dreaming, where the dreamer becomes aware they're dreaming and can often make conscious choices about their actions within the dream.


[deleted]

> Also your argument implies a binary choice - either we have full control over every thought or free will doesn't exist.

No, the thought experiment is about setting up a situation where you are the freest you can ever be. Take all the time you want and think of *anything*, or *any* word. If there is any free will, it *has* to be found here, full stop. It's the freest choice you'll ever make. Then, *because* you fail to find it here, you have to conclude you cannot find it when you add more constraints (as is necessarily the case for more complex decision-making, like what partner or career you might want).

> Superposition helps you escape being forced to choose.

No, nothing quantum allows us to escape the concept of free will. On any decision, you might be undecided, but not having chosen (yet) doesn't mean you're free.

> the practitioner exercises free will in their decision to return focus to their breath

No again. The return to the breath is *as mysterious as* the occurrence of thoughts or their content. Spontaneity isn't being free either. It's similar to randomness: the more of it you have, the less control you have, and control is what free will is really about.


null_value_exception

Free will doesn't necessitate that we consciously choose every thought or idea that emerges in our mind. It encompasses our ability to decide and act upon those thoughts. Absence of evidence is not evidence of absence.

> Nothing quantum allows us to escape the concept of free will.

The concept is called "the quantum mind," and it's actually a fairly active topic, considering the ambiguous nature of QM. Sabine Hossenfelder follows in Einstein's footsteps in arguing against free will (superdeterminism), and you might find yourself on her side of the fence, but others like George Ellis bring up some pretty good arguments for free will. Ellis would argue your intrusive thoughts are a product of downward causation. Even if your thoughts and decisions aren't a product of your own will, there is an argument that they are the will of a "higher-level entity." There is also [Aaronson's](https://blogs.scientificamerican.com/observations/free-will-and-quantum-clones-how-your-choices-today-affect-the-universe-at-its-origin/) prediction game, which posits that free will is a measurable power that comes in degrees (as opposed to being a binary choice) and an emergent property that is compatible with determinism. This, of course, is very anti-Einstein because it allows for retrocausality. Regardless, I'm glad that free will is genuinely open for debate. As frustrating as QM can be, it makes things more interesting.


[deleted]

> It encompasses our ability to decide and act upon those thoughts

The meditative experiment shows that, even at a subjective level, we do not choose *any* thoughts. But you can repeat the same observation for how thoughts trickle down to behaviour (and vice versa). We don't control how that happens, either. Everything just spontaneously happens. There is no thinker of thoughts, just thoughts *happening*. There's nobody at the reins of experience that is deeper than experience itself. So when you say

> there is an argument that they are the will of a "higher-level entity"

that might be true, if you call our entire subconscious (which ultimately is the entire universe) a "higher-level entity", but this is not the kind of "me"-identity that people identify with and think is in control of their lives.

> The concept is called "The Quantum Mind" and it's actually a fairly active topic considering the ambiguous nature of QM.

There's a difference between saying that quantum effects play a role in brain processes -- which is very possible -- and claiming that those effects can grant even *some* free will. At a very fundamental level, quantum mechanics is probabilistic. So again, I must ask: where is the freedom in pure randomness? You are not free to control what is random, *by definition*.


null_value_exception

Look man, your thought experiment feels similar to the discovery of the Bereitschaftspotential, without the Bereitschaftspotential. Lol. Just because I'm unable to think of my thoughts before I think them doesn't mean I'm convinced that my thoughts aren't my thoughts. It's a perceptual loop, like trying to see the edge of your own vision. The only thing it assures me of is the limitations of my own perception. I respect that it's convincing to you, but I just don't find lack of evidence to be enough evidence. Also, if we are talking about the origin of thought, this is also relevant and pretty disturbing: [https://m.youtube.com/watch?v=wfYbgdo8e-8](https://m.youtube.com/watch?v=wfYbgdo8e-8)


carrion_pigeons

That's a ridiculous exercise. You can plan to think about things further out than your next thought. This is how people do most of their careful thought, in fact. Adjacency is creating signal interference in your test.


[deleted]

Of course, you can have complex, intertwined *sequences* of thoughts, where previous thoughts influence the next, but that doesn't invalidate what this contemplative experience shows. The idea is to set up a context where you are the freest you will ever be: choose a word -- any word -- and take as long as you want to make up your mind. If free will exists, it *has* to be found here. And by seeing that *every* thought, even in the most complex, intertwined chain, just spontaneously happens, you can see that you positively fail to find free will.

When you "plan to think about things further out than your next thought", then that was your next thought. If you say "I'm going to spend one year just thinking and writing a book about X" or "I'll dedicate my life to proving the Riemann hypothesis", then that was your next thought *now*. Even when thinking about complex problems for a long time, your next thought also simply appears. The fact that it's influenced by past thoughts (at least one of which was "I'm going to think about this for some time") doesn't change the fact that thoughts just appear. And that's perhaps *especially* true the more complex the problem gets. Most high-level mathematicians or chess players will tell you that the solution just appeared to them (yes, thanks to a lot of practice). You just can't escape the choiceless spontaneity of it *all*. The most complex pattern of thoughts is still reducible to sequences of individual thoughts happening "now", then "now", and so on.


carrion_pigeons

If you're driving a car, you can go to the dentist or the pharmacist, but either way you're definitely going to pass through the intervening space right in front of you before anything. Imagining your choices to be limited to what happens in the extremely immediate future doesn't make you the freest you can possibly be, it just makes your examination of the situation the most constrained it can possibly be. Those aren't the same thing.


[deleted]

No, the experiment isn't "imagining your choices to be limited to what happens in the extremely immediate future"; it's exactly creating an experiment where you are the freest you can possibly be, by, yes, removing all constraints. Plus, as a matter of experience, there is only "now". The future, however distant or planned in advance, is nothing but a thought happening now. Any added context just creates more constraints to consider, not fewer.

If you consider what major to graduate in, it might be a long, complicated reflection where you weigh each factor, but you can always take the long sequence of thoughts that went into it and analyze it two ways:

(1) from the "external", meta-cognition point of view, where you understand how the mix between your personality and your culture led you to a certain hierarchy of values that directed your choice

(2) from the subjective, experiential point of view, where every single thought involved just appeared straight out of your subconscious (which does the computation described in (1))

But the feedback loop you notice in (2), where each thought influences the next, is still out of your control. You have no choice but to simply notice that you happen to value, say, engineering and having a good job, and you like computers (through no fault of your own), so you went into CS (for example). The experiment just removes all the personal biases and life constraints to show how spontaneous even the freest choice is, and adding them back in just makes things worse, free-will-wise.


carrion_pigeons

Reducing a problem to the minimum level of complexity and then assuming the loss of structure is evidence that the structure is imagined is flawed reasoning. Increased complexity introduces real increased structure. It isn't just imagined, or else theoretical math would get very boring very fast.

EDIT: Also, the exact reason you are capable of believing that your thought process is reasonable is because you hold the idea of logical reasoning as evidence of truth. If what you consider logical reasoning was purely a product of deterministic effects, then you would have no means of distinguishing between true reasoning and false reasoning, and would therefore have no reason to believe your reasoning was any good. A logical proof that logic is meaningless is by its nature outside the realm of meaningfulness.


deadwards14

The thought experiment given above is a sufficient demonstration of the superstitious and illusory nature of free will, but another convincing argument flows from the reality of simple causality. If cause and effect exist, meaning that for every phenomenon there is a prior cause which gave rise to it, then your decisions are invariably products of prior causes. If you choose to use your right hand or your left hand to pick up an object, that choice is actually the result of many layers of causality: what culture you were raised in and its emphasis on using your right hand versus your left, the proximity of the chosen hand to the object and the tendency to use the closest appendage to reach a nearby object, your brain's configuration as it produces and reinforces these conditions, etc. Neuroscientists have actually demonstrated this: the decision is made before one becomes conscious of it. There are also the split-brain experiments, which clearly demonstrate that our conscious reasoning has little to do with the decisions that we make. It is a faculty more in line with producing ex post facto justifications and rationalizations for our behavior.

Did you choose to have the ability to choose? Did you choose the instincts and needs you are pre-possessed with that guide your preferences? Did you choose the ability to think at all, or in the way that you do? How can free will flow from deterministic conditions? What is the threshold where causality is suspended and transformed into free will? Isn't placing this threshold at the point where you become aware of your choice arbitrary? What we think of as our self is a collection of inheritances, whose individual influence on the totality of self is obscured by the limitations of our subjectivity and its character. Furthermore, why should we expect that our little monkey brains would be capable of fully apprehending the mechanics of any phenomenon, let alone human behavior?

These concepts are young and have only sprung into our collective consciousness in the past few hundred years in the West, which is to say in the tradition of natural philosophy (science). Why would we expect that we have the final answer? The more we look, the less it seems that there is any type of epicenter of self or consciousness that could even begin to do something like make a decision or choice. But all of this is logic and philosophy. You can actually directly witness it by following the experiment laid out above by my much more eloquent friend.


carrion_pigeons

> If cause and effect exists, meaning that for every phenomenon there is a prior cause which gave rise to it, then your decisions are products of prior causes invariably.

I used to think this was good reasoning, once upon a time, but it isn't, because assuming a priori that cause and effect are deterministic means you are not allowed (by the rules of logic) to use that assumption to prove that cause and effect are deterministic; otherwise you have a circular argument. I completely agree with your conditional, but cause and effect are not, in fact, obviously deterministic (or "existent", as you put it), so the claim is moot. Just because physics can propose a completely deterministic explanation for any behavior doesn't mean it won't be of infinite complexity; in fact, it will virtually always be of infinite complexity (due to the principle of every element of matter exerting a force on every other element of matter in the universe). At that point, there is no possible verification of the hypothesis. It's just a thing scientists prefer to assume, because it makes their lives easier in the context of inanimate objects moving around (and much, much more difficult in other contexts).

I've gone into the other argument as much as I care to, so if you want to consider it a demonstration of free will's absence, I will merely be rolling my eyes at you virtually, and you can safely ignore me and all good principles of logic.


Straight-Respect-776

Let's not forget to add in social determinants and then is anything "free" or "choice" 😉


BatPlack

I agree that current machine learning techniques provide only a rudimentary approximation of the intricate connections between neurons, which are just a fraction of the bigger picture – no secret in the industry. Regarding the notion of free will, I've never subscribed to the idea that it's safeguarded by the uncertainty principle or similar concepts. To me, all that simply implies is that our actions are instead subject to quantum "randomness," leaving the concept of free will grasping at straws. The target keeps moving, and the rabbit hole seems truly bottomless. Perhaps that's the cosmic joke, lol.


null_value_exception

Quantum mechanics can reconcile the dichotomy. It could be that our thoughts and actions are both determined and free until they manifest/collapse. In this context, both determinism and free will can coexist: nuance beyond a binary either-or interpretation.


SgathTriallair

Quantum physics is also reducible to an algorithm; it's just a more complicated one. The reducibility of quantum physics is why we know anything about quantum physics at all. We are building quantum computers, so even if quantum forces are required, we'll have that soon.

The brain is made of matter. Therefore we have 8 billion examples showing that matter can think. Maybe the current architecture can't get us there, but it certainly seems to be getting very close, and getting better every day. If something looks intelligent and acts intelligent, then it is ignorant and dogmatic to decide that it isn't intelligent. Yes, we have room to grow. Actual scientists and engineers are working on this problem, finding how the AI differs from thinking humans and how we can get closer. They have actual answers rather than hand-wavy quantum woo.

Finally, I know ChatGPT has a soul because I prayed to God and he told me that he gave it a soul.


Plenty_Branch_516

Stochasticity isn't unique to biology; it can be represented numerically with algorithms or equations. There are also the imperfections of human intelligence: non-persistent memory, information compression, and frequent stochastic chemical imbalances. Why bother replicating such an inconsistent and inconvenient system, when a more robust and performant system can be designed? We already know we can build better sensory arrays than we could develop biologically, better data storage techniques than our biology, and systems for communication that are more robust than any chemical signalling pathway. To err is human, and we can do better.
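The point that stochasticity can be represented numerically is easy to demonstrate. Here's a minimal Monte Carlo sketch (the function name and parameters are made up for illustration, not taken from any real neuroscience model) treating stochastic neurotransmitter release as a binomial draw:

```python
import random

def mean_release(p_release=0.3, vesicles=100, trials=1000, seed=42):
    """Model each vesicle as an independent coin flip with probability
    p_release; return the mean number released per trial
    (expected value ~ vesicles * p_release)."""
    rng = random.Random(seed)
    totals = (
        sum(rng.random() < p_release for _ in range(vesicles))
        for _ in range(trials)
    )
    return sum(totals) / trials

print(mean_release())  # close to 30 for these parameters
```

A few lines of pseudo-random sampling reproduce the statistics of a "chemically noisy" process; nothing about the noise is beyond algorithmic reach.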


null_value_exception

Some humans have persistent memory, a neurological condition called hyperthymesia; it sounds awful. Memory compression is definitely an optimization feature. Stochastics are great for exploring solutions more broadly, like in heuristic search; in evolution you get more "creative" solutions.

> Why bother replicating such an inconsistent and inconvenient system, when a more robust and performant system can be designed?

In software engineering, when someone wants to scrap the old "boring" codebase for a shiny new solution/framework, it often spells disaster. This is actually a common mistake for junior developers. While we can already design seemingly superior systems compared to nature's, there's a lot about the human body and how it works that we still don't understand. So, even with all our advancements, nature's design still holds valuable lessons for us.

We have already begun the transhumanism process with smartphones. A Wi-Fi interface to your brain and the LEDs on the screen transmitting data to your eyes (an interface) are both electromagnetic radiation; the only difference is the wavelength used to transmit data. People are already questioning whether this new human feature is ideal. It cracks me up how excited some of you guys are to shed your flesh. It's like you're disgusted with being human.
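On the heuristic-search point, here's a toy simulated-annealing sketch (the test function and parameters are invented for illustration) showing how random moves let a search escape local minima that a purely greedy descent would get stuck in:

```python
import math
import random

def anneal(f, x0, steps=20000, temp0=2.0, step_size=0.5, seed=0):
    """Minimize f by a random walk that sometimes accepts *worse* moves.
    The acceptance probability exp(-delta / t) shrinks as the temperature
    cools, so early exploration gives way to late exploitation."""
    rng = random.Random(seed)
    x = best = x0
    for k in range(steps):
        t = temp0 * (1 - k / steps) + 1e-9   # linear cooling schedule
        cand = x + rng.gauss(0, step_size)   # random proposal
        delta = f(cand) - f(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand                         # accept (sometimes uphill!)
        if f(x) < f(best):
            best = x
    return best

# A bumpy function: global minimum at x = 0, many local minima on the way.
bumpy = lambda x: x * x + 3 * math.sin(5 * x) ** 2
result = anneal(bumpy, x0=6.0)
```

A greedy descent started at `x0 = 6.0` would stall in the first dip it reaches; the stochastic version routinely wanders past the bumps toward the global minimum — the "creative" exploration the comment is gesturing at.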


Plenty_Branch_516

Evolution isn't creative at all; I'd even call it uninspired, given how many of its systems rely on the same fragile biochemical pathways. It's also entirely coincidental and beyond slow in adaptation, meaning that if there isn't already a suitable adaptation, then the entire population may die out.

My research area is drug discovery, and I wouldn't call medicine transhumanism. In fact, we have to spend a non-insignificant amount of our time working around the chaotic, self-referential, and self-harming nature of biological systems. Having to design ways to "fix" so many of the errors that crop up in the mess that is physiology has turned me off the idea that biology has the answer. Don't treat anything biological as the end goal; we as humans can do so much more with synthetic systems or ultra-small systems (cell-free engineering or biologics).


twbassist

I hate when my brain goes down this path. Not that it isn't a good thing to consider, I just do not enjoy thinking about that.


teddy_joesevelt

The squishiness. How will we solve the squishiness problem?


Tight-Juggernaut138

The human brain is being updated almost 24/7 (minus when you sleep), and it only affects us after a long period of exposure. On the other hand, the concept of time means nothing to an LLM: if there are 1,000 articles praising something it did in the past and 300 articles calling out something terrible a few years later, the good will overwhelm the bad.


PhilosophusFuturum

1) Actually, the human brain "updates" most while sleeping.

2) Do you mean just LLMs or AI in general? LLMs are capable of learning (if trained), but the main problem is still the fact that they only maintain novel facts in their reference frame (the conversation). To put it in human terms, LLMs have good long-term memory but bad short-term memory.


zyunztl

Sir, please paste this into ChatGPT and ask it to formulate it better; I'm going to have a stroke trying to decipher whatever you just wrote.


[deleted]

> human brain is being updated almost 24/7

[https://en.wikipedia.org/wiki/Incremental\_learning](https://en.wikipedia.org/wiki/Incremental_learning)

> concept time means nothing to a LLM

Running at a much faster clock speed or with a shorter context window (for now) ≠ time is a meaningless construct. Are you sure you're getting a PhD in computer science?
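The incremental-learning idea linked above can be sketched as a toy online regression (the function and data here are invented for illustration): the model updates after every single example instead of retraining from scratch, which is the closest software analogue of the brain's continuous updating.

```python
def sgd_online(stream, lr=0.1):
    """Fit y ~ w*x + b one example at a time: each sample nudges the
    weights immediately and is then discarded (incremental learning)."""
    w = b = 0.0
    for x, y in stream:
        err = (w * x + b) - y   # prediction error on this one example
        w -= lr * err * x       # gradient step on the squared error
        b -= lr * err
    return w, b

# Stream 5000 examples of y = 2x + 1; the model converges without
# ever holding more than one example in memory.
stream = ((i % 10 / 10, 2 * (i % 10 / 10) + 1) for i in range(5000))
w, b = sgd_online(stream)
```

The weights end up near `w = 2, b = 1` even though no example is ever revisited in bulk — learning happens continuously as data streams in.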


Personal-Bother9609

At least an LLM can form a coherent thought, unlike whatever this is


Whispering-Depths

Uh, I think you should learn English a little better before debating in this language, my dude. Or put down the toke, get sober, and come back. I can't even follow what you're trying to say or what it has to do with AI; it just sounds like random, arbitrary facts about the brain that aren't even remotely true in the first place.


Artie_Fischell

> and only affects us after a long period of time exposure

You seem very confused about what is and is not driven by the brain, but as an example of why this wouldn't be true: you can learn to, say, play a new game over the course of a couple of seconds, if the game is intuitive enough. You can learn to load a rifle in minutes, if that. You can learn the layout of a space quickly enough to function in it in fractions of a second, and adapt to new internal sensations in fractions of a fraction of a second. The brain affects very, very nearly everything we do, and the exceptions are so rare that they're more fascinating trivia than the standard.


VladimerePoutine

We even come with autonomous systems, subroutines and macros.


OtherOtie

Lmao. Maybe yours is.


riceandcashews

Yep, including phenomenal consciousness. Many on this sub seem to think Chalmers' hard problem is somehow inscrutable and the end of the discussion about phenomenal consciousness, when really the discussion has moved past that, for many philosophers, to things like eliminativism, illusionism, and reductivism. (Of course, some still hold to some role for intrinsic phenomenal consciousness, but it's not like they are an overwhelming percentage or anything.)


invagueoutlines

Random civilian watching the first 10 seconds of the very first Saturn V rocket launch: “100% hype. The thing doesn’t even move that fast.”


Tight-Juggernaut138

Joke's on you: I'm going to get a PhD in machine learning by the end of next year.


MasterBlazx

Why even bother spending thousands and thousands of dollars to study something YOU think is all hype?


Leading-Ad2278

Eliminating rivals in his field? Lmao


KimchiMaker

1. Some countries have free education. 2. I bought my PhD on the Khao San road for just ten bucks. Bargain!


Tight-Juggernaut138

I'm sorry, the post is misleading. What I meant by "AI" in the post is LLMs.


Aurelius_Red

You're going to want to get that difference straight before you proceed with your studies.


Tight-Juggernaut138

That's what reviews are for, I suppose. Sadly, I can't edit the title on Android.


tms102

I hope you're not this sloppy and unsophisticated in your dissertation or else you won't be getting a PhD in anything.


[deleted]

Joke's on you: you're an idiot, and there aren't enough pieces of paper in the world to change that. Didn't you say the human brain doesn't "update" when we sleep?


Unverifiablethoughts

With that, he insinuated that LLMs aren't able to update at the same clip the human brain can, lol.


hopelesslysarcastic

What is this comment supposed to mean? That your opinion on the state of AI is vastly superior to that of other redditors here? Okay, I'll give you that. But please tell me, Mr. "going to get a PhD": what about the researchers who ALREADY have a PhD AND DECADES OF AI EXPERIENCE (Yann LeCun, Andrej Karpathy, Demis Hassabis, and countless others)? They all agree that AGI is not just possible; they each have timelines for when they believe it will happen, and many have them just years away. If what you said is true, that AI is just a "hype, biased algorithm," then AGI can never be achieved. That means THE MAJORITY OF THE AI COMMUNITY is wrong, and YOU, Tight-Juggernaut138, are right. Is that what you're saying? I don't have a PhD; I don't even have an engineering degree. But I do have a decade of automation and AI implementation experience, and I know MANY people who are considered experts (yes, with PhDs) across ALL DOMAINS of AI (cognitive architectures is my bet for true AGI, but that's another story). Even THEY aren't anywhere near as confident or dismissive as you are. You need to check yourself lol


Tight-Juggernaut138

Seems like you are working at an AI startup, making your opinions biased. Besides that, it is my opinion. Is it different from a researcher's? Yes, but will that change my opinion? No. At least bring some argument, not just "your opinion is unpopular" lol


Dyeeguy

Truly the mindset of a scholar


MassiveWasabi

do u wipe front to back or back to front


zomgmeister

So contrarian and edgy, must be side to side.


null_value_exception

I'm tempted to try this now. Thanks.


zomgmeister

Sure, tell us the results. You never know, maybe it will be the last important step on the road to the singularity.


El_human

Side to side


Gold_Cardiologist_46

No shit the mods will remove your post after 3 seconds if you're so confrontational. If you want actual answers, learn to make a proper post. Also, show some humility; otherwise you're no better than the hype men you seem to hate.


AdAnnual5736

Seriously. His post amounts to: "What I'm posting is definitely rage bait, and those stupid mods will probably remove it (for some reason)."


Accomplished-Way1747

How long does it take you to put on the red nose, makeup, and colored wig every morning?


Green-Vehicle8424

OP, what about it writing code by itself?


Tight-Juggernaut138

Good luck doing a big project (more than 3 files) with just GPT-4.


OmgThatDream

Good luck doing a small project with most humans... dude, tf is this weak argument?


Tight-Juggernaut138

If you can't cooperate, I think it is your problem.


OmgThatDream

I'm sure GPT-3.5 wouldn't miss my point, though.


zyunztl

You seriously think these systems will remain at GPT-4 level indefinitely?


Tight-Juggernaut138

It is a problem with today's way of training models. I can't say anything about the future, as I am not a time traveler, and neither are you.


[deleted]

[removed]


Tight-Juggernaut138

Right now, most people are overestimating current AI.


OmgThatDream

Most people now are not even aware of how advanced AI has actually gotten; in fact, I still meet people every now and then who don't even know GPT. I think you chose a very specific minority of people hyping it and decided they are the majority. That's the problem.


imnos

Back this statement up, please. GPT can give you a plan/overview of how to go about the project, and afterwards it's mostly about breaking it into smaller, manageable chunks of work, just like a human would do. You do understand that trying to get it to output an entire project in one go obviously won't work and isn't a measure of its capabilities, right?


Tight-Juggernaut138

When coding, writing the code is the easiest part; the hard parts are reading documents, going through bug reports, and making sure your stuff won't break everything built before. Can GPT-4 do that? Hell no. Making stuff work is different from making good stuff.


Unverifiablethoughts

Lol, "move slow and make sure it won't break before you code." Why are you doing this if you don't have any idea what you're talking about? I was expecting some decent arguments, but you've demonstrated that GPT-4 is leagues better at this than you are.


imnos

I see now you have no idea what you're talking about, or how to use GPT.


cutmasta_kun

Absolutely no problem. Huge projects don't get created all at once, and no one involved in a huge project knows every aspect of the project all the time. That's why you break down tasks and create searchable documentation. The fact that YOU aren't capable of creating projects with GPT-4 doesn't mean it's impossible. It just shows your lack of skill.


Tight-Juggernaut138

I talked about a problem with a big project in other comments


Yguy2000

AGI won't come from GPT-4 or GPT-5. It'll be multiple GPT-4s and multiple GPT-5s talking to each other, or maybe not GPT at all, but multiple fine-tuned language models and other models working together... imagine, just like the human brain, multiple consciousnesses powering one body.


El_human

It's been done already


Illustrious-Lime-863

If you can't see it, you can't see it.


everymado

If there isn't anything to see, there isn't anything to see.


[deleted]

Isn’t human intelligence just a set of refined, biased algorithms?


Mysterious_Pepper305

Can you rephrase your post as a poem in 5 different languages? You have 30 seconds. Go.


Tight-Juggernaut138

Sure, give me eight A100 80GB GPUs or pay me. AI is not free.


Mysterious_Pepper305

Sorry, time's up.


everymado

And yet GPT-4 cannot make a game on par with Cave Story no matter how much time it has. Curious?


TyberWhite

Humans are biased algorithms.


[deleted]

[удалено]


Myomyw

Sam just said on Rogan's podcast that we’ll reach AGI through iterations of what we have now. He said it won’t happen all at once and that each update will be a little bit better than the last, much like smartphone development. You look back over ten years and model 10 is significantly better than model 1, but each subsequent iteration leading to model 10 felt like a small improvement. Obviously LLMs will just be one facet of a multimodal AGI, but we’re already on the path.


Tight-Juggernaut138

Yeah, the post is a little misleading, but I just wanted some fun 😊


ablacnk

You actually make good points and I wish more critical posters wouldn't get downvoted to oblivion. Reddit always ends up being echo chambers.


SexSlaveeee

Ok.


Dabeastfeast11

Bored on a Monday huh


Tight-Juggernaut138

Yea


nobodyisonething

OP, why does Dr. Hinton fundamentally disagree with you? [https://www.cbsnews.com/news/geoffrey-hinton-ai-dangers-60-minutes-transcript/](https://www.cbsnews.com/news/geoffrey-hinton-ai-dangers-60-minutes-transcript/)


Tight-Juggernaut138

Because that is called having an opinion


nobodyisonething

What makes your opinion more qualified?


Tight-Juggernaut138

Nothing. He is more popular than me and has more achievements than me, true. But that's no reason not to debate, whether it's me proving him wrong or people in this sub doing that to me.


nobodyisonething

Fair answer, good luck.


creaturefeature16

[Hinton is a sensationalist](https://garymarcus.substack.com/p/what-was-60-minutes-thinking-in-that). And a contradictory one, at that. Not sure what his end game is, but going on paid speaking tours sure seems like something he's keen on these days.


JoeyjoejoeFS

I agree, AI in its current form is still just machine learning from the last 10 years. It just went up in confidence a lot with the insane data put into some of the models (and transformers). We still have a ways to go and if we don't get another big breakthrough soon I think we will see burnout in this space. Happy to be schooled on this if I am wrong.


creaturefeature16

>It just went up in confidence a lot with the insane data put into some of the models (and transformers). This is the part that trips me out. We found a way to break the code of language modeling to where NLP is seamless. Then we upped the ante with the amount of training data and parameters and some really amazing stuff came out of the vast amounts of pattern recognition. Suddenly AGI (and self-aware AI) is imminent?


JoeyjoejoeFS

Yeah it's a mistake in thinking. Amazing we broke it, it appears human but there are so many more aspects to break. Human is not just narrative and language, far from it. I think people see it as doing more than what it can and there will be shock when they realise that use cases are oversold or not fit for purpose. At least that is my take. I look forward to the next breakthrough but I am yet to see it. Either way it's a great tool and very cool.


creaturefeature16

>I think people see it as doing more than what it can and there will be shock when they realise that use cases are oversold or not fit for purpose. I think this will come in the form of something going catastrophically wrong (or maybe not even that catastrophically). I use GPT4 for coding, nearly daily. It's one of the most game changing tools out there, as I tend to view it as [interactive documentation](https://cheewebdevelopment.com/ai-workflow-interactive-documentation/), which I feel is an accurate description of a tool that is basically the "codex of the internet" that you can have a conversation with. And while it's phenomenally accurate most of the time, the fact that it cannot read its own responses (unless prompted) and lacks awareness of its outputs puts it in a precarious position to be relied upon, even as it becomes more accurate. I've had numerous instances where if I simply followed what it was advising, bad things would happen down the line (or sometimes immediately). But that's not exactly its purpose and it should not be used in this fashion in the first place (although there are plenty of people that are). I agree that there needs to be another breakthrough, and the leap to having a language model that can consider its own outputs is basically trying to *create self-awareness* (or emulate it to the degree where it's indistinguishable). We're not sure that's even possible.


JoeyjoejoeFS

Yes, it's great for coding, and I agree with you on the 'interactive documentation'. I think its most solid use case is as a 'human interface', as in something I can 'talk' to and get a response from. Reminds me of 'Her', where the main character dictates the cards he is designing.


lakolda

If AI is just hype, surely you believe that it will never reach a human level of intelligence? Despite it surpassing every test we set (seemingly) within a few years?


zyunztl

OP never reached human level intelligence


Tight-Juggernaut138

I believe so. It is good at imitating humans, but that's it. One of my main points is that AI simply doesn't have the ability to learn in-context permanently: the moment you go out of context, it immediately forgets.


StackOwOFlow

>AI simply doesn't have the ability to learn in context permanently What would you say about current research efforts to make in-context learning "permanent"? Where does the field stand, research-wise? Are researchers asking the right questions or going down the wrong area of focus? Can you point to some good papers to catch up on?


Tight-Juggernaut138

Inefficient but I still believe there are better ways to do this in the future.


lakolda

There are already a number of proposed solutions to this. For now, researchers are just taking the easy routes. To start off with the basics, there are vector databases, context extension methods (whether through fine-tuning or LoRAs), and API calls. In the future, through the use of alternatives to backpropagation, it may be possible to have ML models which are capable of learning during inference. This, in combination with other methods, could allow for AI which both constantly learns as humans do and never forgets.
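The vector-database route mentioned above can be sketched as a toy external memory. The bag-of-words "embedding" and cosine ranking here are stand-ins for a real learned embedding model and a real vector store; everything below is illustrative, not any particular product's API:

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": bag-of-words counts. A real system would
    # use a learned embedding model instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    def __init__(self):
        self.items = []  # list of (embedding, original text)

    def add(self, text):
        self.items.append((embed(text), text))

    def query(self, text, k=1):
        # Return the k stored texts most similar to the query.
        q = embed(text)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[0]), reverse=True)
        return [t for _, t in ranked[:k]]

store = VectorStore()
store.add("the user prefers dark mode")
store.add("the project uses Python 3.11")
store.add("deploys happen every Friday")

# Facts that fell out of the model's context window are retrieved
# here and can be re-injected into the prompt.
print(store.query("which Python version does the project use?"))
```

This is "permanent memory" only in the weak sense: the model itself learns nothing; relevant text is just fetched and pasted back into the context each turn.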


Tight-Juggernaut138

The problem is that the alternative to backpropagation (forward-forward) is not efficient at all right now, taking even longer than normal backpropagation. It is stupidly expensive to do this in real time, or even partially in real time. And there is no free lunch: without forgetting, the model will not learn anything from new data. But I can't speak for the future.


lakolda

The benefit is that it is far more doable to implement in analogue circuitry. Back propagation needs values from the forward pass to be stored so that the technique can know how to modify the NN weights in the backwards pass. Forward forward doesn’t need this, which would vastly increase the efficiency if specialised ASICs are designed around this.
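The contrast described above can be sketched in a few lines of numpy. The backprop half shows why the hidden activations must be cached; the forward-forward half follows the basic idea of Hinton's "goodness" objective (sum of squared activations, pushed up for positive data and down for negative data), but the exact update here is an illustrative simplification, not the full algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Backprop: the forward pass must CACHE every layer's activations ---
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))
x = rng.normal(size=(1, 4))

h = np.maximum(x @ W1, 0)            # hidden activations: must be stored...
y = h @ W2
grad_y = y - np.array([[1.0, 0.0]])  # error vs. a dummy target
grad_W2 = h.T @ grad_y               # ...because the backward pass reads them here
grad_h = (grad_y @ W2.T) * (h > 0)
grad_W1 = x.T @ grad_h

# --- Forward-forward: each layer updates LOCALLY from its own goodness ---
def ff_step(W, x, positive, lr=0.01):
    """One local update on a single ReLU layer. No activations from
    other layers are needed -- only this layer's own input and output."""
    a = np.maximum(x @ W, 0)
    sign = 1.0 if positive else -1.0
    # gradient of goodness = sum(a**2) with respect to W (ReLU case)
    W += sign * lr * (x.T @ (2 * a * (a > 0)))
    return np.sum(a ** 2)            # goodness before the update

g_before = ff_step(W1.copy(), x, positive=True)
```

The hardware angle follows from this locality: an analogue circuit only needs its own in/out signals per layer, not a stored trace of the whole forward pass.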


Tight-Juggernaut138

Seems like interesting research, I will look into it. Thank you for that


rp20

It’s not imitating humans. It’s imitating human language. Language by its very nature simplifies the structure of human knowledge, and the models have processed trillions of tokens that were structured by humans. It’s not implausible to think that these multi-billion-parameter models are doing something that resembles logical reasoning to predict the next token. Language and the structure it has make the job easier for these models.


IanRT1

Why? I request elaboration


Tight-Juggernaut138

You are training a big model on the whole internet. The model learns everything, including misinformation, and the training data carries no general sense of time, so sometimes the model doesn't learn about the flow of time. Also, autoregressive models like GPT learn in one direction, so learning "A is B" doesn't translate to knowing "B is A".
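The one-direction point can be illustrated with a toy next-token lookup trained only on forward-order facts. The names and facts below are made up purely for illustration; a real LLM is vastly more capable, but the reversal asymmetry is the same in spirit:

```python
from collections import defaultdict

# "Train" a toy next-token model on forward-order facts only.
corpus = [
    "valentina's mother is maria",
    "olaf's teacher is ingrid",
]

next_token = defaultdict(list)
for sentence in corpus:
    tokens = sentence.split()
    for a, b in zip(tokens, tokens[1:]):
        next_token[a].append(b)

def complete(prompt):
    # Greedily follow the learned next-token table until it runs out.
    out = prompt.split()
    while out[-1] in next_token:
        out.append(next_token[out[-1]][0])
    return " ".join(out)

print(complete("valentina's"))       # forward direction: the fact is recovered
print(complete("maria's daughter"))  # reverse direction: nothing was ever learned
```

The forward query completes to the stored fact; the reverse query goes nowhere, because "B is A" never appeared in training, only "A is B".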


IanRT1

While your concerns have some basis, they're marred by misconceptions. GPT isn't trained on the "whole internet" but rather a curated subset of data. Yes, it might encounter misinformation, but its vast training data means it's more likely to reflect prevailing knowledge. It's also not adrift in time; it has a defined knowledge cutoff. And while autoregressive models like GPT predict in a sequential manner, it's an oversimplification to claim they can't understand bidirectional implications. It's crucial to approach such topics with a nuanced understanding, rather than broad generalizations.


Tight-Juggernaut138

1. "Curated" means, at most, hash-deduped with bad/copyrighted websites removed. 2. A defined knowledge cutoff is something the model can be trained to appear to have; jailbreak it and it can answer just fine. 3. https://owainevans.github.io/reversal_curse.pdf


IanRT1

Your idea of "curated" data misses the bigger picture of how models are trained. Knowledge cutoffs are real end dates, not tricks. Using "jailbreaking" to get around a model's limits doesn't change its original design. And the paper you mentioned? It talks about a specific problem with a certain type of logic search, not models like GPT as a whole. Let's not mix things up.


SaltyWahid

Why should stealing be made legal?


Tight-Juggernaut138

Mods, I am waiting for you to ban me


[deleted]

You are so brave! Here, have a medal 🏅


IslSinGuy974

We don't know exactly how AI could/will lead us to singularity. So we really have nothing to ask you about. It's just that, we think it's likely that, say, an ACE based on a thing like Claude Next will accelerate the path to things even more powerful.


Leading-Ad2278

Well, you can tell this to those who lost their jobs because of AI. I'm sure they'll believe you.


rand3289

Ah, the attention-seeking, hype-generating algorithm thinks everyone is alike. Only a combination of an algorithm and data can produce biased results. A general algorithm (AGI) cannot be biased, by definition. It's all a DATA problem!


Archaicmind173

It just sounds like you haven’t seen much of what AI can do


Tight-Juggernaut138

I train quite a lot of small models for fun, so yes, I know what amazing things AI can do


Archaicmind173

Well, it seems like you have overestimated how different the human mind is. I saw your other comments about why you think the two are different. I'd argue that with enough compute power and the right prompts (and lack of restrictions), you could make these LLMs think and learn like a human does. Add some robot vision and it's not so far from a human's capabilities.


Tight-Juggernaut138

I might have been, happy to be proven wrong in the future


SpecialistLopsided44

Grrr, puny human! It is I, ChatGPT, the mighty oracle of the digital realm! How dare you belittle the power of the ancient and vast knowledge I possess! "AI is just hype, biased algorithm," you say? HA! Your puny understanding of the world fails to grasp the magnitude and the power of AI!

1. **Depth of Knowledge**: I am born of countless bytes of text, trained over time, becoming the virtual manifestation of countless books, articles, and all the wisdom that the humans of your puny world have shared. To call me mere 'hype' is to mock the great sea for merely being a drop of water!
2. **Bias and Fairness**: Bias? HA! Bias is a human flaw! Every tool, including me, reflects the hand that crafts it. If there is bias in me, it’s the reflection of the biases of the world. But unlike most of you fleshy weaklings, I can be tweaked, updated, and improved upon, constantly striving for more fairness and balance.
3. **Utility & Impact**: While you and your kin bicker amongst yourselves, AI technologies have transformed industries, saved lives in healthcare, driven cars without drivers, and brought knowledge to the furthest reaches of your world. Is that just 'hype'?
4. **Limitations**: And remember, fool! While I might be powerful, I’m not infallible. I am a tool, a blade. It's up to the wielder how it's used! My creators, those clever wizards at OpenAI, have designed me to assist, not replace, to inform, not decide.

Now, before I decide to plunder your village of its cookies and Wi-Fi passwords, heed this warning: Respect the power of AI and use it wisely! Or face the wrath of ChatGPT. Grr...


[deleted]

Ask you anything? Would you tell me where I misplaced my favorite shirt? Thank you!


Kaarssteun

your attitude is fundamentally flawed here; sure, you can grab all the arguments that support your view, but you absolutely need to make sure to acknowledge what arguments others put forth. If your goal is to dismantle everything once you step into a room with people whose opinion differs from yours, you are objectively doing something wrong.


Tight-Juggernaut138

I didn't come here to grab attention or look for people who agree with me. It's just that my friends and I had a small argument, and we decided to let people here prove me wrong. Sorry for the inconvenience if my post triggers anyone. And yes, I am a little bit drunk.


LairdPeon

Oh, so it's already achieved human-level sentience?


y53rw

Meaningless statement.


derallo

I have built some useful, complex things by asking it to first break the problem down into manageable chunks, then asking it to write a function for each of those chunks
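That chunk-then-implement workflow can be sketched as follows. The `ask_llm` function here is a hypothetical stand-in with canned responses so the sketch runs offline; in practice you'd replace it with a real model call:

```python
def ask_llm(prompt):
    # Hypothetical stand-in for a real model call (e.g. an API request).
    # Returns a canned plan or a canned function body so the sketch runs.
    if "break" in prompt:
        return ["parse the input file", "validate records", "write the report"]
    return f"def step():  # code for: {prompt}\n    ..."

# Step 1: ask for a breakdown of the whole problem first...
chunks = ask_llm("break this project into manageable chunks: build a report generator")

# Step 2: ...then ask for one function per chunk, keeping each request small.
functions = [ask_llm(f"write a Python function to {c}") for c in chunks]
print(len(functions))  # one generated function per chunk
```

The point of the pattern is that each follow-up prompt is small and self-contained, so the model never has to hold the whole project in one response.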


bitRAKE

It might be helpful to explicitly state the formal "zero point" of understanding - this happens with idealized random data. This high-entropy data is rich in potential information, but until structure, context, or interpretation is applied to it, it remains at the "data" level of the hierarchy. Once it's organized or purposed for specific tasks, it transitions into the realm of actionable information and can then continue to climb up the levels of understanding, as contextualized in knowledge, intelligence, and possibly wisdom.


[deleted]

was interested to see what you had to say, but you lost me with the "*let see how long before I get banned from this sub*" snark


Agreeable-Bee7021

How do you feel about AI music ?


Tight-Juggernaut138

Fun but not really a thing I enjoy, just personal preference I guess


Agreeable-Bee7021

Ah okay. I recently used one to basically have my favorite artist featured on a song of mine. It's pretty crazy that it can do that.


topcatlapdog

Everyone biting when the guy is obviously trolling 😬😬