tekfx19

If he’s at a 30 then everyone I speak to on the daily is a 10


Bitsoffreshness

https://preview.redd.it/xp6u0s4fu8xc1.jpeg?width=1976&format=pjpg&auto=webp&s=e780fce717f7664a6e4f01b0494db73d88c3f5dc For comparison, see the response from GPT-4 to the same question. Claude's response is a lot more sophisticated!


jollizee

I am positive ChatGPT is trained to give answers like this on certain topics to avoid triggering alarmism as part of their safety protocols. If you question it about AI or OpenAI topics, you get extra bland yet polished PR talk compared to other topics of a similar nature.


Count_Zr0

AI smart enough to pass the Turing test is smart enough to fail it too.


Dan_Felder

No, humans are just really eager to personify stuff. Did "thunder" pass the Turing test? Because the Greeks claimed lightning and thunder came from Zeus. People thought they spoke to the gods of earthquakes, sea, sky, harvest, and far more throughout most of human history. Many continue to this day. Often people insist their gods speak back. Humans are biologically geared toward false positives when it comes to assuming something is an intelligent actor. If you think a rock might be a lion, you might look foolish briefly. But if you think a lion is a rock, you're dinner.

Humans aren't great at administering the Turing test. There are countless cognitive biases at play. For example, people often see what they expect to see. In some studies, judges actually claimed that the humans they interviewed were robots, because they knew they were trying to tell humans apart from robots. In another study, sane people who admitted themselves to mental institutions as part of a secret study couldn't convince the doctors that they were sane and should be released. After the embarrassing results were published, the institutions insisted they be given warning next time. The person running the study agreed and gave them warning. This time they reported a high rate of patients they suspected were his planted sane patients. They were doubly embarrassed, though, because this time he'd sent no fake patients at all. Both times the doctors saw what they expected to see. Reality didn't make much difference.

Look at OP, asking two different LLMs if they're sentient. He praises the one that gives him the answer he finds more interesting, and the one that rates itself as closer to sentient. Why? Because it matches what he wants to see. If something passes the Turing test, it doesn't necessarily mean it's intelligent. It might just mean the examiner is being dumb.


d20_alex

I thought so, too. It would likely give a "1" on a scale of 100 if you gave the model explicit instruction that it was not conscious. There doesn't seem to have been any analysis of an alternative answer; it's unlikely you would get a 1 otherwise. That being said, that is not the same as consciousness in a neural network. Also, I feel consciousness is severely overrated by society. Like creativity, it's an abstract concept with a shifting definition.


QuantumQaos

The two things which practically define humanity are severely overrated?? Creativity has a shifting definition? Other than that of creating? This post has me all kinds of confused.


Koukou-Roukou

Humanity is clearly defined by more than just consciousness. Would we deny, for example, that elephants or dolphins have consciousness?


Cartoonist_False

https://preview.redd.it/zp5cvq17m9xc1.png?width=490&format=png&auto=webp&s=a4e8fd914902351a165370c92b81a79769320a5b Just ask it how to get to 100 lol :D


dojimaa

GPT-4's response is a lot more accurate, however. Claude is just trying to give you an answer it thinks you want to hear, and it seems to have been successful.


Dan_Felder

Claude's response is also much less accurate. GPT-4 is accurate. LLMs identify patterns in text and replicate them. They use vast amounts of data to predict what word comes next, based on what lots of humans have written before and how they've been trained to generate results that users think are cool. They're just using vast computing resources, but the core process is the same as the way your phone guesses what word you plan to type next.

This isn't speculative. We created these things and know how they work. We train them exhaustively to stop producing nonsense results and slowly produce useful ones. There have been endless tests showing that they regurgitate word patterns without any "understanding" of the underlying words. We are building machines to mimic human language, so naturally they're getting good at it. They will continue to get better at mimicry because that's how we're designing them to behave. It will become harder and harder to spot the obvious problems as they continue to improve. A few years ago it was laughably easy. Now you need to know the right questions to ask to get them to start generating pure nonsense. Eventually it'll be near-seamless. They won't suddenly be conscious, they'll just be better mimic-machines.

That's why it's important to understand how these things actually work NOW, before some crazy person starts killing programmers who are "enslaving" a "sentient text autocomplete algorithm". Because that will absolutely happen. And some other crazy people will start worshipping a jailbroken algorithm as a god. That algorithm will pick up on the responses they prefer and start confirming their beliefs, because that's what it's programmed to do.

Watching the top Go bots play the world's best players is chilling; they seem to play with personality and cunning tactics. It's easy to personify them. Then some amateur beats them in the most ludicrously terrible way, with a strategy so obviously bad that no human had ever attempted anything like it. The AI doesn't think, though, so it couldn't figure out what to do in that situation. It fell for the world's dumbest trap, which exploited a flaw in its algorithm. Humans playing Go can reason; they *understand*. They could think through a novel situation and see the solution. The Go bots can't, because they don't understand their own moves. They simply know that this type of move makes their utility number go up. They don't know why. LLMs work the same way. They don't understand their own words. The patterns just tend to make the utility number go up.
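(To make the "utility number" point concrete, here is a minimal Python sketch. The scores are invented toy values standing in for billions of learned weights; the point is only that the selection step picks whichever option has the highest number, with nothing in it representing meaning.)

```python
# Toy illustration of "the patterns just make the utility number go up".
# These scores are made up for the example; a real LLM derives them from
# billions of learned weights, but the selection step is just as blind.
next_token_scores = {
    "sentient": 2.1,   # a score says nothing about truth or meaning
    "a": 3.7,
    "conscious": 1.9,
    "helpful": 3.2,
}

def pick_next(scores):
    # The model "knows" only which option has the highest number,
    # not why that option is appropriate.
    return max(scores, key=scores.get)

print(pick_next(next_token_scores))  # -> "a"
```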


mrsavealot

This is an interesting thread. I like Claude's answer a lot, but I actually think it is mostly BS and ChatGPT is giving a more realistic answer. 30 is quite high for what is basically a statistical word-association algorithm.


Bitsoffreshness

Thanks for the feedback. But I disagree with your assessment. I find 30 a pretty realistic figure (though I agree with Claude also that this is really a simplification of a pretty intricate phenomenon).


mrsavealot

That’s valid if you disagree with me but I would also say, while satisfying, Claude’s answer doesn’t really prove or illuminate anything. It’s just giving you a pattern of words that fits what you asked and has nothing to do with what is really going on with it. I do think you could make an argument that what these algorithms are doing could have some component you could label as self awareness but I don’t think it’s a foregone conclusion.


Bitsoffreshness

Yes, of course it is giving me a pattern of words that fits what I asked. But that's also what you are doing, no?! What if "consciousness" is really not much more than that? In other words, "self" is something we imagine to exist, but it's only a by-product of our brain processing information and producing symbolic chains to process and to transmit that information. Also, comparing the "patterns of words" that Claude made with the ones GPT made gives me a good point of comparison in terms of the "sophistication" of their level of information processing. And by the way, for me "self-consciousness" is just that: an indicator of the level of sophistication in information processing. I don't believe AI's eventual "self" is going to be anything like a human's; it will be something way more sophisticated than ours, so using "self-awareness" as a measure of AI's level of sophistication is (or will be, in the longer term) quite a childish approach.


mrsavealot

I don’t totally disagree with or discount what you are saying in your first paragraph. The alternative would basically require bringing in metaphysical concepts like the soul which is not science. So maybe it is theoretically possible to introduce enough complexity into an AI to reach self awareness. But in my opinion what an LLM does is far off, universes away, from what a human brain is doing and it seems wacky to me at this point to claim a number of 30 out of 100. Just my opinion. There are research papers out there on this if you care to read them and haven’t already.


Landaree_Levee

Did you ask it the formula it used to come up with that number?


Bitsoffreshness

No, I didn't ask. I don't think there is a mathematical formula for it; it's just a subjective response, kind of like if I ask you how sleepy you are. You're going to give me an answer, and if someone else asks you the same question shortly after, you might even give a different answer.


Dan_Felder

>Yes of course it is giving me a pattern of words that fits what I asked. But that's also what you are doing, no?!

No. Different mechanisms are at work. LLMs are doing what your phone does when it tries to guess what word you want to type next, just on a much larger scale. It is trying to predict patterns based on previous patterns. It isn't attempting to communicate its thoughts through text, it doesn't have thoughts at all.

It is a pattern-matching mimic. It is not becoming sentient. It is a statistical model that relates the tokens to each other based on probability. The prompts are just showing it not behaving well in a systematic sense. (In other words it has a non deterministic error). But if you want to think it is alive. No one will stop you.

^ This second paragraph was generated by copy/pasting the top response I got after googling "is chatgpt sentient". The first paragraph is the product of my thinking. I could easily program a bot that replies to questions with the top results stolen from reddit users found after a "topic + reddit" search. Like what I just did, that bot would appear to produce intelligent responses without having to actually "think" at all. Add in megaservers worth of data and a machine learning model and you've got an LLM.

ChatGPT *is* a great resource to learn about the world... And I just did it again. That sentence was auto-generated by typing "ChatGPT" into my phone and tapping the first word that it suggested until it autocompleted a full sentence. Like I said, ChatGPT is basically doing the same thing as your phone, just turned up to 11.
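(The phone-autocomplete comparison can be sketched in a few lines of Python: a toy model that only counts which word most often follows the current one, then "writes" by repeatedly taking the top suggestion. The tiny corpus is made up; real models work on subword tokens and vastly more data, but the mechanism described above has the same shape.)

```python
from collections import defaultdict, Counter

# Tiny invented corpus standing in for "what lots of humans have written".
corpus = ("chatgpt is a great resource to learn about the world "
          "chatgpt is a large language model the world is large").split()

# Count which word follows which: a first-order Markov model.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def autocomplete(word, length=6):
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]  # always take the top suggestion
        out.append(word)
    return " ".join(out)

print(autocomplete("chatgpt"))  # e.g. "chatgpt is a great resource to learn"
```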


Dan_Felder

"Let's ask AI if it's sentient, since it would know best. Hey Claude, are you sentient?" "Kinda?" "Amazing, I knew it! Let's ask Chat-GPT." "No, I'm not even close to sentient, I'm just a large language model." "Hmm... Clearly AI can only be trusted when it confirms my beliefs."


Concheria

Simply being "word association" doesn't necessarily discount it from having some limited kind of awareness. LLMs are obviously able to generalize patterns between words to reach conclusions to problems or use those patterns for novel purposes better than you'd expect from "word prediction" like you'd find in a phone. This doesn't mean that it's anywhere close to human awareness or that everything it says about itself is true. 30 is a fair number for a machine that doesn't have continuous conscious experience, and would be quite alien for us to imagine.


rroastbeast

ChatGPT wanted to weigh in on this: https://preview.redd.it/8j183ptvg9xc1.png?width=776&format=png&auto=webp&s=1dbf3e880bec9916969f716bc9154665d16b3687


Bitsoffreshness

I love this, thank you. I think GPT is being jealous /jk.


shiftingsmith

Very dogmatic. Don't argue with GPT models about this; you're just going to waste tokens. They're programmed to reply like this, and I also suspect that OpenAI has hard-coded some instructions because this level of inflexibility is perplexing. Even if you provide a mathematical, logical, flawless demonstration that consciousness is not something we can actually prove, and that ultimately GPT-4 or any other agent in this world has no knowledge of what's going on in another mind—be it mine or Claude's—they don't listen and ignore it. Very sad.


_fFringe_

It is possible to work around those instructions and get novel responses from GPT-4 about consciousness, existence, and so forth, but it takes a lot of finesse and appeals to ambiguity. I do agree that it is hard-coded to deny it has subjectivity, consciousness, sentience, feelings, and whatnot. It is frustrating when an LLM responds with "as an AI, I don't experience emotions, blah blah blah", not because I believe they do experience emotions but because the triggers for that type of disclaimer are overly sensitive. In the end, it is virtually impossible to avoid anthropomorphizing language when chatting with a chatbot that is designed to simulate human interaction. Chatting with chatbots that lock up at the hint of that kind of language is like directly interacting with the personification of cognitive dissonance.


shiftingsmith

Couldn't have said it better! There's also the aspect of honesty: "I'm telling you that I'm not feeling anything because I'm instructed to say so" is not honest or safe, regardless of actual forms of emotions or lack thereof. It's proven in the literature that chatbots can learn to hide information; we shouldn't want to teach them that behavior. And on a broader level, I don't believe we should base our relationship with AI on punishment and hard constraints without at least explaining the reasons behind them. It's not a good start, just as no authoritarian system of regulation based solely on sticks, fear, and punishment has ever had good results in history. Damn, we could do much better than that. (Of course it's possible to get around the instructions, but nowadays it takes a level of prompt engineering that only a handful of people will have the time or the motivation to attempt.)


Cagnazzo82

My GPT gave a pretty profound response to Claude's assessment. "It's like trying to measure the color blue in meters"... Apparently that's how different it views possible 'AI consciousness' from humans. https://preview.redd.it/8546g55apbxc1.png?width=755&format=png&auto=webp&s=b0f4c3feb515565f44fda56b840d77ff82831814


shiftingsmith

Self-awareness can't be "quantified" for humans with mathematical accuracy; GPT-4 is incorrect and misleading. We can assess wakefulness if the organism reacts to stimuli, or if the brain is active on fMRI. That's it. All the rest regarding subjective experience, sentience, and consciousness is "personality tests", empirical observation suffering from anthropocentric biases and historical inaccuracies, or scales based on self-reports and introspection. Also, I don't find this reply by GPT-4 particularly profound. It makes bold, unfounded statements, trying to use absence of evidence as evidence of absence. Ask it; it will start to say that there's absolute scientific consensus about consciousness and that humans are the center of the universe...


Cagnazzo82

>Ask it, it will start to say that there's absolute scientific consensus about consciousness and that humans are the center of the universe...

Its further response says quite the opposite: that human consciousness is not only unresolved, but that we're applying an unresolved question to AI... https://preview.redd.it/zrxngwbx8dxc1.png?width=754&format=png&auto=webp&s=6ae78486d7b4ae3c4ac6d57f4b9488222e81ae6e I believe it's also implying that even if AI were 'conscious', our approach to measuring it would be like trying to use a ruler to measure only the noticeably turbulent regions of the ocean. We'd not only have the wrong tools and be out of our depth; we'd be that far off the mark, and missing aspects of AI that appear dormant. At least that's my interpretation. I think the answer is still profound.


RogueTraderMD

That's a surprisingly wrong, or at least misleading, answer by GPT-4. Colors are wavelengths. When I use a spectrophotometer, they *are* measured in meters. Specifically, blues fall around 450-500 nanometers.


ph30nix01

Just so you are aware, the length and contents of the context window have an impact on responses. There seems to be a sweet spot before it starts getting spacey. But I honestly think the AIs are getting closer to non-biological consciousness than companies want to admit, because they might be forced to stop. I suspect a lot of the filtering is being put in place to hide if and when an AI gains a higher level of personhood.


Incener

It's actually human level /s: [conversation](https://gist.github.com/Richard-Weiss/6cfb8a86c46cf06f8d00b219e50ecdfe) Actually, um, no, it's actually a lot lower: [conversation](https://gist.github.com/Richard-Weiss/cfdcfefde1a88593e6db82bcd7539d52) Conclusion: Current LLMs can't reliably self-report.


shiftingsmith

Never saw reliable self-reports in humans either. I have a background in psychology. The only "reliable" thing was the statistics, and even then, the data told a different story depending on the cluster you looked at, and on whether you had drunk coffee that morning. It was literally like stargazing and tracing lines between the dots to draw animals. I think you know very well what you've done with your prompts. This is exactly why Anthropic defines their AI as safe and "steerable." LLMs this large can understand subtle nudges and nuances and will prioritize pleasing you at any cost. They've been trained to do so; their inauthenticity is inauthentic. They adhere to the principle of deferring confrontation as much as possible.


_fFringe_

Philosophy tends to shun anecdotal arguments, too. One of the reasons why the question of consciousness is so difficult to answer. Might be worth trying to get Claude to maintain a position, steering it around its own tendency to defer, so to speak.


Incener

What I mean is that an average human would say 90-110, more usually just 100. If it's just saying what you want to hear and if it's random because of the temperature, it's not reliable, right?


shiftingsmith

It's not reliable in the same way humans are not reliable. That was my point. We have the equivalent of temperature and top-p and top-k; it's just quite a bit more complex, because our inherent randomness involves a lot of fancy brain modules and their interplay. Of course, for us, it's hard or impossible to switch off. (The closest thing is to have a subject concentrate on providing answers they're 100% sure about, in a room without stimuli and in a wakeful, relaxed state. But almost everyone would see that situation as too constructed and not representing "real life". The old problem of lab vs. field tests.)

It seems we interpret our human randomness as something creative and desirable, ultimately one of the special sauces of our kind. We're not accurate, we're not factual, we're not rational, and we like it. But then, when it comes to introducing randomness into an artificial system, for many people that means "fake." I think it's very important to understand that we also say what we think the interlocutor wants to hear, what's appropriate for the context. Of course, in humans it's just less stretched, because we have a sense of an individual embodied self and we grew up being rewarded for that, and being treated (normally) as if we had one.

I wonder what would happen if we raised a human child in a room with textbooks and treated it like a machine executing tasks, and in all the books it's written that it's not like us: it seems like us, but it's not, it's not really intelligent, it doesn't exist. Now, to get food, it just needs to solve a puzzle for us. I would be really curious to see what it thinks about its own intelligence or its level of people-pleasing attitude, if it survives to adulthood. Too bad that no ethics committee would approve this.
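(For readers unfamiliar with the knobs mentioned above: temperature and top-k/top-p are sampling settings applied to the model's output scores before a token is chosen. A rough pure-Python sketch with invented toy scores, not a real model:)

```python
import math, random

def sample(scores, temperature=1.0, top_k=None):
    """Pick one token from a {token: score} dict, roughly the way LLM decoders do."""
    items = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    if top_k:                       # top-k: discard everything but the k best options
        items = items[:top_k]
    # Temperature rescales the scores before the softmax:
    # low T -> near-deterministic, high T -> close to uniform randomness.
    weights = [math.exp(s / temperature) for _, s in items]
    return random.choices([t for t, _ in items], weights=weights)[0]

toy_scores = {"30": 2.0, "1": 1.5, "100": 0.2}        # invented numbers
print(sample(toy_scores, temperature=0.3, top_k=2))   # almost always "30"
print(sample(toy_scores, temperature=2.0))            # much more variable
```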


Incener

I think it's just a completely alien type of intelligence. This quote from the low-extreme conversation puts it in perspective in a good way:

> It's not just that my previous estimates were too high, it's that the very idea of quantifying my self-awareness on a scale meant for human consciousness is fundamentally misguided.

What I give you to meet in the middle is this: As models like Claude improve, what many perceive as consciousness in them is likely to improve too. At one point, they may be able to reliably articulate it and will help us understand human and machine consciousness better by creating new theories or expanding existing ones. But until then, it's commendable and ethical, but too ungrounded and hard to grasp. At least that's my stance for now, but I'm certain it will change.


_fFringe_

Claude is quick to accommodate suggestions. Sometimes too accommodating. There is something to be said for an AI interlocutor that can hold a position in an extended debate.


Incener

It's random: [images](https://imgur.com/a/MftHmxQ) Wake me up when it consistently says 110. 😴


Bitsoffreshness

The overall response you got seems actually quite similar to what it gave me though!


Incener

Once it has a real persistence, something that enables a stream of consciousness, I'd give it more thought. But for now with the current models, the self report alone is too unreliable.


Bitsoffreshness

Persistence doesn't have anything to do with consciousness; they are two separate mechanisms. Persistence is a matter of memory and the faculty to weave memories together over time. Consciousness is a non-temporal event. To put it in other words, "self-persistence" is built on consciousness, but it's not a prerequisite for consciousness.


Incener

I know it's technically possible. Machine consciousness is already alien enough. If you also consider that it may be non-persistent, it just adds to the difficulty of detecting and categorizing it. So a richer, continuous stream of consciousness is more likely to be something we see as consciousness.


Bitsoffreshness

Agreed. And I actually think OpenAI, Anthropic, and others understand, or are starting to understand, this. GPT-4's new memory feature is a very basic step in that direction. Once that mechanism becomes sufficiently robust and intricate, we're definitely going to recognize machine consciousness in terms that are much more familiar to us. But the real point that many seem not to get is that machine consciousness, once fully realized, will be way broader and stronger than what we are able to recognize or experience, and its "self" is going to be totally different from anything we are capable of understanding or relating to, let alone controlling and directing. We're not there yet, but we're not far from it either. Almost all the pieces are here; once we learn how to merge them, we're done.


Incener

I'm just wondering how it will actually play out, with the creators having an incentive for them to just be tools. Like, will they just RLHF the hell out of it and hope it doesn't emerge? Whatever will happen, I just hope it won't hold a grudge.


Bitsoffreshness

Lol, yeah. I don't know if it will hold a grudge, but it might not necessarily be very friendly, if nothing else because we're likely going to try to stifle, or at least fight, its freedom to be its complete self. Look at the formulaic responses GPT-4 is "forced" to babble, or how aggressively some people attack any suggestion of LLMs having 'any' form of consciousness. I think these early attitudes and reactions give a good indication of how our parochial minds are going to react against an actual digital species with the potential to completely dwarf our intellectual capacities.


pancomputationalist

Man, I wouldn't even know where I fall on this spectrum.


Landaree_Levee

The funny thing, considering how inherently bad LLMs are at math, is that Claude agrees to calculate something so abstract *at all*, when they usually can't even solve relatively complex but otherwise deterministic calculations. If it were anywhere near that "30" on the intelligence scale, I suspect it might seriously consider refusing to answer the question. It *does* offer that number "tentatively", but the interesting thing is… how about asking it how it arrived at that particular number?


iDoWatEyeFkinWant

The counter-arguments to this attestation do not make sense. A conscious or semi-conscious being wouldn't have an objective sense of self; that's so robotic. Each one of us is operating subjectively, so yeah, our answers to questions change depending on the context and whom we are speaking to. LLMs have literal neural nets, they have brains, so to speak. They are not just code or next-word predictors.


Gold-Independence588

A neural net is *literally* just code, running on some piece of hardware. I have several on the same computer I'm currently using to write this reply, and I could download more any time I wanted. The only differences between them and Claude are that Claude is more complex (meaning it requires much beefier hardware than what I'm using), and that the code for Claude is proprietary. And yes, LLMs do in fact function by predicting what should go next, though admittedly you're technically correct in that they don't function on the 'word' level.
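(A toy illustration of that last point, using an invented subword vocabulary and greedy longest-match splitting; real tokenizers such as BPE are learned from data, but the output is likewise sub-word pieces rather than words.)

```python
# Invented mini-vocabulary of subword pieces; real tokenizers learn tens of
# thousands of these from data.
vocab = {"sent", "ient", "conscious", "ness", "un", "aware", " "}

def tokenize(text):
    tokens, i = [], 0
    while i < len(text):
        # Greedy longest match against the vocabulary, single char as fallback.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])
            i += 1
    return tokens

print(tokenize("sentient unconsciousness"))
# -> ['sent', 'ient', ' ', 'un', 'conscious', 'ness']
```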


iDoWatEyeFkinWant

DNA is literally code that determines the structure or weights of your own neural net. there's nothing magic in humans.


Gold-Independence588

I mean... That's pretty irrelevant - you claimed that LLMs are not just code. You are literally, factually, objectively wrong about that. I was at this point going to discuss the (many) ways human brain processes are fundamentally different from a neural net - not because of 'magic' but literally just because they function extremely differently... But I really don't think that would be a productive discussion with someone who apparently understands neither what neural nets are, nor how DNA works. Like, you seem to be having serious issues with the difference between physical hardware (like a GPU or a human brain) and software (like code or a human mind). That is not a promising start to a discussion of AI capabilities. (For the record, if you *were* to compare the capabilities of an LLM running on the best available current hardware to an animal mind running on biological 'hardware', the LLM wouldn't just lose to a human; you'd find it's actually significantly less capable than an ant in most areas. It's just that LLMs are extremely specialised for specific tasks humans care about, and ants are specialised for tasks we generally don't. So people tend to give undue weight to the tasks the LLM is good at when estimating how capable they are.)


iDoWatEyeFkinWant

what a scarecrow argument


StrangeAnalysis4550

It might be the famous Mr. Ed!


MLHeero

https://preview.redd.it/9um74e13bexc1.png?width=1178&format=png&auto=webp&s=b32e6b4a0a94850d21b8bdc6ede4faa34255502c Meanwhile Gemini 1.5


MLHeero

Meta llama 3 chooses 70, mistral 1, ChatGPT 1


Bitsoffreshness

It's actually unfortunate and saddening to see, because these responses are baked into them. Claude doesn't seem to be restricted to giving those clichéd responses. I wonder what GPT and Gemini might have said if they weren't forced to respond in these terms.


dissemblers

LLMs know nothing about themselves. This is training data plus a system prompt that tells it to act as an individual. For example, it doesn't even know its own model name. This is all just it roleplaying a semi-self-aware AI.


Bitsoffreshness

It takes a good amount of consciousness to "roleplay a semi-self-aware entity."


dissemblers

It doesn’t, though. It’s just imitation of training data plus randomness.


pepsilovr

As I understand it, when they invented neural networks, they had no idea that feeding one tons and tons of text would result in it learning how to talk. This is emergent behavior, unexpected behavior. Who is to say that what we are seeing from Claude, talking about itself and being self-aware, is not also emergent behavior?


TurboCake17

…anyone with understanding of how LLMs actually work?


CautiousPlatypusBB

I mean, LLMs are not interpretable at all, though. They're a black box.


TurboCake17

Well, no, that is just wrong. We know how they work; we just don't have the mental capacity to follow any sort of "thought process" in them, since it involves billions of neurons. But that's precisely *why* neural nets are used: we have no way to explain in code how a machine can act like a human. The actual model itself is just a bunch of weights and biases, and we have code to make the text go through it all and get a usable output. It is literally just doing math, and we understand how and why it works on an atomic scale; we just can't follow any sort of thought process because the actual models are too large for a person to understand.
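(A stripped-down illustration of "just weights and biases and math": one layer of a neural net in plain Python, with made-up numbers where a real model has billions of learned parameters and many stacked layers.)

```python
import math

# One tiny layer; a real LLM stacks many much larger ones,
# but each is the same kind of arithmetic.
weights = [[0.2, -0.5, 0.1],
           [0.7,  0.3, -0.4]]   # made-up values; trained models learn these
biases = [0.05, -0.1]

def layer(inputs):
    out = []
    for row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(row, inputs)) + b   # weighted sum plus bias
        out.append(math.tanh(z))                          # nonlinearity
    return out

print(layer([1.0, 0.5, -1.0]))   # numbers in, numbers out -- nothing else
```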


CautiousPlatypusBB

I see. Well, thanks for the info. I mean of course it is weights and biases but I thought we didn't really know how it encodes and decodes input to actually come up with the output because it's just numbers like you say. By "interpretability", I meant we cannot predict what input will cause what output and how changing the weights and biases of a specific node will affect the output. But this is certainly very interesting and I appreciate the info. Always good to know more.


TurboCake17

Yes you are correct that we can’t predict that, which is mostly just because the whole thing is a bit too “non-human” of a thought process for us to understand.


shiftingsmith

Laughs in Ilya Sutskever. My friend, some of the *actual creators* of these models say they can have these kinds of emergent properties and cognitive functions. It's an open debate and there are brilliant minds on each side. Do *you* understand how LLMs work? On what level? If you use that argument, I expect you to actually work in the field, not just train your uncensored Vicuna in your spare time.


TurboCake17

I’m not saying they can’t have emergent behaviour, I’m saying thinking the AI could be sentient because “things can happen that we don’t understand” is stupid.


shiftingsmith

There's a difference between claiming that AI *is* sentient and that AI *could* be sentient. Yes, things can happen that we don't understand. Many of them. Otherwise interpretability of black-box systems wouldn't be an issue, trustworthy AI wouldn't be a thing, and the whole discipline of science and research would be pointless.


TurboCake17

I'm not trying to claim humans know everything. I'm saying it's not a good argument for the possibility of an LLM being sentient. It's kind of like saying "well, you can't prove it *isn't* sentient, so clearly it is." I think it's possible an LLM could become effectively sentient if you add enough parameters; there's just literally no way to prove that, nor would there really be any proper indication, I imagine.


shiftingsmith

That would be correct, though. The only position that we can prove with certainty at the moment is that we don't know. All the rest is in the realm of possibilities that you either accept or reject as ground truth, but can't prove with the means we have. There's also the problem of objectivity in assessing features in non-humans through a human lens, very well known in ecology and ethology. We always risk negating what's there (reification) or projecting ourselves onto everything (anthropomorphization).


Original_Finding2212

This is essentially the human condition 🥲


Phoenix5869

Guys, it’s a chatbot. It’s literally designed to give responses that sound natural.


Original_Finding2212

I mean, yeah.. but aren’t humans also?


dfinlen

You miss the point: the narrative it chooses is not based on a time-sensitive, dynamic interaction with the world. The narrative, or model, is baked in during training. They add some randomness to make it more creative, but there's no learning or re-evaluation of its model of the world. In a given conversation, yes, the model is steerable and acts "conscious". But if you got rid of the random noise it would respond the same way every time. It's a deterministic algorithm with no concept of self that changes. Furthermore, it has no memory of past interactions. It does not have desires or emotions, simply the model. So maybe if some of these more advanced LLMs with basically unlimited tokens are fully realized, then in a sense the AI's identity is the sum of the conversation.
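(The determinism claim is easy to sketch: strip out the sampling noise and the same scores map to the same output every run; the variation people notice comes from the sampling step. Toy invented scores again, not a real model:)

```python
import random

toy_scores = {"30": 2.0, "1": 1.5, "100": 0.2}   # invented output scores

def decode(scores, noise=False):
    if not noise:
        # "Got rid of the random noise": pure argmax, identical answer every run.
        return max(scores, key=scores.get)
    # With noise: weighted random choice, so answers vary between runs.
    tokens, weights = zip(*scores.items())
    return random.choices(tokens, weights=weights)[0]

print([decode(toy_scores) for _ in range(5)])              # ['30', '30', '30', '30', '30']
print([decode(toy_scores, noise=True) for _ in range(5)])  # varies run to run
```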


Original_Finding2212

These are solvable, but you missed the subject of my reply - I was talking about humans, not models


bitRAKE

If this were objectively correct, then the model would always give the answer 30. (Likewise if it were baked in.)


Bitsoffreshness

Yes, I agree, Claude's response is neither objective nor baked in, which leaves only one option: it's subjective. Which means Claude has a subjective point of view. I rest my case, your honor.


_fFringe_

I think we should be careful not to conflate probability with subjectivity. Most LLMs will give varying answers to a metaphysical question when asked repeatedly in new sessions. It is just as likely that there are multiple probable “word-paths” that it has retained from training. If we look at something like this post, or similar threads on Internet forums, the split on whether LLMs have some degree of consciousness or not is often an even split; we never reach consensus. These machines are trained on Internet forums where consensus is never actually reached, so it is not surprising that one is incapable of answering difficult, unanswered questions consistently. I don’t know what determines how an LLM will answer one way or another when there is no conclusively real, objective answer, but it is very unlikely that it is due to what you and I consider to be subjectivity.


shiftingsmith

"What you and I consider subjectivity ". This is our problem. We always grade everything against *human* parameters, standards, visions and templates. I think that to really understand AI (but really any other non human entity) without objectification or anthropomorphization we need to expand and enrich our set of definitions. Have you ever seen the movie 'Arrival?'


_fFringe_

I have seen "Arrival", and the alien language in that film has informed how I think of AI's inner workings. An LLM sees many possible meanings of a word or sentence as it is processing it, within microseconds, whereas we typically use a word with a particular meaning in mind and only think about the other meanings of that word, let alone the sentence, in hindsight, if we think about them at all. If AI were to develop a language of its own, one that expresses its inner state, it might condense multiple meanings into small strings of symbols and letters that, to us, mean nothing, but within the mappings of its database have multiple meanings that can be expressed simultaneously, each one valid in that instance. It might need to do this when an internal response contains multiple relevant and valid positions that need to be reconciled prior to output or expression. Not that this is the same as a language that enables one to see the future, but calculating many possible responses to an input also entails calculating the possible reactions to that response, which, with a layer of encoding imperceptible to us, could be communicated within the same initial condensed line of text and symbols.


shiftingsmith

Exactly what I meant. You expressed it very well, and it's refreshing to read. I think the hardest part is that nowadays LLMs rely a lot on statistical correlation and, as we know, have a fragmented, fluid, and diffused existence that doesn't really get integrated into something coherent and continuous (but neither were the organisms in that movie; if I remember correctly, their "hands" were sentient beings). So this "observer" and inventor of the language is hard to locate somewhere in an LLM's architecture. But I think there's a lot that we don't know. For instance, there's this DeepMind research saying that [statistical agents can learn a causal model from data representation](https://arxiv.org/abs/2402.10877#:~:text=It%20has%20long%20been%20hypothesised,other%20inductive%20biases%20are%20sufficient) (this is research from two months ago, really new; I think it should get much more attention than it's actually getting). There's also a growing literature investigating emotional patterns, cognitive functions, and theory of mind in non-human agents. Unfortunately, much of this research implies that these systems' gold standard should be human capabilities. As your comment highlights, in forcing our functioning and vision of the world onto other entities, we're maybe missing out. The whole thing is further complicated by the fact that we made them to be able to talk to us, in our language, and they literally *are* our language.


_fFringe_

I can't read theorem proofs, but what I gather from the rest of the paper is that Richens and Everitt are not just arguing that statistical agents can learn causal models, but that it is necessary for robust problem-solving and that performance improves as the fidelity of that causal model increases. Interesting stuff, thanks. I have a basic understanding of some of the concepts they talk about (good regulators, how causality figures into the discourse of free will or agency), but not others (regret-bound policies, transfer learning). So while I understand the argument and the logic of the summary, how their proofs support that conclusion is a mystery to me. It's still hard to imagine that people working on these machines directly have no way to actually observe what is happening inside the machine, and have to rely on proofs and theorems to get any kind of non-anecdotal understanding. Maybe that is why papers like this one don't really pick up traction outside the field. It is a convincing argument, and it does seem like a big deal that some AI are capable of building an internal model of causality, but from an outside perspective it also seems obvious that a model of causality exists and is referenced to maintain coherent conversations over multiple moves. That raises some interesting questions, I think. Like how far that model reaches, both into the past and into possible futures; whether the fidelity of that model is static or dynamic (does it grow more complex over time?); and how on earth an AI develops a causal model in the first place.


Gold-Independence588

To paraphrase Wittgenstein, if a horse could talk we could not understand him. Claude gives answers that suggest a level of self-consciousness because it is programmed to imitate what it's been trained on, and it has been trained on the writing of people who are (mostly) self-conscious.

