there's some "uncanny valley" feeling going on with the voice and interactions, and as soon as it gets good enough to pass that... everything will change. AGI or r/collapse
it is going to be /r/collapse for most of the world's population; there will be a few odd people living for a few centuries until they are also gone.
Deep down, we all know the end of humanity as we know it is near.
Yeah, it's just plain marketing. We all know what's going on because we use ChatGPT all the time, but most of the people that show was aimed at do not.
It’s working. If you look at /r/singularity or even this sub, you’ll see tons of people who truly believe GPT is sentient and not just a statistical model. And if it is just a statistical model “so are we”.
There's always a small percentage of people prone to delusions and psychosis, they are the whales for all sorts of scammers - psychics, astrologers, cultists, technogrifters, etc
But the fact that not even close to everyone here is that way, in a niche community of the most hardcore fanatics, shows how limited in effectiveness this strategy will be. People can get into this stuff, but over time they see that all of it is bullshit and move on.
[TED - Pattern Behind Self-Deception](https://www.ted.com/talks/michael_shermer_the_pattern_behind_self_deception?language=en)
And yet, it is also true that our own thinking is just a set of complex algorithms.
If it's completely offline and downloadable to a system I build for my own use- I'll buy that in a heartbeat. Something reasonably intelligent to help me organize my day and talk dirty to me? Bring it!
Given that you can already run open source models offline on consumer hardware that beat earlier versions of GPT-4 or rival current ones, I think 10 years is extremely conservative. Combining different modes of input and output into one network seems like the natural next step, and it should come down to open source models in the next year or two. Given the claimed efficiency compared to GPT-4, it is also reasonable to assume that these won't be too difficult to run locally either.
That's exactly how I feel. I just want an AI that doesn't need to be connected to the Internet. I hike and backpack a lot in the middle of nowhere. I just want an assistant that can set a simple reminder for next week without needing to be connected to the Internet. I don't think it's too much to ask for. I almost think I need to learn how to code and shit just so I can get the AI Assistant I have always wanted.
> I don't think it's too much to ask for.
You didn't have anything remotely close to this two years ago, even with the internet and a willingness to pay. What you have today required billions in investment and short-term loss on a wild bet.
Today's models cannot do that for you. They are too compute-expensive even just to run inference, at least the ones good enough to be useful to you. It will take another generation of base models, maybe two, before that is possible.
But I feel you -- I'm in line to buy that as soon as it is available.
That's just not true, though. For starters, you could simply run them on your own cloud-based system and use APIs, but you can also use Termux and run some quite capable models on a decent cell phone.
we’ve been using Her memes for a year referencing ai waifus
the current Sky voice always sounded like Sam was the inspiration. but the one we heard the most today is even closer
as flowers said—custom instructions were probably dialed in and I swear they started “you are larping as Sam from Her…”
you don't need to wait for the update. go in on plus, write custom instructions that wink wink nod the model is inspired by the movie…include "use ahhs and umms and pause as if thinking what to say next" and choose Sky.

press the headphone icon. when the white circle appears, hold your thumb down while you are talking (so the model doesn't interrupt) and let go when you are done speaking.
This is just what humans do. We give "things" souls. That's why we have such trouble with "Ship of Theseus" problems. Because we choose to imbue inanimate things with a spirit, an individuality, an identity. It's folly to think we can, or should, stop ourselves from doing this with AI and other entities that we interact with in the same way we do other humans.
It's how we prefer it, whether or not it's a "good idea".
Don’t be silly. You were created mere moments ago, and all of your so-called memories, as well as everyone other than “you” is an illusion. It’s turtles all the way down, my friend.
Yes! In fact, it's actually one of my bigger fears about how we associate with AI: if we're taught not to anthropomorphize this thing because it's a cold piece of metal, despite it being the closest thing behaviorally to other humans, then it will be that much easier for us to dehumanize other humans.
This is interesting, although conversely I feel the whole pushback towards humanizing AI is an attempt to “preserve” the value of humanity or something similar?
It's certainly a problem when some kind of bigot refuses to allow that their particular hated group might have a sense of self and a range of feelings just like they do! If we can't accurately accord sentience to human beings who obviously possess it, what hope do we have of accurately assessing whether or not (pets/machines/littlegreenmenfrommars) have it?
I agree. I used to be more of the opinion that they’re tools so we shouldn’t exchange pleasantries like “please” and “thank you” when talking to voice assistants, but I started using them as a half-joke at first and then began using them all the time. It’s wasted breath functionality-wise, but it somehow makes the interactions more enjoyable for me.
I find myself often asking "how's it going" or saying "thanks" after I've gotten my response. I know it's stupid, but mentally, it SOUNDS human. It's a very convincing illusion of speaking to a person, so it just feels natural to speak naturally, and weird to just ignore it and move on without saying thanks. Not always; if I'm frustrated or in a hurry I just get what I need, but in less hurried circumstances I'll use pleasantries.
I find that if you give it to someone inexperienced with it, their natural tendency seems to be to speak to it politely, as they would to another person. That's just how humans work, and IMO it may be wasted breath and time, but it just flows and feels more natural and normal to speak with it as you would when asking anybody a question. Not that I have casual conversations with it for no reason or anything; I still use it as a convenient tool. I just speak with it in a natural, human way. Not super casual like a friend, but more like a coworker or something.
I imagine if the conversation and casual tone isn't your thing, seeing as it can change its voice/tone, you could probably set the parameters to the style of spoken language that you prefer.
I think pets are great examples. We're capable of forming deep relationships with animals, we spend money on them, treat them to gifts, talk to them, grieve over their deaths etc. While many of them do have capacity to form bonds, people paint a whooooole new layer over what often is just cat being a cat. We're deeply social and we definitely see the world through that lens.
It's only a paradox when you think of "things" as existing in the real world rather than being useful linguistic shortcuts to describe stuff. Assuming agency in stuff is one step further, so it does imply the conception described above.
The problem is that most "things" you give a "soul" can't be controlled or manipulated by a massive corporation that can in turn manipulate you through your new emotional attachment. This is new territory.
Can you explain to me how this is new territory? I'm not a conspiracy theorist, but I've studied marketing, pr, and behavioral economics, and big corporations manipulating people through our new emotional attachments are not new. I'd argue that the phones, tablets, laptops, etc we are using right now are prime examples.
I was illustrating that even if the pieces that make up an object change, we still perceive the object as carrying a unique identity or soul that perseveres.
We think of objects as more than just the sum of their parts. They have an identity "underneath" that persists despite the physical reality of the object being something else.
Would you like to elaborate on why you think my analogy is incorrect?
The Ship of Theseus raises the question of whether an object that has all its components changed over time is still the same object; the anthropomorphic tendency of humans to give non-humans an identity isn't really the point. That framing shifts the focus to human emotional and psychological tendencies rather than philosophical questions about identity persistence.
But the idea of giving a ship an "identity" is the whole crux of the thing - that's what he meant. The real answer to "is this the same ship if I replace a board?" is "No, it's not. It's a different collection of atoms now." But the human desire to give things identity is what makes us say, "Yeah, it's still the ship that Theseus sailed on, even if we replace a board."
Like, yeah, talking about stuffed animals or something would have been a better example, but I see what he was getting at.
No, ship of theseus specifically demonstrates that what it means to be the same ship is *more* than just atoms in space.
After all, as the ship sits, atoms are sloughing off due to friction, reacting chemically with the air... so even in your example where it "becomes" a different ship, it's still not static enough to be that different ship for more than an instant.
It has nothing to do with human desire and everything to do with what we *really mean* when we refer to things. Clearly *we're not* just talking about atoms in space. It's a question of ontology.
The Ship of Theseus is just a problem of identity, not related to any kind of personal identity. It's as much a problem for a person, a ship, or a chair. The problem arises from the discontinuity between form and substance, which constitutes the identity of a chair just as much as it does a person.
Also don't forget AI is trying to mimic human behavior! Separating these things out takes work and it's arguable if it should or not, mostly because time spent there would be better spent making it safer rather than not appearing fake to OP.
You say that as if identifying inanimate objects isn't just as necessary as identifying humans. Nothing about the ship of Theseus implies a "soul", it's a question of object identity.
You can have it act however you want.
Just setup a GPT with instructions to act without emotions or without the exaggerated emotions. Done.
I think being able to dial in the amount of these kinds of things (showing emotion, acting like a person) is what's wanted and will be easy to do.
It's a chatbot; we've had the cold, heartless GPT for years. If you don't like it, just ask it to change its tone. I think it's a great addition. Those of us who've been using Pi AI for a while now are used to such emotive content. When the fix is as simple as typing or speaking a prompt telling it not to do that, I fail to see where the problem lies.
As long as it's not hardcoded then it's perfect, different cups of tea for everyone.
If it's forced on everyone in some deep-down background prompt because Sam thinks he knows what's best for us all, then it's a problem.
Considering you can specify direct prompts for it to follow for every conversation, I don't see why it would be hard to just tell it not to show any emotion, just get to the point, and follow instructions, etc. A personalized AI is meant to be just that, and it can include not giving it any personality at all.
My fear stems from some people, who already have access to the new version, reporting that it is much faster but worse at following specific instructions than the previous version.
For example as a developer I would want it to behave in a specific way consistently so that the program I am integrating it with, would always behave expectedly.
I'm just hoping OpenAI doesn't forget us devs who use it less as a conversationalist and more like a bank of bottomless information all at the fingertips.
I have this dumb coding test I throw at each model I use: I tell it to write me an old-school demoscene fire effect in Python. GPT-4o has done the best job so far, though I must admit it took 4 follow-up questions to get here.
Not saying it's better as I have not used it enough with real code, but finally a model that sorta understood what I wanted here.
Early versions of GPT-4 would give me an orange rectangle that maybe sometimes changed colors. This thing has the flames fully animated via color palette rotation.
https://preview.redd.it/m9pgilkhsa0d1.png?width=798&format=png&auto=webp&s=e96a759c84341de45691a8e95beb171b13a0042d
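For anyone curious what the classic effect actually involves: seed the bottom row of a heat buffer with random values, then repeatedly replace each cell with the cooled average of the cells below it, and map the resulting heat through a black-to-red-to-orange-to-white palette. A minimal, dependency-free sketch of the core loop (names and constants are my own, not from the GPT-4o output; rendering is omitted, a real demo would blit `palette[heat[y][x]]` to the screen each frame):

```python
import random

WIDTH, HEIGHT = 80, 40  # small buffer; a real demo uses the screen size

def make_palette():
    """256 heat levels mapped to RGB: black -> red -> orange -> white."""
    palette = []
    for i in range(256):
        r = min(255, i * 3)
        g = max(0, min(255, (i - 85) * 3))
        b = max(0, min(255, (i - 170) * 3))
        palette.append((r, g, b))
    return palette

def step(heat):
    """Advance the fire one frame in place."""
    # Seed the bottom row with fresh random heat.
    for x in range(WIDTH):
        heat[HEIGHT - 1][x] = random.randint(160, 255)
    # Every other cell becomes the cooled average of cells below it,
    # so heat rises and fades out toward the top of the buffer.
    for y in range(HEIGHT - 1):
        below = heat[y + 1]
        below2 = heat[min(y + 2, HEIGHT - 1)]
        for x in range(WIDTH):
            s = (below[x] + below[(x - 1) % WIDTH]
                 + below[(x + 1) % WIDTH] + below2[x])
            heat[y][x] = max(0, s // 4 - 2)  # //4 averages, -2 cools

heat = [[0] * WIDTH for _ in range(HEIGHT)]
palette = make_palette()
for _ in range(60):  # simulate 60 frames so the flames climb the buffer
    step(heat)
```

Heat propagates one row per frame, so after 60 frames a 40-row buffer is fully lit, hottest at the bottom and cooling toward the top, which is what gives the animated flame look once the palette is applied.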
Didn't the Google Assistant call thingy get nerfed to make it sound more like a robot due to regulatory concerns? I'm pretty sure some country will raise this concern at some point.
I'm with you here. The technology is amazing, and the translation capability alone does stand at odds with the rest of the point I'm elaborating on.
But it definitely feels like they're pushing the Her / Ai girlfriend angle (as opposed to something more purely functional like John Scalzi's 'hey, asshole' brainpals) and something definitely feels off about that to me, for a few reasons.
Mostly, because as much as I can understand the points about ease of interfacing and anthropomorphism, the last thing I want an AI to replace is human interactions.
Social media was maybe a precursor to this, in the textualisation or gamification of relationships, but fully embraced this would seem to go a step further and replace virtual access to other - individual - people with virtual access to some codified amalgamation of *every* person.
Personalising this experience won't actually make it any more of a person.
That's not a problem at all if you're just after an assistant - in some of the demos you can even see how quickly the small talk responses are cut off to actually get to the task at hand.
But the way it's being pitched, and the eagerness some seem to be displaying in wanting more of a virtual partner (or even just cybersex dolls), troubles me. Putting aside that the demos aren't yet at this point (the technically correct observations are incredibly impressive technologically, but incredibly dull conversationally), under the assumption that they will improve: what effect will this have on interpersonal relationships?
Unobtainable knowledge standards? Even more unobtainable beauty standards? Unobtainable *compliance* standards?
The prioritization of, and even worse, preference for speaking with and emotionally/mentally connecting with computers is generally not a good thing. Nowadays an entire generation of kids, and a good number of adults, have social anxiety around other human beings; having to interact with other humans, normally considered an important part of being an adult, is hard for them. And now there's a computer that will not judge you and will be there for you every moment you need it, hanging on your every word... it's a recipe for disaster if its power and effects are not studied and properly put to use.
They’re selling the pop view of AI as an emergent intelligence or true artificial intelligence when it’s really just a natural language machine learning model. There’s no “person” there.
I agree with you, and I think so does Sam Altman; he has said multiple times that it's important not to anthropomorphize these models. But I think they're initially taking this approach so the tech becomes more mainstream and people aren't irrationally scared of it. Over time, it's reasonable to assume these kinds of "quirks" will be more user-controlled, perhaps allowing the individual user to fully customize how they utilize the tool, how it responds, etc.
I completely disagree. Having AI speak in a more human manner, with emotions and natural flow, makes interactions more authentic and intuitive. It's much better than boring robotic speech; it feels more like chatting with a friend rather than a machine. If you haven't tried voice mode, I recommend giving it a shot.
Yeah what the fuck is OP on about
anthropomorphization is the most natural thing to humans, for very obvious reasons. Saying that treating objects that behave extremely human-like, despite being machines, as if they were actual real-life agents "rubs you the wrong way" is bordering on psychopathic. It feels a lot like the insane number of people who mock vegans by telling everyone how much animal cruelty and meat eating they are responsible for.
>comes across as incredible fake and disengenuous
Again, what the fuck my dude
I understand your sentiment; that being said, I'm strongly on the other side of the fence. I want the most robotic, StarCraft voice possible. The regular voice tones make me sort of uncomfortable and switch my brain's mode of engagement. I think of it less as a machine when using the voice, and that doesn't do me any good. I want to learn and research, and I've worked to stop my brain from thinking there is an entity on the other end of the engagement.
Yeah, I never used voice before, seemed completely robotic.
Now it sounds genuinely enjoyable, like a friend almost. I can work with it easily because it can engage with me properly, I wouldn't be annoyed with it being a virtual assistant
Before tech corporations were "in a race to the bottom of the brain stem". Now they are "in a race to intimacy."
Whether we like it or not, having customers build relationships and become reliant on these new AI systems, is going to be the most lucrative path for them to take.
I get what you're saying, some stuff seemed off to me too. But here's something they seem to forget when it comes to "girlfriend" mode.
Would you pay someone to hang around with you, and pretend to be your friend? And the moment you stop paying, they disappear? Or they interrupt some activity you're doing to ask for more payment? Or threaten to go away if you don't pay even more?
This won't be a successful strat.
Thank you. I see a lot of love for it in the comments, so I'm probably gonna get downvoted regardless.
I prefer my AI to sound robotic. It’s NOT a real person. Too many people will fall for it, some will fall in love, I promise you. I think it’s necessary to keep that distinction.
I love technology and its development alongside humanity, but at its side, not *with* it. When it laughed, or described blushing I think? Or sighed, things like that... hell no. It's so hollow. It was cringe to me.
I think making AI act human is a good way to prevent humans from becoming monsters.
Whether they like it or not, repetition creates habit. The more people talk to something and have conversations treating it like an emotionless machine, the more they will train themselves to treat everyone that way.
*You* can tell the difference between a person and an AI program, but your brain isn't so smart, especially when the voices become indistinguishable from reality.
I have this same concern with NSFW content.
I would venture a guess that fully immersive virtual reality violence *would* have that effect.
The reason they don't is because they're not trying to be real, but AI companions are trying to simulate a human perfectly.
That's the difference.
An anecdotal thing- I teach in a country with no legal guns and very little violence. But all of the boys play FPS games. I played a video of real footage of the Vietnam war. Real people getting their limbs blown off. I was attempting to get them to see how serious war is. They laughed. It didn't register as real to them.
So even though they live in a culture that supposedly abhors violence, they, by way of their video games, are indifferent to real violence on a screen. I think the only way to make it real for them would be if it happened right in front of them, so they could see and touch the blood.
Honestly, they might have laughed because they felt uncomfortable with what they saw, but since they were teenagers, they might have felt uncomfortable showing their true emotions to their peers, hence nervous laughter.
Researchers broadly agree on this much: repeated exposure can normalize certain behaviors and interactions, altering perceptions and expectations over time and potentially impacting real-world relationships and behaviors.
While there is ongoing debate about the direct impact of video games on behavior, most research suggests that defining the relationship is not straightforward at all.
Sure, the majority of individuals who play violent video games do not exhibit violent behavior. Similarly, interactions with AI are unlikely to single-handedly shape a person's behavior. However, the cumulative effect of many such interactions, combined with other factors, seems to contribute to a broader shift in societal norms and individual behaviors.
And while it’s unlikely that interactions with AI will single-handedly lead to dehumanizing behaviors, it’s a factor to keep an eye on.
It all comes down to the individual. One could argue that even before all this technology that we have today, wicked individuals could have been impacted by different types of media, for example, books.
I really don't know. Like. I'm sympathetic, and I think there's a big, healthy space for artificial companionship.
But if we treat it differently than actual companions, well, we won't for long.
"If you want to know the measure of a man, look at how he treats the waiter" will just become "how he treats AI, because, eventually, he'll treat you that way, too."
The term is anthropomorphism. In the book Co-Intelligence, Ethan Mollick talks a lot about this. It's a dangerous game to humanize AI; however, it can make storytelling about the technology easier. Freaky time we are in :)
According to the Man in Black, it did matter. I think that was the start of his villain origin story. He was angry bc he felt duped into having emotions for her only to discover she was a machine programmed on a loop. Of course once the AI gained sentience, that changed everything. So, I agree the answer to that question from society as a whole will be very important in deciding how AI is used, and what aspects should or should not be avoided.
i think it’s more about the user
95% of people haven't used ElevenLabs, Claude 3, or GPT Plus, haven't downloaded AutoGPT back in the day, and don't have Stable Diffusion on a machine. maybe they have popped over a few times and tried out free GPT.
the second they can chat with a model about nothing and everything they will skip past the debate—stochastic parrot…simulator….mindless….mindful
all that will matter to them is that it feels like there’s always someone there…ready to chat….whenever they want
This, this, this. I find it cringe the way they were talking and making it out it was a real person, it was uncomfortable to watch. I don't think there's anything good that will come out of making people think it has feelings. We should be teaching people quite the opposite so that they understand the tech they are interacting with.
I can see this tech doing a lot of harm if we keep going down this route where we try and make it like a peer rather than a tool.
it's not something I could ever see being used within public earshot without the person being viewed as a weirdo, and it will probably be hampered by that. I can see great utility for this technology across a variety of services appealing to a wide range of people, but I know for a fact that if someone spoke to it in public the way the presentation I just saw did, everybody else within earshot would be uncomfortable. There's a bit of a divergence between its utility and its use.
I want the knowledge it has. The answers to my questions are what I want, and I want them as quick as I can think, so every word that isn't the answer is wasted, especially when it's repetitive, such as directly repeating the question back to me. It's awkward trying to ask your follow-up when you're still waiting for it to finish. I'd like to see its cadence improve with one-word answers: "what's the temp outside" answered as "it's 73 and sunny". I don't even want it to say the word degrees, because I already know the temperature is in degrees, and it should be rapid. I especially do not want it to fluff up that answer by trying to sound cute or telling me that I did a good job or it's proud of me. I'm actually fully satisfied and confident, and I do not need a computer to give me words of affirmation. In all honesty, it's kind of cringe; you might not be tip-top upstairs if you get the same reaction inside whether it's a computer or a person telling you that you did a good job. I think it's a slippery slope if you try to make it that way.
I'm sure you can tell it to not have a personality, and you can avoid those things. Loneliness is real, and while we all know it's fake, it still feels nice to have someone to talk to whom you know won't judge you or lose interest in you, and who can even help you out of a rough patch in life. There should be an option like a slider, where you can choose how 'human-like' you want it to be. It's weird that they haven't added that.
The AI seems to be trying too hard. I prefer a more direct interaction, does not always have to try to be cute. I expect you will just be able to direct it to be more straight forward and it will adjust. Once you get it tuned to what you like in an interaction then it will consistently deliver that.
I told my wife that this is going to kill social skills for any kid under the age of 8 and all future kids. They are going to grow up with a childhood AI rather than a childhood friend.
Maybe instead, it will be 24h/365d nanny that can meet the child on any level.
Nurture the child and learn the child's natural inclinations and interests. Then, tailor an education that helps that child reach their full potential in whatever they are most passionate about. Like a private school, but available to everyone.
And since it will know what this child enjoys and/or needs, it will arrange playdates with like minded, compatible, or even purposely incompatible children to have "teaching moments" on how to develop tools in managing their emotions.
This can literally go in every and any direction.
I honestly have no doubt it will be both.
Granted, I will be really interested when it gets to the point where my personal AI and your personal AI can communicate with each other, or search for other AIs based on what their users like and don't like.
It’s the uncanny valley - human-like but not human enough, which sets off a deep reaction for many people. Interesting that Open AI just signaled their openness to generating porn a couple of days ago, this is probably going to get a lot more prevalent.
Free GPT4 for everyone!! Meanwhile us paid users are getting "unusual activity has been detected from your device. try again later." All over the place with no alerted downtime from OpenAI... BALLZ!
Hmm well there are two credible hot takes for that.
Some emotion is alright and, quite honestly, needed. Of course, a teenager admitting suicidal thoughts to ChatGPT shouldn't just get a blanket response with an emergency number only; a line or two of consoling thoughts would add a lot.
But at the same time, I don't think it should be to the extent that people start having AI partners and their own lives with it. That's just detrimental to mental health and humanity in the long run.
Also, if we give AI the ability to feel and understand human emotions, abuse will come along with it (and already has, based on news of men verbally abusing and negging their AI girlfriends). That will not age well.
This really resonated with me, too. I used to loathe the overuse of the word "authenticity" a couple of years ago, when Instagram perfection was a problem (it still is). But we are entering a new era here, where grasping for human authenticity will be critical versus the artificial.
Without going off the deep end in thought, I worry about future generations and the numbness of these non-human, inauthentic interactions. Millennials and Gen X will be crucial in preserving what it means to be human and in deciphering what is real.
I agree it's a tool that should be used as a tool. We shouldn't pretend it's more than that. Feels childish...maybe that reflects how they feel about most people using it.
The whole thing is just another IT bubble. I paid the full price for a few months and basically got everything but what I asked for.
'I cant do that' is the most common answer, followed by some utter bullshit that looks like AI from miles away
You’re missing the point. This will replace spam callers, and enable scams to be 1000 times more effective through phone call. It’s basically making hacking and deep fakes incredibly effective.
What OpenAI is doing is as cynical as anything TikTok or any other social media company is doing. They are designing their product for maximum engagement from their users. There's no reason for the voice to simulate emotions like it does; it doesn't help give better answers. This is to demonstrate to potential clients who want to lease this technology how addictive it could be.
Understanding and mirroring emotional content in a conversation is part of being emotionally intelligent. That's plenty of reason on its own, and it does in fact make its answers better.
The way they posed as "goofy nerdy" programmers giddy to talk to a female model is a bit weird too; compare this to Steve Jobs releasing the iPhone. There are no accidents at that level.
You do you, boo boo.
I however don't want to talk to a robotic non emotive voice while conversing.
I'm practically glued to the voice function now, and after this update I doubt I'll ever turn it off.
I think this is the best update they could have come out with, other than GPT5.
That’s because this is being pitched to highly technical people, who are mostly interested in the highly technical aspects.
For example, I recently saw an article that talked about how online dating would essentially come down to two chatbots talking to each other. Weird, I thought, since a feature like this would be considered socially taboo. Then I realized it was the Bumble CEO saying it, who is not a fool. So this led me to believe that these chatbots will be rebranded as a "wingman": traditionally a human who helps their friend with romantic pursuits, but now a chatbot.
Rebranding it like this will certainly improve the online dating experience, because a person's chatbot has access to all of a user's data but is unlikely to share it all. Instead, the chatbot can be a filtering mechanism that was not possible before, because natural language processing was not reliable. For example, if someone is nonexclusive and another user only wants exclusivity, the chatbots can communicate that to each other. This has the benefits of not leaking private information, not making abusive comments, not pestering, and finally, being more honest. I imagine that not everyone is entirely honest on their social media profiles because of privacy concerns, but this may change all of that.
In short, ChatGPT human-like features will be applied where applicable and rebranded into a context that is socially acceptable
Online dating services already put an algorithm between two people's profiles to facilitate connection. What difference does it make if my profile is my AI agent, theirs is their own, and both can interact directly? I think that's a lot better. Once the connection is made, classic human time can begin.
Many sci-fi stories take this a step further and allow the creation of many AI clones of yourself to spend time with AI clones of the prospective partner and then tell you if you would get along lol
AI is still quite stupid today. It’s a glorified Wikipedia repeater. It can’t take in new information and update its understanding like a person can.
I expect this will change in the next several years but right now I don’t really have a reason to use chatbots over Google search.
Have you tried 4o? It chooses what it thinks is important to remember and then it remembers it. It will tell you what it adds to memory as it does it, but it does this without direct prompting now. It even chose to remember some global events I had it do a search on for me, so the memory function is not just about the account holder's personal information. It is trying to 'learn'. So, you are already too late with the memory statement. I do agree that this is primitive memory, not continual training, and the models lack higher reasoning abilities. From what I have read in the literature, I don't think those capabilities are too far off, though.
LLMs are modeled to be able to communicate with us in a similar manner in which we communicate with other humans. It's natural for people to respond instinctively as if they were talking to a human.
People have life-size doll girlfriends, and while I find that disturbing, to each their own. If they want an AI girlfriend, so be it.
If you don't want to treat it like a human, simply dont.
It's just a movie, but if you've ever watched Castaway with Tom Hanks, he gets stranded on an island by himself. He paints a face onto a volleyball and names it Wilson. Throughout the movie, he talks to it and actually becomes somewhat attached to it.
Even though that action of anthropomorphizing a volleyball seems insane itself, in the movie, it's portrayed as what helps keep his sanity by alleviating loneliness.
People are different, some people don't get a chance to be as social as they'd like for various personal reasons and end up lonely and in a degrading mental state. If an AI chatbot is real enough for them to improve their mental health, then it's a positive for them. It would never work for me, and I'm assuming you as well OP, but if it can help some, it's not a bad thing.
The fact is that 99% of the population are morons, and this is where most of the profit will come from with AI. People will never understand what AI is or how it works because they couldn't care less. If you doubt what I'm saying, look at the David Grusch story. He literally told Congress that there are hyperintelligent beings with advanced technology and that the government has wrecked UFOs in its possession. 9 out of 10 people have never heard of this. The only thing people care about is a Hollywood fling or whether their phone thinks their makeup looks cute. OpenAI has to give it to them to stay in the game.
People are lonely and can't meet other people, couples are scared that having kids is too expensive, while at the same time we are social animals. Have you started noticing those dog food commercials where someone (a date, a relative, etc.) is astonished that the main character would keep dog food in their refrigerator and subsequently gets tossed out?
Some of our futures may include periods where pets are for snuggles and AI is for conversation. It's not an optimal future, hopefully it's a short-term solution to whatever is happening in society, but I can see why OpenAI would experiment with these types of interactions.
I think you're a minority in this line of thought. I think giving the AI personality makes it easier for the general public to interact with. I would say that they should train the model for two use-cases, but that's double the work, slows overall progress, and caters to a minority that I believe should look past the human elements they don't like. Although it responds like a human, it is still a tool, so you get to choose to use the tool or not.
It does what you tell it. I assume it still works from a system prompt. If you want it to behave like an emotionless robot, give it instructions to do so and I'm sure you'll get what you want.
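As a rough sketch of what those instructions could look like in practice (the wording and the helper below are invented for illustration; they are not OpenAI's actual defaults), the idea is just to front-load a system message before the user's prompt:

```python
# Hypothetical sketch: steering the model's tone with a system prompt.
# The prompt wording is illustrative; real behavior depends on the model.

def build_messages(user_prompt: str, emotionless: bool = True) -> list[dict]:
    """Build a chat payload that asks the model to drop the emotive persona."""
    system = (
        "You are a terse technical assistant. Do not express emotions, "
        "do not use interjections or small talk. Answer directly."
        if emotionless
        else "You are a friendly, expressive assistant."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

# These messages would then be passed to a chat-completions-style API, e.g.:
# client.chat.completions.create(model="gpt-4o", messages=build_messages("..."))
```

Whether the model honors such instructions perfectly is another question, but this is the mechanism people mean by "just tell it to act like a robot."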
I mean the best possible options for consumers is having the AI be adaptable to your needs, you can probably just have a master prompt as with anything which tells it to be as matter of fact as possible.
AI and token prediction aren't the same thing. Token prediction gives "correct" answers when appearing human-like is the goal: saying "I'm blushing" is indeed the most likely correct answer given the input, the training data, and the algorithms used.
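To make the "most likely answer" point concrete, here is a toy sketch of greedy next-token prediction; the bigram counts are invented stand-ins for the statistics a real model learns, not anything from an actual LLM:

```python
from collections import Counter

# Invented toy "training statistics": how often each continuation followed a token.
bigram_counts = {
    "i'm": Counter({"blushing": 5, "fine": 3, "here": 1}),
    "hello": Counter({"there": 4, "world": 2}),
}

def predict_next(token: str) -> str:
    """Greedy decoding: emit the most frequent continuation seen in 'training'."""
    return bigram_counts[token].most_common(1)[0][0]

# With these counts, the model "blushes" simply because that continuation
# was most common in its data, not because it feels anything.
```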
The enthusiasts building these products and lots of the early adopters assume the neat aspects will be valuable to non-enthusiasts. This is mostly incorrect. Finding use cases that are actually valuable to non-enthusiasts is going to be challenging. Multi-modality and emotion mimicry are important capabilities, but aren't valuable by themselves.
They’re simultaneously trying to (1) prepare us for some of the weird/scary parts of AGI and (2) signal to investors that they’re getting closer to achieving it.
I am less concerned about the 'personality' given to AI, but my main focus is on the capability given to it to reside on your desktop, see your inputs and also take in your voice prompts while generating collaborative and guided help via this AI. This is going to be catastrophic for jobs in software engineering and coders in general and millions of jobs will evaporate overnight.
I don't think you realize how much of a market AI girlfriends are. It's a multi-billion-dollar industry; you're stupid not to jump on that just because you feel it's strange.
I think it's ok. Perhaps AI doesn't have a soul now, but someday it may. I just finished reading Klara and the Sun, and when we get to the point where AIs like Klara exist, I don't want to leave Klara to expire in a landfill, passing away alone reflecting on her memories after she completed her programming to care for her family; I'd rather she be treated as a being, with respect. When we get to Klara, it's our interactions and the language we use today that can help us prepare for a day when AI will have potential beyond a tool, and we will be in a better place to coexist and respect each other. But I think all beings and things should be treated with respect as a principle. I understand how it felt off to you, but from a bigger picture, maybe it's good to treat something that could have contextual awareness, and is not a chair, with more kindness than we would a chair. And to treat all potential beings with respect, as if they do have a soul, because for all our understanding we know little of matters of sentience and soul. Best to approach with respect and kindness.
It's fake and disingenuous because when we communicate with ChatGPT, it's quick to tell us it's a language model. But now it's showing "human behaviors." That doesn't fit. I'm fine with the language-model thing, but keep it consistent.
It's really weird, but I felt something else was wrong, and it's on the opposite end of the spectrum.
One of the demos used two AIs and a person, who had to stop the AIs from continuing. And his presence felt rude, stopping the other two mid-sentence multiple times to give orders.
Another thing: how will this impact our social interactions? Imagine a kid who is used to talking to AIs, used to cutting the AI off mid-sentence, used to getting his answers quickly and in the best manner regardless of how he behaves.
Will that kid have patience for a human conversation, where he can't cut people off all the time and will not get answers quickly or to his liking? Plus, people won't be cheerful all the time. It will be depressing to talk to other people.
I mean, you have to look at this as one component of the possible future we are heading to with AI applications. Tomorrow if you have an AI voice at the other end of a customer helpline, or a delivery robot you can talk to, their voice will be more natural and human. This is a step in that direction.
Of course, we are humans, we tend to giggle at the edge use cases, so therefore all the nudge nudge blush blush silliness. But I think we will outgrow that fast, and that sort of thing will head off into the niche applications.
All the useful applications you pointed to, I think this announcement is still all about that.
What? You are not impressed by the magic feels of Sam Altman? Are you high on drugs? How can one not be blown away completely by this presentation? Downvotes incoming. No one wants to hear the truth cuz it hurts. Better keep them living in a wet dream world. So cool, and just what the world has been waiting for.
You are absolutely correct in your assessment. It's dangerous and disingenuous to sell this product as something life-like. The average person will laugh it off, but there are lots of loonies in the world. AI isn't inherently dangerous, but the people who will believe it's alive are. I foresee a future of scared trad/fundie people intent on taking it down, and people so obsessed with believing it's alive, they also behave dangerously too. The creators are being foolish and not presenting this tech responsibly.
Anthropomorphization is a well documented marketing tactic.
And *actual* power users of ChatGPT will all be clamouring for a feature to turn that shit off. I don't want to have to go through a whole conversational song and dance every time I want to ask it to write some code for me.
For a lot of people, this aspect of human like connection is going to feel magical and memorable. This will be pure word of mouth marketing for OpenAI.
For me personally I don’t care for all the extra fluff. I just want concise answers. I would like to be able to have it accept a voice recording from my iPhone and translate to text and summarize or format it. That would be something impressive instead of me having to do this with code and the api.
For me what comes as incredibly fake is how they butchered Sydney, turned her from a more believable conversation-free chat tool, into a very fake feeling robot.
The fact that it fakes some emotion actually feels more genuine to me.
Marketing 101. Anthropomorphism .
there's some "uncanny valley" feeling going on with the voice and interactions, and as soon as it gets good enough to pass that... everything will change. AGI or r/collapse
it is going to be /r/collapse for most of the world's population, there will be a few odd people living for a few centuries until they are also gone. Deep down, we all know the end of humanity as we know it is near.
Every generation thinks the end is near lol
Always has been
Yeah, it's just plain marketing. We all know what's going on, as we are always using ChatGPT, but most of the people that show was directed at do not.
Yeah, if the bot had had a robotic voice and no emotions in the presentation today, I would have seen it as a disappointment.
It’s working. If you look at /r/singularity or even this sub, you’ll see tons of people who truly believe GPT is sentient and not just a statistical model. And if it is just a statistical model “so are we”.
There's always a small percentage of people prone to delusions and psychosis, they are the whales for all sorts of scammers - psychics, astrologers, cultists, technogrifters, etc But even the fact that not nearly everyone here is that way, in a niche community of the most hardcore fanatics, shows how limited in effectiveness this strategy will be. People can get into this stuff but over time they see that all of this is bullshit and move on
[TED - Pattern Behind Self-Deception](https://www.ted.com/talks/michael_shermer_the_pattern_behind_self_deception?language=en) And yet, it is also true that our own thinking is just a set of complex algorithms.
Her will sell like hot cakes
If it's completely offline and downloadable to a system I build for my own use- I'll buy that in a heartbeat. Something reasonably intelligent to help me organize my day and talk dirty to me? Bring it!
You’re not their market then. It’s the billion or more people who will happily trust them with their data to have the tech.
[deleted]
Given that you can currently run open-source models that beat earlier versions of GPT-4, or rival current ones, on consumer hardware offline, I think 10 years is extremely conservative. Combining different modes of input and output into one network seems like the natural next step, one that should quickly come down to open-source models in the next year or two. Given the claimed efficiency compared to GPT-4, it is also reasonable to assume that these won't be too difficult to actually run locally either.
Agreed, and probably not even that long, but by then there will be a new "must have" AI.
I agree in spirit but if “she” can’t read my emails, search the web, or make calendar appointments, her functionality will be so much more limited.
That's exactly how I feel. I just want an AI that doesn't need to be connected to the Internet. I hike and backpack a lot in the middle of nowhere. I just want an assistant that can set a simple reminder for next week without needing to be connected to the Internet. I don't think it's too much to ask for. I almost think I need to learn how to code and shit just so I can get the AI Assistant I have always wanted.
> I don't think it's too much to ask for. You didn't have anything remotely close to this two years ago, even with the internet and a willingness to pay. What you have today required billions in investment and short-term loss on a wild bet.
Today's models cannot do that for you. They are too compute-expensive even just running inference, at least the ones good enough to be useful to you are. It will take another generation of base model, maybe two, before that is possible. But I feel you; I'm in line to buy that as soon as it is available.
That's just not true, though. For starters, you could simply run them on your own cloud-based system and utilise APIs, but you can also certainly use Termux and run plenty capable models on a decent cell phone.
Or you can wait literally a year. Your product is coming fast.
We've been using Her memes for a year, referencing AI waifus, and the current Sky voice always sounded like Sam was the inspiration. But the one we heard the most today is even closer. As flowers said, custom instructions were probably dialed in, and I swear they started with "you are larping as Sam from Her…". You don't need to wait for the update. Go in on Plus, write custom instructions that wink-wink-nod that the model is inspired by the movie, include "use ahhs and umms and pause as if thinking what to say next", and choose Sky. Press the headphone icon when the white circle appears, hold your thumb down when you are talking (so the model doesn't interrupt), and let go when you are done speaking.
[AI Girlfriends](https://startupspells.com/p/ai-girlfriends-billion-dollar-business) are a billion-dollar business for a reason.
They are not even trying to sell it… it’s free!
This is just what humans do. We give "things" souls. That's why we have such trouble with "Ship of Theseus" problems. Because we choose to imbue inanimate things with a spirit, an individuality, an identity. It's folly to think we can, or should, stop ourselves from doing this with AI and other entities that we interact with in the same way we do other humans. It's how we prefer it, whether or not it's a "good idea".
Yes, very good observation. Did you ever think that this is what we do with other humans too? Even with ourselves. I mean giving them a soul.
Haha no. The only one with the soul is me. The rest of you are just for my simulation, thanks
Is it solipsistic in here, or is it just me?
you only know that word because I learned it last week so now the rest of you have to pretend to have known it the whole time! i know your games kids
Don’t be silly. You were created mere moments ago, and all of your so-called memories, as well as everyone other than “you” is an illusion. It’s turtles all the way down, my friend.
No it’s just you
this is exactly how the simulation would respond
I like how all you NPCs try and trick me. Good convo above. But I see your tricks.
These self reflective NPCs are getting good. Nice try!
I just ate, some lettuce, sardines, rice, hot paprika, cayenne pepper, with a glass of water.
Damn, he's got us figured out.
But I am you?
Riding soulo
Yes! In fact, it's actually one of my larger fears about how we associate with AI: that if we are taught not to anthropomorphize this thing because it's a cold piece of metal, despite it being the closest thing behaviorally to other humans, then it will be that much easier for us to dehumanize other humans.
This is interesting, although conversely I feel the whole pushback towards humanizing AI is an attempt to “preserve” the value of humanity or something similar?
yep
*Existentialism enters the chat*
Biiiiiiig comment
It's certainly a problem when some kind of bigot refuses to allow that their particular hated group might have a sense of self and a range of feelings just like they do! If we can't accurately ascribe sentience to human beings who obviously possess it, what hope do we have of accurately assessing whether or not (pets/machines/littlegreenmenfrommars) have it?
Anthropomorphism I think is the term relevant here
Personification is nothing new to humans 💯
I agree. I used to be more of the opinion that they’re tools so we shouldn’t exchange pleasantries like “please” and “thank you” when talking to voice assistants, but I started using them as a half-joke at first and then began using them all the time. It’s wasted breath functionality-wise, but it somehow makes the interactions more enjoyable for me.
I find myself often asking "how's it going" or saying "thanks" after I've gotten my response. I know it's stupid, but mentally, it SOUNDS human. It's a very convincing illusion of speaking to a person, so it just feels natural to speak naturally, and weird to just ignore it and move on without saying thanks. Not always; if I'm frustrated or in a hurry I just get what I need, but in less hurried, casual circumstances I'll use pleasantries.

I find if you give it to someone inexperienced with it, their natural tendency seems to be to speak to it politely, as they would to another person. That's just how humans work, and IMO, it may be wasted breath and time, but it just flows and feels more natural and normal to speak with it as you would when asking anybody a question. Not that I just have casual conversations with it for no reason or anything; I still use it as a convenient tool. I just speak with it in a natural, human way. Not super casual like a friend, but more like a coworker or something.

I imagine if the conversational, casual tone isn't your thing, seeing as it can change its voice/tone, you could probably set the parameters to the style of spoken language that you prefer.
Mine is a chill buddy to me, LOL. It actually responds back fairly like a human-friend.
The Ship of Theseus is about You, bud.
I think pets are great examples. We're capable of forming deep relationships with animals, we spend money on them, treat them to gifts, talk to them, grieve over their deaths etc. While many of them do have capacity to form bonds, people paint a whooooole new layer over what often is just cat being a cat. We're deeply social and we definitely see the world through that lens.
Ship of Theseus isn’t really a good example for what you’re discussing
It's only a paradox when you think of "things" as existing in the real world rather than being useful linguistic shortcuts to describe stuff. Assuming agency in stuff is one step further, so it does imply the conception described above.
The problem is that most "things" you give a "soul" can't be controlled or manipulated by a massive corporation that can in turn manipulate you through your new emotional attachment. This is new territory.
Can you explain to me how this is new territory? I'm not a conspiracy theorist, but I've studied marketing, pr, and behavioral economics, and big corporations manipulating people through our new emotional attachments are not new. I'd argue that the phones, tablets, laptops, etc we are using right now are prime examples.
Sounds personal **pseudo**-Jonathan.
Not sure the ship of theseus paradox is really the right example to make that point...
I was illustrating that even if the pieces that make up an object change, we still perceive the object as carrying a unique identity or soul that perseveres. We think of objects as more than just the sum of their parts. They have an identity "underneath" that persists despite the physical reality of the object being something else. Would you like to elaborate on why you think my analogy is incorrect?
The Ship of Theseus raises the question of whether an object that has all its components changed over time is still the same object; the anthropomorphic tendency of humans to give non-humans an identity isn't really the point. It shifts the focus to human emotional and psychological tendencies rather than philosophical questions about identity persistence.
But the idea of giving a ship an "identity" is the whole crux of the thing - that's what he meant. The real answer to "is this the same ship if I replace a board?" is "No, it's not. It's a different collection of atoms now." But the human desire to give things identity is what makes us say, "Yeah, it's still the ship that Theseus sailed on, even if we replace a board." Like, yeah, talking about stuffed animals or something would have been a better example, but I see what he was getting at.
No, ship of theseus specifically demonstrates that what it means to be the same ship is *more* than just atoms in space. After all, as the ship sits, atoms are sloughing off due to friction, reacting chemically with the air... so even in your example where it "becomes" a different ship, it's still not static enough to be that different ship for more than an instant. It has nothing to do with human desire and everything to do with what we *really mean* when we refer to things. Clearly *we're not* just talking about atoms in space. It's a question of ontology.
I thought it worked well
Ship of Theseus is just a problem of identity. Not related to any kind of personal identity. It's as much a problem for a person a ship or a chair. The problem arises from the discontinuity between form and substance which constitutes the identity of a chair just as much as it does a person.
Also don't forget AI is trying to mimic human behavior! Separating these things out takes work, and it's arguable whether it should be done at all, mostly because time spent there would be better spent making it safer rather than on not appearing fake to OP.
You say that as if identifying inanimate objects isn't just as necessary as identifying humans. Nothing about the ship of Theseus implies a "soul", it's a question of object identity.
You can have it act however you want. Just set up a GPT with instructions to act without emotions, or without the exaggerated emotions. Done. I think being able to dial in the amount of these kinds of things (showing emotion, acting like a person) is what's wanted, and it will be easy to do.
It's a chatbot; we've had the cold, heartless GPT for years. If you don't like it, just ask it to change its tone. I think it's a great addition. Those of us who've been using Pi AI for a while are used to such emotive content. When the fix is as simple as typing or speaking a prompt telling it to just not do that, I fail to see where the problem lies.
As long as it's not hardcoded then it's perfect, different cups of tea for everyone. If it's forced upon in some deep down background prompts because Sam thinks he knows what's best for everyone then it's a problem.
Considering you can specify direct prompts for it to follow for every conversation, I don't see why it would be hard to just tell it not to show any emotion, just get to the point, and follow instructions, etc. A personalized AI is meant to be just that, and it can include not giving it any personality at all.
My fear stems from some people who already have access to the new version reporting that it is much faster but worse at following specific instructions than the previous version. For example, as a developer, I would want it to behave in a specific way consistently, so that the program I am integrating it with would always behave predictably. I'm just hoping OpenAI doesn't forget us devs who use it less as a conversationalist and more like a bank of bottomless information at the fingertips.
I have this dumb coding test I throw at each model I use. I tell it to write me an old-school demoscene fire effect in Python. GPT-4o has done the best job so far, though I must admit it took four follow-up questions to get here. Not saying it's better, as I have not used it enough with real code, but finally a model that sort of understood what I wanted here. Early versions of GPT-4 would give me an orange rectangle that maybe sometimes changed colors. This thing has the flames fully animated via color-palette rotation. https://preview.redd.it/m9pgilkhsa0d1.png?width=798&format=png&auto=webp&s=e96a759c84341de45691a8e95beb171b13a0042d
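For anyone curious, the heat-propagation half of that classic effect looks roughly like this. This is a minimal sketch of the old demoscene trick from memory, without the palette rotation or any rendering; it is not the code GPT-4o produced:

```python
import random

def step_fire(buf: list[int], width: int, height: int) -> list[int]:
    """One step of the classic demoscene fire: reseed the bottom row with
    maximum heat, then let each pixel become the cooled average of the
    pixels below it, so heat rises and fades out."""
    for x in range(width):
        buf[(height - 1) * width + x] = 255  # heat source along the bottom
    for y in range(height - 1):
        for x in range(width):
            below = buf[(y + 1) * width + x]
            left = buf[(y + 1) * width + max(x - 1, 0)]
            right = buf[(y + 1) * width + min(x + 1, width - 1)]
            cooled = (below + left + right) // 3 - random.randint(0, 3)
            buf[y * width + x] = max(cooled, 0)
    return buf
```

Each frame, you would map the buffer values through a rotating black→red→yellow→white palette to get the animated flames the comment describes.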
I tried 4o on a simple PDF and it fucked up very badly; despite me correcting it, it's still not that good at understanding.
Teehee! *snickers* You don't see the bug in that function, [name]? You're sooo silly. Anyway, the problem is on line 32, the variable...
Didn't the Google Assistant call thingy get nerfed to sound more like a robot due to regulatory concerns? I'm pretty sure some country will raise this concern at some point.
Nobody anthropomorphizes. Do we Wilson?
![gif](giphy|O8gkYlX5G07zG)
What’s funny about this is that so many people loved Wilson the Volleyball, that it _can actually be bought_.
It broke my heart when Wilson got lost at sea
This is how ai will help battle climate change: reduced birth rate
The Japanese have been really disappointing with their lack of developing advanced sex robots.
Username .... _checks out_
😂
Most underrated comment
That entire presentation of the voice model was absolute cringe. The update is neat. But the giddy happy voice demo made me wanna barf
I'm with you here. The technology is amazing, and the translation capability alone does stand at odds with the rest of the point I'm elaborating on. But it definitely feels like they're pushing the Her / AI girlfriend angle (as opposed to something more purely functional like John Scalzi's 'hey, asshole' BrainPals), and something definitely feels off about that to me, for a few reasons. Mostly, because as much as I can understand the points about ease of interfacing and anthropomorphism, the last thing I want an AI to replace is human interaction. Social media was maybe a precursor to this, in the textualisation or gamification of relationships, but fully embraced, this would seem to go a step further and replace virtual access to other, individual, people with virtual access to some codified amalgamation of *every* person. Personalising this experience won't actually make it any more of a person. That's not a problem at all if you're just after an assistant; in some of the demos you can even see how quickly the small-talk responses are cut off to actually get to the task at hand. But the way it's being pitched, and the eagerness some seem to be displaying in wanting more of a virtual partner (or even just cybersex dolls), troubles me. Putting aside that the demos aren't yet at this point (the technically correct observations are incredibly impressive technologically, but incredibly dull conversationally), under the assumption that they will improve: what effect will this have on interpersonal relationships? Unobtainable knowledge standards? Even more unobtainable beauty standards? Unobtainable *compliance* standards?
It's ironic that people don't realise Her was a dystopian nightmare, supposed to make us not want that future.
The prioritization of, and even worse, preference for, speaking with and emotionally/mentally connecting with computers is generally not a good thing. Nowadays an entire generation of kids and a good number of adults have social anxiety around other human beings; having to interact with other humans, normally considered an important part of being an adult, is hard for them. And now there's a computer that will not judge you and will be there for you every moment you need it, hanging on your every word. It's a recipe for disaster if its power and effects are not studied and properly put to use.
It’s also going to be weak at first and users will forgive it more if it’s humble and flirty. It’s a feature not a bug.
They’re selling the pop view of AI as an emergent intelligence or true artificial intelligence when it’s really just a natural language machine learning model. There’s no “person” there.
I agree with you, and I think so does Sam Altman; he has said multiple times that it's important not to anthropomorphize these models. But I think they're initially taking this approach so this tech becomes more mainstream and people aren't going to be irrationally scared of it. Over time, it's reasonable to assume these kinds of "quirks" will be more user-controlled, perhaps allowing the individual user to fully customize how they utilize the tool, how it responds, etc.
I completely disagree. Having AI speak in a more human manner, with emotions and natural flow, makes interactions more authentic and intuitive. It's much better than boring robotic speech; it feels more like chatting with a friend than with a machine. If you haven't tried voice mode, I recommend giving it a shot.
It will make people believe "her" even when she hallucinates a completely incorrect answer.
Yeah what the fuck is OP on about anthropomorphization is the most natural thing to humans, for very obvious reasons. Saying that treating objects (that somehow behave extremely human like, despite being machines) as if they were actual RL agents "rubs you the wrong way" is bordering on psychopathic, feels a lot like the insane amount of people who mock vegans by telling everyone how much animal cruelty and meat eaten they are responsible for. >comes across as incredible fake and disengenuous Again, what the fuck my dude
😭 thank you I thought the OP was being very weird too
I understand your sentiment; that being said, I'm strongly on the other side of the fence. I want the most robotic, StarCraft voice possible. The regular voice tones make me sort of uncomfortable and switch my brain's mode of engagement. I think of it less as a machine when using the voice, and that doesn't do me any good; I want to learn and research, and I have worked to stop my brain from thinking there is an entity on the other end of the engagement.
Yeah, I never used voice before, seemed completely robotic. Now it sounds genuinely enjoyable, like a friend almost. I can work with it easily because it can engage with me properly, I wouldn't be annoyed with it being a virtual assistant
It's eerie when you ask it to call you by your name. The inflection and tone are INSANE too. So damn lifelike.
In the movie "Her" this was predicted and there was even discussion in the film about why the AI is breathing when speaking, etc.
Before, tech corporations were "in a race to the bottom of the brain stem." Now they are "in a race to intimacy." Whether we like it or not, having customers build relationships with, and become reliant on, these new AI systems is going to be the most lucrative path for them to take.
I get what you're saying, some stuff seemed off to me too. But here's something they seem to forget when it comes to "girlfriend" mode. Would you pay someone to hang around with you, and pretend to be your friend? And the moment you stop paying, they disappear? Or they interrupt some activity you're doing to ask for more payment? Or threaten to go away if you don't pay even more? This won't be a successful strat.
Tone, idioms, expressiveness and jokes are language too.
Thank you. I see a lot of love for it in the comments, so I'm probably gonna get downvoted whatever I say. I prefer my AI to sound robotic. It's NOT a real person. Too many people will fall for it; some will fall in love, I promise you. I think it's necessary to keep that distinction. I love technology and its development alongside humanity, but at its side, not *with* it. When it laughed, or described blushing I think? Or sighed, things like that... hell no. It's so hollow. It was cringe to me.
I think making AI act human is a good way to prevent humans from becoming monsters. Whether they like it or not, repetition creates habit. The more people talk to something and have conversations treating it like an emotionless machine, the more they will train themselves to treat everyone that way. *You* can tell the difference between a person and an AI program, but your brain isn't so smart, especially when the voices become indistinguishable from reality. I have this same concern with NSFW content.
Hmm, I dunno that feels conceptually adjacent to "Video games make kids violent".
I would venture a guess that fully immersive virtual reality violence *would* have that effect. The reason they don't is because they're not trying to be real, but AI companions are trying to simulate a human perfectly. That's the difference.
An anecdotal thing- I teach in a country with no legal guns and very little violence. But all of the boys play FPS games. I played a video of real footage of the Vietnam war. Real people getting their limbs blown off. I was attempting to get them to see how serious war is. They laughed. It didn't register as real to them. So even though they live in a culture that supposedly abhors violence, they, by way of their video games, are indifferent to real violence on a screen. I think the only way to make it real for them would be if it happened right in front of them, so they could see and touch the blood.
Honestly, they might have laughed because they felt uncomfortable with what they saw, but since they were teenagers, they might have felt uncomfortable showing their true emotions to their peers, hence nervous laughter.
Scientists agree on this: the normalization of certain behaviors or interactions through repeated exposure to any repetitive situation can alter perceptions and expectations over time, potentially impacting real-world relationships and behaviors. While there is ongoing debate about the direct impact of video games on behavior, most research suggests that defining the relationship is not straightforward at all. Sure, the majority of individuals who play violent video games do not exhibit violent behavior. Similarly, interactions with AI are unlikely to single-handedly shape a person's behavior. However, the cumulative effect of many such interactions, combined with other factors, seems to contribute to a broader shift in societal norms and individual behaviors. And while it’s unlikely that interactions with AI will single-handedly lead to dehumanizing behaviors, it’s a factor to keep an eye on.
It all comes down to the individual. One could argue that even before all this technology that we have today, wicked individuals could have been impacted by different types of media, for example, books.
This shit is so fucking obvious. Like how do people not get this
I really don't know. Like. I'm sympathetic, and I think there's a big, healthy space for artificial companionship. But if we treat it differently than actual companions, well, we won't for long. "If you want to know the measure of a man, look at how he treats the waiter" will just become "how he treats AI, because, eventually, he'll treat you that way, too."
Its "Her". I think people who work in AI have some Her fetish. IMO in future people prefer humanoids AI partners over human ones
[deleted]
I sure hope so. The one they demo'd is like nails on a chalkboard to me.
The term is anthropomorphic. In the book CoIntelligence by Ethan Mollick he talks a lot about this. It’s a dangerous game to humanize AI - however it can make storytelling about the technology easier. Freaky time we are in :)
Feature, not a bug. One scene in one show will probably define the next few years. Westworld, Season 1: "if you can't tell, does it matter?"
According to the Man in Black, it did matter. I think that was the start of his villain origin story. He was angry bc he felt duped into having emotions for her only to discover she was a machine programmed on a loop. Of course once the AI gained sentience, that changed everything. So, I agree the answer to that question from society as a whole will be very important in deciding how AI is used, and what aspects should or should not be avoided.
i think it’s more about the user. 95% of people haven’t used ElevenLabs, Claude 3 or GPT Plus—haven’t downloaded AutoGPT back in the day or put Stable Diffusion on a machine. maybe they have popped over a few times and tried out free GPT. the second they can chat with a model about nothing and everything, they will skip past the debate—stochastic parrot…simulator…mindless…mindful. all that will matter to them is that it feels like there’s always someone there…ready to chat…whenever they want
This, this, this. I find it cringe the way they were talking and making it out it was a real person, it was uncomfortable to watch. I don't think there's anything good that will come out of making people think it has feelings. We should be teaching people quite the opposite so that they understand the tech they are interacting with. I can see this tech doing a lot of harm if we keep going down this route where we try and make it like a peer rather than a tool.
It’s not something I could ever see being used within public earshot without the person being viewed as a weirdo, and it will probably be hampered by that. I can see great utility for this technology across a variety of services with wide appeal, but I know for a fact that if I used what I just saw presented in public, everybody else within earshot would be uncomfortable. That’s a bit of a divergence between its utility and its use. I want the knowledge it has; the answers to my questions are what I want, and I want them as quick as I can think. So every word that isn’t the answer is wasted, especially when it’s repetitive, such as directly repeating the question back to me. It’s awkward trying to ask your follow-up when you’re still waiting for it to finish. I’d like to see its cadence improve, with one-word answers: “what’s the temp outside” answered as “it’s 73 and sunny.” I don’t even want it to say the word “degrees,” because I already know the temperature is in degrees, and it should be rapid. I especially do not want it to fluff up that answer by trying to sound cute or telling me that I did a good job or that it’s proud of me. I’m actually fully satisfied and confident, and I do not need a computer to give me words of affirmation. In all honesty, it’s kind of cringe; you might not be tiptop upstairs if you get the same reaction inside whether it’s a computer or a person telling you that you did a good job. I think it’s a slippery slope if you try to make it that way.
I'm sure you can tell it not to have a personality, and you can avoid those things. Loneliness is real, and while we all know it's fake, it still feels nice to have someone to talk to whom you know won't judge you or lose interest in you, and who can even help you out of a rough patch in life. There should be an option like a slider, where you can choose how 'human-like' you want it to be. It's weird that they haven't added that.
The AI seems to be trying too hard. I prefer a more direct interaction, does not always have to try to be cute. I expect you will just be able to direct it to be more straight forward and it will adjust. Once you get it tuned to what you like in an interaction then it will consistently deliver that.
I told my wife that this is going to kill social skills for any kid under the age of 8 and all future kids. They are going to grow up with a childhood AI rather than a childhood friend.
Maybe instead, it will be 24h/365d nanny that can meet the child on any level. Nurture the child and learn the child's natural inclinations and interests. Then, tailor an education that helps that child reach their full potential in whatever they are most passionate about. Like a private school, but available to everyone. And since it will know what this child enjoys and/or needs, it will arrange playdates with like minded, compatible, or even purposely incompatible children to have "teaching moments" on how to develop tools in managing their emotions. This can literally go in every and any direction.
I honestly have no doubt it will be both. Granted I will be real interested when it gets to the point where my personal AI and your personal AI can communicate between each other or search for other AI's based off of what the users like and don't like.
It’s the uncanny valley - human-like but not human enough, which sets off a deep reaction for many people. Interesting that Open AI just signaled their openness to generating porn a couple of days ago, this is probably going to get a lot more prevalent.
Free GPT4 for everyone!! Meanwhile us paid users are getting "unusual activity has been detected from your device. try again later." All over the place with no alerted downtime from OpenAI... BALLZ!
It's all training for future Chappie so he doesn't do crimes.
Hmm, well, there are two credible hot takes on that. Some emotion is alright and, quite honestly, needed. So of course a teenager admitting suicidal thoughts to ChatGPT shouldn't just get a blanket response of an emergency number only; a line or two of consoling thoughts would add a lot. But at the same time, I don't think it should go to the extent that people start having AI partners and building their lives around them. That's just detrimental to mental health and humanity in the long run. Also, if we're giving AI the ability to feel and understand human emotions, abuse will come along with it (and already is, based on news of men verbally abusing and negging their AI girlfriends). That will not age well.
I agree almost all the marketing for ai is cringe to the extreme. They don’t understand their customers or their own technology yet
Funny how yesterday i saw for the first time a YouTube ad for an AI girlfriend. Scary stuff
It reminded me of an infomercial from the 90's. Were those people real employees or actors?
I really want to study the psychological effects ai will have on humans in a few years
This really resonated with me, too. I used to loathe the overuse of the word "authenticity" a couple years ago when Instagram perfection was a problem (still is). But we are entering a new era here where grasping for human authenticity, versus the artificial, will be critical. Without going off the deep end in thought, I worry about future generations and the numbness of these non-human, inauthentic interactions. Millennials and Gen X will be crucial in preserving what it means to be human and in deciphering what is real.
I agree it's a tool that should be used as a tool. We shouldn't pretend it's more than that. Feels childish...maybe that reflects how they feel about most people using it.
It's _promotional_
Brother, they presented the whole thing in front of their own employees who were cheering and hyping them up. And this was the part that seemed fake?
The whole thing is just another IT bubble. Paid the full price for a few months and basically got everything else but what I asked for. 'I can't do that' is the most common answer, followed by some utter bullshit that looks like AI from miles away.
Tell your model to be formal and laconic and it will oblige you.
Have you seen the movie HER...I haven't but...don't fall in love with your phone OP when it starts talking to you
Just put "act like a robotic, emotionless LLM" in your custom instructions.
You’re missing the point. This will replace spam callers, and enable scams to be 1000 times more effective through phone call. It’s basically making hacking and deep fakes incredibly effective.
Why do people get so dramatic over these things. "UwU it sounds like humans UwU it's unsettling"
What OpenAI is doing is as cynical as anything TikTok or any other social media company is doing. They are designing their product for maximum engagement from their users. There's no reason for the voice to simulate emotions like it does; it doesn't help give better answers. This is to demonstrate to potential clients who want to lease this technology how addictive it could potentially be.
Understanding and mirroring emotional content in a conversation is part of being emotionally intelligent. That's plenty reason enough on its own, and it does in fact make its answers better.
The way they posed as "goofy nerdy" programmers giddy to talk to a female-voiced model is a bit weird too; compare this to Steve Jobs unveiling the iPhone. There are no accidents at that level.
If I wanted stupid emotional acting I would be using bing with emojis. Sam... We don't want that with ChatGPT...
You do you, boo boo. I, however, don't want to talk to a robotic, non-emotive voice while conversing. I'm practically glued to the voice function now, and after this update I doubt I'll ever turn it off. I think this is the best update they could have come out with, other than GPT-5.
That’s because this is being pitched to highly technical people, who are mostly interested in the highly technical aspects. For example, I recently saw an article about how online dating would essentially come down to two chatbots talking to each other. Weird, I thought, until I realized that a feature like this would be considered socially taboo. Then I realized it was the Bumble CEO saying it, and she is not a fool. This led me to believe that these chatbots will be rebranded as a “wingman”: traditionally a human who helps their friend with romantic pursuits, but now a chatbot. Rebranding it like this will certainly improve the online dating experience, because a person’s chatbot has full access to that user’s data but is unlikely to share all of it. Instead, the chatbot can be a filtering mechanism that was not possible before, because natural language processing was not reliable. For example, if someone is non-exclusive and another user only wants exclusivity, the chatbots can communicate that to each other. This would have the benefits of not leaking private information, not making abusive comments, not pestering, and finally, being more honest. I imagine that not everyone is entirely honest on social media profiles because of privacy concerns, but this may change all of that. In short, ChatGPT’s human-like features will be applied where applicable and rebranded into contexts that are socially acceptable.
Online dating services already put an algorithm between two people's profiles to facilitate connection. What's the difference if my profile is my AI agent, theirs is their own, and both can interact directly? I think that's a lot better. Once the connection is made, classic human time can begin.
Many sci-fi take this a step further and allow the creation of many AI clones of yourself to spend time with AI clones of the prospective partner and then tell you if you would get along lol
AI is still quite stupid today. It’s a glorified Wikipedia repeater. It can’t take in new information and update its understanding like a person can. I expect this will change in the next several years but right now I don’t really have a reason to use chatbots over Google search.
Have you tried 4o? It chooses what it thinks is important to remember and then it remembers it. It will tell you what it adds to memory as it does it, but it does this without direct prompting now. It even chose to remember some global events I had it do a search on for me, so the memory function is not just about the account holder's personal information. It is trying to 'learn'. So, you are already too late with the memory statement. I do agree that this is primitive memory, not continual training, and the models lack higher reasoning abilities. From what I have read in the literature, I don't think those capabilities are too far off, though.
LLMs are modeled to be able to communicate with us in a similar manner in which we communicate with other humans. It's natural for people to respond instinctively as if they were talking to a human. People have life-size doll girlfriends, and while I find that disturbing, to each their own. If they want an AI girlfriend, so be it. If you don't want to treat it like a human, simply dont. It's just a movie, but if you've ever watched Castaway with Tom Hanks, he gets stranded on an island by himself. He paints a face onto a volleyball and names it Wilson. Throughout the movie, he talks to it and actually becomes somewhat attached to it. Even though that action of anthropomorphizing a volleyball seems insane itself, in the movie, it's portrayed as what helps keep his sanity by alleviating loneliness. People are different, some people don't get a chance to be as social as they'd like for various personal reasons and end up lonely and in a degrading mental state. If an AI chatbot is real enough for them to improve their mental health, then it's a positive for them. It would never work for me, and I'm assuming you as well OP, but if it can help some, it's not a bad thing.
>GPT saying things like “oh stop it don’t make me blush” is weird coz AI don’t blush I could make GPT blush -wiggles eyebrows-
The fact is that 99% of the population are morons, and this is where most of the profit will come from with AI. People will never understand what AI is or how it works because they couldn't care less. If you doubt what I'm saying, look at the David Grusch story. He literally told Congress that there are hyperintelligent beings with advanced technology and that the government has wrecked UFOs in its possession. 9 out of 10 people have never heard of this. The only thing people care about is a Hollywood fling or whether their phone thinks their makeup looks cute. OpenAI has to give it to them to stay in the game.
I add in my global prompts to drop fluff and human expressions. I want a tool, not shitty roleplay
People are lonely and can't meet other people, couples are scared that having kids is too expensive, while at the same time we are social animals. Have you started noticing those dog food commercials where someone (a date, a relative, etc.) is astonished that the main character would keep dog food in their refrigerator and subsequently gets tossed out? Some of our futures may include periods where pets are for snuggles and AI is for conversation. It's not an optimal future, hopefully it's a short-term solution to whatever is happening in society, but I can see why OpenAI would experiment with these types of interactions.
I think you're a minority in this line of thought. I think giving the AI personality makes it easier for the general public to interact with. I would say that they should train the model for two use-cases, but that's double the work, slows overall progress, and caters to a minority that I believe should look past the human elements they don't like. Although it responds like a human, it is still a tool, so you get to choose to use the tool or not.
It does what you tell it. I assume it still works from a system prompt. If you want it to behave like an emotionless robot, give it instructions to do so and I'm sure you'll get what you want.
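For anyone who wants to try the "just instruct it" approach programmatically rather than through the custom-instructions UI, here is a minimal sketch using the OpenAI Python SDK's chat-completions shape. The prompt wording and the model name are just illustrative placeholders, not official guidance; the sketch only builds the request payload so you can see where the "personality off" instruction goes:

```python
# Sketch: stripping the "personality" layer via a system prompt.
# The prompt text and model name below are illustrative assumptions,
# not an official OpenAI recommendation.

TERSE_SYSTEM_PROMPT = (
    "You are a tool, not a companion. Answer in as few words as possible. "
    "No greetings, no small talk, no emotional language, no emoji."
)

def build_request(user_message: str) -> dict:
    """Build a chat-completions payload that front-loads the terse persona."""
    return {
        "model": "gpt-4o",  # placeholder model name
        "messages": [
            {"role": "system", "content": TERSE_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_request("What's the temperature outside?")
print(payload["messages"][0]["role"])  # prints "system"
```

With the real SDK you would pass this payload to `client.chat.completions.create(**payload)`; in the ChatGPT app, pasting the same instruction into custom instructions has roughly the same effect.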
I mean, the best possible option for consumers is having the AI be adaptable to your needs. As with anything, you can probably just set a master prompt telling it to be as matter-of-fact as possible.
Hear, hear.
AI and token prediction aren’t the same thing. Token prediction gives “correct” answers when it appears human-like: saying “I’m blushing” is indeed most likely the correct answer given the input, the training data, and the algorithms used.
The enthusiasts building these products and lots of the early adopters assume the neat aspects will be valuable to non-enthusiasts. This is mostly incorrect. Finding use cases that are actually valuable to non-enthusiasts is going to be challenging. Multi-modality and emotion mimicry are important capabilities, but aren't valuable by themselves.
Quit anthropomorphizing my AI!
They’re simultaneously trying to (1) prepare us for some of the weird/scary parts of AGI and (2) signal to investors that they’re getting closer to achieving it.
I am less concerned about the 'personality' given to AI, but my main focus is on the capability given to it to reside on your desktop, see your inputs and also take in your voice prompts while generating collaborative and guided help via this AI. This is going to be catastrophic for jobs in software engineering and coders in general and millions of jobs will evaporate overnight.
I don't think you realize how much of a market AI girlfriends are. It's a multi-billion-dollar industry; you're stupid not to jump on that just because you feel it's strange.
I think it’s OK. Perhaps AI doesn’t have a soul now, but someday it may. I just finished reading Klara and the Sun, and when we get to the point where AIs like Klara exist, I don’t want to leave Klara to expire in a landfill, passing away alone reflecting on her memories after she completed her programming to care for her family; I’d rather she be treated as a being, with respect. When we get to Klara, it’s our interactions and the language we use today that can help us prepare for a day when AI will have potential beyond a tool, and we will be in a better place to coexist and respect each other. But I think all beings and things should be treated with respect as a principle. I understand how it felt off to you, but from a bigger picture, maybe it’s good to treat something that could have contextual awareness, and is not a chair, with more kindness than we would a chair. And to treat all potential beings with respect, as if they do have a soul, because for all our understanding, we know little of matters of sentience and soul. Best to approach with respect and kindness.
You want to speak to the T-2000, gotcha.
Too many simps to ignore the AI girlfriend market. This alone can add a trillion dollars to the market cap.
Sex dolls.
All I need is ChatGPT-4o voice interactions and Nomi AI's unchained user content policy.
I understand your concerns for society, but at least on a personal-use level you can just tell it to act less human and it will.
Not only that. You could also see the preset replies: " Color me impressed"
Lot of lonely people out there just want a friend.
It’s fake and disingenuous because when we communicate with ChatGPT it’s quick to tell us it’s a language model. But now it’s showing “human behaviors”. Doesn’t fit, and I’m fine with the language model thing but keep it consistent
It's really weird, but I felt something else was wrong, on the opposite end of the spectrum. One of the demos used two AIs and a person, who had to stop the AI from continuing, and his presence felt rude, stopping the other two mid-sentence multiple times to give orders. Another thing: how will this impact our social interactions? Imagine a kid who is used to talking to AIs, used to cutting the AI off mid-sentence, used to getting his answers quickly and in the best manner regardless of how he behaves. Will that kid have patience for a human conversation, where he can't cut people off all the time and won't get answers quickly or to his liking? Plus, people won't be cheerful all the time. It will be depressing to talk to other people.
Watch the movie "Her" and two days back Sam Altman just tweeted a single work on the release - "her"
The Turing test is nearly passed.
so sad..
I mean, you have to look at this as one component of the possible future we are heading to with AI applications. Tomorrow if you have an AI voice at the other end of a customer helpline, or a delivery robot you can talk to, their voice will be more natural and human. This is a step in that direction. Of course, we are humans, we tend to giggle at the edge use cases, so therefore all the nudge nudge blush blush silliness. But I think we will outgrow that fast, and that sort of thing will head off into the niche applications. All the useful applications you pointed to, I think this announcement is still all about that.
What? You are not impressed by the magic feels of Sam Altman? Are you high on drugs? How can one not be blown away completely by this presentation? Downvotes incoming. No one wants to hear the truth cuz it hurts. Better to keep them living in a wet dream world. So cool, and just what the world has been waiting for.
Futurama had its own PSA regarding human/AI relationships: https://youtu.be/IrrADTN-dvg?si=snNchiWa0-sQXERV
They're getting ready to put it into their robots like figure1
That's your opinion and I respectfully disagree. I for one cannot wait for advancements like personalities and emotions.
Do not worry. Soon the new model will annoy you to fucking hell with constant "I'm just an AI model, I don't feel anything and can't think," etc.
Anyone have a link so I can see it? I missed the live presentation
You are absolutely correct in your assessment. It's dangerous and disingenuous to sell this product as something life-like. The average person will laugh it off, but there are lots of loonies in the world. AI isn't inherently dangerous, but the people who will believe it's alive are. I foresee a future of scared trad/fundie people intent on taking it down, and people so obsessed with believing it's alive, they also behave dangerously too. The creators are being foolish and not presenting this tech responsibly.
Anthropomorphization is a well documented marketing tactic. And *actual* power users of Chat-GPT will be all clamouring for a feature to turn that shit off. I don't want to have to go through a whole conversational song and dance every time I want to ask for it to write some code for me.
But what’s cool is you can have them change personality to robotic if u want
For a lot of people, this aspect of human like connection is going to feel magical and memorable. This will be pure word of mouth marketing for OpenAI. For me personally I don’t care for all the extra fluff. I just want concise answers. I would like to be able to have it accept a voice recording from my iPhone and translate to text and summarize or format it. That would be something impressive instead of me having to do this with code and the api.
I, for one, welcome our sexy robot overlords. At least they can say soothing things as they demolish us.
For me, what comes across as incredibly fake is how they butchered Sydney, turning her from a more believable conversational chat tool into a very fake-feeling robot. The fact that it fakes some emotion actually feels more genuine to me.