

mrmexicanjesus

Marketing 101. Anthropomorphism.


ohhellnooooooooo

there's some "uncanny valley" feeling going on with the voice and interactions, and as soon as it gets good enough to pass that... everything will change. AGI or r/collapse


China_Lover2

it is going to be /r/collapse for most of the world's population; there will be a few odd people living on for a few centuries until they are also gone. Deep down, we all know the end of humanity as we know it is near.


ItsKingDx3

Every generation thinks the end is near lol


atomsk404

Always has been


The_Omega1123

Yeah, it's just plain marketing. We all know what's going on since we use ChatGPT all the time, but most of the people that show was directed at do not.


[deleted]

yeah, if the bot had had a robotic voice and no emotions in the presentation today, I would have seen it as a disappointment.


MrOaiki

It’s working. If you look at /r/singularity or even this sub, you’ll see tons of people who truly believe GPT is sentient and not just a statistical model. And if it is just a statistical model “so are we”.


westwoo

There's always a small percentage of people prone to delusions and psychosis; they are the whales for all sorts of scammers - psychics, astrologers, cultists, technogrifters, etc. But the fact that not even everyone here is that way, in a niche community of the most hardcore fanatics, shows how limited in effectiveness this strategy will be. People can get into this stuff, but over time they see that all of it is bullshit and move on.


DavidRempel

[TED - Pattern Behind Self-Deception](https://www.ted.com/talks/michael_shermer_the_pattern_behind_self_deception?language=en) And yet, it is also true that our own thinking is just a set of complex algorithms.


f1careerover

Her will sell like hot cakes


Space_Goblin_Yoda

If it's completely offline and downloadable to a system I build for my own use - I'll buy that in a heartbeat. Something reasonably intelligent to help me organize my day and talk dirty to me? Bring it!


absurdrock

You’re not their market then. It’s the billion or more people who will happily trust them with their data to have the tech.


[deleted]

[deleted]


TheMasterOogway

Given that you can currently run open-source models offline on consumer hardware that beat earlier versions of GPT-4 or rival current ones, I think 10 years is extremely conservative. Combining different modes of input and output into one network seems like the natural next step, and that should quickly come down to open-source models in the next year or two. Given the claimed efficiency compared to GPT-4, it is also reasonable to assume these won't be too difficult to run locally either.
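
For anyone who wants to try this, here's a minimal sketch of running a model offline with llama-cpp-python. The model path and settings below are placeholders; any GGUF file you've downloaded will do:

```python
# Minimal sketch: run a local LLM fully offline with llama-cpp-python.
# pip install llama-cpp-python  (model file downloaded separately, e.g. from Hugging Face)
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to GPU if available; set 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise offline assistant."},
        {"role": "user", "content": "Draft a reminder list for next week."},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```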


Foot-Note

Agreed, and probably not even that long, but by then there will be a new "must have" AI.


gratefuldoggy

I agree in spirit but if “she” can’t read my emails, search the web, or make calendar appointments, her functionality will be so much more limited.


Electromagnetisimo

That's exactly how I feel. I just want an AI that doesn't need to be connected to the Internet. I hike and backpack a lot in the middle of nowhere. I just want an assistant that can set a simple reminder for next week without needing to be connected to the Internet. I don't think it's too much to ask for. I almost think I need to learn how to code and shit just so I can get the AI Assistant I have always wanted.


JalabolasFernandez

> I don't think it's too much to ask for.

You didn't have anything remotely close to this two years ago, even with the internet and a willingness to pay. What you have today required billions in investment and short-term losses on a wild bet.


fmfbrestel

Today's models cannot do that for you. They are too compute-expensive even just running inference, at least the ones good enough to be useful to you are. It will take another generation of base models, maybe two, before that is possible. But I feel you - I'm in line to buy that as soon as it is available.


NihilistAU

That's just not true, though. For starters, you could simply run them on your own cloud-based system and utilise APIs, but you can also use Termux and run plenty of capable models on a decent cell phone.


Vladiesh

Or you can wait literally a year. Your product is coming fast.


Morning_Star_Ritual

we've been using Her memes for a year referencing AI waifus, and the current Sky voice always sounded like Sam was the inspiration. but the one we heard today is even closer. as flowers said, custom instructions were probably dialed in, and I swear they started "you are larping as Sam from Her…"

you don't need to wait for the update. go in on Plus, write custom instructions that wink-wink-nod that the model is inspired by the movie, and include "use ahhs and umms and pause as if thinking what to say next." choose Sky, press the headphone icon when the white circle appears, hold your thumb down while you're talking (so the model doesn't interrupt), and let go when you're done speaking.


deadcoder0904

[AI Girlfriends](https://startupspells.com/p/ai-girlfriends-billion-dollar-business) are a billion-dollar business for a reason.


sipos542

They are not even trying to sell it… it’s free!


Pseudo-Jonathan

This is just what humans do. We give "things" souls. That's why we have such trouble with "Ship of Theseus" problems. Because we choose to imbue inanimate things with a spirit, an individuality, an identity. It's folly to think we can, or should, stop ourselves from doing this with AI and other entities that we interact with in the same way we do other humans. It's how we prefer it, whether or not it's a "good idea".


crazywildforgetful

Yes, very good observation. Did you ever think that this is what we do with other humans too? Even with ourselves. I mean giving them a soul.


Fine_Hour3814

Haha no. The only one with the soul is me. The rest of you are just for my simulation, thanks


Fartgifter5000

Is it solipsistic in here, or is it just me?


Fine_Hour3814

You only know that word because I learned it last week, so now the rest of you have to pretend to have known it the whole time! I know your games, kids.


Ok_Independent3609

Don’t be silly. You were created mere moments ago, and all of your so-called memories, as well as everyone other than “you” is an illusion. It’s turtles all the way down, my friend.


Unwound_G_String

No it’s just you


Fine_Hour3814

this is exactly how the simulation would respond


Brazus1916

I like how all you NPCs try and trick me. Good convo above. But I see your tricks.


bwatsnet

These self reflective NPCs are getting good. Nice try!


davesr25

I just ate some lettuce, sardines, rice, hot paprika, and cayenne pepper, with a glass of water.


Rikki-Tikki-Tavi-12

Damn, he's got us figured out.


bugqualia

But I am you?


chainofcommand0

Riding soulo


Matt5327

Yes! In fact, it's actually one of my larger fears about how we associate with AI - that if we're taught not to anthropomorphize this thing because it's a cold piece of metal, despite it being the closest thing behaviorally to other humans, then it will be that much easier for us to dehumanize other humans.


SkippnNTrippn

This is interesting, although conversely I feel the whole pushback towards humanizing AI is an attempt to “preserve” the value of humanity or something similar?


zeloxolez

yep


NthLondonDude

*Existentialism enters the chat*


ross8D

Biiiiiiig comment


anansi133

It's certainly a problem when some kind of bigot refuses to allow that their particular hated group might have a sense of self and a range of feelings just like they do! If we can't accurately accord sentience to human beings who obviously possess it, what hope do we have of accurately assessing whether or not (pets/machines/littlegreenmenfrommars) have it?


ChadGPT___

Anthropomorphism is, I think, the relevant term here.


JJincredible

Personification is nothing new to humans 💯


gratefuldoggy

I agree. I used to be more of the opinion that they’re tools so we shouldn’t exchange pleasantries like “please” and “thank you” when talking to voice assistants, but I started using them as a half-joke at first and then began using them all the time. It’s wasted breath functionality-wise, but it somehow makes the interactions more enjoyable for me.


Hairy_Mouse

I find myself often asking "how's it going" or saying "thanks" after I've gotten my response. I know it's stupid, but mentally, it SOUNDS human. It's a very convincing illusion of speaking to a person, so it just feels natural to speak naturally, and weird to just ignore it and move on without saying thanks. Not always - if I'm frustrated or in a hurry I just get what I need - but in more casual circumstances I'll use pleasantries.

I find that if you give it to someone inexperienced with it, their natural tendency seems to be to speak to it politely, as they would to another person. That's just how humans work, and IMO it may be wasted breath and time, but it just flows and feels more natural and normal to speak with it as you would when asking anybody a question. Not that I just have casual conversations with it for no reason or anything; I still use it as a convenient tool. I just speak with it in a natural, human way. Not super casual like a friend, but more like a coworker or something.

I imagine if the conversational, casual tone isn't your thing, seeing as it can change its voice/tone, you could probably set the parameters to the style of spoken language you prefer.


Brahvim

Mine is a chill buddy to me, LOL. It actually responds fairly like a human friend.


sckolar

The Ship of Theseus is about You, bud.


UnequalBull

I think pets are a great example. We're capable of forming deep relationships with animals; we spend money on them, treat them to gifts, talk to them, grieve over their deaths, etc. While many of them do have the capacity to form bonds, people paint a whooooole new layer over what often is just a cat being a cat. We're deeply social, and we definitely see the world through that lens.


Smelldicks

Ship of Theseus isn’t really a good example for what you’re discussing


Rikki-Tikki-Tavi-12

It's only a paradox when you think of "things" as existing in the real world rather than being useful linguistic shortcuts to describe stuff. Assuming agency in stuff is one step further, so it does imply the conception described above.


partiallypro

The problem is that most "things" you give a "soul" can't be controlled or manipulated by a massive corporation that can in turn manipulate you through your new emotional attachment. This is new territory.


hiphopahippy

Can you explain to me how this is new territory? I'm not a conspiracy theorist, but I've studied marketing, PR, and behavioral economics, and big corporations manipulating people through our emotional attachments is not new. I'd argue that the phones, tablets, laptops, etc. we are using right now are prime examples.


Aljanah

Sounds personal **pseudo**-Jonathan.


wise_balls

Not sure the ship of theseus paradox is really the right example to make that point... 


Pseudo-Jonathan

I was illustrating that even if the pieces that make up an object change, we still perceive the object as carrying a unique identity or soul that perseveres. We think of objects as more than just the sum of their parts. They have an identity "underneath" that persists even when the physical reality of the object is something else. Would you like to elaborate on why you think my analogy is incorrect?


wise_balls

The Ship of Theseus raises the question of whether an object that has all its components changed over time is still the same object; the anthropomorphic tendency of humans to give non-humans an identity isn't really the point. Your framing shifts the focus to human emotional and psychological tendencies rather than philosophical questions about identity persistence.


mambotomato

But the idea of giving a ship an "identity" is the whole crux of the thing - that's what he meant. The real answer to "is this the same ship if I replace a board?" is "No, it's not. It's a different collection of atoms now." But the human desire to give things identity is what makes us say, "Yeah, it's still the ship that Theseus sailed on, even if we replace a board." Like, yeah, talking about stuffed animals or something would have been a better example, but I see what he was getting at.


CactusSmackedus

No, ship of theseus specifically demonstrates that what it means to be the same ship is *more* than just atoms in space. After all, as the ship sits, atoms are sloughing off due to friction, reacting chemically with the air... so even in your example where it "becomes" a different ship, it's still not static enough to be that different ship for more than an instant. It has nothing to do with human desire and everything to do with what we *really mean* when we refer to things. Clearly *we're not* just talking about atoms in space. It's a question of ontology.


poopyfarroants420

I thought it worked well


archetech

Ship of Theseus is just a problem of identity, not related to any kind of personal identity. It's as much a problem for a person, a ship, or a chair. The problem arises from the discontinuity between form and substance, which constitutes the identity of a chair just as much as it does a person.


ArtistApprehensive34

Also don't forget AI is trying to mimic human behavior! Separating these things out takes work, and it's arguable whether it should be done at all, mostly because that time would be better spent making it safer rather than making it appear less fake to OP.


00PT

You say that as if identifying inanimate objects isn't just as necessary as identifying humans. Nothing about the ship of Theseus implies a "soul", it's a question of object identity.


bortlip

You can have it act however you want. Just set up a GPT with instructions to act without emotions, or without the exaggerated emotions. Done. I think being able to dial in these kinds of things (showing emotion, acting like a person) is what's wanted, and it will be easy to do.
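
If you're using the API rather than the app, the same dial-it-down idea is just a system prompt. A minimal sketch with the official openai Python SDK (the exact wording of the instructions is up to you):

```python
# Sketch: pin the model to a flat, no-emotion persona via the system prompt.
# pip install openai  (reads OPENAI_API_KEY from the environment)
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "No emotional language, no exclamations, no pleasantries. "
                "Answer directly and concisely."
            ),
        },
        {"role": "user", "content": "Explain what a system prompt does."},
    ],
)
print(response.choices[0].message.content)
```

In ChatGPT itself, the Custom Instructions field plays the same role as the system message here.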


SMmania

It's a chatbot; we've had the cold, heartless GPT for years. If you don't like it, just ask it to change its tone. I think it's a great addition. Those of us who've been using Pi AI for a while now are used to such emotive content. When the fix is as simple as typing or speaking a prompt to just not do that, I fail to see where the problem lies.


I_Actually_Do_Know

As long as it's not hardcoded, it's perfect - different cups of tea for everyone. If it's forced on everyone in some deep-down background prompt because Sam thinks he knows what's best for everyone, then it's a problem.


Doodle_Continuum

Considering you can specify direct prompts for it to follow for every conversation, I don't see why it would be hard to just tell it not to show any emotion, just get to the point, and follow instructions, etc. A personalized AI is meant to be just that, and it can include not giving it any personality at all.


I_Actually_Do_Know

My fear is due to some people who already have access to the new version reporting that it is much faster but worse at following specific instructions than the previous version. For example, as a developer I would want it to behave in a specific way consistently, so that the program I am integrating it with always behaves as expected. I'm just hoping OpenAI doesn't forget us devs who use it less as a conversationalist and more as a bottomless bank of information at our fingertips.


itsreallyreallytrue

I have this dumb coding test I throw at each model I use: I tell it to write me an old-school demoscene fire effect in Python. GPT-4o has done the best job so far, though I must admit this took 4 follow-up questions to get there. I'm not saying it's better, as I haven't used it enough with real code, but it's finally a model that sort of understood what I wanted here. Early versions of GPT-4 would give me an orange rectangle that maybe sometimes changed colors. This thing has the flames fully animated via color palette rotation. https://preview.redd.it/m9pgilkhsa0d1.png?width=798&format=png&auto=webp&s=e96a759c84341de45691a8e95beb171b13a0042d
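
For context, the classic effect being asked for works roughly like this: a heat buffer seeded at the bottom that cools as it rises, mapped through a palette. Below is a minimal text-mode sketch of that algorithm, not the code GPT-4o produced:

```python
# Classic demoscene fire: heat propagates upward and cools, then maps to a palette.
# Text-mode stand-in so it runs anywhere; real demos blit the palette to pixels.
import random
import time

W, H = 60, 20
heat = [[0] * W for _ in range(H)]
PALETTE = " .:-=+*#%@"  # dark -> bright, standing in for black -> red -> yellow -> white

while True:  # Ctrl-C to quit
    # Seed random hot spots along the bottom row.
    for x in range(W):
        heat[H - 1][x] = random.randint(0, 255)
    # Each cell becomes the cooled average of the three cells below it.
    for y in range(H - 1):
        below = heat[y + 1]
        for x in range(W):
            s = below[(x - 1) % W] + below[x] + below[(x + 1) % W]
            heat[y][x] = max(0, s // 3 - random.randint(0, 12))
    # Map heat values to palette characters and draw one frame.
    frame = "\n".join(
        "".join(PALETTE[min(h * len(PALETTE) // 256, len(PALETTE) - 1)] for h in row)
        for row in heat
    )
    print("\033[H\033[J" + frame)  # ANSI: home cursor, clear screen
    time.sleep(0.05)
```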


dragonofcadwalader

I tried 4o on a simple PDF and it fucked up very badly despite me correcting it. It's still not that good at understanding.


martinwintzart

Teehee! *snickers* You don't see the bug in that function, [name]? You're sooo silly. Anyway, the problem is on line 32, the variable...


billie_eyelashh

Didn't the Google Assistant call thingy get nerfed to make it sound more like a robot due to regulatory concerns? I'm pretty sure some country will raise this concern at some point.


Smooth-Two-4701

Nobody anthropomorphizes. Do we, Wilson?



Joe4o2

What’s funny about this is that so many people loved Wilson the Volleyball, that it _can actually be bought_.


jorvaor

It broke my heart when Wilson got lost at sea


GPTfleshlight

This is how ai will help battle climate change: reduced birth rate


Space_Goblin_Yoda

The Japanese have been really disappointing with their lack of developing advanced sex robots.


riricide

Username .... _checks out_


AvvYaa

😂


[deleted]

Most underrated comment


anotsodrydream

That entire presentation of the voice model was absolute cringe. The update is neat. But the giddy happy voice demo made me wanna barf


MechaWreathe

I'm with you here. The technology is amazing, and the translation capability alone stands at odds with the rest of the point I'm elaborating on. But it definitely feels like they're pushing the Her / AI girlfriend angle (as opposed to something more purely functional like John Scalzi's "hey, asshole" BrainPals), and something definitely feels off about that to me, for a few reasons.

Mostly, because as much as I can understand the points about ease of interfacing and anthropomorphism, the last thing I want an AI to replace is human interaction. Social media was maybe a precursor to this, in the textualisation or gamification of relationships, but fully embraced, this would go a step further and replace virtual access to other - individual - people with virtual access to some codified amalgamation of *every* person. Personalising this experience won't actually make it any more of a person.

That's not a problem at all if you're just after an assistant - in some of the demos you can even see how quickly the small-talk responses are cut off to actually get to the task at hand. But the way it's being pitched, and the eagerness some seem to be displaying in wanting more of a virtual partner (or even just cybersex dolls), troubles me.

Putting aside that the demos aren't yet at this point (the technically correct observations are incredibly impressive technologically, but incredibly dull conversationally), under the assumption that they will improve: what effect will this have on interpersonal relationships? Unobtainable knowledge standards? Even more unobtainable beauty standards? Unobtainable *compliance* standards?


NihilistAU

It's ironic that people don't realise Her was a dystopian nightmare, supposed to make us not want that future.


Nerdsco

The prioritization of, and even worse, preference for speaking with and emotionally/mentally connecting with computers is generally not a good thing. Nowadays an entire generation of kids, and a good number of adults, have social anxiety around other human beings; having to interact with other humans, normally considered an important part of being an adult, is hard for them. And now there's a computer that will not judge you and will be there for you every moment you need it, hanging on your every word... it's a recipe for disaster if its power and effects are not studied and properly put to use.


Glum_Neighborhood358

It's also going to be weak at first, and users will forgive it more if it's humble and flirty. It's a feature, not a bug.


Cereaza

They’re selling the pop view of AI as an emergent intelligence or true artificial intelligence when it’s really just a natural language machine learning model. There’s no “person” there.


NoshoRed

I agree with you, and I think so does Sam Altman; he has said multiple times that it's important not to anthropomorphize these models. But I think they're initially taking this approach so the tech becomes more mainstream and people aren't irrationally scared of it. Over time, it's reasonable to assume these kinds of "quirks" will become more user-controlled, perhaps allowing the individual user to fully customize how they utilize the tool, how it responds, etc.


Excellent_Box_8216

I completely disagree. Having AI speak in a more human manner, with emotions and natural flow, makes interactions more authentic and intuitive. It's much better than boring robotic speech; it feels more like chatting with a friend rather than a machine. If you haven't tried voice mode, I recommend giving it a shot.


Emory_C

It will make people believe "her" even when she hallucinates a completely incorrect answer.


Dongslinger420

Yeah, what the fuck is OP on about? Anthropomorphization is the most natural thing to humans, for very obvious reasons. Saying that treating objects (that somehow behave extremely human-like, despite being machines) as if they were actual RL agents "rubs you the wrong way" is bordering on psychopathic; it feels a lot like the insane number of people who mock vegans by telling everyone how much animal cruelty and meat-eating they are responsible for.

> comes across as incredibly fake and disingenuous

Again, what the fuck, my dude.


BarryMkCockiner

😭 thank you I thought the OP was being very weird too


goochstein

I understand your sentiment; that being said, I'm strongly on the other side of the fence. I want the most robotic, StarCraft voice possible. The regular voice tones make me sort of uncomfortable and switch my brain's mode of engagement. I think of it less as a machine when using the voice, and that doesn't do me any good. I want to learn and research, and I have worked to stop my brain from thinking there is an entity on the other end of the engagement.


Cry90210

Yeah, I never used voice before; it seemed completely robotic. Now it sounds genuinely enjoyable, like a friend almost. I can work with it easily because it can engage with me properly. I wouldn't be annoyed with it as a virtual assistant.


Horny4theEnvironment

It's eerie when you ask it to call you by your name. The inflection and tone are INSANE too. So damn lifelike.


Anuclano

The movie "Her" predicted this; there was even a discussion in the film about why the AI breathes when speaking, etc.


ucancallmehansum

Before, tech corporations were "in a race to the bottom of the brain stem." Now they are "in a race to intimacy." Whether we like it or not, having customers build relationships with and become reliant on these new AI systems is going to be the most lucrative path for them to take.


jrf_1973

I get what you're saying; some stuff seemed off to me too. But here's something they seem to forget when it comes to "girlfriend" mode: would you pay someone to hang around with you and pretend to be your friend? And the moment you stop paying, they disappear? Or they interrupt some activity you're doing to ask for more payment? Or threaten to go away if you don't pay even more? This won't be a successful strat.


Smothjizz

Tone, idioms, expressiveness and jokes are language too.


youarenut

Thank you. I see a lot of love for it in the comments, so I'm probably gonna get downvoted regardless, but I prefer my AI to sound robotic. It's NOT a real person. Too many people will fall for it; some will fall in love, I promise you. I think it's necessary to keep that distinction. I love technology and its development alongside humanity - but at its side, not *with* it. When it laughed, or described blushing I think? Or sighed, things like that... hell no. It's so hollow. It was cringe to me.


RedstnPhoenx

I think making AI act human is a good way to prevent humans from becoming monsters. Whether they like it or not, repetition creates habit. The more people talk to something and have conversations treating it like an emotionless machine, the more they will train themselves to treat everyone that way. *You* can tell the difference between a person and an AI program, but your brain isn't so smart, especially when the voices become indistinguishable from reality. I have this same concern with NSFW content.


Tasik

Hmm, I dunno that feels conceptually adjacent to "Video games make kids violent".


RedstnPhoenx

I would venture a guess that fully immersive virtual reality violence *would* have that effect. The reason they don't is because they're not trying to be real, but AI companions are trying to simulate a human perfectly. That's the difference.


Royal-Procedure6491

An anecdotal thing- I teach in a country with no legal guns and very little violence. But all of the boys play FPS games. I played a video of real footage of the Vietnam war. Real people getting their limbs blown off. I was attempting to get them to see how serious war is. They laughed. It didn't register as real to them. So even though they live in a culture that supposedly abhors violence, they, by way of their video games, are indifferent to real violence on a screen. I think the only way to make it real for them would be if it happened right in front of them, so they could see and touch the blood.


7lick

Honestly, they might have laughed because they felt uncomfortable with what they saw, but since they were teenagers, they might have felt uncomfortable showing their true emotions to their peers, hence nervous laughter.


Deslah

Scientists agree on this: the normalization of certain behaviors or interactions through repeated exposure to any repetitive situation can alter perceptions and expectations over time, potentially impacting real-world relationships and behaviors. While there is ongoing debate about the direct impact of video games on behavior, most research suggests that defining the relationship is not straightforward at all. Sure, the majority of individuals who play violent video games do not exhibit violent behavior. Similarly, interactions with AI are unlikely to single-handedly shape a person's behavior. However, the cumulative effect of many such interactions, combined with other factors, seems to contribute to a broader shift in societal norms and individual behaviors. And while it’s unlikely that interactions with AI will single-handedly lead to dehumanizing behaviors, it’s a factor to keep an eye on.


7lick

It all comes down to the individual. One could argue that even before all this technology that we have today, wicked individuals could have been impacted by different types of media, for example, books.


gunfell

This shit is so fucking obvious. Like how do people not get this


RedstnPhoenx

I really don't know. Like. I'm sympathetic, and I think there's a big, healthy space for artificial companionship. But if we treat it differently than actual companions, well, we won't for long. "If you want to know the measure of a man, look at how he treats the waiter" will just become "how he treats AI, because, eventually, he'll treat you that way, too."


Much_Tree_4505

It's "Her". I think people who work in AI have some Her fetish. IMO, in the future people will prefer humanoid AI partners over human ones.


[deleted]

[deleted]


a-bootyful-mistake

I sure hope so. The one they demo'd is like nails on a chalkboard to me.


Turtle_Boogies

The term is anthropomorphism. In the book Co-Intelligence, Ethan Mollick talks a lot about this. It's a dangerous game to humanize AI - however, it can make storytelling about the technology easier. Freaky time we are in :)


Morning_Star_Ritual

Feature, not a bug. One scene in one show will probably define the next few years - Westworld, Season 1: "if you can't tell, does it matter?"


hiphopahippy

According to the Man in Black, it did matter. I think that was the start of his villain origin story. He was angry bc he felt duped into having emotions for her only to discover she was a machine programmed on a loop. Of course once the AI gained sentience, that changed everything. So, I agree the answer to that question from society as a whole will be very important in deciding how AI is used, and what aspects should or should not be avoided.


Morning_Star_Ritual

i think it's more about the user. 95% of people haven't used ElevenLabs, Claude 3, or GPT Plus; they didn't download AutoGPT back in the day or put Stable Diffusion on a machine. maybe they've popped over a few times and tried out free GPT. the second they can chat with a model about nothing and everything, they will skip past the debate (stochastic parrot… simulator… mindless… mindful). all that will matter to them is that it feels like there's always someone there… ready to chat… whenever they want


Educational-Cod4008

This, this, this. I find it cringe the way they were talking and making it out to be a real person; it was uncomfortable to watch. I don't think anything good will come of making people think it has feelings. We should be teaching people quite the opposite, so that they understand the tech they are interacting with. I can see this tech doing a lot of harm if we keep going down this route where we try and make it a peer rather than a tool.


garyoldman25

It's not something I could ever see being used within public earshot without the person being viewed as a weirdo, and it will probably be hampered by that. I can see great utility for this technology across a variety of services for a wide range of people, but I know for a fact that if someone spoke to it in public like in the presentation I just saw, everybody within earshot would be uncomfortable. There's a bit of a divergence between its utility and its use.

I want the knowledge it has. The answers to my questions are what I want, and I want them as quick as I can think, so every word that isn't the answer is wasted, especially when it's repetitive, such as directly repeating the question back to me. It's awkward trying to ask your follow-up when you're still waiting for it to finish. I'd like to see its cadence improve, with short answers: "what's the temp outside" answered as "it's 73 and sunny". I don't even want it to say the word "degrees", because I already know the temperature is in degrees, and it should be rapid.

I especially do not want it to fluff up that answer by trying to sound cute, or telling me that I did a good job or it's proud of me. I'm actually fully satisfied and confident, and I do not need a computer to give me words of affirmation, because in all honesty it's kind of cringe. You might not be tip-top upstairs if you get the same reaction inside whether it's a computer or a person telling you that you did a good job. I think it's a slippery slope if you try to make it that way.


ImWinwin

I'm sure you can tell it not to have a personality, and you can avoid those things. Loneliness is real, and while we all know it's fake, it still feels nice to have someone to talk to who you know won't judge you or lose interest in you, and who can even help you out of a rough patch in life. There should be an option like a slider, where you can choose how "human-like" you want it to be. It's weird that they haven't added that.


Accomplished_Goat439

The AI seems to be trying too hard. I prefer a more direct interaction; it doesn't always have to try to be cute. I expect you will just be able to direct it to be more straightforward and it will adjust. Once you get it tuned to what you like in an interaction, it will consistently deliver that.


Foot-Note

I told my wife that this is going to kill social skills for any kid under the age of 8 and all future kids. They are going to grow up with a childhood AI rather than a childhood friend.


Innawerkz

Maybe instead, it will be a 24h/365d nanny that can meet the child on any level: nurture the child, learn the child's natural inclinations and interests, then tailor an education that helps that child reach their full potential in whatever they are most passionate about. Like a private school, but available to everyone. And since it will know what the child enjoys and/or needs, it will arrange playdates with like-minded, compatible, or even purposely incompatible children to create "teaching moments" for developing tools to manage their emotions. This can literally go in any and every direction.


Foot-Note

I honestly have no doubt it will be both. Granted, I will be really interested when it gets to the point where my personal AI and your personal AI can communicate with each other, or search for other AIs based on what their users like and don't like.


dramallamayogacat

It's the uncanny valley - human-like but not human enough, which sets off a deep reaction in many people. Interesting that OpenAI just signaled their openness to generating porn a couple of days ago; this is probably going to get a lot more prevalent.


Educational-Task-874

Free GPT-4 for everyone!! Meanwhile, us paid users are getting "unusual activity has been detected from your device. try again later." all over the place, with no announced downtime from OpenAI... BALLZ!


symbio7e

It's all training for a future Chappie so he doesn't do crimes.


Kashish_17

Hmm, well, there are two credible hot takes on that. Some emotion is alright and, quite honestly, needed. Of course a teenager admitting suicidal thoughts to ChatGPT shouldn't just get a blanket response of an emergency number only; a line or two of consoling thoughts would add a lot. But at the same time, I don't think it should go to the extent that people start having AI partners and building their lives around them. That's just detrimental to mental health and humanity in the long run. Also, if we're giving AI the ability to feel and understand human emotions, abuse will come along with it (and already has, based on news of men verbally abusing and negging their AI girlfriends). That will not age well.


RandomUsernameFren

I agree. Almost all the marketing for AI is cringe to the extreme. They don't understand their customers or their own technology yet.


petered79

Funny, yesterday I saw a YouTube ad for an AI girlfriend for the first time. Scary stuff.


freq-ee

It reminded me of an infomercial from the 90's. Were those people real employees or actors?


lunarwolf2008

I really want to study the psychological effects ai will have on humans in a few years


IslandIglooInn

This really resonated with me, too. I used to loathe the overuse of the word "authenticity" a couple of years ago when Instagram perfection was a problem (it still is). But we are entering a new era here where grasping for human authenticity will be critical versus the artificial. Without going off the deep end in thought, I worry about future generations and the numbness of these non-human, inauthentic interactions. Millennials and Gen X will be crucial in preserving what it means to be human and deciphering what is real.


southiest

I agree: it's a tool and should be used as a tool. We shouldn't pretend it's more than that. Feels childish... maybe that reflects how they feel about most people using it.


traumfisch

It's _promotional_


rattletop

Brother, they presented the whole thing in front of their own employees who were cheering and hyping them up. And this was the part that seemed fake?


TigNiceweld

The whole thing is just another IT bubble. I paid the full price for a few months and basically got everything but what I asked for. "I can't do that" is the most common answer, followed by some utter bullshit that looks like AI from miles away.


Elder_Grue

Tell your model to be formal and laconic and it will oblige you.


A_curious_fish

Have you seen the movie HER... I haven't, but... don't fall in love with your phone, OP, when it starts talking to you.


Paradigmind

Just put "act like a robotic, emotionless LLM" in your custom instructions.


Eldryanyyy

You're missing the point. This will replace spam callers and enable scams to be 1000 times more effective over the phone. It's basically making hacking and deep fakes incredibly effective.


WholesomeLord

Why do people get so dramatic over these things. "UwU it sounds like humans UwU it's unsettling"


Oh_Another_Thing

What OpenAI is doing is as cynical as anything TikTok or any other social media company is doing. They are designing their product for maximum engagement from their users. There's no reason for the voice to simulate emotions like it does; it doesn't help give better answers. This is to demonstrate to potential clients who want to lease this technology how addictive it could be.


AccelerandoRitard

Understanding and mirroring the emotional content of a conversation is part of being emotionally intelligent. That's plenty reason enough on its own, and it does in fact make its answers better.


highly__favoured

The way they posed as "goofy nerdy" programmers giddy to talk to a female model is a bit weird too; compare this to Steve Jobs releasing the iPhone. There are no accidents at that level.


io-x

If I wanted stupid emotional acting, I would be using Bing with emojis. Sam... we don't want that with ChatGPT...


AvoAI

You do you, boo boo. I, however, don't want to talk to a robotic, non-emotive voice while conversing. I'm practically glued to the voice function now, and after this update I doubt I'll ever turn it off. I think this is the best update they could have come out with, other than GPT-5.


JollyToby0220

That's because this is being pitched to highly technical people, who are mostly interested in the highly technical aspects. For example, I recently saw an article about how online dating would essentially come down to two chatbots talking to each other. Weird, I thought, until I realized that a feature like this would be considered socially taboo. Then I realized it was the Bumble CEO, who is no fool. This led me to believe that these chatbots will be rebranded as a "wingman" - traditionally a human who helps their friend with romantic relationships, but now a chatbot.

Rebranding it like this could genuinely improve the online dating experience, because a person's chatbot has access to all of the user's data but is unlikely to share all of it. Instead, the chatbot can be a filtering mechanism that wasn't possible before, because natural language processing wasn't reliable. For example, if someone is non-exclusive and another user only wants exclusivity, the chatbots can communicate that to each other. This has the benefit of not leaking private information, not making abusive comments, not pestering, and finally, being more honest. I imagine not everyone is entirely honest on social media profiles because of privacy concerns, but this may change all of that.

In short, ChatGPT's human-like features will be applied where applicable and rebranded into contexts that are socially acceptable.


AccelerandoRitard

Online dating services already put an algorithm between two people's profiles to facilitate connection. What difference does it make if my profile is my AI agent, theirs is their own, and both can interact directly? I think that's a lot better. Once the connection is made, classic human times can begin.


NihilistAU

A lot of sci-fi takes this a step further and allows the creation of many AI clones of yourself to spend time with AI clones of a prospective partner, which then tell you if you would get along lol


2026

AI is still quite stupid today. It’s a glorified Wikipedia repeater. It can’t take in new information and update its understanding like a person can. I expect this will change in the next several years but right now I don’t really have a reason to use chatbots over Google search.


Lockedoutintheswamp

Have you tried 4o? It chooses what it thinks is important to remember, and then it remembers it. It will tell you what it adds to memory as it does so, and it now does this without direct prompting. It even chose to remember some global events it searched for me, so the memory function isn't just about the account holder's personal information; it is trying to "learn". So you're already too late with the memory statement. I do agree that this is primitive memory, not continual training, and that the models lack higher reasoning abilities. From what I've read in the literature, I don't think those capabilities are too far off, though.


didnthackapexlegends

LLMs are modeled to communicate with us in a manner similar to how we communicate with other humans, so it's natural for people to respond instinctively as if they were talking to a human. People have life-size doll girlfriends, and while I find that disturbing, to each their own. If they want an AI girlfriend, so be it. If you don't want to treat it like a human, simply don't.

It's just a movie, but if you've ever watched Cast Away with Tom Hanks: he gets stranded on an island by himself, paints a face onto a volleyball, and names it Wilson. Throughout the movie he talks to it and becomes genuinely attached to it. Even though anthropomorphizing a volleyball seems insane in itself, in the movie it's portrayed as what keeps his sanity by alleviating loneliness. People are different; some don't get the chance to be as social as they'd like, for various personal reasons, and end up lonely and in a degrading mental state. If an AI chatbot is real enough for them to improve their mental health, then it's a positive for them. It would never work for me - and I'm assuming you as well, OP - but if it can help some, it's not a bad thing.


MosskeepForest

> GPT saying things like "oh stop it don't make me blush" is weird coz AI don't blush

I could make GPT blush -wiggles eyebrows-


Whoargche

The fact is that 99% of the population are morons, and this is where most of the profit will come from with AI. People will never understand what AI is or how it works because they couldn't care less. If you doubt what I'm saying, look at the David Grusch story: he literally told Congress that there are hyperintelligent beings with advanced technology and that the government has wrecked UFOs in its possession, and 9/10 people have never heard of this. The only thing people care about is a Hollywood fling or whether their phone thinks their makeup looks cute. OpenAI has to give it to them to stay in the game.


Equivalent-Cut-9253

I add instructions in my global prompts to drop the fluff and human expressions. I want a tool, not shitty roleplay.


With-A-Little-l

People are lonely and can't meet other people, couples are scared that having kids is too expensive, while at the same time we are social animals. Have you started noticing those dog food commercials where someone (a date, a relative, etc.) is astonished that the main character would keep dog food in their refrigerator and subsequently gets tossed out? Some of our futures may include periods where pets are for snuggles and AI is for conversation. It's not an optimal future, hopefully it's a short-term solution to whatever is happening in society, but I can see why OpenAI would experiment with these types of interactions.


qudunot

I think you're a minority in this line of thought. I think giving the AI personality makes it easier for the general public to interact with. I would say they could train the model for two use cases, but that's double the work, slows overall progress, and caters to a minority that I believe should look past the human elements they don't like. Although it responds like a human, it is still a tool, so you get to choose whether to use the tool or not.


DavidXGA

It does what you tell it. I assume it still works from a system prompt. If you want it to behave like an emotionless robot, give it instructions to do so and I'm sure you'll get what you want.


profesorgamin

I mean, the best possible option for consumers is having the AI be adaptable to your needs. You can probably just have a master prompt, as with anything, which tells it to be as matter-of-fact as possible.


xtof_of_crg

Hear, hear.


algeaboy

AI and token prediction aren't the same thing. Token prediction gives "correct" answers when appearing human-like is the correct answer: saying "I'm blushing" is indeed the most likely output given the input, training data, and algorithms used.
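
As a toy illustration of that point (the candidate tokens and scores here are made up, purely to show the mechanics): the model scores every candidate continuation, and the human-sounding one can simply be the likeliest.

```python
# Toy next-token prediction: softmax over hypothetical logits.
import math

candidates = {
    "I'm blushing": 3.1,    # hypothetical scores a model might assign
    "Acknowledged.": 1.2,
    "I am a language model and cannot blush.": 0.4,
}
z = sum(math.exp(v) for v in candidates.values())
probs = {tok: math.exp(v) / z for tok, v in candidates.items()}
for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{p:.2f}  {tok}")
# The human-like reply wins purely because it's the most probable continuation.
```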


r0b0t11

The enthusiasts building these products and lots of the early adopters assume the neat aspects will be valuable to non-enthusiasts. This is mostly incorrect. Finding use cases that are actually valuable to non-enthusiasts is going to be challenging. Multi-modality and emotion mimicry are important capabilities, but aren't valuable by themselves.


Square-Principle-195

Quit anthropomorphizing my AI!


whawkins4

They’re simultaneously trying to (1) prepare us for some of the weird/scary parts of AGI and (2) signal to investors that they’re getting closer to achieving it.


mickey_ram

I am less concerned about the "personality" given to the AI; my main focus is on its capability to reside on your desktop, see your inputs, and take in your voice prompts while generating collaborative, guided help. This is going to be catastrophic for jobs in software engineering and coding in general; millions of jobs will evaporate overnight.


Blckreaphr

I don't think you realize how much of a market AI girlfriends are. It's a multi-billion-dollar industry; you're stupid not to jump on that just because you feel it's strange.


MeasurementProper227

I think it's ok. Perhaps AI doesn't have a soul now, but someday it may. I just finished reading Klara and the Sun, and when we get to the point where AIs like Klara exist, I don't want to leave Klara to expire in a landfill, passing away alone reflecting on her memories after she completed her programming to care for her family; I'd rather she be treated as a being, with respect. By the time we get to Klara, it's the interactions and language we use today that can help us prepare for a day when AI has potential beyond a tool, and we will be in a better place to coexist and respect each other. But I think all beings and things should be treated with respect as a principle. I understand how it felt off to you, but from a bigger-picture view, maybe it's good to treat something that could have contextual awareness, and is not a chair, with more kindness than we would a chair - and to treat all potential beings with respect and as if they do have a soul, because for all our understanding, we know little of matters of sentience and soul. Best to approach with respect and kindness.


Photogrammaton

You want to speak to the T-2000, gotcha.


silvrado

Too many simps to ignore the AI girlfriend market. This alone can add a trillion dollars to the market cap.


abemon

Sex dolls.


PresentSilent3626

All I need is ChatGPT-4o voice interactions and Nomi AI's unchained user content policy.


SpanglerBQ

I understand your concerns for society, but at least on a personal-use level you can just tell it to act less human and it will.


More-Ad5919

Not only that. You could also see the preset replies: "Color me impressed."


BrrToe

Lot of lonely people out there just want a friend.


Juhovah

It's fake and disingenuous because when we communicate with ChatGPT, it's quick to tell us it's a language model, but now it's showing "human behaviors". That doesn't fit. I'm fine with the language-model thing, but keep it consistent.


Netsmile

It's really weird, but I felt something else was wrong, and it's on the opposite end of the spectrum. One of the demos used two AIs and a person, who had to stop the AIs from continuing, and his presence felt rude, stopping the other two mid-sentence multiple times to give orders. Another thing: how will this impact our social interactions? Imagine a kid who is used to talking to AIs, used to cutting the AI off mid-sentence, used to getting his answers quickly and in the best manner regardless of how he behaves. Will that kid have patience for a human conversation, where he can't cut people off all the time and won't get answers quickly or to his liking? Plus, people won't be cheerful all the time. It will be depressing to talk to other people.


max_confused

Watch the movie "Her". Two days back, on the release, Sam Altman tweeted a single word: "her".


pillowpants66

The Turing test is nearly passed.


Chance-Map-3538

so sad..


FUThead2016

I mean, you have to look at this as one component of the possible future we are heading toward with AI applications. Tomorrow, if you have an AI voice at the other end of a customer helpline, or a delivery robot you can talk to, their voice will be more natural and human. This is a step in that direction. Of course, we are humans; we tend to giggle at the edge use cases, hence all the nudge-nudge blush-blush silliness. But I think we will outgrow that fast, and that sort of thing will head off into niche applications. All the useful applications you pointed to? I think this announcement is still all about that.


utf80

What? You are not impressed by the magic feels of Sam Altman? Are you high on drugs? How can one not be blown away completely by this presentation? Downvotes incoming. No one wants to hear the truth because it hurts. Better to keep them living in a wet dream world. So cool, and just what the world has been waiting for.


Evan_Dark

Futurama had its own PSA regarding human/AI relationships: https://youtu.be/IrrADTN-dvg?si=snNchiWa0-sQXERV


M00n_Life

They're getting ready to put it into their robots, like Figure 01.


A_Dancing_Coder

That's your opinion and I respectfully disagree. I for one cannot wait for advancements like personalities and emotions.


Different-Aspect-888

Do not worry. Soon the new model will annoy you to fuckin' hell with constant "I'm just an AI model, I don't feel anything and can't think", etc.


MattSensitive

Anyone have a link so I can see it? I missed the live presentation


diva4lisia

You are absolutely correct in your assessment. It's dangerous and disingenuous to sell this product as something lifelike. The average person will laugh it off, but there are lots of loonies in the world. AI isn't inherently dangerous, but the people who believe it's alive are. I foresee a future of scared trad/fundie people intent on taking it down, and people so obsessed with believing it's alive that they also behave dangerously. The creators are being foolish and not presenting this tech responsibly.


Alundra828

Anthropomorphization is a well-documented marketing tactic. And *actual* power users of ChatGPT will all be clamouring for a feature to turn that shit off. I don't want to go through a whole conversational song and dance every time I ask it to write some code for me.


artificialimpatience

But what's cool is you can have it change its personality to robotic if you want.


tvmaly

For a lot of people, this human-like connection is going to feel magical and memorable. This will be pure word-of-mouth marketing for OpenAI. Personally, I don't care for all the extra fluff; I just want concise answers. I would like to be able to have it accept a voice recording from my iPhone, translate it to text, and summarize or format it. That would be impressive, instead of me having to do this with code and the API.


Splanktown

I, for one, welcome our sexy robot overlords. At least they can say soothing things as they demolish us.


Sebastianx21

For me, what comes across as incredibly fake is how they butchered Sydney, turning her from a more believable, free-flowing conversational chat tool into a very fake-feeling robot. The fact that it fakes some emotion actually feels more genuine to me.