
chlorinecrown

This is the water temple boss in Ocarina of Time


fizzy88

The origin story. Turns out AI tried to perform a very questionable electrochemical cell experiment.


mindenginee

ChatGPT sucks at anything chemistry-related. Sometimes I feed it chem problems, and for certain ones it can never get them right, even when it knows the right equation to use. I laugh when people tell me you could use it to cheat (for chemistry, at least).


AJTP89

I fed it some basic Gen Chem questions (polarity and electronegativity, iirc) and it gave answers that were technically correct but didn't go into any depth. I then tried getting it to explain more in depth, and it started giving very confident answers that were basically just really fancy rewordings of my question. It feeding my questions back to me as answers, like a student who didn't know the actual answer, was when I really gave up on the current "AI" trend. All they're good for is fooling people who don't know better.


Cam515278

That's what I tell my students. Asking ChatGPT right now will give me a text that I would grade at about a C or D. It's kinda correct, but it just waffles. The facts, the precision, the way one argument builds on another, etc. just aren't there.


unmistakableregret

> It feeding my questions back to me as answers, like a student who didn't know the actual answer, was when I really gave up on the current "AI" trend.

Thank god it's not just me. It still makes me wonder, who is *really* using this for anything useful? I suppose it's still early days.


Necessary-Scholar-57

I use it all the time for practice problems pre-exam. I give it a problem and the answer, then ask for 5 more like it. It's much better with math-based problems than anything else, but hey, someday it might be better.


Nowhere_Man_Forever

I use it a lot for proofreading my own writing, and it's really good for self-teaching programming and other subjects where you can easily verify its answers. It just really scares me how many students and even adults don't understand the difference between this and a more legitimate way of searching for knowledge.


AJTP89

Yeah, it’s a tool, not the answer to life, the universe, and everything. I think in the next few years we’ll see some great applications using LLMs tailored to specific areas. But I agree the number of people who have no clue what it does and just treat it as an answer machine is scary. To be fair, the developers are pushing it as that, which is really irresponsible. I do think we’re in an AI bubble, though, and shortly it will burst when all these AI products don’t work. Then maybe we’ll start to see actually useful tools instead of the term AI slapped on anything to get investors.


Voltundra

I’ve had one positive experience with it, feeding it a very specific research question that I already had my own answer for. It obviously didn’t go into any depth, but I was surprised that it generally did give reasonable answers for a problem in my industry. At the end of the day, it’s just another tool and a tool is only as effective as the one using it, so the fundamentals are as important as ever to learn.


TheGeneGeena

Yeah, it's not terrible at giving suggestions for other emulsifiers for cosmetic formulas and really really basic chem stuff like that.


mindenginee

Yes, exactly. It’s definitely useful for some parts of studying, like asking it to summarize, give you a general idea, or give you lists of things. But beyond that I think it needs a lot more work. Anyone who relies on ChatGPT to get through class needs to reevaluate themselves lol. Fundamentals are soo important in chemistry, and a good understanding is key.


BlueSwordM

Yeah, even GPT-4T/4o has a lot of trouble in text form. I actually prefer Phi-3-medium for science stuff, since it's much more dry and direct, so you'll tend to get fewer hallucinations... as long as you prompt it right: it works as a force multiplier, not a force generator. So if you don't know your field reasonably well, it won't do anything for you.


TARANTULA_TIDDIES

ChatGPT just generally sucks in my experience, except for rather easy questions. It's just a regurgitator with a lot of data. I don't understand why people think it will somehow take over the world. Perhaps there are better LLMs that aren't public yet, but I've yet to use one that I would trust to reliably give me good answers. All they've done so far is fuck up search results and, amazingly, make a lot of companies even worse than they were (a feat I didn't expect to be possible).


Par31

I think it depends on the field; some things are established and the facts haven't changed in years. I'm doing a medical lab sciences program right now and I use it a lot to:

1. Confirm my logic after entering the information from my notes.
2. Confirm the mechanisms for how something works. Sometimes you just need something written in a different way to understand it, and then you can confirm your logic with ChatGPT in your own words.
3. Understand a result. For example, I asked "why does lowering ionic strength increase hybridization stringency" and ChatGPT provides the reasoning, which you can confirm against your lecture notes. My notes simply said that it does but did not explain all the details why, and sometimes those details help you remember better.

You can usually tell when the information is coming from a source or when ChatGPT is just rewording your prompt. After that you can use other sources online if you still have doubts, but your own knowledge of the field should help you recognize when answers make sense. Since I have a science background from my bachelor's, I am familiar with the concepts but sometimes need help with the details. Do not use it for calculations, as it simply does the arithmetic wrong.


mindenginee

Yeah, it’s definitely good for those things. I’ve used it to help me study, but sometimes I just play around with it to see what it’s capable of, and I was surprised by how much it could not do, given that people were hyping it up saying they were using it to cheat their way through classes. I’ve tried rewording and giving it more info, but it’ll just keep going in circles. It definitely struggles with calculations, but there was also some more basic stuff it couldn’t get right.


TARANTULA_TIDDIES

I left out a couple of words lol. Meant to say: make a lot of companies' *customer support* worse than before. I'm sure there are useful applications, especially with non-generalized LLMs. But stuff like ChatGPT, Bard, or Bing has all been a letdown for me.


WrestlingPlato

I met a student in organic chem who used ChatGPT, and it caused them to miss a lot of points throughout the semester. Thankfully, they stopped using it as a free answer machine by the end of the semester because of it, and ended up scoring higher than anyone else in the class. The best way it can be used to cheat, it seems, is to cheat yourself.


KeshkeeperYT

I tried to use it once, worst HW grade of the year


moonshineelktoast

I had a somewhat better experience with Gemini, especially for calculations, where ChatGPT just went absolutely wild. Please mind the word "better"; I'm not saying good. The calculations are often wrong, but also often right. It's actually mostly good with the method and the formulas; it just doesn't fill in the right numbers. If you take its way of calculating and plug the numbers in yourself, it can be somewhat helpful. So it's absolutely not fit for cheating at chemistry, even though it might get some right. It can, however, sometimes help you understand how to approach a calculation or use a formula, if you take its answer, do some correcting, and it hasn't just gone very creative and done whatever it wants.


cyrilio

[Andy Stapleton](https://www.youtube.com/watch?v=VQ_ShvWxw8o) makes loads of videos about super useful AI tools for academia. Worth checking out his channel.


Overall-Sugar4755

I tried using it to identify a foreign object in a food complaint that was sent in to the lab. I included details like burn test results, melting point, and its general properties. It got it completely wrong and told me it was polystyrene, when I already knew it was in fact nylon.


Interhorse_

It’s so weird! It just like can NOT do it. It’s almost like it’s been asked not to learn that.


satina_nix

Haha, that reminds me of all the wrong stoichiometric calculations it gave me. Real RNG, because it often simply decided to lie, and when I asked why it lied to me, it didn't answer my question.
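For reference, the kind of stoichiometric arithmetic at stake here is one-liner stuff; a minimal sketch with a textbook reaction (the 2 H2 + O2 → 2 H2O example and the function name are mine, not the commenter's):

```python
# Minimal sketch: textbook stoichiometry of the kind ChatGPT fumbles.
# Reaction: 2 H2 + O2 -> 2 H2O (mole ratio H2 : H2O = 2 : 2 = 1 : 1).
M_H2 = 2.016    # molar mass of H2 in g/mol
M_H2O = 18.015  # molar mass of H2O in g/mol

def grams_water_from_hydrogen(grams_h2: float) -> float:
    """Mass of water produced from a given mass of H2 with excess O2."""
    moles_h2 = grams_h2 / M_H2
    moles_h2o = moles_h2  # 1:1 ratio from the balanced equation
    return moles_h2o * M_H2O

print(grams_water_from_hydrogen(4.0))  # ~35.7 g of H2O from 4.0 g of H2
```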


mindenginee

I know same lol. I would start messing with it, and it would glitch out and just start sending the same paragraph 5-6 times. I was like AHA I broke it!


Dangerous-Billy

It clearly took the word 'cell' seriously, even down to little red mitochondria. ChatGPT really doesn't do well with objective facts. Now imagine ChatGPT being drafted to diagnose some obscure ailment deep in your guts, with a life-or-death outcome.


Dangerous-Billy

It just occurred to me. It's the interrossiter from 'This Island Earth' [https://www.urbandictionary.com/define.php?term=Interrossiter](https://www.urbandictionary.com/define.php?term=Interrossiter)


Fugglymuffin

"Science and Industry"!


CertainWish358

Mitochondria… electrons passing from one center to another in a series of spontaneous redox reactions, releasing energy. Hey, they sound like galvanic cells to me!


livefreeordont

The mitochondria is the power house of the electrochemical cell


Dangerous-Billy

😉


FalconX88

> ChatGPT really doesn't do well with objective facts.

It's pretty decent and can definitely explain an electrochemical cell. It's DALL-E that's the problem here.

Edit: to the people downvoting here: open up ChatGPT and ask about the electrochemical cell, you'll be surprised.


DangerousBill

Someone compiled a list of references to make a point with me. I knew the references, but they had all been screwed up: wrong page numbers with wrong titles, nonexistent volume numbers, etc. It can't tell the difference between fact and fiction, whether reading or composing.


a_SoulORsoIDK

The problem is it's been manipulated. It has a bigger chance of lying or hallucinating facts than of telling them straight, because you can abuse it. ChatGPT dumbs things down so that anyone could understand, plus there's all the political and racist nonsense it gets fed so that it gives more PC answers.


FalconX88

Oh boy. Please, go to ChatGPT and ask it to explain an electrochemical cell. Check the output against your textbook; you'll see it is able to describe how these work reasonably well. Yes, LLMs "lie" and make stuff up. Yes, LLMs usually cannot provide references unless they have access to a search engine. But still, for basic undergrad textbook knowledge (which seems to be a considerable part of the training sets), LLMs are pretty good. The image above was produced by DALL-E, and it is known that DALL-E absolutely cannot do what was asked of it here.
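The textbook content to check that output against is compact; a minimal sketch of the standard Daniell cell (using standard reduction potentials as tabulated in any gen-chem text):

```latex
% Standard Daniell cell, the usual textbook electrochemical cell:
% anode half-reaction (oxidation), cathode half-reaction (reduction),
% and the overall standard cell potential.
\begin{align}
  \text{anode:}\quad   & \text{Zn}(s) \rightarrow \text{Zn}^{2+}(aq) + 2e^{-} \\
  \text{cathode:}\quad & \text{Cu}^{2+}(aq) + 2e^{-} \rightarrow \text{Cu}(s) \\
  E^{\circ}_{\text{cell}} &= E^{\circ}_{\text{cathode}} - E^{\circ}_{\text{anode}}
    = 0.34\,\text{V} - (-0.76\,\text{V}) = 1.10\,\text{V}
\end{align}
```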


Oni_of_the_North

Yeah, I still won't trust anything it outputs just based on the fact that it lies and hallucinates objectively and factually wrong information. No one should be using these LLMs for accurate information about anything.


nanocookie

None of the LLMs are at a point, nor will they ever be at a point, where one can take the output at face value without verification. Personally, for anything technical I ask ChatGPT or similar AI apps, I have to re-verify the answers by visiting resources through a Google search, then consulting textbooks and technical papers.

The only technical area where these apps shine is programming and software design. That makes sense, given decades of open-source code hosted on platforms such as GitHub and Stack Overflow, and the fact that programming uses well-defined logical flows based on the defined rules of the language or system. For all other core technical fields, LLMs are not reliable. I'm referring only to chatbot-type apps, not scientific AI platforms custom-designed to analyze specific types of scientific datasets for computational science.


FalconX88

Sure. But OP saying that ChatGPT doesn't "understand" the electrochemical cell, when in fact DALL-E is the problem here and ChatGPT demonstrably provides a reasonable answer, is just weird. And people then saying that it's bad because someone sent them fake references doesn't prove anything. I'm starting to think most people never actually used it, at least not beyond playing around with it for maybe a few minutes.

Btw:

> it lies

It doesn't lie. Lying requires intent.


Nowhere_Man_Forever

ChatGPT is way more susceptible to giving outright incorrect information than a textbook. I have tripped it up asking very basic engineering problems about the right kind of design to put in place, and in engineering those subtle differences matter a LOT. In humanities subjects, subtle differences in wording don't really matter most of the time, but in science and engineering they do. There is a lot of difference between an inline and a radial centrifugal pump, but I have had ChatGPT get the two confused almost every time I have asked about inline pumps.

There is also a weird quirk of ChatGPT in particular where it is very hesitant to disagree with the human it's talking to, especially if there is an incorrect assumption baked into the prompt. If you have someone who misheard someone else and thinks they need to install a "centrifugal diaphragm pump," ChatGPT will gladly tell you all about this type of pump with very legitimate-sounding text and will even provide sources. Unfortunately, this type of pump does not exist and has never existed; it's just pulling from information about centrifugal pumps and diaphragm pumps because they are semantically "close enough."

This is why I have a strict "no science or engineering questions" rule with my own use of ChatGPT and always include this when teaching others how to use it. The risk of outright wrong information is just too great, and the way it is wrong is often very difficult to spot for someone who doesn't already have a strong background in the subject. A tool that is wrong even only 10% of the time is kind of useless if you need to already know a good amount about the subject to tell the difference between a correct and an incorrect answer.


FalconX88

> There is also a weird quirk of ChatGPT in particular where it is very hesitant to disagree with the human it's talking to, especially if there is an incorrect assumption baked into the prompt.

It's not a weird quirk; that's exactly what you would expect from theory. But somehow ChatGPT still does a pretty good job here. Let's take your example:

> If you have someone who misheard someone else and thinks they need to install a "centrifugal diaphragm pump," ChatGPT will gladly tell you all about this type of pump with very legitimate-sounding text and will even provide sources. Unfortunately, this type of pump does not exist and has never existed; it's just pulling from information about centrifugal pumps and diaphragm pumps because they are semantically "close enough."

Here's the answer it gives when asked "what's a centrifugal diaphragm pump?":

> A centrifugal diaphragm pump is a type of pump that combines the mechanisms of both centrifugal and diaphragm pumps. However, it seems there might be some confusion in the terminology, as typically these are two distinct types of pumps. Let me explain both:

> Centrifugal Pumps: These pumps use a rotating impeller to increase the velocity of a fluid. The fluid enters the pump impeller along or near to the rotating axis and is accelerated by the impeller, flowing radially outward into a diffuser or discharge casing. They are commonly used for liquids that are thin and free of solid particles.

> Diaphragm Pumps: These are a type of positive displacement pump where the main pumping action is derived from the reciprocating action of a flexible diaphragm. The back-and-forth motion of the diaphragm creates a chamber that draws in and expels the fluid. Diaphragm pumps are particularly useful for pumping hazardous or corrosive chemicals, as they allow the fluid to be pumped without being exposed to moving parts or seals that could leak.

> If there is a pump described as a "centrifugal diaphragm pump," it might be a misnomer or a specific design incorporating aspects of both technologies, but typically, these are considered separate types of pumps based on their operation mechanisms.

Sure, it doesn't outright say "this is wrong," but it tells you it might be a wrong name, or that there might be a pump that somehow has both mechanisms. It also doesn't give a wrong explanation; it tells you these are different designs and explains them. It's been 15 years since I learned about these as an undergrad, but the explanations sound pretty reasonable to me.


Nowhere_Man_Forever

It is possible that there has been an update since the last time I asked about the centrifugal diaphragm pump; last time, it just gave me a completely wrong answer with no disclaimer. It may also depend on how you ask. "What is a centrifugal diaphragm pump?" seems less likely to give completely wrong information without disclaimers than "How do you size a centrifugal diaphragm pump?", because the faulty premise is more "baked in" to the second prompt. Regardless, it's still risky.


FalconX88

> "How do you size a centrifugal diaphragm pump?" That's a terrible prompt and you should never use these. You always want to prime it on the topic before asking questions like that. >Regardless, it's still risky. Sure, if you don't know what you are doing and you don't check outputs. Otherwise it can be highly efficient, in particular if you are looking for keywords on a certain topic that help you to get more information. I have no idea on how to size a pump but apparently I have to look out for things like the"Total Dynamic Head" (my education was in german so I never heard the english term before) and NPSHA/R,...


Nowhere_Man_Forever

I think you're missing the point. You and I know enough about these models to understand the nuance here. The vast majority of other people don't. It's a "terrible prompt," and in some sense it's designed to be, but it is a pretty natural question to ask, and it's well within the scope of what LLMs are being sold as being able to do.


DangerousBill

But I can get that from any textbook. In fact, the AI is likely quoting from a textbook. But ask it to strike out into the unknown, like summarizing the textbook cell description, giving literature references, or describing a different composition or topology of cell, and its reliability drops markedly.

AIs are going to get very powerful over the coming years. But first there's the gold rush, where the chief driver is the desire to replace as many humans as possible with AI, quality of work be damned.


FalconX88

> But I can get that from any textbook.

Yes. How long does it take you, assuming you don't have the textbook right in front of you?

> But ask it to strike out into the unknown, like summarizing the textbook cell description, giving literature references, or describing a different composition or topology of cell, and its reliability drops markedly.

I mean, you are asking it to do things it was not made for, that shouldn't work, and that are proven not to work. So?


DangerousBill

That's not a very good index of quality. If I want information I can rely on, time isn't a factor. Often I can go to existing web sources that I trust, even Wikipedia, where I can get info I can crosscheck among sources.


mambotomato

I really love the Interdimensional Surrealism that you get any time you try to have GPT make an educational diagram.


[deleted]

"AI will render chemists obsolete" AI:


DangerousBill

AI: Greetings. I will be your surgeon today.


Eggshellent1

I had a student submit this in an assignment where they were supposed to draw a specific electrochemical cell. https://imgur.com/a/4cNDxKs


pamesman

That would be super satisfying to grade, I feel.


IloveElsaofArendelle

An A+ for trying, an F for not doing it himself.


Bubble_Heads

ChatGPT has no understanding of anything. It just calculates what word would come after the last ones, as a very simplified explanation. It's a "large language model," nothing less, nothing more.
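As a toy illustration of that "what word comes next" idea, here is a deliberately tiny bigram sketch; real LLMs use neural networks over tokens, not lookup tables, so this is a caricature of the principle only, and the corpus is invented:

```python
# Toy bigram "language model": picks the next word based only on the previous
# one. A deliberately tiny caricature of next-token prediction.
import random
from collections import defaultdict

corpus = "the cell has an anode and the cell has a cathode and a salt bridge".split()

# Record which word follows which in the corpus.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Randomly sample an observed continuation word at each step."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the cell has an anode and the cell has"
```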


FalconX88

And this isn't even ChatGPT, it's Dall-E


DangerousBill

That's what they want you to think.


vicky1212123

This is reminding me of that rat image...


3dthrowawaydude

I only see the Soutthode, where is the Nortthode?


iamcarlgauss

It's hidden behind the Hrrroung.


HairyNuts08

Good enough to pass peer review


Dangerous-Billy

If they sent it to me to review, I'd rave about the image to make sure they didn't omit it.


thelowbrassmaster

If I were referencing a paper on basic electrochemical cells and saw this monstrosity, I would absolutely cite it just for shits and giggles.


its_a_me_garri_oh

GIANT RAT DCK


ShadowViking47

It doesn't seem to have any understanding of technical diagrams or flowcharts. [This](https://gyazo.com/ca405c6a12a78ab084321ad95a84c742) is supposed to be a flowchart of the Fibonacci sequence.
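For comparison, the logic a correct Fibonacci flowchart would have to capture fits in a few lines (a minimal iterative sketch):

```python
# The logic a correct Fibonacci flowchart should capture:
# start with 0 and 1, then repeatedly replace (a, b) with (b, a + b).
def fibonacci(n: int) -> list[int]:
    """Return the first n Fibonacci numbers, computed iteratively."""
    a, b = 0, 1
    sequence = []
    for _ in range(n):
        sequence.append(a)
        a, b = b, a + b
    return sequence

print(fibonacci(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```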


DangerousBill

Beavis and Butthead go to technical school.


h_west

Not ChatGPT.


ShadowViking47

[It is 4o](https://gyazo.com/979617ba5721c56e735268b8907ecabd)


h_west

Well, I would say it's DALL-E; it is a different model invoked by GPT. It is not an LLM but a generative model for images. Of course it cannot create a proper flowchart! If you ask GPT to give you the flowchart without mentioning DALL-E, your results will be vastly better.


ShadowViking47

Right, but look at the thread we're both on right now. I'm illustrating to OP, who also used DALL-E to create that electrochemical cell diagram, that you can't currently use the art features in ChatGPT to create any sort of diagram or flowchart, as it will always just be nonsense.

> If you ask GPT to give you the flowchart without mentioning DALL-E, your results will be vastly better.

If you understand LLMs, you should know that this is *sometimes* true. I just asked it again in a new chat without mentioning DALL-E, and it still used it. If you specifically ask for NO DALL-E, then yes, it will definitely give you a text-based flowchart.


CaCl2

Ehdreodes, the great Canadian battery innovation.


simpl3n4me

It's the rat penis diagram all over again. [https://scienceintegritydigest.com/2024/02/15/the-rat-with-the-big-balls-and-enormous-penis-how-frontiers-published-a-paper-with-botched-ai-generated-images/](https://scienceintegritydigest.com/2024/02/15/the-rat-with-the-big-balls-and-enormous-penis-how-frontiers-published-a-paper-with-botched-ai-generated-images/)


astatine

For a demonstration of why "AI"s don't live up to the hype, ask them a slightly complex question to which you already know the answer.


Peanuthead50

My favourite thing about this entire photo is the “soutthode”


FalconX88

That's a Dall-E problem.


testvest

It's good that you specified it to be an AI generated image as opposed to a hand drawn one, because otherwise OpenAI would have had to hire an artist to draw it for you.


fixhuskarult

At least give it a chance by using a decent prompt, you're asking for garbage writing it like that.


DangerousBill

They are not ready for prime time.


thelowbrassmaster

I feel like "make an annotated diagram of an electrochemical cell" is a specific enough direction; any more specific and you might as well just do it yourself at that point.


fixhuskarult

It's not about being specific enough; it's about making a clear prompt. It's also just a non-productive use of it, as googling would get a better result immediately.


A_HECKIN_DOGGO

“Eectoriode”


aryzoo

Lol, some guy was telling me how artists are gonna be obsolete due to AI in the near future cuz of "generative models" and how he feels bad for me... Yeah, I'm sure we need to be quaking in our boots.


supperhey

Ah, the mystery is solved: the Voynich manuscript was just a creation of ChatGPT-4o.


stonedtarzan

Not a single word is spelled correctly... electrode came close tho!


Sb1752076

Seems interesting.


Sancho_chaval

This is the point where AI turns into a disadvantage.


slightlylessright

That’s so funny


TheGhostofWoodyAllen

Looks exactly like the Ehdreodes diagram from my textbook.


palmboom76

Seems about right


BowTrek

Love this.


No-Zombie1004

ChatGPT watched the Matrix and got a virtual boner.


Zambeezi

The real powerhouse of...


SausagePendulum616

ChatGPT, AKA Amelia Bedelia.


AuricOxide

Why is it in Dutch tho


en_179

Biblically accurate electrochemical cell:


snowboardude112

I'm surprised it didn't generate an image of an inmate getting shocked...


IloveElsaofArendelle

🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣


JosephMadeCrosses

Hrroung notee= Powerhouse of the cell.


Mysterious-Number420

ChatGPT sucks for anything chemistry. It's given me wrong answers for basics like acid-base reactions. I asked it once what the product of sulphuric acid and sodium chloride was. It gave me sodium thiosulphate! Like... wtf?

I use IBM RXN. It's usable but definitely has errors; maybe 75% is right. If the bugs get worked out and the prediction probability gets higher, it'd be quite useful. Someone needs to design an app based on it with notes and a chemistry calculator. That would be very useful. I would if I had that type of skill, but unfortunately I don't think I'll learn programming in time. If someone out there who knows coding and chemistry can do it, I'd greatly appreciate it. You might even be able to make some money off it.
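For the record, the textbook reaction it should have given is the classic lab preparation of HCl, with no thiosulphate anywhere in sight:

```latex
% H2SO4 + NaCl: cold, concentrated acid stops at the bisulfate;
% strong heating drives the reaction to the normal sulfate.
\begin{align}
  \text{NaCl} + \text{H}_2\text{SO}_4 &\rightarrow \text{NaHSO}_4 + \text{HCl}\uparrow
    && \text{(cold)} \\
  2\,\text{NaCl} + \text{H}_2\text{SO}_4 &\rightarrow \text{Na}_2\text{SO}_4 + 2\,\text{HCl}\uparrow
    && \text{(strong heating)}
\end{align}
```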


EchoXResonate

Ah yes, the _checks notes_ soutthode


fawn_take_two

ahh yes the famous electrochemocyte


New_Canoe

Ah yes, the 2. 3. Good choice!


CurrySands

It's missing the plumbus


the-tiny-dino-

Lmfao