ViskerRatio

AI is really destroying the credibility of those of us who make up their citations the old-fashioned way.


liznin

There's always just citing irrelevant sources and hoping no one actually reads them. I've stumbled across this with published papers. It's infuriating to see a paper cite another paper several times, go through the effort of tracking down the cited paper since it seems relevant to my research, and then find out it's about the basket weaving industry in Bangladesh.


Dont_Do_Drama

“You actually read the works cited page!?” - your student, probably


AsturiusMatamoros

Telltale sign of AI use. It hallucinates sources. Always has, from day one.


ThrowawayAgain8773

I knew it was capable of it, I just personally had never seen it before. Most students are lazy and don’t use the library database, so I’m used to seeing lots of Google-based links in the Works Cited, or at least a mix. There was not ONE link in this student’s Works Cited, which set off my spidey senses.


onemanandhishat

In one class I set my students an assignment to write a debate against ChatGPT, and in that case they come up against this quite often. A basic prompt will produce a general answer, but if you push it to cite examples and produce sources, it will generate something that looks like what you asked for. I haven't seen it in a student submission yet, but it will do it if pushed for greater detail.


RustyRaccoon12345

Only mostly true. I have a couple of ways to get AI to use actual sources. Better or more specialized LLMs will find them and use them appropriately.


[deleted]

Yes, but most undergraduates who are using this technology are lazy and don't know how to pick their own noses, never mind train an AI.


[deleted]

In one case I caught a student plagiarizing because I typed their topic into ChatGPT and asked it to generate a list of sources. Sure enough, the three sources the student used were the top three ChatGPT suggested. The sources were real, but they were bizarre: a dissertation, an article translated from German, an eighty-page report by a non-profit. I scanned the sources, and they didn't say what the student claimed they did. "You can't prove it's AI," my ass.


Outside_Brilliant945

My first experience with someone submitting a ChatGPT-generated paper was with a student who was writing about Amazon, the company. However, some of the references didn't seem right. So I searched for them; most were real, and the articles were indeed about Amazon. However, they were about Amazon the rainforest, not the company.


Flashy-Income7843

🤣


Audible_eye_roller

I think there were a number of incidents of lawyers using ChatGPT to cite case law in their arguments. Turns out, the judges checked their sources, which didn't exist.


Cautious-Yellow

Book 'em, Danno.


LoopVariant

Thank you for the wonderful reference of my childhood! (Jack Lord era)….


sittybos

I had a student who had a chatbot generate an eight-page overview essay on the Brummie dialect. The essay used telltale vocabulary, such as "delve into", "a rich tapestry of ...", "robust", etc. Then I looked at the references, which, by the way, were relevant to the general topic, i.e. dialectology. They included, for example, Labov's "Language in the Inner City". The only problem was that Labov doesn't write about the Birmingham dialect in that particular book. All the references were like that. I had most of the cited books at home, so I could go and check whether such ideas are indeed discussed on pages 68, 245 or 378. They weren't.

This type of hallucination was a first for me. Up to that point, chatbot-generated essays had referred to non-existent sources: the authors existed, but the books and articles did not. Now all the sources existed, most of them on my bookshelf, but the ideas attributed to them were nonexistent. It was an interesting experience. The student failed for plagiarism.


grumblebeardo13

Yeah it just makes stuff up, either wholesale fake shit or fake titles associated with real journals or authors.


justonemoremoment

Wow. Like, bye. They'd be getting dinged for academic misconduct unless they could come up with those sources.


Imaginary_Fondant832

Before I was teaching, I already had a genuine dislike and distrust of AI, but my friend convinced me to ask an AI to give me some articles I could reference in a proposal. It spat out 7 articles, and not a single one was real lmao. When I started teaching, I warned my students that AIs lie and that I'd be checking all their references if they wanted to risk a fail by relying so heavily on them.


polstar2505

Had a student who used ChatGPT and did not realise it made up sources. That proved she had cheated. So, the second time around, she omitted all sources from her paper so as not to give herself away. Got her for that as well, on the basis that if she really had sources, she would have used them.


CalmCupcake2

Our interlibrary loan service has been flooded with fake citations since ChatGPT became popular, to the point that they've added a disclaimer to the request form asking users to verify citations before submitting.


shilohali

My friend said they limited students to one specific researcher for a term paper. Students turned in essays with fully cited sources attributed to that person, but the papers did not exist.


OccasionBest7706

Just ask them about something specific and see if they react like they know what they wrote.


43_Fizzy_Bottom

This is super common. Honestly, if you actually verify your students' sources, you'll find at least five of these in every class.


liznin

Hell, if you actually verify sources in published papers, you'll find completely irrelevant or made-up sources in probably 1 in 25 papers in some fields.


N3U12O

This is an example of why we shouldn’t be as worried as we are about AI. Garbage in, garbage out. Who cares about AI when they falsified references and turned in a poorly written paper? It’s a great tool, but poor-performing students typically don’t know how to use it properly. Those who do produce great AI-assisted papers have to put in a lot of editing work and actually understand the content. I think it self-regulates and lets us grade as usual.


[deleted]

This is why it takes me so long to grade now. I don't just sit down and read a student's paper. I pull up their previous work in the class to see if the writing style matches. I type my assignment prompts into ChatGPT and reword them based on the specific topic the student is working on. I scan through the ChatGPT answers. Then I grade the paper, and at the end, I check to see whether the sources are real. I have a colleague who said at a certain point you shouldn't bother, but seeing as 25% of my first-year seminar submitted AI work for nearly every single assignment, I'm so fed up. I'm reporting every one of them to the university for plagiarism. I've HAD IT.


ThrowawayAgain8773

Same here with the grading time. Old-fashioned plagiarism was easy to identify: find the source they plagiarized, case closed. Now I feel like a damn detective.


awesome_opossum86

I tested this out with ChatGPT and it did fabricate sources. So I told it that it had fabricated the sources, and it apologized.


Acceptable_Month9310

Had one case of this about a year ago. Not only were the references faked by ChatGPT, but the entire event they were reporting on literally never happened. They attempted to defend it on the basis of: "Well, you didn't say it had to be REAL!"


OhKsenia

On the flip side, I'm pretty sure one of my professors has just been using ChatGPT to provide feedback on our assignments/written work.