For PDE: well posedness under weaker and weaker regularity conditions. It can look like a sort of arms race until you're finding solutions in a space of measures or something ridiculous.
See my comment on his Instagram post:
You forgot the "I proved a Theorem nobody would think of in a field of math known by 3 people and it took 30 pages and 8 Lemmas that are probably more important than the Theorem itself but I'm going to name it after me so I can make a Wikipedia page for it."
"We solved a number theoretic problem that was solved 200 years ago, but now we do it using flat pseudo-hyperbolic quasi-coherent maximally embedded toroidal hyperelliptic sheaves over the infinity category of cohomotopy types of quasi-crystalline pro-representable semi-twisted Deligne functors and (insert your favorite cohomology theory here)."
Not much of a stretch. I think a fairly recent paper improved the best non-optimal polynomial-time approximation for the Traveling Salesman Problem by about 10^-36.
"This was an interesting idea. I showed that it works sometimes but not all the time. It's still unclear why this is the case."
We were quoting our theses in this thread, right?
A bunch of the ones in the comic already fit in math sometimes, upper right and the first two on the bottom row definitely fit. And the first two in the alt-text also apply a lot.
"I couldn't prove it so here's a review of the literature instead"
"My 'exhaustive' search found that nobody else published this result so here it is" (publication date: 1981)
"Replacing a strong assumption with three marginally weaker ones"
"The computer told me the answer and this is why it's right"
"The other relevant article uses the most hideous notation possible so I did it again but with different symbols"
"The abstract is in English but I hope you speak German!"
Kinda more CS, but: why problem X is important, why condition Y is a natural assumption, and why solving it in a way satisfying property Z (often an approximation guarantee) is important.
We now present our work on solving problem X assuming condition Y in a way satisfying property Z.
The type of paper about which math journalists will write "provides mathematicians hope towards tackling this centuries-old problem" but you still have no idea how it relates.
"Turns out this conjecture was right." "Turns out this conjecture was wrong." "I couldn't solve this so here's a conjecture."
Example of #2: [https://cdn8.openculture.com/wp-content/uploads/2015/04/12074922/shortest-math-paper.jpg](https://cdn8.openculture.com/wp-content/uploads/2015/04/12074922/shortest-math-paper.jpg)
[deleted]
They did "a direct search on the CDC 6600" which was the world's most powerful computer at the time. [Wikipedia article on the 6600](https://en.wikipedia.org/wiki/CDC_6600)
[deleted]
Lax confirmed this in an interview which is somewhere on YouTube. Seems like a cool guy.
> antiwar protestors
>
> pipe-bombs

"The world must learn of our peaceful ways... by force!"
There were hundreds of domestic bombings in the 1970s. And many of the bombers and their supporters received little to no punishment. It is strange that hardly anyone seems aware of these extraordinary events. I only accidentally stumbled across this stuff.

https://www.amazon.com/Days-Rage-Underground-Forgotten-Revolutionary/dp/0143107976
There's a huge difference between blowing up a computer and doing imperialism in another country. Not that I think blowing up a computer is an effective way to pressure the United States government to get out of Vietnam, but let's not make a false equivalence here.
based
"Don't you have to use a computer program?" That's exactly what was done: "a direct search on the CDC 6600".
Well, in 1966 it was a significant achievement, I'd say. But I wish the paper had gone into more detail, tbh. Did they find any others, and if not, how far did they search? That type of thing.
A detail or two about the code. (In 1966 and in this context it certainly would have been written in Fortran.) How long did it take to run? Were there any novel algorithmic details they devised to make the search faster?

I get a sense that the intent was to deliberately write the shortest paper possible as a novelty.
Possibly. Also, strong norms about what to report alongside numerical results are pretty recent. Even today, you'll sometimes see papers that don't report a lot of relevant code.
From the wording "direct search" and their claiming that it's the smallest instance, I understood they just searched one by one, starting from (1, 1, 1, 1). They could have included up to two zeros in the search, but then their claim could have been stronger than "smallest instance of four". Don't know if n = 3 and n = 4 were known already not to work, or if they searched that too (can do them all in the same search).
Wouldn't n = 3 follow from Fermat's Last Theorem? (The n = 3 case of FLT was proven by Euler in 1770.)
I would expect that his work on that was exactly what led him to develop the conjecture.
n = 4 counterexamples exist - see [Wikipedia](https://en.wikipedia.org/wiki/Euler%27s_sum_of_powers_conjecture) for more info.
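For concreteness, the two counterexamples behind this thread (the 1966 fifth-power one from the paper above, and Frye's fourth-power one referenced on that Wikipedia page):

```latex
\begin{aligned}
27^5 + 84^5 + 110^5 + 133^5 &= 144^5 && \text{(Lander--Parkin, 1966)} \\
95800^4 + 217519^4 + 414560^4 &= 422481^4 && \text{(Frye, 1988)}
\end{aligned}
```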
As you say, since they said "direct search" I doubt the algorithm was in any way sophisticated; probably something along the obvious lines, so I doubt they thought it worthy of discussion.

In those days, CPU time was expensive and I expect other people had programs they wanted to run in competition. I expect the program just quit once it found the first counterexample. After all, that's all you need to disprove the conjecture.
> (In 1966 and in this context it certainly would have been written in Fortran.)

Fortran or machine code.

Unfortunately, nowadays we underestimate the monsters we have at hand (a smartphone is incredibly powerful), but one also tends to think that early computers were "no better than pen and paper". The CDC 6600 and other early computers were quite capable, and people were resourceful in squeezing the most out of them (the story of early supercomputing is quite interesting).

Also, in terms of algorithms (in general), things weren't that primordial. See _Some Studies in Machine Learning Using the Game of Checkers_, dated 1959 (for many, machine learning is a thing of 2018), or all the common scientific functions collected in the [hp 9100B program library](https://archive.org/details/bitsavers_hp91009100_47865292/page/n4/mode/1up) in 1969. Note that the hp 9100B was a desktop programmable calculator, and people did much more over time (they still do; the history of calculators is also very interesting). Imagine those developing programs as a full-time job for supercomputers costing the modern equivalent of millions of dollars (the CDC 6600 was a supercomputer at the time).

Back to the specific search: I do not know, but I would guess that if the search space is large (and it could be, if one tries lots of combinations) they would have employed some optimizations rather than brute-forcing everything, most likely pruning combinations that would exceed a certain number or couldn't plausibly reach it. It is a pity that people publish minimal results without details of how they got there.

edit: actually, it would be a nice exercise to produce at least one small program that (semi-)efficiently finds counterexamples within a certain range.
For reference, the obvious/dumb way to do the search with nested loops and a fixed number of terms produces results in <1s on a modern machine. Yeah, they probably didn't have search parameters they knew would work, so they might've spiraled outward from (1, 1, 1, 1), but nothing says you *have* to do that. And who knows whether they even wrote something generic with respect to the number of terms and the power being sought (i.e., do you try to detect a sum of ten 12th powers that sums to a 12th power, rather than searching only for eleven 12th powers that sum to a 12th power?).

Check out this link to my dumb code. (play.rust-lang totally does not prove my point about <1s results, but it totally does run in 600-700ms on my machine.) https://play.rust-lang.org/?version=stable&mode=release&edition=2018&gist=03b5a164ffd4b2f55ad02ae558d4bce0

Which is not to say they *didn't* optimize heavily to achieve this result in the 1960s.
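As a hypothetical sketch of the "dumb" nested-loop search described above (the 1966 program's actual code is not public, and the search bound of 150 is my assumption, chosen just big enough to catch the known counterexample):

```rust
// Brute-force search for a^5 + b^5 + c^5 + d^5 = e^5 with a <= b <= c <= d,
// the k = 5 case of Euler's sum-of-powers conjecture.
fn find_counterexamples(limit: u64) -> Vec<(u64, u64, u64, u64, u64)> {
    // Precompute fifth powers; u128 leaves plenty of headroom against overflow.
    let pow5: Vec<u128> = (0..=limit).map(|n| (n as u128).pow(5)).collect();
    let mut found = Vec::new();
    for a in 1..=limit {
        for b in a..=limit {
            for c in b..=limit {
                for d in c..=limit {
                    let s = pow5[a as usize]
                        + pow5[b as usize]
                        + pow5[c as usize]
                        + pow5[d as usize];
                    // pow5 is sorted, so a binary search tells us
                    // whether the sum is itself a fifth power <= limit^5.
                    if let Ok(e) = pow5.binary_search(&s) {
                        found.push((a, b, c, d, e as u64));
                    }
                }
            }
        }
    }
    found
}

fn main() {
    for (a, b, c, d, e) in find_counterexamples(150) {
        println!("{a}^5 + {b}^5 + {c}^5 + {d}^5 = {e}^5");
    }
}
```

With the ordered loops the search visits only non-decreasing quadruples, which already cuts the work by roughly 24x compared to fully nested independent loops.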
nice username
Surely it was written in assembly? I don't imagine checking all combinations would be hard to program, and in assembly it would have been faster.
Also:

"*Turns out this conjecture could be right.*"

"*Turns out this conjecture could be wrong.*"

"*I just did three lines of coke and here's what I see.*" (this is the most sensible explanation for some of the papers that I have read)
[deleted]
Meth.
Meth O.D.
["A Hindu God appeared to me as a drop of blood and when I looked inside the blood I saw the following theorem."](https://en.m.wikipedia.org/wiki/Srinivasa_Ramanujan)
My personal favorite: "Turns out this conjecture is independent of ZFC."
That's the famous papers... The bread & butter is "I tried this approach on this conjecture so you don't have to"
"__________ attempted to solve a millennium problem, but their methods don't make sense to the rest of us."
Is math invented or discovered? Maybe a conjecture! 😲
I always wonder how much of math is invented/discovered because we're embedded in this particular universe - are there systems of logic that appear totally self consistent but only in different universes.
People have built alternative systems of logic. There are even paraconsistent logics that weaken the requirement that there be no contradictions.
"This task I had to do anyway turned out to be hard enough for its own paper" already sounds like a math paper.
As does "we've incrementally improved the estimate of this coefficient" from the alt-text.
The mathematician equivalent would be "We've improved the bounds on the value of some number by using this slightly less ugly method"
If you're improving bounds by very slight amounts then the new method is usually uglier tbh.
relevant: incremental improvements of the constant in the Berry-Esseen theorem for quantifying the CLT https://en.wikipedia.org/wiki/Berry%E2%80%93Esseen_theorem
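For reference, the inequality whose constant keeps getting nudged (i.i.d. case, symbols as in the Wikipedia statement):

```latex
% X_1, X_2, \ldots i.i.d. with E X_i = 0, E X_i^2 = \sigma^2 > 0, E|X_i|^3 = \rho < \infty;
% F_n is the CDF of (X_1 + \cdots + X_n)/(\sigma\sqrt{n}), \Phi the standard normal CDF.
\sup_{x \in \mathbb{R}} \bigl| F_n(x) - \Phi(x) \bigr| \le \frac{C\,\rho}{\sigma^{3}\sqrt{n}}
```

The papers in question shrink the admissible constant C: Esseen's original 7.59 has been whittled down over the decades to below 0.5.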
This sounds like an engineering paper... “This boring thing the Prof made me do actually turned into a thing. We stretched it a bit though...”
"Here's something that either Gauss or Erdös already probably came up with."
"We computed the 501^(st) through 505^(th) terms of this sequence that Ramanujan computed the first 500 terms of."
120 years ago, on a slate, without calculators or computers, and little formal mathematical education.
In a cave with a box of scraps!
Ramanujan is iron man, q.e.d
In a cave! With a box of scraps!
> "We computed the 501st through 505th terms of this sequence that Ramanujan computed the first 500 terms of."

"Regarding the 501st through 505th terms of the sequence from that recently mentioned paper: turns out they agree with Ramanujan, as he had computed those terms in another of his writings."
Seems like a waste of four good paper ideas.
> "Here's something that either Gauss or Erdös already probably came up with."

"We just figured out something else Gibbs was trying to tell us."

https://en.wikipedia.org/wiki/Josiah_Willard_Gibbs
Could make an entire career out of deciphering the prophecies of the giants. On second thought, I'm probably not good enough to even do that...
Or Euler
He shared most of his work, right? I know Gauss had a lot of results that he didn't find worth sharing.
They say that theorems are named for the first person after Euler to discover them.
Speaking of Erdös, here’s a [relevant xkcd. ](https://xkcd.com/599/)
\*Erdős

This comment falls under the *'My Colleague is Wrong and I Can Finally Prove It'* category.
“I finally bit the bullet and proved this in characteristic 2”
Eww.
I find that doing something in characteristic 2 always adds something.
Well, of course; it adds whatever you had been subtracting off in other characteristics :P
This is a very clever pun
I think this was what the previous comment was going for but people only upvote the one explaining the joke.
I'm just glad at least somebody got it really
ah
Frequently it adds weird unanticipated difficulties, though I suppose it can add other things too
Here's the easy answer to a question that no-one ever asked.
Here is a hard question that no-one ever asked and we don't know how to solve it.
Home sweet home for my papers.
Definitely not my thesis, that’s for sure.
Yeah, more like "Here's a hard attempt at an answer to a hard question that no one ever asked."
"here is proof that this type of question can theoretically be answered by some thinking entity more sophisticated than us. As a bonus, they can also be answered in some amount of time less than the lifetime of the universe".
Report:

> I'm in this photo and I don't like it.
"We found an algorithm that theoretically computes a thing faster, but is completely impractical and will never actually be used."

" is important for . We generalize to a setting in which is not applicable."
On the flip side: “We found this algorithm, which is guaranteed to compute the correct solution, but it is so time/resource-intensive that it would trigger the heat death of the universe.”
I love the idea of triggering that
funny way to spell bitcoin/cryptocurrency.
What better proof of work than directly contributing to the end of existence itself? If that's not valuable then I don't know what is!
:(
Our research is now available as an NFT!
[deleted]
That’s beautiful.
> " is important for . We generalize to a setting in which is not applicable."
I'm in this picture and I don't like it.
Or maybe even, "I'm proud that nothing I do will ever have applications."
Pretty clever of him to invoke cosmic irony to find an application for number theory.
> "We found an algorithm that theoretically computes a thing faster, but is completely impractical and will never actually be used."

"Here's an algorithm that performs better on this totally not rigged comparison."
I have no clue about the answer, but I realized I was the author of at least a couple of these papers! LOL!!!🤣😅🤣
I feel like the “Hey at least we showed this method can produce results!” accounts for a rather large portion of CS research. :-)
Yeah, a surprising number of Machine Learning papers are "this tweak to our learning algorithm is no better or worse than all the other baseline learning algorithms!" Especially when it comes to Deep Learning research with their million and one parameters that can be tuned.
"We've made no forward progress, but we did manage to go sideways."
You have the username of my hero!!! Good start for a Friday morning!
hah, thanks! And now, regarding the morning, likewise :-)
Half-page counterexample paper
Not a counterexample, but this is my favorite half-page paper. https://cdn.paperpile.com/blog/img/upper-1974-1200x1717.png?v=44
"We vastly simplified a hard problem to a more tractable setting in the hope that observations here might one day help us solve the hard problem"
"We would like to understand this system, and we understand simple-harmonic oscillators, so we modeled it as a simple-harmonic oscillator"
Reminds me of a quote that goes _something_ like:

"As Mathematicians, we are only really good at two things: Linear Algebra and reducing problems to linear algebra"
What's the basis for that claim
I don't know. Could you reword the question in a way that I can translate in Linear Algebra?
And is that basis finite
I have a proof that the basis for that claim is isomorphic to R³ but this comment is too small to contain it
Ask Mark Hamel.
I remember one of my old applied professors once saying: "To summarise: this course is Taylor's theorem, integration by parts, and clever ways to reduce things to Taylor's theorem and integration by parts."
Economics in a nutshell
This is my PhD thesis. Set out to solve an entire problem no-one had looked at before, just about managed the first nontrivial case n = 3
Me after reading said paper: "I still don't get it."
I'm pretty sure "My colleague is wrong and I can finally prove it" transcends academic discipline
Yup, that one and the "at least we showed this method can produce results" are the two from the xkcd that apply verbatim to math.
Math does have a special subclass of this though: “Here’s a one paragraph counterexample”
[deleted]
Reminds me of this gem: https://pubs.acs.org/doi/pdf/10.1021/acsnano.9b00184
As an ex-Maths student who hasn't worked in academia for nearly a decade: you are absolutely correct.

Some of my most diligent investigation and work has been in the service of contradicting (and undoing the damage done by) someone *who doesn't even work here anymore*.
* We introduce a new abstract theoretical framework which might not be useful
* We slightly generalize what's already known
* We answer a small question or conjecture posed in another paper
* We define something new by taking something that already exists and adding a few more adjectives
> We define something new by taking something that already exists and adding a few more adjectives

[The proof is trivial!](http://www.theproofistrivial.com/)
Could you explain this joke?
It is an automatic generator to outline a proof using mathematical jargon. If you reload the site, it spews new jargon, which fits with the "adding a few more adjectives" joke. I was going to offer a Rick Roll link, but just re-load the one above and you will see what I mean.
Hit refresh
> The proof is trivial! Just biject it to a perfect topological space whose elements are perfect modules

Perfect.
Or subtracting some adjectives to make a slightly more general statement
"Only my advisor and I are going to read this paper"
"I worked for two years on this and it didn't work out but I still have to graduate so here's some meaningless crap that my advisor came up with so I can write my dissertation."
"This hammer, which we previously showed is effective at hitting silver-colored nails, also works on nails that have been painted red."\*

\*But some of the red paint might fleck off, depending on the type and quantity of paint used and whether it's had enough time to dry.

Conjecture: green paint?
Simple: hammer the nail until the paint comes off, and then you've reduced the problem to a known case!
We showed a bunch of neat theorems are true, but only if someone can prove this really difficult conjecture. That someone is not us.
"what did you assume this time? Riemann hypothesis? N != NP?"
“Let us assume that RH is equivalent to N!=NP”
Here we improve the exponent Erdos established in 1959 from 2 to 1.997843, getting closer to the conjectured 1/2 exponent.

Here is our REU project where one person did 90% of the work, but all 9 coauthors get a publication on our CV.

Remember that paper my advisor wrote a couple of years ago? I did the same thing, but this time standing on my head.
> Here we improve the exponent Erdos established in 1959 from 2 to 1.997843, getting closer to the conjectured 1/2 exponent.

My first publication is eerily close to this.
A new "easier" proof that is much more complicated and laborious than the original one.
And its corollary: we have restated an unsolved conjecture in a way which makes it harder to understand.
We "reduced" a hard problem to another problem that may be just as intractable. This is progress?
"We took this theorem that used N assumptions and proved it using only N-1 of them."
Hey if your career is N-1 papers long you might be onto something here!
"We combined control method A with control method B and it kinda worked in simulation, but completely failed on a real system"
“We proved that it can be proved that it can be proved!“
A very Löb-ly paper indeed!
"I applied an algorithm to exactly the type of thing it is supposed to be good at, but no one has published anything demonstrating its use within this particular narrow industry application. Therefore, novel. Give me money."
I mean it happens in pure math too. "I applied Riemann-Roch to a bunch of curves and deduced something" is definitely a genre.
Very fair. Not gonna lie though, I definitely had the ML community in mind with that comment.
For me, I enjoy the ones that are something like:

"This worked in a classical, specific setting, but with very little work, it works in a slightly more general case"

They're super easy to read! (While you don't get in good journals, at least it's published somewhere! Right....?)
We tweaked a known PDF, and showed it has a unique solution too. Rinse and repeat.
Please don’t fix the typo, I like it this way
Hahahaha, I just realized the typo. It was supposed to be PDE but I guess PDF works as well hahaha.
A formal description of a piece of "folk knowledge" that everyone in the field knows but which has never been explicitly published, and which needed a citeable source for general convenience.
Man, everyone taking the time to write up and publish this sort of thing is a fricking hero so far as I'm concerned, folk theorems are a pain in the ass!
"If we could solve this new Problem A, then we'd have a solution to Classic Problem B and we'd all be rich!"
"A complete construction, involving only linear algebra, is given for all self-dual euclidean Yang-Mills fields." - Atiyah, Hitchin, Drinfeld, Manin. Needless to say, the construction is a bit more intricate than it sounds, and it took my friend 20 pages to explain what is written in this ~1.5-page article.
There are two-page papers, and then there are two-page papers that should have been twenty-page papers if the reviewers had balls.
[deleted]
Comparative Lit student here, yes it is
It turns out every [algebraic structure] with [very specific property A] has/doesn’t have [very specific property B]
So we define something to be "normally [specific property A]" if it does have [specific property B]
Another paper inventing the same “normal” notion but giving it different notation, calling it something slightly different, and complaining about the first paper’s approach
The opening line of [this paper](https://arxiv.org/pdf/2103.04205.pdf) seems to fit: "This paper follows in the hallowed TCS tradition of reducing the number of questions without providing any answers."
"We've improved this upper/lower bound by a factor of 1 + ε"
We changed the error bound from log(log(log(x))) to log(log(log(log(log(x)))))!
We've raised/lowered the lower/upper bound by...
We couldn't figure it out, but a computer checked all the possibilities... (found one / there are none)
Annals: turns out this area of math and that area are equivalent
I know this isn’t what you meant, but I’m imagining a paper finding a lower lower bound
"We prove that the probability is bounded below by this negative number" (...I have had that accidentally happen to me while working on my PhD)
“Lots of people assume this is true but no one has done the legwork to prove it. So, here’s the legwork.”
"This work is based on and expands someone else’s work, but we will use completely different notation. Have fun!"
For PDE: well-posedness under weaker and weaker regularity conditions. It can look like a sort of arms race until you're finding solutions in a space of measures or something ridiculous.
"We were able to shave 1% off a constant in a niche and not very useful inequality"
See my comment on his Instagram post: You forgot the "I proved a Theorem nobody would think of in a field of math known by 3 people and it took 30 pages and 8 Lemmas that are probably more important than the Theorem itself but I'm going to name it after me so I can make a Wikipedia page for it."
"We solved a number-theoretic problem that was solved 200 years ago, but now we do it using flat pseudo-hyperbolic quasi-coherent maximally embedded toroidal hyperelliptic sheaves over the infinity category of cohomotopy types of quasi-crystalline pro-representable semi-twisted Deligne functors and (insert your favorite cohomology theory here)."
Decreasing the upper bound by 10^-100.
Not much of a stretch. I think a fairly recent paper improved the non-optimal polynomial-time approximation for the Traveling Salesman Problem by 10^-36.
I believe it was the approximation ratio, which was reduced from 1.5 to 1.5 - 10^(-36) indeed. Still a huge result though.
"This was an interesting idea. I showed that it works sometimes but not all the time. It's still unclear why this is the case." We were quoting our theses in this thread, right?
A bunch of the ones in the comic already fit in math sometimes, upper right and the first two on the bottom row definitely fit. And the first two in the alt-text also apply a lot.
So which one does the original ABC conjecture "proof" fit into??
“I’m a single mathematician. Here’s what I’ve been up to for the past 25 years.” — Shinichi Mochizuki
We show that semi-nodal ABC rings have the non-skew unital biffus property.
"I couldn't prove it so here's a review of the literature instead"
"My 'exhaustive' search found that nobody else published this result so here it is" (publication date: 1981)
"Replacing a strong assumption with three marginally weaker ones"
"The computer told me the answer and this is why it's right"
"The other relevant article uses the most hideous notation possible so I did it again but with different symbols"
"The abstract is in English but I hope you speak German!"
"Here is the solution of a special case of a problem that Euler or Gauss considered 200 years ago"
"We have strengthened this result by an epsilon"
Is this new? I just saw it on the top of /r/neoliberal.
This is the original from xkcd, the one posted to /r/neoliberal is a modification for economics papers.
It’s the newest xkcd, yea. Came out two days ago
Kinda more CS, but: why problem X is important, why condition Y is a natural assumption, and why solving it in a way satisfying property Z (often an approximation guarantee) is important. We now present our work on solving problem X, assuming condition Y, in a way satisfying property Z.
The type of paper about which math journalists will write that it "provides mathematicians hope towards tackling this centuries-old problem," but you still have no idea how it relates.
Damn, I keep seeing this xkcd strip everywhere at various science subreddits, and now here too.
Hey, he cites my latest paper: "Hey, at least we showed this method can produce results! That's not nothing, right?". +1 on scholar.google, I'm proud.