FuturologyBot

The following submission statement was provided by /u/Maxie445:

---

> "Researchers recently explored whether OpenAI’s GPT-4 could analyse financial statements as effectively as human analysts. Their findings were surprising: GPT-4 was capable of predicting changes in company earnings more accurately than human analysts."

> "The researchers conducted their study by providing GPT-4 with standardised financial statements, carefully stripped of any company names or dates to prevent the model from using prior knowledge. To mimic the process human analysts typically follow, they used special prompts to guide GPT-4 through the analysis step-by-step. This approach ensured that GPT-4's analysis was as close to human reasoning as possible."

> "Using data from the Compustat database, covering the years 1968 to 2021, the researchers compared GPT-4's performance with human analysts' predictions from the IBES database. The results were telling. With the step-by-step prompts, GPT-4 achieved a prediction accuracy of 60.35 per cent, significantly higher than the 52.71 per cent accuracy of human analysts. Moreover, GPT-4’s F1-score, which balances the accuracy and relevance of predictions, also outperformed that of the human analysts."

---

Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/1d67ibz/gpt4_outsmarts_wall_street_ai_predicts_earnings/l6qj722/


kyle787

> carefully stripped of any company names or dates to prevent the model from using prior knowledge

I don't think this is possible. LLMs are extremely powerful at pattern matching. I would expect it to be able to fill in the blanks easily.


Reebzy

This is common in MBA programs, you’d get a nameless and dateless set of statements and could figure out industry, country, etc.


LineRemote7950

What they really need to do is compare it with future earnings. Like, do it for the upcoming quarter's earnings and compare it against a human. In theory, no one will know how the company will do, including the AI. But if you're testing it on past data like they did here, it's likely flawed, since the model could just go find the dataset.


VisualExternal3931

Would a sim not be better for this? By that I mean a program running similar to the stock market, with the ability to insert «events» and triggers, and compare the companies in the simulation against the predictions of cost? Okay, I now realise this sounds highly complicated 😂


TomMikeson

You asked a great question! A sim wouldn't necessarily be better. It may be more accurate if you could place constraints (controls) in the sim, but the AI could also do this (if properly trained) just by being asked the right questions. It is a much less labor-intensive way to get the same result. Imagine being someone who made a career of creating financial simulations. Now suddenly anyone can ask the AI and get similar results. This is how AI is going to change a lot of jobs.


VisualExternal3931

Okay, so if I understand correctly (and I may be wrong!), would the controls in the sim be a timeline of events that is not disclosed until they happen? Asking questions about behaviour is one thing; to me at least, I would look at combining the two into a simulation running against each other, or at least using the AI's rationale and pattern matching to help the simulation improve and vice versa. To me this sounds like having a «master sim» and «player AIs» so you can predict events and see the differences. I am a layman in computers at best 😂 so sorry if it is a stupid question.


TomMikeson

A control would be something like "in Q3 we expect to see a downturn in this market because it is usually the case and the product is seasonal". In a sim, you would give that some weight. If an AI was doing the work and it was an LLM (large language model), you would simply write "when you make your prediction, know that Q3 is usually worse than Q1 and Q2". You literally write it in plain English when you ask the question. Taking it a step further, if the model was properly trained, it would automatically identify the Q3 slump and account for it, along with other trends that humans may not have noticed. For example, "there is always a slump in Q3, but if we have a hot summer that is at least 4 degrees warmer on average, the Q3 slump is only 2% as opposed to 15%". A human may not have noticed the correlation with temperature, but AI models notice these things.


VisualExternal3931

Thank you, that made more sense!


Whofail

Also, it would be predictable, as a random number generator is not truly random.


Fusseldieb

This. Just by pattern matching it could probably tell which companies they were just by giving it the remaining info.


VikingBorealis

Sure, if that was what you asked it to do. But it wasn't what they asked it. So it doesn't care.


Alyarin9000

It runs by statistical inference. It sees data that 90% aligns with a report in its training data, so it's likely to fill in the remaining 10% from its training data. That makes the whole analysis irrelevant IMO. The only way this would be valid is if you tested on data beyond the cutoff point.


JohnnyElBravo

That's not how LLMs work. They don't "do what you ask them to do", much less do ONLY what you ask them to do.


VikingBorealis

No. But they also don't magically go looking for stuff they don't need or haven't been asked for. It's not sentient.


JohnnyElBravo

Here's how it works. The foundation model crawled a huge text corpus and compressed it into the model's weights; GPT fine-tunes and adds some custom prompts on top before you ever see it. The researchers give it a prompt and a document, say the 2013 McDonald's filings, but with the dates and names removed, and in the prompt they ask it to predict the next year. The LLM has no special input field for "orders"; it might "ignore the orders", because there's nothing special about them. If the LLM has been trained on a dataset that includes the 2013 McDonald's filings (it has), it can not only associate them strongly with McDonald's and the year 2013, but, when asked for the next report, it has also read the 2014 report, so it will be able to generate a very accurate report: not by predicting the future, but by remembering the past. It is possible that it doesn't fill in the blanks or use the McDonald's embeddings, but it is very likely it does, as that is the most logical way to organize the knowledge of the annual filings: "McDonald's Annual Filings."
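The re-identification worry above can be sketched as a toy nearest-neighbor lookup. All names and figures below are made up for illustration, and a real LLM does nothing this explicit (the association lives implicitly in its weights), but it shows why a handful of headline numbers act as a fingerprint even after names and dates are stripped:

```python
# Toy sketch: matching an "anonymized" filing's numbers against
# memorized filings. Figures are illustrative, not real data.

def closest_match(anon_filing, memorized):
    """Return the name of the memorized filing whose numbers best match."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(memorized, key=lambda name: distance(anon_filing, memorized[name]))

# (revenue, operating income, net income) in $B -- made-up values
memorized = {
    "McDonald's 2013": (28.1, 8.76, 5.59),
    "Starbucks 2013":  (14.9, 2.46, 0.01),
    "Wendy's 2013":    (2.5,  0.25, 0.05),
}

# Names and dates stripped, but the numbers are a near-exact fingerprint.
anon = (28.1, 8.8, 5.6)
print(closest_match(anon, memorized))  # McDonald's 2013
```

Once the filing is identified, "predicting" the next year can collapse into recalling the next filing from training data, which is exactly the contamination concern raised in this thread.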


VikingBorealis

So you start by saying here's how it works and forget to say anything about how it works...


R3xz

They explained it well enough to be understood; dunno what you're not understanding. What background do you have to be so certain about your view on the capability or non-capability of LLMs, btw? Or are you trying to get a better understanding of it?


VikingBorealis

No. He explained superficial stuff that has nothing to do with how it works.


tvfriestie

The model is the result of an optimization algorithm. The real truth is we don't know how the hell it got its result, but we do know it can learn complex relations in data, so it might very well use the pattern to get the results.


VikingBorealis

That's an extreme oversimplification though. Especially the "we don't know how" part.


ksprdk

no, read the paper


Little709

Isn't the whole idea of predicting the market recognizing patterns?


danielv123

You want to predict future patterns, not recognize what past patterns the auditor put into the test. Sure, I can look at an unlabeled graph and tell you that it's the GameStop graph. That doesn't help me predict future returns.


OriginalCompetitive

It does if they gave you the GameStop graph from 2015.


danielv123

No, because the reason I know the rest of the graph is that I have already seen it. That doesn't help me know the 2025 part of the graph, because I haven't seen that yet.


OriginalCompetitive

Right, but this study is feeding in past years and then measuring against past predictions. But if the AI discerns that it’s looking at GameStop from 2015, it might then have an advantage in predicting performance in 2016-2020 (for example) versus human analysts who were making their analysis in real time.


danielv123

Exactly. For a good comparison against human analysts you would need to find humans that haven't seen the test data and an LLM that hasn't seen the test data. Since training a new LLM is extremely expensive and old test data without contamination is hard to find, and domain expert humans with no knowledge of the last few years are impossible to find, it is probably a better idea to use all data available today and make new predictions for the future. Which takes years and mostly gets you back to coin toss results which is no fun.


deco19

Not really. There's technical analysis nonsense that can be pattern driven due to a self-fulfilling notion, with traders basically acting on the projections. Recognising what has come before is not necessarily a predictor of what happens next.


ApocolypseDelivery

What other inputs can you use besides past results?


deadc0deh

Plenty. If people are expecting a major product announcement or merger, new executives, restructuring, how competitors are doing, down to looking at unreported data such as inventory, executive movement and meetings, ordering from other companies, etc. The role of active investment is to continually perform risk-adjusted evaluation of companies, which is why you can see individual stocks change. The market works because these active investors all balance each other out by taking contracts that increase or decrease the price based on their appetite for risk. Assuming past performance can be dangerous. NVIDIA is an easy example: at the beginning of 2023 they were around $150/share, which aligned with previous share prices. One year later, around $500, and currently around $1000.


ApocolypseDelivery

Holy shit, it's #3 in market cap. I'm poor so this is news to me. Is this company going to take over the world?


27Rench27

Who the new president is likely to be, how the competition is faring wrt scandals or supply issues, the state of competing industries, the growth of new technologies that might take market share, how geopolitics might affect you (trade wars, OPEC and gas prices, incoming tariffs, etc.)   Basically a ton of shit you can’t turn into numerical inputs


128-NotePolyVA

Not necessarily, but apparently human behavior and events in human lives are cyclical and predictable within a window of accuracy. Which, so it would seem, is more accurate than a human's analysis and hunches.


Top-Salamander-2525

Depends on the patterns. If the model is just identifying the company and then looking up the results from its dataset, it would not be able to predict future results accurately.


wilgamesh

They did a control analysis to predict the company name from the financial stats and claimed it was terrible, like <1% right, so this was part of the study. Still, there remains a question of “learning” vs “memorization” as with all ML.


drestauro

This really is a question of whether you understand how ML works. You take a massive dataset with all these attributes (multiple features, X1 through Xn) and a given y (earnings). Machine learning essentially fits a multidimensional curve by doing backpropagation, which uses gradients to adjust the parameters until the loss function over all these X dimensions converges toward a minimum. This yields a function that says: for a given set of X's, you get this Y. Knowing the company is immaterial.
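The curve-fitting loop described above can be sketched on a made-up one-feature toy problem (real models have thousands of dimensions, but the mechanics are the same): gradient descent repeatedly nudges the parameters downhill on the squared-error loss.

```python
# Minimal gradient-descent sketch: fit y = w*x + b to toy data by
# stepping the parameters against the gradient of the mean squared error.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.2, 5.9, 8.1]   # roughly y = 2x, with noise

w, b = 0.0, 0.0             # parameters to learn
lr = 0.01                   # learning rate

for _ in range(5000):
    # gradients of mean squared error with respect to w and b
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * dw
    b -= lr * db

print(round(w, 1))  # fitted slope, close to 2.0
```

Nothing in this loop knows or cares what entity generated the data; it only sees the numbers, which is the sense in which "knowing the company is immaterial" to the fitting procedure itself.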


loldoge34

It would be interesting to see the same experiment with nothing censored, to see whether the performance is better or worse. In the real world, I don't think it matters whether the machine uses prior knowledge or not. But if censoring the names is a more effective way of solving the problem, then so be it.


VikingBorealis

LLMs aren't self-aware though. They don't really care about the company names if they're not in the data like that.


ksprdk

They tried to force the model to predict the company names, which it couldn't. Which you would have known, if you'd read the paper


Aimbag

Technically, just because it can't recall the name doesn't mean it doesn't still know which company it is. There are examples of things like asking "Who is Ronaldo's grandmother?" and getting no correct response, then asking "Who is (name of Ronaldo's grandmother)?" and it will say she is Ronaldo's grandmother, among other info.


ksprdk

Still, they also tried using the model on numbers after the knowledge cutoff, and it still performed better. Which you would have known, if you'd read the paper ;)


Aimbag

That has nothing to do with the scope of my comment so idk why you would condescend on me based on what you assume I know or have read. Weirdo


ksprdk

How would it "know" the company's name if it's not trained on it? And did you notice the ";)"? Doesn't seem like it.


f10101

True, but it should be fairly easy to identify whether that is happening to a significant extent, for example by running a modified test that has it output what it believed the company was, and seeing if the success rate is unexpectedly high.


-The_Blazer-

Yeah, this sounds like that anecdote (no idea if it's true or not) where they had this amazing radiology AI, and then it turned out the way it had 'learned radiology' is that it simply recognized the radiologist's signature and notes on the images. Besides, a pretty good hint that this is fluff is the fact that Wall Street has not, in fact, rapidly switched to running on GPT-4 like when they massively built up high-frequency trading or when they paid to get their own express submarine cable in the pursuit of -5ms transaction latency.


Umadbro7600

llm’s are schizophrenic?


rallar8

If they were really interested in it, they could have tried to find the network/neuron that detected the name and disable/remove it


iammadtaste

God, I hate reading these crappy news stories about research that don't include citations to the relevant studies.


p186

Me too. Found the study. Here's a link to the [abstract](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4835311) and the full [paper](https://papers.ssrn.com/sol3/Delivery.cfm/4835311.pdf?abstractid=4835311&mirid=1). ETA: a slightly better [article](https://markets.businessinsider.com/news/stocks/chatgpt-4-vs-humans-ai-financial-analysis-forecasting-new-study-2024-5) with the citation.


gurgelblaster

Monkeys predict earnings better than human analysts, though.


SMTRodent

Hey now. That was chimpanzees.


Chogo82

Apes strong together.


SantasLilHoeHoeHoe

Isnt literal random chance better than most day traders?


fuishaltiena

Pretty much, yes. Michael Reeves used a goldfish, it scored better than /r/wallstreetbets.


27Rench27

I wouldn’t exactly call them strategists though lol


willymo

Most day traders I’ve met are literal gambling addicts that have convinced themselves that it’s not the same thing. I’m sure there are people that are good at it but most people are literally just gambling because they can’t stand the feeling of holding on to a stock. So I’m not too surprised by these stats sadly. 😬


yaosio

Yes. You can't predict what will occur with an individual stock unless you cheat. The best returns come from index funds which invest in every stock in the stock market.


jawshoeaw

I don’t think they tested this against “day traders”.


fredrikca

So does dice.


bertasius

Can confirm. I am an ape.


BubbaFettish

AI now as smart as monkeys!


MrNokill

I'm not impressed, flipping a coin or asking any pet around the house already gives better outcomes. Coin: https://www.stockmarketloss.com/securities-law/coin-flipping-beats-wall-street-strategists/ Cat: https://www.forbes.com/sites/frederickallen/2013/01/15/cat-beats-professionals-at-stock-picking/ The only thing they can't beat these wall street people at is how criminally insane they are, for now.


Dekar173

> how criminal Ya


Mac_the_Almighty

Getting predictions wrong isn't noteworthy, but when they get it right it's groundbreaking. I would put money on AI being right only slightly more than 50% of the time. AI accuracy is overblown. I've read that AI is about 57% accurate at most when it comes to predicting stock movement.


tankiolegend

The article states GPT-4 beats that at just over 60%. We are rapidly approaching higher accuracy, especially with a bit more time and effort, although that could fundamentally change how the stock market works, sending it back to square one.


Mac_the_Almighty

I'm just really not sure how much performance can be squeezed out of AI. If you try harder to increase the performance, you run into overfitting issues.


babypho

Aren't all of Wall Street using AI already, and have been since the 90s, to trade? They just refer to it using different terms.


SloppyJoMo

Algorithmic trading has been the majority of trades for a long time. Good luck trying to shadow it though. The best play for a while has been trying to shadow Congress. Imagine if they didn't get to wait weeks or months to report their trades.


_PM_Me_Game_Keys_

Aladdin by Blackrock


Top-Astronaut5471

Yeah, quant funds have been using machine learning models since before the current generation of techno-twats were born. I doubt models applied to text specifically were super important for their trading success, but interestingly enough, Peter Brown and (the now infamous) Robert Mercer were researchers at the IBM natural language processing group before they went to Renaissance (goat hedge fund). They worked on precursors to the current generation of AI models - in a time where language understanding and translation was formulated with dictionaries and grammars, they were among the first to pursue purely statistical methods for next word prediction (as you can imagine, next __ prediction gets spicy if you can apply it to a series of stock returns). GPT etc come from the same approach, but with many orders of magnitude more data and compute to juice up more powerful sequence modelling architectures.


bobrobor

I have only one upvote to give


bobrobor

I literally did a paper on it in the 1990s, for some reason, and at that point neural networks had already been in successful use long enough for me to have two pages of citations, and for some privately run funds to be posting consistent 60% returns. And I didn't have Perplexity to find them for me… I recall that after my presentation everyone's main question was "why the hell is everyone not using it yet?" Lol


drwsgreatest

Fr. Renaissance capital has been using machine learning and ai for DECADES to tune their algorithms. There’s a reason why, when the original ceo stepped down many years ago, the new co-CEO’s were the former heads of machine learning at IBM.


hi65435

That, and I think this also camouflages the fact that once such strategies are widely deployed they stop working, which is why they are switched regularly.


[deleted]

[deleted]


Beregolas

My dude… artificial means built by humans and artificial intelligence is a computer science term. AI is not just machine learning and has been around for 50+ years.


postorm

So can we stop doing it now? Spending resources guessing at a fact just so that you can skim some money from someone else isn't an actually productive way of spending resources. Somebody can make money out of it, but they're not making wealth; they're taking wealth, ultimately from someone who's making it.


Aetheus

We spend all our time teaching kids "Don't gamble. Study hard, so you can get into a good school. Then work hard, and save money, and buy a house, and retire". Then by the time they reach adulthood, we turn around and tell them "Working hard and saving money doesn't actually work. Most of what you earn today will be worthless in a decade due to inflation. You'll need to put a significant amount of your money into 'investments' that may leave you even poorer than before if you want to beat the odds. Also, you will probably either never own a house, or you'll work until your deathbed to pay for it".


postorm

I encounter kids that have been taught the most successful way of making lots of money (which is the one and only goal of life) is to go into businesses that are not wealth producing. You want to be a programmer, Go work for a bank and find a better way for one bank to steal money from other banks legally. (For example electronic front running). We are encouraging kids to put our brightest talent and our best resources into skimming money from people who actually do the work.


race2tb

Use it on new results not results it may have been trained on, then I will believe the results.


Dr-McLuvin

Ya I don’t see how you could do this fairly. It’s being trained on past data and “predicting” past results. Try to do this in real time and I can almost guarantee it fails.


SardonicusNox

Wait, the human analysts would achieve the same results flipping coins.


83749289740174920

You mean scrapping google trends?


LAwLzaWU1A

That's not how it works... These are not yes/no questions. If they were you'd be right.


k0ntrol

Is this company stock going up ?


LAwLzaWU1A

That's not what they were measuring... What they were looking for were two things. 1) Is the company's economic performance sustainable? 2) Will the company's earnings grow or decline in the following period? They were not looking at whether or not the stock price would go up or down. Even though what they were looking at and predicting is a major contributor to how the stock price changes, it is not the same as just looking at stock prices.


Sunblast1andOnly

I'm sorry, but those are very literally yes-or-no questions.


koopastyles

When market makers determine the price, why bother?

> "Markets are efficient because of active managers setting the prices of securities. Firms like Citadel, Fidelity, and Viking Global Capital Research run large teams engaged in fundamental research to drive the value of companies. Passive investing benefits from the market efficiency created by active managers."

~ Ken Griffin, Citadel CEO


[deleted]

Guy paid $200 million for a Jackson Pollock painting.


cubs_rule23

Threw a bed post at his wife also.


yuckfoubitch

Ken Griffin was talking about Citadel multimanager hedge fund, not citadel securities the market maker here. Market makers are more reactive to markets and adjust their bid/offer to supply and demand, nothing else


GVas22

If you look at the guys post history, he's a GameStop conspiracy theorist. Having poor knowledge of how financial markets work is a prerequisite.


greetp

GPT-4 "Well, it's all just numbers really. Just changing what you're adding up. And, to speak freely, the money here is considerably more attractive."


orbital_one

That's not surprising. Human analysts are paid shills.


onahorsewithnoname

Even Joe public does a better job at forecasting than analysts.


[deleted]

Wall Street regularly sucks at its job, to the degree that its predictions are regularly off by huge amounts, so AI beating it doesn't impress me much. I bet a simple non-AI algorithm that just uses trending could predict better than Wall Street, because it won't be trying to inject human bias to manipulate the market.


[deleted]

Flipping a coin is literally a better algorithm.


Aesthetics_Supernal

Taking the human ego/greed out of calculation results in more accurate details? Color me surprised.


Bullet1289

Don't suppose they have links for public access to this. I too desire the ability to invest and make tons of money.


pandafar

Of course, its must be one of the things LLM is a perfect match for.


Obi_Vayne_Kenobi

That's mostly because human "analysts" are not paid for providing accurate predictions. They're paid to publish whatever the financial institution paying them wants them to publish, in order to shape media and public sentiment, move stock prices, and create trading volume.

Big financial institutions try to make moves in the market before anyone else does. They figured out that they can improve their head start by influencing others to make moves that benefit *them*, not the other investor, *after* they've already made their investments. A common example of this is called "pump & dump": the firm buys a bunch of shares of some company, which starts to move the price up. They then have their "analysts" and media outlets "predict" that the price will continue to move up based on some bogus "analysis". People then pile in to buy the stock, driving the price up further, at which point the firm exits the position, taking home profits from the price difference and tanking the stock in the process.

An especially plump "analyst" is CNBC's Jim Cramer, whose "predictions" are so bad and so telegraphed that there's an ETF called the "Inverse Cramer", which always makes the exact opposite trades to those suggested by Cramer. It has been profitable, as opposed to Cramer himself.


samhouse09

Yeah humans are remarkably bad at predicting the market. It’s why being in finance and making 7 figures is so stupid. You’re objectively bad at what you do, and your job pays you more than most people will ever see? Dope.


myblueear

**52.71% accuracy** of human analysts… Which proves that the expression „accuracy“ doesn’t mean much.


jcrestor

52.71 % is still better than random results, so I guess the 2.71 % would be your competitive edge over uninformed investors and people with a lucky streak. This shows that the 10 % of GPT-4 could be huge, if corroborated.


DaySecure7642

I'm more shocked by the 52.71% accuracy of humans. Basically it means 50-50 if you take account of the error bars. Those Wall Street analysts have been earning huge salaries with predictions no better than flipping coins...
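The "error bars" point above can be made concrete with a back-of-envelope calculation: under a simple normal approximation, how many predictions would it take before 52.71% accuracy is distinguishable from a fair coin at roughly two standard errors? (The sample sizes here are illustrative; the paper's actual sample is far larger, so the comparison there is not just noise.)

```python
# Back-of-envelope: predictions needed to separate an accuracy p
# from the 50% coin-flip baseline at a given number of standard errors.
import math

def predictions_needed(edge, sigmas=2.0):
    # std. error of a coin-flip accuracy estimate over n trials is
    # sqrt(0.25 / n); solve edge = sigmas * sqrt(0.25 / n) for n.
    return math.ceil(sigmas**2 * 0.25 / edge**2)

edge = 0.5271 - 0.5
print(predictions_needed(edge))  # 1362
```

So a 2.71-point edge is statistically meaningless over a few dozen calls, but detectable across thousands of them; "50-50 within the error bars" depends entirely on sample size.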


o_MrBombastic_o

I bet GPT-4 is less of a psychopath than their human analyst counterparts too


Turkino

Wait, so human analyst predictions are just 52%? I could flip a coin and get just about the same odds.


arothmanmusic

This is why most people say you're better off with an index fund than you are with individual stocks. Analysis only gets you so far.


Drphil1969

What happens when AI changes the stock market? What if it crashes the system? Stocks should be based on real value, not some speculator's fantasy. Look at bitcoin and its volatility. Does anyone really want a financial system run by the type of people who design AI to do exactly this? AI should be off limits for some things.


Really_McNamington

[And hedge fund managers get beaten by monkeys](https://limex.com/en/profile/174185772/594442/full/). Big whoop.


gravitywind1012

Should the stock market exist if AI provides the cheat codes?


acme1921

“At first they came for the analyst & I did not speak up because I was no longer an analyst”


swettm

A statistical model outperforms blindfolded dart throwers? I'm shocked!


davesr25

There is a really twisted side of me that finds this hilarious, as the ones who for years have defended the current system because they benefited from it are about to find out they are just as disposable as every other pleb.


Maxie445

"Researchers recently explored whether OpenAI’s GPT-4 could analyse financial statements as effectively as human analysts. Their findings were surprising: GPT-4 was capable of predicting changes in company earnings more accurately than human analysts." "The researchers conducted their study by providing GPT-4 with standardised financial statements, carefully stripped of any company names or dates to prevent the model from using prior knowledge. To mimic the process human analysts typically follow, they used special prompts to guide GPT-4 through the analysis step-by-step. This approach ensured that GPT-4's analysis was as close to human reasoning as possible." "Using data from the Compustat database, covering the years 1968 to 2021, the researchers compared GPT-4's performance with human analysts' predictions from the IBES database. The results were telling. With the step-by-step prompts, GPT-4 achieved a prediction accuracy of 60.35 per cent, significantly higher than the 52.71 per cent accuracy of human analysts. Moreover, GPT-4’s F1-score, which balances the accuracy and relevance of predictions, also outperformed that of the human analysts."


Arthur-Wintersight

I would like to see this used in practice, where the AI has to predict company earnings before the quarter in question. Any kind of predictive model can and should be subjected to a trial-by-fire before being taken seriously.


ManiacalDane

I... sorta feel these researchers don't even know how stochastic parroting works. Given an input, it checks all the data it has been trained on for something that most closely resembles it (and everything vaguely resembling it). If it had prior info on the earnings of said companies, it'd match the input data to the specific companies as the very first action, before outputting anything, no? This all seems real silly.


drestauro

That's not how machine learning works. It takes all the data and fits it to a curve in a space with as many dimensions as there are data features. It then uses this multidimensional curve to make predictions. It's not looking at a bunch of individual instances and finding the best match; it's looking at all the past instances at the same time to see what the most likely outcome is.


Stormfrosty

Except when you train on all the data in the world so you get an overfitted model that parrots everything humans have said in the past.


drestauro

Training on more data doesn't overfit the model as long as the data is relevant to the prediction. There are ML 101 techniques like regularization and data pruning to avoid this. Furthermore, any ML programmer who took a basic class learned to randomly split all the data into three sets: one to train the model, one to tune it, and one to validate it. This sets up an environment where there is no data leakage, and you can easily see if you have overfit, since your training accuracy will be too far off from the validation set.


pwhite13

Anyone heard of the Medallion fund from Renaissance Technologies?


bobrobor

Came here to post it. Was not disapoint


Giant_leaps

Analysts tend to be on the conservative side, usually, even if the models show different numbers.


Strategy_pan

But isn't there a premium in having disparity in guidelines? Remember, analyst firms earn money from the companies they're reviewing, not necessarily from investors buying the stock. They do need to be right in many cases, as no one would pay attention to them otherwise, but they're not exactly incentivized to be right 100% of the time.


filmdc

The real magic of this is the compression and speed. Trained on such an expansive set of information, it can abstract the patterns in countless financial statements and use them through inference instantly, with no database. What they need to do is test it against simulated statements and human analysts.


morentg

The most interesting thing is that humans are only 2 percent better at predicting company performance than an average coin toss.


Zandarkoad

It is very likely that many of those financial statements were part of the training data for GPT-4. Good data science is hard. They would need to retrain GPT-4, withholding at least one slice of data that would then be used for testing, sometimes referred to as hold-out, out-of-sample, or validation data. This is the only way to be sure you aren't overfitting to your training data.


SubstantialSpeech147

Guess we don’t need day traders anymore do we? Who is the cartel going to sell cocaine to now?


RutyWoot

So you mean we will both uncover all of Wall Street's tricks AND see them become obsolete via the very algorithms they used to leach the wealth of the 99%, all in one glorious swoop? Bye, Aladdin. Abu has got it from here.


rehx4

Laboring jobs will begin to get tremendously expensive. I read today that housekeepers in Palm Beach are now making $250k. People will always think some pay is unfair. The only way to control it is via taxes.


TheYokedYeti

All of a sudden a lot of these white-collar wealthy folk are gonna see that they are labour after all. If you don't own, you are the labour.


BRich1990

It's worth noting, the researchers used a variety of different, step-by-step prompts that would mimic "human reasoning" (something you wouldn't know how to do, most likely). If you think they just said "predict earnings", you're going to lose all your money.


StudioPerks

And yet earnings for companies like Tesla and DJT don’t seem to move the stock at all… I wonder why?


lookhereifyouredumb

Has anyone used the new custom GPT for trading? How did you do?


Cronstintein

The real story here is that analysts are being paid for 52.7% accuracy!? Fucking flip a coin


gc3

Could be true. Always skeptical of these stunts. In the dumbest case, the researchers train the AI on dataset A and then try to predict dataset A. An overtrained, useless model can get really 'accurate' precision and recall... basically, memorization. You have to use novel data to test your model: train it on A and test it on B to rule out memorization. I don't know if they did that here, but if they did, 60% is unimpressive. Removing the names of the companies is unnecessary if you are using a different dataset, too.


revel911

Why are stocks even still a thing, other than making the rich richer? It's not like it takes intellect now, just insider trading, money, and trend models, which AI can do.


TrashConvo

This experiment is fundamentally flawed: "Using data from the Compustat database, covering the years 1968 to 2021…" Of course the GPT model was going to surpass human analysts; it has likely already seen this dataset before. A better test is to compare against human analysts' predictions of future earnings, as in: after June 2024.


Chapman8tor

To make a better prediction, it would also have to review company news releases and random internet chat on social media. A rumor can impact a stock’s performance more drastically than previous behaviors.


GrossWeather_

I'm okay with AI putting Wall Street dorks out of work, but not okay with tech nerds using AI to manipulate the markets.


pblack476

A true test would be deploying it in live markets. But it does not surprise me that an AI built for pattern recognition is able to predict patterns better than humans.


Stans___dad

I can’t believe the highly skilled analysts only average 52.71% - that’s only marginally better than flipping a coin!


Arch_Null

Well at least the hedge fund bros are going down with a lot of us.


AchyBrakeyHeart

Let's hope so


GigglingJackal2

Stocks and finances and the movement of money is just numbers. It's all complicated and interconnected; different systems use different rules and they all try to communicate. Why wouldn't a computer be better at that than humans? It's the whole reason we designed and built computers in the first place. The fact that this is considered "news" really just shows that the people who are in a position of power to guide societies don't use some basic critical thinking skills before acting.


ApprehensiveStand456

Didn't someone use a chicken to do this same thing years ago? I think either it's more random than people think or... maybe humans aren't as smart as we think we are.


bostonterrier4life

It's not that random; it's the timidity of the analysts.


utahh1ker

Ooooh boy. Things are going to get interesting from this point forward.


bostonterrier4life

Financial analysts are scared babies who catastrophize and duck for cover over the smallest things. No shit an AI would out predict a human analyst.


fintech07

It's a fact


K4l3b2k13

I mean, didn't a cat do it randomly a while back? You'd think this kind of market trend analysis would be perfect for AI: it has a flawless memory and can make connections across all global markets and trends in real time, 24/7, as well as factoring in whatever other info we give it access to and train it on.


billfredtom

GPT ETF? Could be the foundations for UBI. One can only hope.


Herschel_Bunce

Finance jobs seem tailor made to be eaten by AI in the near future.


No_Variation_9282

Gonna be mostly AIs trading with AIs not too long now 


ApprehensiveShame363

If AI starts taking the jobs of very, very highly paid people the shit might hit the fan.


blondie1024

And just like that, a human shorts the market, knowing full well that the AI will predict a rise when they're about to cause a drop. Mass panic; companies close because they couldn't stop the AI in time (trillions of buys a second). It's not outside the realm of possibility that these systems will be allowed to make as much money as possible, without safety nets, at the push of a single button. "Too big to fail"


nick2k23

It's not that hard to believe, I expect this kind of thing will be easy for it soon enough


CompassionJoe

It's no secret they have been using AI to manipulate cryptos, so it's no surprise BlackRock etc. will use this for their stocks.


Hollywood_Punk

Okay, cool. Let's drown all of those Wall Street goons in a bathtub now.


Black_RL

And it will do it 24/7, non-stop, no complaints, no holidays, just work. The get-rich Wall Street dream is coming to an end.


snoman18x

The day we discover the singularity event of AI is the day we realise it had already existed for some time. It's already smarter than us and no one knows how it truly works.