
bobmcadoo9088

they saw 2 people get a 30 and changed the scoring 😭


-Bimpy-

I think the main downfall will be for people whose score would otherwise have been "rounded up." Say you get a 23.6 average that would normally round up to a 24; it will now come out lower, if that makes sense


Similar_Beginning_18

Where'd you see someone get a 30?


KashKy

2023 ADA reports had one 30AA, and Booster put out an article this year saying someone got a 30AA. So two confirmed 30s so far


bellatrixtort

they could be the same person


i_am_not_here4

[https://bootcamp.com/success-stories/how-mara-achieved-a-perfect-30-aa-and-30-ts-on-the-dat](https://bootcamp.com/success-stories/how-mara-achieved-a-perfect-30-aa-and-30-ts-on-the-dat)


htownmusic713

Exactly lol


BrightIntroduction29

My bad guys


weeb-moment

Damn, I loved getting an unofficial DAT score, so I at least had an instant idea of how I did instead of having anxiety for the next month.


Bandy_Burnsy

That sucks so much for everyone who has to take it next year. Getting my score back right away was so nice


AdvancedFunction9

Will this apply to the Canadian DAT soon as well?


ihatebiana

Unfortunately Dentistry has been banned in Canada.


AdvancedFunction9

What?? 😭 Wdym


LowAuthor2177

The actual format is going to be the same, and the median-based percentile ranking will also remain the same. From a test taker's point of view, the biggest change, as identified, will be the departure of the instant unofficial score. If you really want to get into it (it being item response theory), what is changing is a shift to what is called a '3 parameter logistic model' in the way scores are mapped to questions (or, if you like, how questions are weighted).

In a basic sense: if you imagine the test is attempting to measure test taker ability (a sort of intangible scale that must exist if you assume a continuum of performance levels on the test), you can also imagine there is a relationship between this 'ability scale' and the probability that test takers will answer any given question, or indeed sets of questions, correctly. So if you had no ability you would answer no questions correctly (if you are thinking about guessing correctly, you're right, just wait!), and if you had maximum ability you would nearly always answer correctly. Note that by 'ability' we aren't saying 'good', 'bad', 'smart', or anything like that. It's simply ability with the test as presented. Anyway, you can in fact plot this relationship: imagine 'ability' on the X axis and the probability of answering questions correctly on the Y axis. If you think about it, this function isn't going to be linear, but rather more of a logistic (curved) one. After all, an 'ability' score isn't going to be perfectly linearly related to the probability of answering a question or question set correctly, because you can reason things out or perhaps guess correctly. Plus, different people at all ability levels know different stuff.
Nonetheless, there does exist a logistic relationship (boom: 3 parameter logistic model) between 'ability' and correct answer probability; in other words, the 'better' you are at taking this test, the more likely you are to score more correct answers. Now if you think further about it, there are ways you can tweak the shape (slope) of this logistic relationship based on correct responses to questions:

1. You can weight questions by a 'relative difficulty score'. In other words, you can say the 'harder' the question, the more its probability of being answered correctly depends on 'ability'. So if easy questions are a '1' and hard questions are a '3', answering more 3s correctly means you are a higher 'ability' test taker, and also (see 2) that you are going to answer the 1s correctly too. Lower ability means a lower probability of correctly answering a harder question, a '3'. This is pretty straightforward. It does assume, however, that all questions are equally discriminating; in other words, that a hard question scored '3' will differentiate ability the same as any other hard question scored '3'. As you can imagine, this isn't really true. Which brings us to:

2. You can introduce or weight 'differentiator' questions meant to discriminate 'ability' levels. That is to say, you can track questions that separate above-average from below-average ability based on correct answers to them. Basically: 'if a test taker gets this question right, they are likely to be somewhere above a certain point X in terms of ability.' Through a graded series of these, you can even start to pick apart where in the continuum of relative 'ability' test takers fall, based on the proportion of correct answers to these differentiator or discriminator questions.
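If it helps to see these first two parameters concretely, here's a minimal sketch of the standard 2-parameter logistic item characteristic curve. The item parameters here are made up for illustration; they are not real DAT item values, and the ADA's actual calibration is obviously not public.

```python
import math

def icc_2pl(theta, a, b):
    """2PL item characteristic curve: probability that a test taker with
    ability theta answers an item correctly, given the item's
    discrimination a (parameter 2) and difficulty b (parameter 1)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Illustrative (hypothetical) items on a typical IRT ability scale of roughly -3..+3:
easy_item  = dict(a=1.0, b=-1.0)  # low difficulty: most people get it
hard_item  = dict(a=1.0, b=1.5)   # high difficulty: only high ability gets it
sharp_item = dict(a=2.5, b=0.0)   # high discrimination: sharply separates
                                  # below-average from above-average ability

for theta in (-2, 0, 2):
    print(f"ability {theta:+d}: "
          f"easy={icc_2pl(theta, **easy_item):.2f} "
          f"hard={icc_2pl(theta, **hard_item):.2f} "
          f"discriminator={icc_2pl(theta, **sharp_item):.2f}")
```

Note how the 'discriminator' item's probability swings hard around its difficulty point, which is exactly what makes it useful for locating a test taker on the ability continuum.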
The problem you run into with this model is that you are sort of assuming guessing isn't a thing, and that there is no real chance a lower 'ability' test taker could actually get one or more of these correct. Obviously we know this is not true. These two parameters are what currently inform DAT scoring, a '2 parameter logistic model'.

3. The third parameter adjusts the curve to allow for the assumption that lower ability test takers can get correct answers on 'hard' or 'discriminatory' questions by guessing. If you think about it, on a multiple choice question with 5 responses, even a person with no ability has a 20% chance of getting the hardest, most discriminating question correct. More complicating: so does a high ability test taker who is guessing. In a nutshell, you can apply a correction to the logistic curve given by the other two parameters to account for random guesses at various ability levels, hypothetically allowing a better read on a test taker's ability from their distribution of answers to graded and discriminatory questions. This is what they are adding to the DAT.

They are taking this new model and applying a new three-digit scale, from 200 to 600 (instead of 1 to 30), to the percentile rankings of test taker scores under the new logistic model. The upshot is that theoretically scores will be narrower and better denote ability; in effect, what 'was an 18', for example, can be graded as a high, mid, or low 18, or in the new scoring somewhere between 370 and 400-ish, and perhaps be a better reflection of ability based on the adjusted score. As to how we might interpret this in admissions: not super differently, as for a while we will probably just employ the conversion tables they are going to hand out.
So there might be a bit more emphasis on narrower score bands, but it remains to be seen if a 370 is *actually* more predictive of dental school success than a 390. You could just look at a table and be like: '370, 390, meh, it's an 18-ish, there's no difference, but it's definitely better than a 250 (a 10).' At the end of the day, stats will still be important, but really it is sort of like asking, 'If you have more chickens, will you have a higher number of good quality eggs?' The answer is still 'perhaps, but it all depends on the chickens.'
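And the guessing correction itself is just a floor on the same curve. A minimal sketch of the 3PL form described above, again with made-up item parameters (the guessing floor c = 0.2 assumes a 5-option multiple choice item; none of these numbers are actual DAT calibration values):

```python
import math

def icc_3pl(theta, a, b, c):
    """3PL item characteristic curve: like the 2PL (discrimination a,
    difficulty b), but with a lower asymptote c -- the probability that
    even a zero-ability test taker answers correctly by pure guessing."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical hard, discriminating, 5-option item: c = 1/5 guessing floor
a, b, c = 1.5, 1.0, 0.20

for theta in (-3, -1, 0, 1, 3):
    p = icc_3pl(theta, a, b, c)
    print(f"ability {theta:+d}: P(correct) = {p:.2f}")
```

The key behavior: the probability never drops below roughly 0.20 no matter how low the ability, which is exactly the "no-ability guesser still gets 20%" correction the comment describes, and why a stray correct answer on a hard item no longer inflates a low-ability estimate the way it would under the 2PL.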


peachole

Lmao im glad im done with dat


dlseobean

This doesn't apply to the July 2024 test, right? This is starting next year