18 Comments
Augustus P. Lowell:

That is undoubtedly true when played out across the full range of colleges and universities that admit across the full range of academic indices.

I wonder, however, how it plays out at the few most exclusive and selective of colleges and universities where essentially everyone selected has about the same academic index -- hovering around the very top of what is achievable. Would that not, in effect, be wholly to the detriment of those who -- for no reason over which they had any control -- had not faced those "serious personal challenges"?

The point is that the differentiation you describe can only happen when there is room above the current academic index to achieve even more than one already has. If a selective university is effectively selecting from a complete cohort already at the top of the academic index, there is actually very little (or no) room for demonstrating differentiation; and a built-in bias toward those who have endured hardship would, then, be completely equivalent to a built-in bias against those who have been lucky enough to avoid hardship.

In some cosmic sense, that might seem like justice. On a personal level, I can assure you that it would not.

Perhaps that's why the public fights over such things are so vehement and so hard: because they are nearly always about what happens at the Harvards and Yales and MITs and Stanfords of the world -- and among potential students (and their parents) with sky-high expectations -- and not about what happens at the second- and third-tier level...

Rajiv Sethi:

Thank you for your comment.

My understanding is that even the most selective universities get applications from people with a range of scores, but suppose they could fill their class several times over with applicants having perfect scores. They would still have to choose which ones to reject. If they did so on the basis of life circumstances (school and neighborhood quality for example) then I think my argument still applies. As long as there are systematic differences across groups in life circumstances, the selection policy would fail a naive test of merit, since the composition (by race, color, national origin) of the admitted pool would differ from that of the applicant pool.

Would you agree or did I misunderstand your point?

Augustus P. Lowell:

I think (?) I don't agree (unless I misunderstood your answer). I was responding to the notion you mentioned that colleges might maximize "potential" (a rational and reasonable, if slippery, goal) by favoring applicants with fewer advantages and more personal challenges because, "Of two students with a given academic index, one who has enjoyed fewer advantages and faced more serious personal challenges is likely to have greater latent ability."

That may very well be true -- but only if the converse is also true: that someone with _equal_ latent ability but _without_ disadvantages would be able to achieve a _higher_ academic index. If the academic indices of applicants are all clustered at the top of the distribution then that converse _can't_ be true: an applicant with fewer disadvantages -- or with actually higher latent ability -- cannot achieve a higher academic index because all the applicants are already essentially maxed out. At that point, whether or not someone had disadvantages doesn't necessarily say anything about what their latent ability is relative to someone else's -- we've reached the top of the scale and no more differentiation is possible.

Under those circumstances, _if_ you were to favor the 'disadvantaged' over the 'not disadvantaged' then you would be favoring _something_ that felt, perhaps, socially satisfying -- but it would not necessarily or obviously be latent ability. In that circumstance, the good fortune of having had a 'normal' life would become an actual disadvantage.

And yes, there is some spread in academic index among accepted applicants at the very top schools -- not everyone literally has the highest possible score. But they are all clustered very near the top; and there is almost certainly enough variability and uncertainty in the mapping between academic index and actual achievement that a few points' difference up or down at the top end of the distribution probably doesn't indicate all that much actual difference either in achievement or in ability.

Rajiv Sethi:

I think I see your point. If you look at the paper linked in the first footnote, there is a simple model with two ability levels and two resource levels. The case you are considering corresponds to a situation of small elite capacity, in which case only those with high ability and also high resources are selected. Those with low resources don't make it even when they have high ability. So what you're arguing is theoretically possible. Whether it applies to our most selective schools is an empirical question to which I don't have an answer.
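Not from the paper itself -- a toy sketch (with made-up numbers of my own choosing) of the two-by-two case described above, where a high enough admission cutoff excludes high-ability, low-resource applicants entirely:

```python
# Toy version of the two-ability, two-resource model (numbers are illustrative).
# Assume the academic index rises with both ability and resources; with small
# elite capacity, the cutoff can sit above every low-resource applicant.

applicants = [
    # (label, academic_index) -- here index = ability + resources
    ("high-ability / high-resource", 2 + 2),
    ("high-ability / low-resource",  2 + 0),
    ("low-ability / high-resource",  1 + 2),
    ("low-ability / low-resource",   1 + 0),
]

def admit(cutoff):
    """Admit every applicant whose academic index clears the cutoff."""
    return [label for label, index in applicants if index >= cutoff]

# Large capacity (low cutoff): high-ability applicants of either resource
# level get in, so favoring the disadvantaged can track latent ability.
print(admit(cutoff=2))

# Small elite capacity (high cutoff): only the top index clears the bar, and
# high-ability / low-resource applicants are excluded despite equal ability.
print(admit(cutoff=4))
```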

William J Carrington:

Another critique of the Becker "outcome test" is that it can conflate intergroup differences in acceptance criteria (i.e. discrimination) with analogous differences in the distribution of "merit" (e.g., test scores, grades, etc.). In particular, suppose that group A's distribution of merit is simply a leftward shift of group B's distribution and that both groups are subjected to the same acceptance rule. This will of course imply that fewer members of Group A will be accepted, but it also implies that, *conditional on acceptance*, Group A members will have less merit and also worse post-graduation outcomes.

I wryly note that a) Becker was one of my thesis advisors and b) Augustus Lowell and I were Exeter classmates :)

Rajiv Sethi:

Wow, small world...

The outcome test should be applied only by comparing marginal individuals (those on the threshold of acceptance), not averages, but unfortunately this distinction is not always made in empirical work. I think your critique applies to the average outcome test?

William J Carrington:

I think it applies to the marginal outcome test, too, as long as outcomes are not predicted with an R²=1 regression.

Rajiv Sethi:

I'm a bit puzzled by this: for marginal individuals the post-enrollment performance should be the same, since they have the same academic index (borderline between acceptance and rejection). Dan O'Flaherty and I discussed this in the context of police stops and contraband recovery in our 2019 book (p. 91), with an example where the average hit rate is misleading but the marginal one works (link below). The problem is that it's hard to identify marginal individuals in most cases.

https://books.google.com/books?id=Gm-LDwAAQBAJ
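The average-versus-marginal distinction can be made concrete with a toy example (these numbers are mine, not from the book): when both groups face the same threshold, the marginal stop has the same hit rate in each group even though the average hit rates differ.

```python
# Hypothetical numbers (illustrative, not from the book): stops ordered by the
# officer's estimated probability that the person carries contraband. Both
# groups face the same threshold of 0.5, so the *marginal* stop (the one just
# at the threshold) has the same hit rate in each group -- no discrimination.
# But group A happens to have more clearly-suspicious stops well above the
# threshold, so its *average* hit rate is higher.
group_a_probs = [0.9, 0.8, 0.5]   # estimated hit probabilities, group A stops
group_b_probs = [0.6, 0.5, 0.5]   # estimated hit probabilities, group B stops

avg_a = sum(group_a_probs) / len(group_a_probs)
avg_b = sum(group_b_probs) / len(group_b_probs)
marginal_a = min(group_a_probs)   # hit rate at the threshold stop
marginal_b = min(group_b_probs)

print(avg_a, avg_b)            # averages differ across groups...
print(marginal_a, marginal_b)  # ...while marginal hit rates are equal
```

An average outcome test would flag this as discriminatory even though the stopping rule is identical for both groups; the marginal test would not.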

William J Carrington:

Forgive me if I'm being obtuse and not on point, but here's what I'm trying to say (and a point we've both acknowledged before, IIRC). We observe a noisy signal X* = X + u, where X is merit. Further, let's assume that post-schooling outcomes are a deterministic function g of X. Finally, assume that the pdf of X for group A is a leftward shift of the corresponding pdf for group B. Then E(g(X)|X*=x, group=A) < E(g(X)|X*=x, group=B). If this were the case, then we could have a non-discriminatory policy vis-a-vis X* and it would still fail Becker's non-discrimination test. Of course, one response to this is that a truly non-discriminatory acceptance policy would be one in which thresholds vary by group (which is not something I endorse, to be clear).
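This shrinkage argument can be checked numerically. A minimal sketch, under assumptions of my own (normal merit distributions shifted by group, normal noise, and g taken as the identity -- none of these choices come from the thread): among individuals with the same observed signal X*, expected merit is pulled toward each group's own mean.

```python
import random

random.seed(0)
N = 200_000

def simulate(mu):
    """Draw merit X ~ Normal(mu, 1) and observed signal X* = X + u, u ~ Normal(0, 1)."""
    draws = []
    for _ in range(N):
        x = random.gauss(mu, 1.0)
        xstar = x + random.gauss(0.0, 1.0)
        draws.append((x, xstar))
    return draws

def mean_merit_near(draws, x0, width=0.1):
    """Estimate E[X | X* ~ x0]: average merit among draws whose signal lands in a small window."""
    xs = [x for x, xstar in draws if abs(xstar - x0) < width]
    return sum(xs) / len(xs)

group_a = simulate(mu=-0.5)   # group A: merit distribution shifted left
group_b = simulate(mu=+0.5)   # group B

# Condition both groups on the same observed signal X* = 0 (a common threshold).
ea = mean_merit_near(group_a, 0.0)
eb = mean_merit_near(group_b, 0.0)
print(f"E[X | X*=0, A] = {ea:.2f}")   # pulled toward A's lower group mean
print(f"E[X | X*=0, B] = {eb:.2f}")   # pulled toward B's higher group mean
assert ea < eb  # same observed signal, different expected merit
```

With equal unit variances the posterior mean is (mu + x0)/2, so conditioning on X* = 0 gives roughly -0.25 for group A and +0.25 for group B: a group-blind threshold on X* fails the outcome test even though no one is treated differently.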

Rajiv Sethi:

I think I see how a noise term can cause the equality to fail for marginal individuals, but won't the ranking of expected performance depend on the distributions of u and the merit pdfs? Or is your claim that the ranking is unaffected by distributional details?

William J Carrington:

My working assumption is always that I'm confused (:)), but wouldn't the above inequality hold if, given the other assumptions, it were instead E(rank(g(X))|rank(X*)=x, group=A) < E(rank(g(X))|rank(X*)=x, group=B)? Feel free to just de-confuse me by directing me to the relevant page of your paper/book.

Daniel Greco:

Inferring anything from data on post-enrollment performance strikes me as extremely difficult. Even absent the kind of selection effect you mention, there's the worry that the difficulty of college classes (likely?) varies a lot by field, in ways that are very hard to measure.

Here's a scenario that people (like Richard Sander, or Peter Arcidiacono) who worry about "mismatch" effects associated with affirmative action discuss. Talented minority students with an interest in STEM who are admitted to top universities under affirmative action end up towards the bottom of their very difficult STEM classes, and disproportionately switch out of STEM, towards majors in which they get better grades. They (and the rest of society) might have been better off at less competitive schools where they weren't intimidated out of majoring in STEM. I don't know the extent to which it happens (I know there are studies finding it, but I'm sure it's controversial, and I don't know how large the literature is, or what a really well conducted meta-analysis would find), but suppose it does. How would it show up in data? It would show up in data about differing rates of switching out of STEM by race. But it might very well *not* show up in different GPAs by race. (If minority students end up taking easier classes, their GPAs might end up just as high.) And different rates of switching out of STEM by race are suggestive, but not a smoking gun of racial preferences in admissions. (Maybe it's explained by unwelcoming environments in STEM majors, rather than anything going on in admissions.)

Somewhat ironically, I think all the (good!) reasons why conservatives are generally skeptical of the "disparate impact" doctrine when it comes to antidiscrimination law apply here just as strongly as they apply in other contexts; inferring discrimination from statistics that vary by group (whether race, sex, or anything else) is extremely difficult.

Rajiv Sethi:

Thanks Daniel, you raise a lot of important points.

Columbia used to have a column on the transcript that gave the percentage of A grades in each course, which could be used to address differences in course difficulty and grading standards. The problem is real, which is why I added the qualifier "in any given course" when discussing post-enrollment performance.

Regarding disparate impact, I agree completely and had *exactly* the same thought. Was planning to write a post on it at some point. People with principled objections to the doctrine seem to have adopted it wholesale when evaluating universities.

Did you see Arcidiacono on Glenn Loury's podcast? Worth watching the latest conversation. Some very disturbing anecdotal evidence from admissions files. And the transition out of STEM is a real problem, though not just due to mismatch. UMBC has managed to produce a lot of STEM graduates who may well have fallen through the cracks elsewhere.

Troy Tassier:

I was part of a committee investigating minority student retention in STEM majors. One additional complication we found: majority students tended to enter college with more AP and college-level credits because of their advanced high school offerings. Because of these extra credits, these students had more flexibility to retake a class or to take a lighter course load and remain on track for graduation. Minority students faced a tighter constraint on this dimension and, at least anecdotally, were more likely to switch majors to stay on track for graduation. We didn't have enough, or the right, data to explicitly test this, but if true, simply switching majors out of STEM wouldn't indicate the minority students were less able; instead it would just be another consequence of a staggered start that was difficult to overcome.

Rajiv Sethi:

Were you able to test rates of switching conditional on entering credits or average course load?

Troy Tassier:

No. Unfortunately we didn’t have detailed individual data.Hoping the college might give it to us this year bc I’d like to see the results/effects.

Daniel Greco:

Showing % A in each course on the transcript strikes me as a great innovation; I wish more schools would do it.
