Taking feminist scholarship seriously: beyond the conceptual penis hoax

Remember the stunt pulled by Peter Boghossian and James Lindsay, that embarrassment to the skeptic community which shall forever be known as the conceptual penis hoax? (See also this panel discussion about the incident.) Well, this post isn’t about that, but it is about how to do serious (as opposed to farcical) criticism of feminist philosophy.

You may recall that Boghossian and Lindsay set out to prove that gender studies is a hopeless field of inquiry, condemned to nonsense by way of a priori ideological commitments. But B&L also proudly declared that they had actually never read a paper in gender studies. So, in order to show them and others how this ought properly to be done, I asked a close friend of mine, who works on feminist epistemology and gender studies (the two are closely allied fields), to provide me with a short list of her favorite papers to submit to my critical reading as an outsider. The remainder of this post is an analysis of one such paper, Elizabeth Anderson’s “Uses of value judgments in science: a general argument, with lessons from a case study of feminist research on divorce,” published in the Winter 2004 issue of Hypatia, a leading journal of feminist philosophy. The full paper can be downloaded here.

[Yes, this is a case of n=1, just like B&L’s hoax. But, unlike them, I am not claiming that Anderson’s paper is representative of the field; it would take far too much reading on my part to even approximate what would be necessary to make such a claim. However, while B&L targeted one of the lowest-ranked journals in the field, or close to it, I am picking on a leading journal and a leading author. Also, unlike B&L, I actually put several hours into this and read the thing from top to bottom, annotating it furiously. Enjoy the result.]

Briefly, here is what Anderson set out to do: “The underdetermination argument establishes that scientists may use political values to guide inquiry, without providing criteria for distinguishing legitimate from illegitimate guidance. This paper supplies such criteria. Analysis of the confused arguments against value-laden science reveals the fundamental criterion of illegitimate guidance: when value judgments operate to drive inquiry to a predetermined conclusion. A case study of feminist research on divorce reveals numerous legitimate ways that values can guide science without violating this standard.”

In other words, Anderson, typically for a feminist epistemologist, rejects the entrenched idea (among both scientists and most philosophers of science) that science ought to be conducted in a value-neutral fashion. She acknowledges that this is problematic in some cases, but thinks she has a general answer to the issue, one that clearly separates such problematic cases from those where values legitimately guide scientific research and in fact enhance its results. Notice that Anderson is not talking about values internal to science, such as truth, objectivity, and so on. Rather, she is referring to political values, attempting to defend the claim that, for instance, medical and psychological research carried out from a feminist perspective is better than (allegedly) value-neutral science.

While broadly speaking I actually reject Anderson’s position, no knee-jerk reactions along the lines of “this is sheer nonsense” will do. Anderson is a serious scholar, and some of her arguments are difficult to rebut, though I maintain that this can and should be done. So let’s get into a bit more detail.

Instead of following the progression of the rather substantial paper (which interested readers should consult on their own), let me jump straight to Anderson’s case study, which makes her case rather clearly. The topic is the effects of divorce on the wellbeing of the affected parties (parents as well as children), and she contrasts research carried out by a team led by Abigail Stewart, who published a report back in 1997 entitled “Separating together: how divorce transforms families,” with a few other papers reporting research carried out in what Anderson calls a “traditionalist” framework about the nature of the family (examples of these authors include Barbara Whitehead, George Gilder, and James Wilson).

Anderson provides a detailed comparison between what henceforth I will simply refer to as the feminist vs the traditionalist perspectives, in terms of eight criteria (pp. 12-18 of the paper):

Orientation to background interests
Framing the research questions
Conceiving of the object of inquiry
Deciding what types of data to collect
Data sampling
Data analysis
Deciding when to end an analysis
and Drawing conclusions

Let’s take a brief look at each in turn. In terms of background interests, Anderson claims, reasonably, that traditionalists frame things in terms of their own view of what a family ought to be: “The wife’s role is to be mother to her husband’s children; the father’s role is to be the husband of his children’s mother. According to its proponents, this arrangement is in the best interest of the children, and probably also the parents. Alternative family arrangements are judged progressively worse the further they depart from this ideal.”

The problem with this, as we shall see, is that it automatically orients the researcher toward certain lines of inquiry, mostly in terms of psychologically negative effects on the children, while ignoring or downplaying any positive aspects, not only on the children themselves, but on the mother, for instance. By contrast, “feminists approach divorce with greater ambivalence. Although feminists are critical of the patriarchal family, Stewart’s team was initially unsure how to assess divorce from the standpoint of opposition to sexism.”

Which leads us to the second point: framing the research questions. “Traditionalists, viewing married parents as the ideal, are apt to ask: does divorce have negative effects on children and their parents?” By contrast, “Stewart’s team was skeptical of this approach, on both methodological and normative grounds. Methodologically, it is virtually impossible to distinguish the effects of divorce from the effects of the problems in the marriage that led to divorce … Even when families with divorce are compared with families without divorce, but experiencing similar problems (for example, high spousal conflict), the two types of families always differ in other respects. … Stewart also had normative objections to the traditional research question. Focusing on negative outcomes reduces the possibility of finding positive outcomes from divorce.”

Consider the above carefully. Even though I will ultimately reject Anderson’s (and hence Stewart’s) approach, it is hard to deny that she (and they) make very good points. If we simply read a technical paper, published in a psychology journal, on the negative effects of divorce on children, without being aware of the ideological biases of the authors, we are prone to take the results on board while simply assuming that the research has been done objectively. In reality, though, when it comes to research on politically and socially relevant human issues, there simply is no such thing as ideology-free and “objective” science. In this sense, then, to clearly state that one is carrying out the research from a particular standpoint (feminist, traditionalist, or whateverist) helps the reader better evaluate the results. A disclosure of ideological bias does not, by the way, automatically license a knee-jerk rejection of the findings, precisely because we all have biases, especially when it comes to these sorts of issues.

Third, the conception of the object of inquiry: “The conception of divorce drawn from a clinical perspective focuses on the individual’s problems with an event in the past, stressing its negative aspects. Divorce is conceived in terms of ‘trauma’ and ‘loss’; it is seen as a ‘life stress’ that puts children ‘at risk’ for problems later in life. The phrases in quotations use what is known as ‘thick evaluative concepts’ — concepts that simultaneously express factual and value judgments.”

This is clearly problematic, not just from a feminist perspective. It may very well be that there are negative consequences to divorce, but if one sets up one’s entire inquiry in those terms, then one is guaranteed to find nothing but negative effects. By contrast, Anderson points out that Stewart’s team also worked with a “thick” conception of divorce, but one that was open to the possibility of positive, and not just negative, effects on the children and the mother. (Presumably also on the father, though we are talking about a feminist approach here.)

How does this work? For instance, in the following way: “from the point of view of at least one spouse [and hence not necessarily the woman], the marriage has typically been failing for years before divorce. To them, divorce is not an event, but a long process of coming to grips with that failure. The conception of divorce as a ‘loss’ represents the post-divorce condition as lacking some good that was present prior to the divorce. It fixes attention on the significance of divorce in relation to the past.” Again, it seems to me hard to argue against this broadened conception of divorce and its effects. The perspective adopted by Stewart’s team was one in which divorce was conceived not as the breaking up of a family, but rather as a transformation that ends up separating the parental from the spousal roles. This is of course a perfectly reasonable alternative to the traditionalist view.

Fourth, given the above, what type of data should researchers collect? “Stewart’s team gathered data on subjects’ post-divorce feelings and interpretations of changes they underwent, in addition to reports of more objective phenomena. This provided crucial data confirming the conception of divorce as an opportunity for personal growth. Women especially found this to be so, with 70 percent judging that their personalities had improved since divorce.”

Here, one could reasonably object to the inclusion of subjective first-person reports like the one just described, as they are not as quantifiable and “objective” as, say, statistics about school grades comparing the children of divorced vs non-divorced parents. Fair enough, but as I learned as a biologist, skewing things toward the quantitative often simply means that one ends up measuring what is easily measured, as opposed to what is really interesting or important. Moreover, again, we are talking about human experiences here, so a degree of subjective judgment simply comes along with the subject matter (unlike, say, my research as a biologist on weedy and invasive plants).

In terms of data sampling, Anderson again makes an interesting point: “A sample drawn from psychological clinics [as is standard in traditionalist approaches] will be biased toward those experiencing great difficulties coping with divorce, or misattributing their difficulties to divorce, and against those who find divorce liberating. Wallerstein’s work on divorce has been criticized on this ground. Her error lies not in adopting a value-laden conception of divorce, but in failing to draw a random sample of cases. Stewart’s team, by contrast, drew a less biased sample of cases from the divorce dockets.”

I find this contrast rather illuminating. It is easy to see how an assumption of objectivity on the part of the researchers would keep someone from questioning why the data are drawn from psychological clinics in the first place, a choice that is itself subtly informed by a conception of divorce as a negative event bound to lead to bad psychological outcomes.
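To make the selection-bias point concrete, here is a minimal sketch in Python with entirely made-up numbers (the variable names and distributions are mine, not Stewart’s or Wallerstein’s data): when the probability of entering the sample depends on how badly someone is coping, the sample mean tells you little about divorced people in general.

```python
# Minimal sketch, with made-up numbers, of how a clinic-based sample can bias
# conclusions about divorce outcomes relative to a random sample from the dockets.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of 100,000 divorced adults; "wellbeing_change" is an
# invented post-divorce change in wellbeing (positive = improvement).
wellbeing_change = rng.normal(loc=0.2, scale=1.0, size=100_000)

# Assume people coping badly are far more likely to turn up in a clinic sample.
p_clinic = 1.0 / (1.0 + np.exp(3.0 * wellbeing_change))
clinic_sample = wellbeing_change[rng.random(wellbeing_change.size) < p_clinic]

# A random draw from the divorce dockets is (roughly) representative.
docket_sample = rng.choice(wellbeing_change, size=500, replace=False)

print(f"population mean change: {wellbeing_change.mean():+.2f}")
print(f"clinic sample mean:     {clinic_sample.mean():+.2f}  (divorce looks harmful)")
print(f"docket sample mean:     {docket_sample.mean():+.2f}  (close to the population value)")
```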

Moving on to data analysis, Anderson distinguishes between what in statistics are called main and interaction effects. In my experience, and I find this unfortunate, lots of researchers focus on the “main” effects (notice what they are called!), meaning the average, across-the-board effects of whatever variables they have been studying. This is because the so-called interaction effects (variable 1 x variable 2; variable 1 x variable 3; and so on, up to third- and higher-order interactions, when feasible) are more difficult to interpret, and require very large sample sizes to be studied properly in terms of statistical significance.

But Anderson adds an ideological twist to this general problem: “The decision to focus on main effects, or to look for interaction effects, reflects background values. A main effects analysis accepts the average outcome as representative of the group, discounting individual variation. This makes sense if one believes that a single way of life is best for everyone. But for researchers who doubt this, attention to within-group heterogeneity is imperative.” Indeed: the way you do statistics may reflect your personal ideological biases about the subject matter you are allegedly objectively studying.
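To see the statistical point concretely, here is a toy sketch in Python (invented data and coefficients, loosely echoing the visitation-and-conflict finding quoted in the next passage; this is not Stewart’s actual analysis): a main-effects-only model averages away precisely the pattern that an interaction term reveals.

```python
# Toy illustration, with invented data, of main effects versus interaction effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000

# Hypothetical variables: regular father visitation (0/1), post-divorce parental
# conflict (0/1), and a child wellbeing score in which visitation helps on its
# own but hurts when combined with high conflict (coefficients are made up).
visitation = rng.integers(0, 2, size=n)
conflict = rng.integers(0, 2, size=n)
wellbeing = (0.5 * visitation - 0.4 * conflict
             - 1.0 * visitation * conflict + rng.normal(0.0, 1.0, size=n))
df = pd.DataFrame({"wellbeing": wellbeing, "visitation": visitation, "conflict": conflict})

# Main-effects model: reports only the average effect of each variable.
main_fit = smf.ols("wellbeing ~ visitation + conflict", data=df).fit()

# Interaction model: lets the effect of visitation differ by level of conflict.
inter_fit = smf.ols("wellbeing ~ visitation * conflict", data=df).fit()

print(main_fit.params)   # visitation's average effect looks negligible
print(inter_fit.params)  # visitation helps at low conflict, hurts at high conflict
```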

Next to last: when do we stop with our analysis? “The great temptation is to stop an analysis as soon as it reaches findings pleasing to the researchers, but to continue analyzing displeasing findings in the hope of explaining them away. To be sure, it is almost impossible to accept unwelcome findings at face value. Stewart’s team [for instance] found that some children appeared to suffer from regular visitation by their noncustodial fathers. Unhappy with this result, the team engaged in further analysis and discovered that high levels of post-divorce parental conflict interacted with regular father visitation to produce their finding. For parents still fighting after the divorce, regular visits were the occasion for regular arguments, which the children presumably anticipated with anxiety.”

This is an interesting and novel finding, but it came about precisely because of the ideological biases of the researchers, who were “unhappy” with the prima facie results. Of course Anderson is well aware that this is a slippery slope, but at the outset of her paper she clearly states that one’s own values ought to inform various aspects of one’s research, except the results themselves. To put it differently, one can be as unhappy as one likes, but if a reasonable alternative explanation cannot be found, one still has to accept the verdict of the evidence. This, naturally, is harder to pull off in practice than it sounds in theory, which is one reason I will, below, end up disagreeing with Anderson and, by implication, with the whole feminist epistemological approach.

Finally, what conclusions should researchers draw from their studies? “The main point of divorce research, as of much other research in the social sciences, is to answer evaluative questions on the basis of empirical evidence. Are children better off if parents who want a divorce stay together? What coping strategies make divorce go better or worse for the affected parties? The enterprise of answering these questions on the basis of evidence would make no sense if science were value-neutral in implication — that is, if ethics were science-free. It is not.”

This is a crucial point, and again one on which I find myself in broad agreement with Anderson. While other types of scientific research may be value-neutral (though I do think this is a continuum), research on human subjects, on issues of import to our social policies and moral choices, is inextricably evaluative. There simply is no way to do such research without having ideological biases. Our only options are to hide them and pretend they are not there, or to wear them on our sleeves and make them clear to the world.

Having given Anderson her due, let me explain why I still disagree with the feminist approach to epistemology, and yet do not fall back on the more classical idea that science is value-neutral and ought to be carried out in an unbiased fashion.

Throughout much of the paper, Anderson makes extensive use of Helen Longino’s work on epistemology and the nature of science, particularly her 1990 book, Science as Social Knowledge, which I also highly recommend. (Longino wrote a more recent one, also worth checking out: The Fate of Knowledge. A shorter, accessible overview of her take on the nature of science is her article for the Stanford Encyclopedia of Philosophy, “The social dimensions of scientific knowledge.”)

But Longino’s view of science isn’t quite feminist in the sense advocated by Anderson. Indeed, it is much closer to a school of thought often referred to as “perspectivism” in philosophy of science. Longino takes seriously the above-mentioned idea that scientists are never objective and value-free, for the simple reason that they are human beings, and modern cognitive science shows us that we are all (some more, some less) biased, consciously as well as unconsciously. But the answer provided by Longino is a bit more nuanced than simply “let’s do overtly feminist science and be done with it.” Rather, the idea is that quasi-objectivity is a property not of individual scientists (or even groups of scientists) but of science as a dynamic process.

This means not that scientists should approach their research in expressly biased terms, but rather that we should ensure that scientific research is carried out by the broadest possible set of individuals, including as many personal, ideological, political, and even religious perspectives as possible. Why? Because those biases will tend, in the long run and on average, to cancel each other out.

We can’t have research on divorce done by feminists only, because that would bias things toward one particular conception of family and divorce. But it can’t be done just by researchers who embrace a more traditionalist view of those topics either. Instead, let’s have those as well as many other perspectives represented by different researchers and schools of thought, which will then correct each other’s biases during the pre- and especially post-publication process of peer review. That’s how quasi-objectivity emerges: as the outcome of a process of social construction of science (in the benign, not postmodernist, sense of the word). This is not ideal, but it is by far the most realistic solution, given that science is done by human beings.

As you can see, then, it took me several hours of study of a single paper, and about 3,000 words of explanation here, to properly assess one particular study in feminist epistemology. That’s why the penis hoax thing is a joke, and it’s a joke on those skeptics who embraced it, not on feminist philosophy or gender studies. If you want to criticize academic scholarship you have to engage with it, seriously and charitably. And if you want to go from the critique of a single paper to that of an entire field, then you ought (ethically!) to devote hundreds or thousands of hours to it. Or the joke is on you.

52 thoughts on “Taking feminist scholarship seriously: beyond the conceptual penis hoax”

  1. Massimo Post author

    Coel,

    Of course string theory isn’t an “ideology” as usually construed. But neither is the thinking that the patriarchal family is problematic. “Feminism” is an ideology, note the -ism. But what about eugenics? Science or (pernicious) ideology?

    My point was simply that when the data is irrelevant, then people tend to take positions based on personal preferences and other non-empirical criteria, and then turn around and defend them to the teeth. That begins to seem pretty “ideological” to me.


  2. saphsin

    I can see how having your own space can be used to isolate yourself from other perspectives and criticism; it can help engender a cult.

    But a tendency is not a necessary path. I don’t necessarily see anything wrong with having a journal or even a club for sub-disciplines. People who work on certain specific problems find it helpful to have an outlet that expresses certain views. Is it wrong for Massimo to have a Stoic meet-up, for instance?

    As long as we help foster a public commitment for deliberative exchange among different perspectives, I don’t see the problem.


  3. Coel

    Hi Massimo,

    But what about eugenics? Science or (pernicious) ideology?

    Eugenics has two components:

    (1) We can use selective breeding to attain desired traits. This is the science bit. We know that it’s true and that it works because we have used it successfully for farmed animals for eons.

    (2) We should use selective breeding to attain desired traits. This is not a scientific statement (science does not do values or oughts, people do); instead it’s a political ideology.


  4. Robin Herbert

    There appears to be an ideology in science toward strong biological determinism, and also toward the “there is nothing special about humans” attitude.

    There has also been an ideological claim in psychology that there are no real male bisexuals, which has skewed research for decades.

    There would be something to be said for being up front about ideological commitments if this were part of a process of avoiding this skewing effect, or at least of providing context.


  5. Massimo Malagoli

    Coel,

    I am afraid your disentangling of science and ideology for eugenics does not really work. In particular ideology was heavily intermingled with science in your point (1), namely in the identification of which traits to selectively breed for. The belief that you could selectively breed for traits such as personal character, IQ, criminal tendencies, and so on was based on an ideological reading of the science, and led to such things as the sterilization of physically and mentally disabled people and criminals, not to mention the project to exterminate entire populations.


  6. Coel

    Hi Massimo Malagoli,

    The belief that you could selectively breed for traits such as personal character, IQ, criminal tendencies, and so on was based on an ideological reading of the science, …

    Not really. It is indeed true that one could selectively breed for such traits. You don’t have to read science “ideologically” to reach that conclusion; all our experience with farm animals and domestic animals such as dogs tells us so (different breeds of dogs are very different in character, temperament, IQ, etc.).

    … and led to such things …

    No, the fact that you can treat humans as farm animals doesn’t imply that you should treat humans as farm animals and doesn’t automatically lead to people wanting to arrange society along those lines. Those are non sequiturs. (And if people really wanted to treat humans as farm animals they wouldn’t need the help of modern science to do it, we’ve known how to farm for thousands of years.)


  7. Disagreeable Me (@Disagreeable_I)

    Hi Massimo Malagoli,

    I am afraid your disentangling of science and ideology for eugenics does not really work. In particular ideology was heavily intermingled with science in your point (1), namely in the identification of which traits to selectively breed for.

    I am afraid that this point against Coel fails, because he never claimed that the identification of which traits to selectively breed for was a scientific issue. On my interpretation, he only claimed that the science indicates that we can breed for desirable traits, given that those traits are identified. I’m sure he would agree that the identification of such traits is an ideological or political issue.


  8. Massimo Post author

    Coel,

    The problem is that you are making all sorts of entirely unwarranted and highly questionable assumptions about the narrow-sense heritability, and therefore selectability, of complex human behaviors. That’s bad science.


  9. Coel

    Hi Massimo,

    The problem is that you are making all sorts of entirely unwarranted and highly questionable assumptions about the narrow-sense heritability, and therefore selectability, of complex human behaviors. That’s bad science.

    Entirely unwarranted? And yet things like twin studies tell us that the narrow-sense heritability of complex traits is high. As a rule of thumb, of the variance of a typical trait within a typical population, half-ish is genetic and half-ish everything else. Estimates of heritability of adult IQ are typically around 0.6 to 0.7. (e.g. link)


  10. Massimo Post author

    No Coel,

    Twin studies don’t estimate narrow-sense heritability, but broad-sense heritability, which in quantitative genetics is the wrong measure, since it does not quantify potential response to selection. There are only two reliable ways of estimating narrow-sense heritability: complex breeding designs (unethical, unfeasible in humans) and actual multi-generation selection experiments (unethical, unfeasible in humans).
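    To spell the distinction out (a quick sketch in standard quantitative-genetics notation, ignoring gene-environment interaction and covariance):

    ```latex
    % Phenotypic variance decomposition:
    %   V_P = V_G + V_E,  with  V_G = V_A + V_D + V_I
    % (additive, dominance, and epistatic components of the genetic variance).
    \[
    H^2 = \frac{V_G}{V_P} \quad \text{(broad-sense heritability)},
    \qquad
    h^2 = \frac{V_A}{V_P} \quad \text{(narrow-sense heritability)}.
    \]
    % Only the additive component predicts the response R to selection of strength S
    % (the breeder's equation), which is why h^2, not H^2, is the relevant measure here:
    \[
    R = h^2 \, S
    \]
    ```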

    As Wittgenstein put it, admittedly in a different context: Whereof one cannot speak thereof one must be silent.


  11. Daniel Kaufman

    No one said that SJWs have overrun every corner of academia. Business schools tend to be conservative. But for some reason, you continue to argue with people who are not here.

    Pretty weird example you chose, too. The response to that journal and its BLM issue, not to mention the rounds of apologies and rending of garments that followed, certainly suggests that the forces of Social Justice are powerful and feared within the liberal arts and some of the social sciences.


  12. SocraticGadfly

    Dan: NOT powerful enough to influence the journal’s original piece.

    And, per earlier exchange, your last comment I think illustrates the general correctness of what I said.

    And, that’s why, even before you went down this road, with Coel already being on steroids (hope I can say that much) on this post, I had indicated I was going to duck out.


  13. Daniel Kaufman

    Socratic: I have no idea what you are talking about in your second paragraph or in your comparison with Coel.
    And if you know anything about the publishing of the original piece, you’ll know it had nothing to do with insufficient sympathy to social justice on the part of the editors.

    As for ducking out, I’m sorry that it causes you so much distress to hear a liberal lament the illiberal trends within the liberal arts and humanities. Nonetheless, this is my career, into which I’ve sunk several decades of my life, and I am going to speak the truth about it.


  14. Coel

    Hi Massimo,

    There are only two reliable ways of estimating narrow-sense heritability: complex breeding designs (unethical, unfeasible in humans) and actual multi-generation selection experiments (unethical, unfeasible in humans).

    So the only fully reliable way of knowing whether selective breeding would work in humans is to try it and see (which of course we don’t want to do). OK, granted, but all the evidence that we do have suggests it would work. There are estimates of narrow-sense heritability in the literature. For example, Howarth et al (Molecular Psychiatry, 2009, 1-9) quote an estimate of “narrow sense” (additive) heritability for IQ in young adulthood as being in the range 0.58 to 0.73 (their Table 3). (Even half that number or a quarter of it would be ample for selective breeding to work over sufficient generations.)

    I also note that whenever you comment on such issues here, it is always in the direction of critiquing any suggestion that genes explain some of the variance in human behavioural traits, it is never critiquing comments that discount genetic effects entirely. It’s as though you want to default to the position that the genetic contribution is nil unless proven otherwise, coupled with a fairly high threshold of proof. Isn’t that an ideological default?


  15. Massimo Post author

    Coel,

    As far as I know, what Howarth et al. claim is simply not possible. If you want to link the article I might be able to take a look (busy weekend).

    But:

    “all the evidence that we do have suggests it would work.”

    No, all the evidence doesn’t point that way because there simply is no evidence on the matter.

    “whenever you comment on such issues here, it is always in the direction of critiquing any suggestion that genes explain some of the variance in human behavioural traits, it is never critiquing comments that discount genetic effects entirely. It’s as though you want to default to the position that the genetic contribution is nil unless proven otherwise, coupled with a fairly high threshold of proof”

    I don’t know in which parallel universe you live, Coel, but no. That is a blatant strawman. I wrote a damn book on gene-environment interactions, including a chapter on humans, and I would never suggest what you think I think: http://tinyurl.com/ybuc6goc


  16. Robin Herbert

    I have never seen any comments here that entirely discount genetic effects on human behaviour.

    So maybe the reason Massimo has not critiqued any such comments is that there haven’t been any.


  17. Coel

    Some additional points:

    First, we know that human-line intelligence (or at least its proxy, cranium size) has increased markedly and progressively over several million years. This means that there must have been substantial narrow-sense heritability over that time. It is unlikely that, for unknown reasons, this has all vanished in the recent past.

    Second point: complex behavioural traits (such as cognitive ability) must necessarily be affected by hundreds of genes or more (since you can’t encode complex traits in a few genes). Indeed, GWAS have already shown that many hundreds of genes, at a minimum, are involved in IQ.

    Now, it is very unlikely that for multiple gene traits narrow-sense (“additive”) heritability is vastly lower than broad-sense heritability. If a trait is controlled by one or a few genes, then yes it is plausible that dominance relations play a large role, but when it comes to hundreds of genes strewn across the genome it gets implausible that the broad-sense heritability is composed only of dominance relations and epistasis, and that narrow-sense heritability is insignificant. Thus it is likely that the best twin studies of complex traits (involving hundreds of genes in hundreds of different genomes) are giving a fair indication of narrow-sense heritability.

    Note that, for example, an article in Nature Reviews Genetics (13, 640-653 2012) says you can estimate narrow-sense heritability (VA) from twin studies:

    “In extended-twin-family designs, the information from additional types of family relations together with the information from twins allows for estimating VA, VD, VC and VE simultaneously.”

    This was what the Howarth et al article mentioned above was doing, when it estimated narrow-sense (additive) heritability for IQ in young adulthood as being in the range 0.58 to 0.73.

    Lastly, the biggest meta-analysis of twin studies was published in 2015 in Nature Genetics (47, 702–709). It meta-analyses: “17,804 traits from 2,748 publications including 14,558,903 partly dependent twin pairs, virtually all published twin studies of complex traits”.

    It says that for complex traits: “… across all traits the reported heritability is 49%”. It then says: “For a majority (69%) of traits, the observed twin correlations are consistent with a simple and parsimonious model where twin resemblance is solely due to additive genetic variation. The data are inconsistent with substantial influences from shared environment or non-additive genetic variation”.

    Or, paraphrasing, for most complex traits (and in line with the above argument) broad-sense heritability and narrow-sense heritability are pretty much the same thing.

    Overall conclusion: the variation in “human mind” traits really does have a strong genetic element (the rule-of-thumb of half genetic, half environmental is a good one), and this does extend to things like whether you vote and who you vote for. Popular culture and swathes of the social sciences really should accept this.


  18. Disagreeable Me (@Disagreeable_I)

    Hi Coel,

    Robin said:

    I have never seen any comments here that entirely discount genetic effects on human behaviour.

    And he’s right. You’re overstating your case. What you mean to say, I think, is that Massimo’s comments seem to always be on the side of dismissing specific claims regarding the genetic contributions to behaviour, and never on the side of promoting specific claims regarding genetic contributions to behaviour.


  19. Massimo Post author

    DM,

    “Massimo’s comments seem to always be on the side of dismissing specific claims regarding the genetic contributions to behaviour, and never on the side of promoting specific claims regarding genetic contributions to behaviour.”

    That’s right. The reason is that, while it is experimentally more tractable to study environmental influences, it is mighty hard to prove genetic effects on complex human behaviors. So I think the burden of proof lies that way.

    Coel,

    “we know that human-line intelligence (or at least its proxy, cranium size) has increased markedly and progressively over several million years”

    Irrelevant to the topic at hand. Nobody is arguing that “human intelligence” is not influenced by human genetic makeup. The issue concerns specific, highly culturally modified, human behaviors.

    “complex behavioural traits (such as cognitive ability) must necessarily be affected by hundreds of genes or more”

    Why? This sounds like an a priori claim without empirical foundation. Yes, GWAS has shown hundreds of genes involved in “IQ,” but: (i) each gene explained a tiny portion of the variance, and overall that particular study didn’t account for more than a few percentage points, all effects combined, which goes straight against your contention; (ii) “IQ” is a highly artificial measure of it’s-not-clear-what, exactly.

    “it is very unlikely that for multiple gene traits narrow-sense (“additive”) heritability is vastly lower than broad-sense heritability”

    False. In every animal or plant system studied so far narrow-sense heritabilities have been found to be lower. The more environments the study considers for selection studies, the greater the discrepancy between the two.

    “Nature Reviews Genetics (13, 640-653 2012) says you can estimate narrow-sense heritability (VA) from twin studies”

    It’s complicated, but they are using an indirect statistical model we have little reason to believe yields reliable estimates. Indeed, even direct breeding programs are known to significantly overestimate narrow sense heritability when compared to selection experiments. The latter remain the most reliable approach, by far. Too bad they can’t be done on humans.

    “the biggest meta-analysis of twin studies was published in 2015 in Nature Genetics”

    Irrelevant, as it is based on broad-sense heritability. When the authors say that their results are “consistent” with the assumptions they make about narrow-sense heritability, their statement is simply laughable, in part for the reasons explained above.

