Where Do We Go Next? — II

[for a brief explanation of this ongoing series, as well as a full table of contents, go here]

Yet another challenge: the rise of the Digital Humanities

A very different sort of challenge to the traditional conception of philosophical inquiry comes from the idea of the so-called “Digital Humanities” (DH). This is a complex issue, which includes both administrative pressures on academic departments to “perform” according to easily quantifiable measures and a broader cultural zeitgeist that tends to see value only in activities that are quantitative and look sciency (the broader issue of scientism [3]). I will not comment on either of these aspects here. Instead, I will focus on some basic features of the DH movement (yes, it is another “movement”) and briefly explore its consequences for academic philosophy.

One of the most vocal advocates of DH in philosophy is Peter Bradley, who has expressed his disappointment that too few philosophers attend THATCamp, The Humanities and Technology Camp, which, as its web site explains, “is an open, inexpensive meeting where humanists and technologists of all skill levels learn and build together in sessions proposed on the spot.” [4] This is odd because, as Bradley points out [5], philosophers have been developing digital tools for some time, including the highly successful PhilPapers [6], an increasingly popular database of published and about to be published technical papers and books in philosophy; the associated PhilJobs [7], which is rapidly becoming the main, if not the only, source one needs to find an academic job in philosophy; and a number of others.

Despite this, philosophers make surprisingly little use of computational tools, such as Google’s Ngram Viewer [8] (more on this below), which Bradley claims is a shame. As an example of its utility, he ran a quick search on the occurrence of the words “Hobbes,” “Locke,” and “Rousseau,” and obtained a diagram clearly showing their “relative importance” from 1900 onwards, as measured by the appearance of these philosophers’ names in books that have been digitized by Google. The result was that Locke and Rousseau have always battled it out while enjoying a significant advantage over Hobbes, and further that Rousseau was ahead of his English rival between the 1920s and ‘50s, but the opposite has been true since the late ‘70s. Now, I don’t know whether scholars of early modern philosophy would agree with such results, but I decided to play with Ngrams myself to get a taste of the approach.
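The kind of comparison the Ngram Viewer runs can be sketched in a few lines of code. The frequencies below are hypothetical stand-ins rather than real Google Books data, but the logic — find, for each year, which name appears most often relative to all digitized text — is the same:

```python
# Hypothetical per-year relative frequencies for each philosopher's name.
# Real Ngram data would come from Google's digitized book corpus.
freqs = {
    "Hobbes":   {1920: 1.1e-5, 1950: 1.3e-5, 1980: 1.6e-5},
    "Locke":    {1920: 2.0e-5, 1950: 2.4e-5, 1980: 3.1e-5},
    "Rousseau": {1920: 2.3e-5, 1950: 2.6e-5, 1980: 2.8e-5},
}

def leader_by_year(freqs):
    """For each year, return the name with the highest relative frequency."""
    years = sorted(next(iter(freqs.values())))
    return {y: max(freqs, key=lambda name: freqs[name][y]) for y in years}

print(leader_by_year(freqs))
# With these made-up numbers, Rousseau leads early on and Locke leads later,
# mirroring the pattern Bradley describes.
```
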

I must say, it is rather addictive, and sometimes really satisfying. For instance, a (non-philosophical) comparison of Beethoven, Mozart, the Beatles and the Rolling Stones led to precisely the outcome I expected: Beethoven and Mozart are between two and ten times more “important” than the Beatles or the Rolling Stones, with Beethoven usually in the lead (except toward the very end of the 20th century), and the Beatles beating the Stones by a comfortable margin (Fig. 6). (Incidentally, Britney Spears barely made an appearance in a 2000-2008 search, and was almost 20 times less popular than Beethoven in 2008 [9]). Of course, it is more than a little debatable whether a popularity contest reflected in an indiscriminate collection of books is a better assessment of these philosophers or musicians than the one that comes out of the technical literature in either field. Indeed, I’m not even sure whether the comparison between Ms. Spears and Beethoven is meaningful at all, on musical grounds. Also, as far as the philosophical example produced by Bradley is concerned, shouldn’t we at the least distinguish between the recurrence of certain names in philosophy books, books about politics, and books for a lay audience? It’s hard to imagine that they should all be counted equally, or subsumed into a single broad category. [10]


Despite these reservations, just like the diagrams on the relative influence of philosophers I presented in Chapter 2, this data ought to provide fun food for thought at the least for introductory courses in philosophy (or music), and it may — when done more systematically and in a sufficiently sophisticated manner — present something worth discussing even for professionals in the field. And the data certainly shouldn’t be dismissed just because it is — god forbid! — quantitative. As in the case of XPhi discussed above, the proof is in the pudding, and the burden of evidence at the moment is on the shoulders of supporters of the Digital Humanities. Bradley, for instance, does suggest a number of other applications, such as pulling journal abstracts from the now increasingly available RSS feeds and running searches to identify trends within a given discipline or subfield of study. This actually sounds interesting to me, and I’m looking forward to seeing the results, though one also needs to be mindful that these exercises can all too easily become much more akin to doing sociology of philosophy rather than philosophy itself (exactly the same point made earlier about XPhi).
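Bradley's RSS suggestion is straightforward to sketch. The feed below is an inline, made-up example (a real script would fetch actual journal feeds over the network); the point is simply counting how many abstracts mention each keyword of interest:

```python
# Sketch of trend-spotting over journal abstracts pulled from an RSS feed.
# The feed content here is invented for illustration.
import xml.etree.ElementTree as ET
from collections import Counter

rss = """<rss><channel>
  <item><description>An experimental study of moral intuitions.</description></item>
  <item><description>Grounding and metaphysical dependence.</description></item>
  <item><description>Experimental philosophy and the expertise defense.</description></item>
</channel></rss>"""

keywords = ["experimental", "grounding", "intuitions"]

def keyword_trends(rss_text, keywords):
    """Count how many abstracts mention each keyword (case-insensitive)."""
    root = ET.fromstring(rss_text)
    abstracts = [d.text.lower() for d in root.iter("description")]
    return Counter({k: sum(k in a for a in abstracts) for k in keywords})

print(keyword_trends(rss, keywords))
```

Run over a discipline's feeds month after month, counts like these would begin to show which topics are rising or fading — which is, of course, sociology of philosophy rather than philosophy itself.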

Lisa Spiro built on Bradley’s points [11], noticing that by 2013 the National Endowment for the Humanities Office of Digital Humanities had awarded only five grants to philosophers (four of them to the same person!), and that the American Philosophical Association meeting that year featured only two sessions on DH, compared to 43 at the American Historical Association and 66 at the Modern Language Association meetings (although, as the author notes, the latter two professional societies are much larger than the APA). Even so, as Spiro herself and several commenters on her article point out, this may be yet another case of philosophers engaging in hyper-criticism of their own discipline (see Chapter 1) while not recognizing their achievements. Besides the already noted PhilPapers, PhilJobs, etc., philosophy can boast one of the very first online open access journals (The Philosopher’s Imprint [12]), the first and only philosophy of biology online open access journal (Philosophy & Theory in Biology [13]), and what I think are by far the best online, freely available scholarly encyclopedias of any field, the Stanford Encyclopedia of Philosophy and the Internet Encyclopedia of Philosophy [14], the quality of whose entries is so high that I regularly use them as either an entry point into a field or topic different from my own or as a quick reminder of where things stand on a given issue. The SEP, incidentally, actually predates Wikipedia!

While a case can be made that philosophers “went digital” before it was cool, and that there is not much reason to think they’ll retreat or disengage any time soon, it is worth broadening the discussion a bit, and asking ourselves what basic arguments for and against the whole DH approach have been advanced so far. As in the case of XPhi, the literature is already large, and getting larger by the month. Nonetheless, here is my preliminary attempt at summarizing what some of the defenders and critics of DH have to say at a very general level of discourse.

Just like in any discussion of “the old fashioned ways” vs the “new and exciting path to the future,” there is hype and there is curmudgeonly resistance. An example of the former — in the allied field of literary criticism — is perhaps an article by Bill Benzon (2014), which begins by boldly stating that “digital criticism is the only game that’s producing anything really new in literary criticism.” The obvious retort, however, is that new may or may not have anything at all to do with good.

The standard example mentioned in this context is the work of Stanford University’s Franco Moretti, a champion of heavily data-based, so-called “distant reading.” The idea, which can easily be transferred to philosophy (though, to my knowledge, it has not been yet), is that instead of focusing on individual books (classical, or “close,” reading), one can analyze hundreds or thousands of books at the same time, searching for patterns by using the above mentioned Ngram Viewer or similar, more sophisticated, tools. It seems, however, that this cannot possibly be meant to replace, but rather to complement, the classical approach, unless one seriously wants to suggest that we can understand Plato without reading a single one of his dialogues, for instance. Indeed, distant “reading” is really a misnomer, as no reading is actually involved, and the term may lead to unnecessarily confrontational attitudes. The sort of questions one can ask using massive databases is actually significantly different from the “classic” questions of concern to literary critics, philosophers, and other humanists. Sometimes these new questions will indeed nicely complement and integrate the classical approach, addressing the same concerns from a different standpoint, but in other cases they will simply constitute a change of subject matter (which is not necessarily a bad thing, but does need to be acknowledged as such).

Data-based techniques can even be applied to single works of literature, as shown by Moretti’s “abstract” reconstruction of the relationships among the characters of Hamlet. The issue is whether a professional literary critic will learn something new from the exercise. Is it surprising, for instance, that Hamlet emerges as the central figure in the diagram (and the play)? Or that he is very closely connected to the Ghost, Horatio, and Claudius, while at the same time relating only indirectly to, say, Reynaldo? I’m no Shakespearean scholar, so I will leave that judgment to the pertinent epistemic community.
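The mechanics behind a Moretti-style character network are simple enough to sketch. The edge list below is a small hypothetical subset of who speaks with whom in Hamlet, not Moretti's actual data; counting each character's direct connections (degree centrality) is what makes Hamlet "emerge" as the hub:

```python
# A toy character network: nodes are characters, edges link characters who
# share dialogue. This is an illustrative subset, not Moretti's data set.
edges = [
    ("Hamlet", "Ghost"), ("Hamlet", "Horatio"), ("Hamlet", "Claudius"),
    ("Hamlet", "Gertrude"), ("Hamlet", "Ophelia"), ("Claudius", "Gertrude"),
    ("Polonius", "Reynaldo"), ("Polonius", "Ophelia"), ("Horatio", "Ghost"),
]

def degree_centrality(edges):
    """Number of direct connections per character, highest first."""
    degree = {}
    for a, b in edges:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    return sorted(degree.items(), key=lambda kv: -kv[1])

print(degree_centrality(edges))
# Hamlet unsurprisingly tops the list; Reynaldo sits at the periphery.
```
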

Regardless, Benzon makes important points when he places the rise of distant reading in the context of the recent history of literary criticism. To begin with, “reading” in this sense is actually a technical term, referring to explaining the text. And the field has seen a number of more or less radical moves in this respect throughout the second half of the 20th century and beyond. Just think of the so-called New Critics of the post-WWII period, defending the autonomy of the text and the lack of need to know anything about either the author or the cultural milieu in which she wrote the book. And then we have the infamous “French invasion” of American literary criticism, which took place at a crucial 1966 conference on structuralism in Baltimore. Similar considerations have been made concerning the split between analytic and continental philosophy throughout the 20th century, or the rise of postmodernism and the ensuing “science wars” of the 1990s (Chapter 2). Indeed, the parallels between the recent history of philosophy and literary criticism as academic fields are closer than one might expect. Just as philosophers have gone through a “naturalistic turn” (Chapter 3), to the point that many of us nowadays simply wouldn’t even think of ignoring much of what is done in the natural sciences, especially physics, biology and neuroscience, so too a number of literary critics have embraced notions from cognitive science and evolutionary psychology — as problematic as this move sometimes actually is [15].

An entirely different take on the Digital Humanities is the one adopted, for instance, by Adam Kirsch (2014) [16]: at least part of the problem with DH is that the concept seems rather vague. Kirsch points out that plenty of DH conferences, talks and papers are — symptomatically — devoted to that very question (just as, I cannot resist noting, in the parallel case of experimental philosophy). Is DH, then, just a general umbrella for talking about the impact of computer technologies on the practice of the humanities? That seems too vague, and at any rate, philosophy is doing rather well from that perspective, as we have already seen. Or is it more specifically the use of large amounts of data to tackle questions of concern to humanists? In that respect philosophy may indeed be behind, but it isn’t at all clear whether analyses of large data sets on, say, the recurrence of words or names in philosophical works are going to be revolutionary (I doubt it), or just one more tool in the toolbox of philosophical inquiry (which seems more sensible).

Kirsch’s criticism is rooted in his reaction to claims by, for instance, the authors of the Digital_Humanities “manifesto” (Burdick et al. 2012): “We live in one of those rare moments of opportunity for the humanities, not unlike other great eras of cultural-historical transformation such as the shift from the scroll to the codex, the invention of movable type, the encounter with the New World, and the Industrial Revolution.” It is rather difficult to refrain from dismissing this sort of grandiosity as hype, which sometimes becomes worrisome, as in the following bit from the same book: “the 8-page essay and the 25-page research paper will have to make room for the game design, the multi-player narrative, the video mash-up, the online exhibit and other new forms and formats as pedagogical exercises.” If by “making room” the authors mean replace, then I’m not at all sure this is something desirable.

And just in case you think this is unrepresentative cherry picking, here is another indicative example uncovered by Kirsch, this time from Erez Aiden and Jean-Baptiste Michel (2013), the very creators of the Ngram Viewer: “Its consequences will transform how we look at ourselves. … Big data is going to change the humanities [and] transform the social sciences.” And yet, the best example these authors were able to provide to back their claim was a demonstration that the names of certain painters (e.g., Marc Chagall) disappeared from German books during the Nazi period — a phenomenon well known to historians of art and referred to as the case of the “degenerate art” (Peters 2014). Indeed, it is the very fact that this was common knowledge that led Aiden and Michel to run their Ngram search in the first place.

Kirsch also takes on the above mentioned Moretti as a further case in point, and particularly his “Style, Inc.: Reflections on 7,000 Titles” (Moretti 2009). There the author practices data analysis on 7,000 novels published in the UK between 1740 and 1850, looking for patterns. One of the major findings is that during that period book titles evolved from mini-summaries of the subject matter to succinct, reader-enticing short phrases. Which any serious student of British literature would have been able to tell you on the basis of nothing more than her scholarly acquaintance (“close” reading) with that body of work. This by no means shows that DH approaches in general, or even distant reading in particular, are useless, only that the trumpet’s volume ought perhaps to be turned a few notches down, and that DH practitioners need to provide the rest of us with a few convincing examples of truly innovative work leading to new insights, rather than exercises in the elucidation of the obvious.

Here is another example of DH hype, this time specifically about philosophy: Stephen Ramsay and Geoffrey Rockwell (2013) write in their “Developing Things: Notes toward an Epistemology of Building in the Digital Humanities”: “Reading Foucault and applying his theoretical framework can take months or years of application. A web-based text analysis tool could apply its theoretical position in seconds.” As Kirsch drily notes, the issue is to understand what Foucault is saying, which is guaranteed to take far more than seconds, as anyone even superficially familiar with his writings will readily testify.

In general, I think Kirsch hits the nail on the head when he points out that there are limits to quantification, and in particular that a rush to quantify often means that one tends to tackle whatever it is easy to quantify and ignore the rest [17]. But much humanistic, and philosophical, work is inherently qualitative, and simply doesn’t lend itself to statistical summaries such as word counts and number of citations. The latter can be done, of course, but often misses the point. And remarking on this, as Kirsch rightly puts it, is “not Luddism; it is intellectual responsibility.”

Another notable critic of DH is Stephen Marche (2012) who, somewhat predictably at this point, again takes on Moretti’s distant reading approach. Marche does acknowledge that data mining in the context of distant reading is “potentially transformative,” but he suggests that so far at the least this potential has resulted in a shift in attitude more than in the production of actually novel insights into literary criticism. He objects to the “distant” modifier in distant reading, claiming that: “Literature cannot meaningfully be treated as data. The problem is essential rather than superficial: literature is not data. Literature is the opposite of data.” Well, not so fast. I don’t see why it can’t be both (and the same goes for philosophy, of course), but I do agree that the burden of evidence rests on those claiming that the old ways of doing things have been superseded. Then again, critics like Marche virtually shoot themselves in the foot when they go on to make barely sensible statements like this: “Algorithms are inherently fascistic, because they give the comforting illusion of an alterity to human affairs.” No, algorithms are not inherently fascistic, whatever that means. They are simply procedures that may or may not be relevant to a given task. And that’s where the discussion should squarely be focused.

There are also, thankfully, moderate voices in this debate, for instance that of Ben Merriman (2015), who positively reviewed two of Moretti’s books (together with Jockers’ Macroanalysis: Digital Methods and Literary History and Text Analysis with R for Students of Literature). Merriman observes that, at the least at the moment, a lot of work in the Digital Humanities is driven by the availability of new tools, and that the questions themselves remain recognizably humanistic. I don’t think this is a bad idea, and it finds parallels in science: the invention of the electron microscope (or, more recently, of fMRI scanning of the brain), for instance, initially generated a cottage industry of clearly tool-oriented research. But there is no question that electron microscopy (and fMRI scanning) did contribute substantially to the advancement of structural biology (and brain science).

Merriman points out that Moretti and Jockers are more ambitious, explicitly aiming at setting a new agenda for their field, radically altering the kind of questions one asks in humanities scholarship. Some of the examples provided do sound genuinely interesting, if not necessarily earth-shattering: distant reading allows us to study long-term patterns of change and stability in literary genres, for instance, or to arrive at surprisingly simple taxonomies of, say, types of novels (apparently, they all pretty much fall into just six different structural kinds). Some of this work, again, will confirm and expand on what experts in the field already know; in other cases it may provide new insights that in turn will spur new classical scholarship. Merriman refers to the results achieved by DH so far as “mixed,” which seems to me a fair assessment, but not one on the basis of which we are in a position to dismiss the whole effort as mere scientistic hubris, at the least not yet.

One interesting example of a novel result is Moretti’s claim that he has an explanation for why Conan Doyle’s mystery novels have had such staying power, despite the author having plenty of vigorous competition at the time. The discovery is that mystery novels can be analyzed in terms of how the authors handle the clues to the mystery. Conan Doyle and other successful writers of the genre all have something in common: they make crucial clues consistently visible and available to their readers, thereby drawing them into the narrative as more than just passive recipients of plot twists and turns.

Merriman, however, laments that social scientists and statisticians don’t seem to have taken notice, thus far, of the onset of DH, which is problematic because its current practitioners sometimes mishandle their new tools — for instance giving undue weight to so-called significance values of statistical tests, rather than to the much more informative effect sizes (Henson and Smith 2000; Nakagawa and Cuthill 2007) — a mistake that a more seasoned analyst of quantitative data would not make. It is for this reason, in fact, that one of the books reviewed by Merriman is a how-to manual for aspiring DH practitioners. Even so, more cross-disciplinary efforts would likely be beneficial to the whole endeavor, both in literary criticism and in other fields of the humanities, including philosophy.
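The significance-versus-effect-size point can be made concrete with a toy calculation. With a large enough sample, even a trivially small difference becomes highly "significant"; the summary statistics below are invented for illustration, and the test is a plain two-sample z-test using only the standard library:

```python
import math

def z_test_p(mean1, mean2, sd, n):
    """Two-sided p-value for a difference of two means (equal sd, group size n)."""
    z = (mean1 - mean2) / (sd * math.sqrt(2 / n))
    # erfc gives the two-sided tail probability without catastrophic cancellation
    return math.erfc(abs(z) / math.sqrt(2))

def cohens_d(mean1, mean2, sd):
    """Standardized effect size: the difference in units of standard deviation."""
    return (mean1 - mean2) / sd

# A tiny difference (one twentieth of a standard deviation) on a huge sample:
p = z_test_p(100.5, 100.0, sd=10.0, n=100_000)
d = cohens_d(100.5, 100.0, sd=10.0)
print(f"p = {p:.1e}, Cohen's d = {d:.2f}")
# The p-value is vanishingly small, yet d = 0.05 -- negligible in practice.
```

A reader shown only the p-value would conclude the finding is momentous; the effect size reveals it is barely there, which is exactly the mistake Merriman worries about.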

Speaking of the latter, distant reading is not the only approach that legitimately qualifies as an exercise in the Digital Humanities, and an interesting paper by Goulet (2013) is a good example of the potential value of DH for scholarship in philosophy. The author presents some preliminary analyses of data from a database of ancient Western philosophers, spanning the range from the 6th Century BCE to the 6th Century CE. The survey concerns about 3,000 philosophers, confirming some well known facts, as well as providing us with novel insights into that crucial period of the history of philosophy. For instance, it turns out that about 3.5% of the listed philosophers were women — a small but not insignificant proportion of the total. Interestingly, most of these women were associated with Epicurus’ Garden or with the Stoics of Imperial Rome. Goulet was able to identify a whopping 33 philosophical schools in antiquity, but also to show quantitatively that just four played a dominant role: the Academics-Platonists (20% of total number of philosophers), the Stoics (12%), the Epicureans (8%), and the Aristotelian-Peripatetics (6%), although he notes an additional early peak for the Pythagoreans (13%), whose influence rapidly waned after the 4th Century BCE. Goulet is able to glean a wealth of additional information from the database, information that I would think from now on ought to be part of any serious scholarly discussion of ancient Greco-Roman philosophy.

So, will the DH revolutionize the way we do philosophy? I doubt it. Will they provide additional tools to pursue philosophical scholarship, perhaps together with some versions of XPhi? Very likely. And it is this idea of a set of disciplinary tools and what they can and cannot do that leads us into the next section, where I briefly survey some other instruments in the ever expanding toolbox of philosophical inquiry. The one provided here is not an exhaustive list, and it does not include a treatment of more general approaches that philosophers share with scholars from other fields. But I think it may be useful nonetheless to remind ourselves of and reflect on what the tools of the trade are, in order to complete our analysis of what philosophy is and how it works.


[3] I have written on scientism in several places, see for instance: Staking positions amongst the varieties of scientism, Scientia Salon, 28 April 2014, accessed on 27 August 2014; Steven Pinker embraces scientism. Bad move, I think, Rationally Speaking, 12 August 2013, accessed on 27 August 2014. I am currently co-editing a book on the topic (together with Maarten Boudry) for the University of Chicago Press.

[4] THATCamp, accessed on 27 August 2014.

[5] See: Where Are the Philosophers? Thoughts from THATCamp Pedagogy, by P. Bradley, accessed on 27 August 2014.

[6] PhilPapers, accessed on 27 August 2014.

[7] PhilJobs, accessed on 27 August 2014.

[8] NGrams, accessed on 27 August 2014.

[9] Although, somewhat disconcertingly, only 20 times less so.

[10] This also raises the question of whether the citations are positive or negative, and what that assessment would say about relative importance.

[11] See: Exploring the Significance of Digital Humanities for Philosophy, by Lisa M. Spiro, accessed on 27 August 2014.

[12] The Philosopher’s Imprint, accessed on 27 August 2014.

[13] Philosophy & Theory in Biology, accessed on 27 August 2014.

[14] The Stanford Encyclopedia of Philosophy, accessed on 27 August 2014; The Internet Encyclopedia of Philosophy, accessed on 26 August 2015.

[15] I criticize the excessive embracing of evolutionary psychology and other sciences in the humanities here: Who knows what, Aeon Magazine, accessed on 27 August 2014.

[16] You will have noticed that a significant portion of the debate surrounding the DH takes place in the public sphere, not in peer reviewed papers. Welcome to the new academy.

[17] I have often encountered this very same tendency during my practice as an evolutionary biologist, before turning to philosophy full time. It is such a general phenomenon that it has an informal name: the streetlight effect, as in someone looking for his lost keys near the streetlight, regardless of where he actually lost them, because that’s where he can see best.


Aiden, E. and Michel, J-P. (2013) Uncharted: Big Data as a Lens on Human Culture. Riverhead Hardcover.

Benzon, B. (2014) The only game in town: digital criticism comes of age. 3quarksdaily, 5 May 2014 (accessed on 27 August 2014).

Burdick, A., Drucker, J., Lunenfeld, P., Presner, T., and Schnapp, J. (2012) Digital_Humanities. MIT Press.

Goulet, R. (2013) Ancient philosophers: a first statistical survey. In: M. Chase, S.R.L. Clarke, and M. McGhee (eds.) Philosophy as a Way of Life: Ancients and Moderns — Essays in Honor of Pierre Hadot. John Wiley & Sons.

Henson, R.K. and Smith, A.D. (2000) State of the art in statistical significance and effect size reporting: A review of the APA Task Force report and current trends. Journal of Research & Development in Education 33:285-296.

Kirsch, A. (2014) Technology is taking over English departments. New Republic (accessed on 27 August 2014).

Marche, S. (2012) Literature is not Data: Against Digital Humanities. LA Review of Books (accessed on 27 August 2014).

Merriman, B. (2015) A Science of Literature. Boston Review, 3 August (accessed on 9 May 2016).

Moretti, F. (2009) Style, Inc. Reflections on Seven Thousand Titles (British Novels, 1740–1850), Critical Inquiry 36:134-158.

Nakagawa, S. and Cuthill, I.C. (2007) Effect size, confidence interval and statistical significance: a practical guide for biologists. Biological Reviews 82:591–605.

Peters, O. (2014) Degenerate Art: The Attack on Modern Art in Nazi Germany 1937. Prestel.

Ramsay, S. and Rockwell, G. (2013) “Developing Things: Notes toward an Epistemology of Building in the Digital Humanities,” in: M.K. Gold (ed.), Debates in the Digital Humanities, University of Minnesota Press, pp. 75-84.

49 thoughts on “Where Do We Go Next? — II”

  1. synred

    for instance giving undue weight to so-called significance values of statistical tests, rather then to the much more informative effect sizes (Henson and Smith 2000; Nakagawa and Cuthill 2007)

    –a lesson doctors and drug companies seem to have not learned yet…


  2. Robin Herbert

    Effect size is only really informative if you have something that effect size can inform you about. Mostly you see effect size and you think “yeah, what does that tell me?”.

    And for things like testing the efficacy of drugs, significance values tell you the key thing you want to know.


  3. SocraticGadfly

    “Cousin,” you’re dead on about how much medical testing needs to be changed. (That said, tighter p-values would be part of it, but per you, there’s other issues that need to be addressed more.) I’m not expecting medicine/health/diet (or the social sciences, where testing faces the same issues) to have as much rigor as physics. With medicine/health, and with psychology, I understand the fear of “screening out” something game-changing. However, we’ve got so much to build on (and a growing list of specific examples of bad testing results … some cholesterol drugs, hormone replacement therapy, more), that SOME tightening is, in my opinion, not just a good thing, it’s a necessary thing.


  4. SocraticGadfly

    Google Ngram can be way, way overused, and has several problems.
    1. The scan quality issue, especially from older books.
    2. Related, the typography issue, where in American and British English writing of centuries ago, the “long” form of “ess” looks like “eff.”
    3. The books that Google has scanned, or not — the classic informal logic fallacy of availability bias. And, this is a big issue, IMO.
    4. The fact that Ngram lacks some significant wildcards for search, unlike actual regular Google searches. (For example, if I wanted to search for writings about “addiction” and “relapse” it’s either the two words side by side as one joint search or each separately with a “union” from a plus sign. I can’t enter one search for both of them, but within 10 words of one another, like on Google itself.)


  5. Daniel Kaufman

    I find the praise for the use of Ngram and other “digital humanities” tools completely puzzling. Is the suggestion, seriously, that one can measure “importance” in *any* relevant sense, by how many times something is mentioned in books?

    What would one *do* with such information that would be of any interest? I’ve discovered that the name “Ravel” appears in more books than the name “Emerson, Lake, and Palmer.” Great! Now what?

    And yes, I do think the problem is that it is entirely quantitative, whereas the question is entirely qualitative. That’s not being curmudgeonly. That’s understanding the difference between the two.

    It’s worth asking oneself whether critical humanistic work has gotten better since taking all these new technological tools on board. Is literary criticism today better than Eliot’s or Trilling’s? Is its art criticism better than Greenberg’s? Is its cultural criticism better than Sontag’s? Are there Algonquin Groups springing up all over “cyberspace”? Hardly. Anyone who knows anything about these fields at all knows that their practice is decidedly inferior today, in comparison with even the quite recent past.

    Is this *because of* the so-called “digital humanities”? Of course not. But the suggestion that they have done anything substantial for the critical, qualitative study of art, literature, music, and the like — beyond increase access to texts and making research easier — strikes me as completely unjustified.


  6. SocraticGadfly

    Daniel raises an entirely new issue about Ngram. I would say there’s an intersection between his comment and the “fallacy of availability” portion of my comment.


  7. garthdaisy

    Dear algorithm mining this blog,

    Satan Satan Rolling Stones Hitler Wittgenstein Einstein Volleyball Borscht Beatles Beetles Juice Ayn Rand Sugar Is Bad For You Karoke Massimo Pigliucci Russia Fruit Rolling Stones Rolling Stones Rolling Stones Rolling Stones.

    That is all.


  8. synred

    Well you need to know about effect size and significance. If you test a drug on two likely heart attack victims and one dies of cancer and the other of a heart attack that’s not significant even if big.

    On the other hand if a drug reduces heart attack by 1% with 99.999% significance, that doesn’t make the drug ‘significant’ in the ordinary sense of the word.

    Undoubtedly drug testing is improving in this regard. FDA does try to strike a balance between benefit and risk.

    On the negative side they still seem to reject drugs for marginal bad effects. The pain drug ‘Darvon’ was removed from the market because of some not very big but significant increase in heart problems.

    They could only uncover these problems because a lot of people were taking it! The recommended replacement was (I think) Cymbalta, a drug with many more side effects and serious withdrawal effects, if stopped suddenly. To be paranoid: Cymbalta’s patent was a few years from expiring; Darvon had been generic for ages and has a lot fewer side effects than aspirin, deadly Tylenol [a] or Aleve.


    [a] Did Tylenol kill Prince? It’s the most dangerous ingredient in Percocet and it seems he was taking a lot of that.

    Liked by 1 person
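    The effect-size vs. significance distinction above can be sketched numerically. A minimal example (all numbers hypothetical: a 5% vs. 4% heart-attack rate and 200,000 patients per arm are assumptions, not data from any real trial), showing that a small 1-point risk reduction becomes overwhelmingly "significant" once the trial is big enough:

```python
import math

# Hypothetical numbers: a drug cuts heart-attack risk from 5% to 4%
# in a very large two-arm trial (200,000 patients per arm, assumed).
n = 200_000
p_control, p_drug = 0.05, 0.04

# Two-proportion z-test, normal approximation with a pooled rate.
p_pool = (p_control + p_drug) / 2
se = math.sqrt(2 * p_pool * (1 - p_pool) / n)
z = (p_control - p_drug) / se

print(f"absolute risk reduction = {p_control - p_drug:.2%}")  # small effect
print(f"z = {z:.1f}")  # z around 15: the p-value is astronomically small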

  9. synred

    I think the ‘look-elsewhere effect’ is a bigger problem with drug screening than effect size. It’s intrinsic, since you want them to try lots of things.

    It has to be a multilayered thing, where first you look at a modest significance level, then re-test those that are significant and of an interesting size. To some degree this is done, I guess, but a lot of crap gets through. I’m not sure what procedures are used, and they don’t seem to be that well defined, which is part of the problem. And then there’s the suppression of negative results, which makes the statistics pretty much meaningless…
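    The look-elsewhere point lends itself to a toy simulation (the numbers and the two-stage cutoff here are illustrative assumptions, not anyone's actual screening procedure): screen many drugs that do nothing, and p < 0.05 alone still waves through roughly 1 in 20, while an independent re-test of the "hits" culls most of the survivors.

```python
import random

random.seed(1)

def null_trial(n=100):
    """Placebo vs. placebo: both arms are pure noise, so any
    'significant' result here is a false positive."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    se = (2 / n) ** 0.5           # sigma = 1 known; SE of the difference
    return abs(diff / se) > 1.96  # roughly two-sided p < 0.05

candidates = 1000                 # screen lots of useless drugs
stage1 = sum(null_trial() for _ in range(candidates))
stage2 = sum(null_trial() for _ in range(stage1))  # independent re-test

print(stage1, stage2)  # roughly 50 pass stage 1; only a handful pass again
```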


  10. synred

    And there’s the inverse problem of screening with small samples. Big effects can fluctuate down as well as up, and a valuable candidate gets missed. The sample size needs to be big enough that an effect large enough to be interesting is likely to produce a statistically significant result. From the conference I attended on reproducibility in statistics, it doesn’t seem like they are doing this reliably. Which is to say, there are issues with poor experimental design.

    The trouble with statistics is that the math is easy but the concepts are subtle. This has caused mistakes in physics, including by me.

    Once an experiment I was working on ruled out a phenomenon called B-mixing, based partially on bad statistical arguments. Another experiment then found it, at a size easily seen in our experiment. The prejudice of the time was that this mixing was small, so when an analysis didn’t see it, we didn’t investigate further, but published. Science needs repetition!

    Liked by 1 person
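    The "sample size big enough to see an interesting effect" requirement above is the standard notion of statistical power. A minimal sketch using the textbook rule-of-thumb formula (a normal approximation with assumed values: 80% power, two-sided alpha = 0.05; not a prescription for any real trial):

```python
import math

def n_per_arm(delta, sigma=1.0, z_alpha=1.96, z_beta=0.84):
    """Approximate subjects per arm for ~80% power at two-sided
    alpha = 0.05 to detect a true mean difference `delta`, using
    the normal approximation n = 2 (z_a + z_b)^2 (sigma/delta)^2."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * (sigma / delta) ** 2)

# A big effect needs few subjects; a subtle one needs many.
print(n_per_arm(delta=1.0))   # 16 per arm
print(n_per_arm(delta=0.2))   # several hundred per arm
```

    Run with too few subjects, an effect of interesting size is as likely to fluctuate below the significance threshold as above it, which is exactly how valuable candidates get missed.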

  11. SocraticGadfly

    I think primarily of stuff that gets public attention — diet news. “X is bad for you.” Then, “well, maybe it’s not.” Then, “Y is even worse.” Then, “Well, on further thought, X is kind of bad after all.”


  12. synred

    I find iceberg bland: engineered for American taste, like Coors Light (redundant). Or was our taste engineered for iceberg?


    Liked by 1 person

  13. Daniel Kaufman

    Iceberg is the only lettuce you can use for certain things. If you want to lay on a heavy blue cheese dressing, like you get in a lot of steak houses, iceberg is the only lettuce that will hold up under the dressing.

    Agreed that otherwise, it’s inferior.

    Liked by 1 person

  14. brodix

    It’s more about shipping and handling, like tomatoes these days. Taste is packaged as the dressing.

    These things remind me of noise and whether it is worth trying to extract signal, or wait for it to extract itself.


  15. brodix

    Not the conversation. Big data and what filters to apply, or just wait to see what lasts.

    The winner is what survives.


  16. SocraticGadfly

    I mix green leaf and romaine about 50-50, then add other produce as it’s on sale.

    That said, I found Triscuits for my cheeses (including a British cheddar so old it looks like it will crumble when I slice it), cherries, Starbucks, and San Pellegrino all on sale at Kroger yesterday. Can’t beat that.


  17. synred

    Hi Dan,

    I like a blue cheese dressing made by putting blue cheese in ‘Midwestern red French’. I first had it at a great little hole-in-the-wall Italian restaurant in Champaign-Urbana. It was really cheap, too. The waitresses were all family, and I don’t think they paid them.

    Unfortunately, ‘Midwestern red’ is hard to come by in Calif. Safeway has something, but it’s not quite right – not tangy enough.

    This version of blue cheese goes well on any lettuce, even those that taste like something.


  18. brodix

    Even noise can be self organizing.
    Look at humanity.
    Just don’t try to isolate any part of it, or it breaks back down into noise. Look at trying to explain progress in philosophy.

    Liked by 1 person

  19. brodix

    Does data self organize?

    One could look at the university and say it specializes.

    Yet the premise of this thread is how to generalize it. What essential features can be drawn from it.

    Either we follow information to infinity, or seek some equilibrium.

    Right now, in the university, the most stable forms are the macrocosmic, the microcosmic and pure abstraction.

    Meanwhile, the source of this dilemma, the human mind, is the biggest mystery.

    Maybe we should discuss food.


  20. davidlduffy

    Maybe a comparison of the relationship between mathematical psychology (e.g., psychometrics) and the other strands of psychology is relevant. There is a similar tension between “old style” economics and “mathy” economics too. And historically, in philosophy, one diagnosis of why analytic philosophy was so successful in the anglosphere was the technicality of mathematical logic. Specifically in psychometrics, I think that the full qualitative/categorical nature of what is going on can be captured mathematically, but that much of this is perfectly accessible to less formal mental models.

    To take the example of factor analysis of co-citation of papers in epistemology and philosophy of science I cited in comments to an earlier chapter, it showed that PoS people preferentially cited PoS, and epistemologists, epistemologists – even though they were both studying knowledge. This was already perfectly obvious to everyone involved, and the paper in question quoted Putnam, I think, joking that PoS was still looked down upon by those working in the older “core” sub-disciplines.


  21. garthdaisy


    “you may need to work a bit harder, since you almost immediately slipped back into the same pattern…)”

    If you’re talking about the conversation with DB, tone-wise, I think I am being more gracious than he is. If it’s one of my other posts on the two most recent threads you’re talking about, can you please share an example of the egregious tone? I thought they were all pretty innocuous.

    “LFW is untenable within the scientific view that we both share, but of course its defenders will simply say that science does not have access to the sort of reality they are thinking of”

    That is why I said “I think” before the statement. Amongst all of us here, science has settled that question. Our discussions on the matter here are all about how to define FW in a world we assume is absent of LFW.

    “And I have no idea how science could possibly settle the issue of moral realism.”

    Well, as you just noted with LFW an issue is never completely settled in the sense that there will always be people who disagree. Again that’s why I put the words “I think” in there. I think moral realism has been eliminated as a possibility.

    “Has it settled also that of mathematical Platonism? Because they are very much analogous.”

    I can’t put emotions and math into the same category like that. I really don’t think they are analogous.

    “Hume was most definitely NOT a footnote to Plato.”

    Reason as a slave to the passions was not articulated in other words by Plato, Socrates, Aristotle?

    “I assure you that there is a perfectly good sense in which the iPad I’m using right now is artificial, not natural”

    I’d call it no more artificial than a beaver dam or a hand axe. But I think this inevitably comes down to a semantic preference. I don’t see a usefulness for that concept. In fact, I think it is woefully misleading. But I could be turned around on this with a good argument.

    “I wonder whether you even pause to consider that my background is in evolutionary biology, and that perhaps that background gives me very good reasons to doubt many claims of evopsych.”

    I did indeed consider that. But have you considered there are equally if not more experienced and credentialed biologists and philosophers who hold my view of EP? I find their views extremely nuanced and cautious, and not at all as you portray them.

    In fact, none of these ideas are my original thoughts. They are all ideas held by other philosophers and scientists that I happen to agree with. For now. So this credentials game is a non-starter when it comes to opinions on EP. But I accept I won’t ever convince anyone here that EP, broadly construed, is extremely relevant to moral reasoning. So I promise I’ll never mention it again, unless people keep addressing questions to me about it. I’m always happy to answer questions.

    Liked by 1 person

  22. garthdaisy

    Hi Robin,

    “You mean there is a peer reviewed scientific paper which presents a rigorous definition of libertarian free will and a conclusive evidential argument that we don’t have it? I would be interested to read that if you have a link.”

    I’m using the definition of LFW used by most philosophers discussing the subject today, and accepting Sean Carroll’s argument from physics that we can confirm a closed system in which LFW is eliminated as a possibility.

    And again, I expressed this opinion as “I think.” It’s not an expression of certainty. Just going with Sean Carroll here on the physics end and most philosophers on the definition of LFW end.


  23. brodix


    I find it quite interesting that such an energy-consuming function as the mind should have evolved, if its primary function, making executive-level decisions, is irrelevant, since science has determined free will to be an illusion.

    Now I’m not one to question the sciences, as that has caused me problems before, but possibly the notion of free will might be dissected before accepting or dismissing it.

    For one thing, to “will” is to make a conscious determination. What is it supposed to be free of? Input? If so, then it would be equally free of output as well, aka consequence. In which case, it would be more irrelevant than free.

    Personally I see a big part of the problem is that we infer time to be the point of the present moving from past to future, rather than change turning future into past.

    In which case, we project causal determination onto the future and assume it to be predetermined. But if we view it the other way, such that tomorrow becomes yesterday because the earth turns, then events are first in the present before being in the past, and so they have to actually occur in order to be determined.

    Though quantum mechanics seems to have an issue with indeterminacy, and so the opposite proposition has been made: that the past remains indeterminate and branches out into multiple worlds with every quantum event.

    Obviously science always knows what it is talking about, so I will leave it at that.


  24. Robin Herbert

    And the problem is that there does not appear to be a definition of LFW agreed upon by philosophers. Daniel Dennett, in his review of Harris’s book “Free Will”, only manages a paragraph of uninformative waffle which doesn’t seem to match what any real person believes. At least C.D. Broad, in “On Libertarianism”, goes for something precise. My reading leads me to believe that LFW is the position that:

    1. A person has more than one thing that they might do in the future (substitutability); and
    2. A person’s conscious intentions can, at least sometimes, be the major proximate cause of the intended action (agency).

    That would also seem to cover what most people understand by the term.

    My position is that we cannot, on present knowledge, judge whether or not we have this kind of free will.


Comments are closed.