Reason Better: An Interdisciplinary Guide to Critical Thinking

Lead Author(s): David Manley


The most useful reasoning skills from philosophy and cognitive psychology. A focus on cognitive bias, response to evidence, the logic of probability & good decisions.

Note to instructors

Welcome to Reason Better! I'm David Manley, Associate Professor of Philosophy at the University of Michigan, Ann Arbor. 

This book rethinks the standard playbook for critical thinking courses. In particular, it offers: 

  • Less emphasis on arguments as tools of persuasion, and more on acquiring a mindset that avoids systematic errors.
  • Real interdisciplinarity, combining the most useful things we've learned about reasoning from philosophy, cognitive science, social psychology, and behavioral economics.
  • A unified picture of evidence that covers statistical, causal, and best-explanation inferences.
  • More emphasis on the logic of probability and decisions, and less on the logic of deduction. (But note that students learn to discern the weight of evidence without needing to write down any equations.)

In this note, I'll provide an overview of the text and explain why I wrote it. But just browsing the chapters is probably the best way to get a feel for the book. (If you do adopt it, this page won't be visible to students: only chapters that you assign will be.) 

If you are an instructor and think you might like to use this text, please email me at dmanley@umich.edu from your academic account. I will give you access to:

  • several exams
  • lecture slides
  • quizzes for in-class use
  • discussion-section and in-class activities and worksheets
  • prompts for additional assignments 

Why I wrote the book

I've been teaching Intro to Logic and Critical Thinking courses for years, and I started to get frustrated with the standard material. Much of it seems to survive out of sheer inertia, rather than an empirically-grounded conviction that it can actually improve our reasoning.  I felt that I faced a dilemma between two bad options:

  • teaching a rigorous Introduction to Logic course that is not broadly useful in helping students reason better (see more on this claim below).  
  • teaching a Critical Thinking course with material that feels stale, remedial, and woolly. 

What would it be like, instead, to start from the ground up and build a curriculum using only the most important things from the toolkits of philosophy, cognitive psychology, and behavioral economics?

The course we have developed at the University of Michigan, Ann Arbor, using this text is both rigorous and immensely practical. Students come away with a sense of how to weigh the strength of evidence for claims and adjust their beliefs accordingly. I'm also happy to report that the text has now been adopted by professors at more than a dozen universities and colleges around the world. This note from Philip Robbins at the University of Missouri captures the reason I wrote the text:

“[Reason Better] is a truly excellent text. I’ve really enjoyed teaching critical thinking with [the] book this semester. In fact, it’s been one of the most enjoyable teaching experiences I’ve ever had. 

For some time, I’ve thought that critical thinking should be a required course for all undergraduates. Until this semester, however, I’ve never taught a course that could fairly be described as something that all students, regardless of major, could substantially benefit from, both in their academic work and in everyday life. Thanks to [this] book, that threshold has been crossed. No small feat.

The book is especially valuable and timely in this day and age, when reason and rationality are under threat in a way that’s not been seen for generations. What’s more, it’s a darned good read."

The platform

Before looking in more detail at the content, it's worth saying why I (and my students) love the TopHat platform:

  • embedded questions that ensure students are actually doing the readings (these are auto-graded)
  • a really nice UI for students, with search and note-taking; students can read the text and answer questions on any device
  • affordability: TopHat charges $45 for the textbook (lifetime access for students) and the homework-grading platform
  • flexibility: any instructor who assigns the text can change it however they like: add, remove, or edit material; add, remove, or edit questions; post slides; create additional pages; and so on

This last point is hard to overstate: the text is completely customizable. Want students to skip a section? Just cut it out. Don’t like the wording of a question? Just change it. And you can do all of this as you go, since students only see items that have been assigned, first as homework and then as review.

One feature of the in-text questions that I really like: you can see how well students are doing before the due date, and they can change their answers after submitting, right up until the assigned chapter is due. So if students need help with a question or two, you can see it happening in real time and step in; anyone who has already turned in their work can fix it with no repercussions.

The content

Today's critical thinking texts mostly rehash the usual collection of items while largely ignoring decades of progress in philosophy, cognitive and social psychology, and behavioral economics. In particular, we've advanced our understanding of the nature of evidence and its connection to statistical and causal arguments. And we've made huge strides in understanding systematic cognitive errors. 

Here are some big ways in which this text rethinks the standard playbook:

1. Most texts focus on using arguments to defend our views and attack those of others. But an adversarial mindset leads to bad reasoning, as plenty of research has shown. The goal of reasoning better is to have accurate beliefs and make good decisions; and those things are best achieved by cultivating a mindset of curiosity, thoroughness, and openness (see chapter 2). We look at specific techniques that have been shown to reduce systematic errors like confirmation bias. 

2. The text is genuinely interdisciplinary. While my professional background is in epistemology and decision theory, I spent many months delving into the relevant work in cognitive and social psychology in order to develop a useful taxonomy of heuristics and biases and (where possible) the methods that have been shown to overcome them. (I use only the best replication-crisis-surviving research: see, for example, the extensive references sections in chapters 1 and 2.)

3. Standard texts introduce a number of “forms of reasoning” as though they were a grab bag of techniques that have nothing to do with each other: deductive reasoning, statistical reasoning, causal reasoning, inference to the best explanation (and maybe analogical, moral and/or legal reasoning). But all of the non-deductive cases involve responsiveness to evidence, and Reason Better is the only text I know of that introduces a unified account of evidence that covers them all. And there is a single measure we can use to assess the strength of a piece of evidence: namely, comparing the likelihood of that evidence given competing hypotheses. This measure helps explain several aspects of the scientific method, such as the proper design of experiments (see chapters 5, 6, 7, and 9).

4. Relatedly, the text offers an easy but rigorous rule for responding to evidence that captures key lessons about de-biasing. When we respond to evidence, we combine our prior confidence in a claim with the strength of the new evidence for that claim. The equation is simply prior odds x strength of the evidence = new odds. This form of Bayes' theorem highlights an important lesson from cognitive science, where actively de-coupling our prior views from our assessment of the evidence has been shown to help avoid biased evaluation of evidence. And, in addition, this rule is so gentle that almost all of our students can do the evidence-updating problems in their heads. (In contrast, when other critical thinking texts introduce probability, they invariably use a complex form of Bayes' theorem that gives students no intuitive sense of what is going on: they simply learn to write down the equation and "plug and chug".) 
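For instructors curious how the arithmetic plays out, here is a minimal sketch of the odds-form update in Python. It is my own illustration, not material from the text; the function names and numbers are invented for the example:

```python
# Odds-form Bayesian update: new odds = prior odds x strength of evidence,
# where strength is the likelihood ratio P(evidence | H) / P(evidence | not-H).

def update_odds(prior_odds, p_evidence_given_h, p_evidence_given_not_h):
    """Return posterior odds after taking the evidence into account."""
    strength = p_evidence_given_h / p_evidence_given_not_h
    return prior_odds * strength

def odds_to_probability(odds):
    """Convert odds (e.g. 2.0, i.e. 2:1) to a probability."""
    return odds / (1 + odds)

# Hypothetical example: prior odds of 1:4 that a claim is true, and new
# evidence that is 8 times likelier if the claim is true than if it is false.
posterior = update_odds(0.25, 0.80, 0.10)   # strength = 0.80 / 0.10 = 8
print(posterior)                            # 2.0, i.e. odds of 2:1
print(odds_to_probability(posterior))       # ~0.667
```

This is exactly the mental arithmetic the text asks of students: 1:4 odds times a strength-8 piece of evidence gives 2:1 odds, with no need to expand the full form of Bayes' theorem.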

5. A key strength of this book is what it does not contain. First, I avoid a number of the standard fallacies of deductive logic enumerated by medieval logicians, because their application in most ordinary non-deductive contexts is highly misleading. Second, the treatment of statistical and causal reasoning in typical texts is often absurdly anachronistic given the way the relevant sciences are now taught and practiced. (See the two sections below for more on these points.) Third, I've eliminated unnecessary formalization in the treatment of deduction and of probability. The remaining formalism is introduced in the friendliest possible way, including visualizations to help out: see chapter 8.

6. It's worth mentioning that I also have some improvements in mind for future editions. I'm currently working on an additional chapter called "Sources", about social epistemology in a world of information overload: navigating science reporting, expertise, consensus, conformity, polarization, and conditions for skilled intuition.

Some oldies that didn’t make the cut

To make room for the extra material above, I tried to cut only the least useful of the standard items. 

First, I cut many of the "informal fallacies": some were too obvious (e.g. ad hominem, appeal to pity) or too rarely used in arguments (e.g. composition, division) to be worth the space. More importantly, many are only fallacious in the context of deductive inference, and once we stop pretending that most arguments in real life are intended to be deductive, they’re not even useful heuristics for identifying bad arguments. For example, there is nothing inherently “fallacious” about appealing to an authority. Neither is there anything fallacious in principle about the form of “slippery slope” arguments in an evidential context. Likewise, exactly what counts as a “causal fallacy” outside of a deductive context is a tricky matter, and there is no simple fallacious form. (This text has a whole chapter on causal reasoning.)

Another standard item is Mill’s methods for identifying causes. But we can do better. Our understanding of the scientific method has come a long way since the mid-nineteenth century. Any students who have taken a good stats course will either be embarrassed for us or take two steps back in understanding. Compare Chapter 7, which provides an overview of contemporary tests for misleading correlation, in the context of the text's broadly Bayesian picture of evidence.

I’ve also cut "enumerative induction”, at least as expressed in the standard way: "X percent of observed Fs are G, therefore X percent of all Fs are probably G". This form of argument is indefensible even given a large unbiased sample. (You can observe a thousand randomly selected black ravens and still not be justified in thinking that all ravens are probably black, if you also know that almost all species have rare albino members.) Instead, this text treats statistical generalization as an instance of responsiveness to evidence; and thinking of evidence strength as a Bayes factor explains the need for large and random samples, rather than just stipulating it.
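To make the last point concrete, here is a hypothetical sketch (my own illustration, not material from the text) of why sample size matters when evidence strength is measured as a Bayes factor: the same observed proportion discriminates far more strongly between two competing hypotheses as the sample grows.

```python
def bayes_factor(k, n, p1, p2):
    """Likelihood ratio for observing k of n sampled Fs being G,
    comparing the hypothesis 'p1 of all Fs are G' against 'p2 of all Fs are G'.
    (The binomial coefficient is the same for both hypotheses, so it cancels.)"""
    return (p1**k * (1 - p1)**(n - k)) / (p2**k * (1 - p2)**(n - k))

# Same observed proportion (80% of sampled Fs are G) at increasing sample
# sizes, comparing "80% of Fs are G" against "50% of Fs are G".
for n in (10, 50, 100):
    k = int(0.8 * n)
    print(n, bayes_factor(k, n, 0.8, 0.5))
# The Bayes factor grows rapidly with n: a small sample gives only modest
# evidence, while a large sample with the same proportion is overwhelming.
```

Nothing here requires students to run code, of course; the point is just that on this picture the value of large samples falls out of the measure of evidence strength rather than being stipulated.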

Why not more deductive logic?

I settled on having about a chapter and a half dealing with deductive logic.

My goal is to impart the most useful general tools for reasoning. After this course, for example, students should be better able to notice errors in a politician's speech. But even in the unlikely event that the politician is using a deductive argument, none of us would assess that argument using DeMorgan's rules, truth tables, the square of opposition, or Aristotle’s taxonomy of categorical arguments. That's just not how human minds work. Instead, we’d likely ask ourselves whether the premises guarantee the conclusion, and that’s that. (Imagine a student hearing an argument in real life and thinking "Aha! This is an A-I-O argument, therefore...") Learning to assess whether a set of premises guarantees a conclusion is a pretty useful general skill that can be covered fairly quickly.

Indeed, when it comes to formal logic, a little learning can be a dangerous thing: it can introduce errors in reasoning that won’t get corrected unless the student continues in philosophy. Here are three examples of the sort of problem I have in mind:

1. If all you have is a hammer, everything looks like a nail. This is the tendency to learn some deductive logic and then treat every argument as though it were intended as deductive. (You may have encountered this in some of your undergraduates.) But the vast majority of arguments in everyday life are not deductive, so this tendency leads to errors in assessing arguments. Suppose someone says that John is angry. You ask why they think that, and they say: "If John's angry, his face gets red. And his face is red." In real life, this is probably not the fallacy of affirming the consequent—it's probably something more like an inference to the best explanation.

2. Bad translations. Natural language is full of messiness that makes translations into (say) first-order predicate logic inadequate. In teaching intro logic courses, we typically impose translations on natural-language sentences so that we can use the simple set of truth-conditions stipulated for the expressions of the formal language. When students find these translations counterintuitive, they aren't being dense: they are right. Worse, if (as seems unlikely) they actually end up using these translations to reason about real ordinary-language arguments, they can end up getting things wrong.

For example, "All ghosts are narcissistic" presupposes (or in some sense treats as not-at-issue) that there are ghosts. According to standard semantic theory, this means the sentence doesn't get to be unproblematically true just because there are no ghosts. And we don't get to make it true by insisting that its proper translation into first-order logic is ∀x(Gx ⊃ Nx). (Note also that "All ghosts are narcissistic" doesn't intuitively assert, concerning everything in the universe, that, if it is a ghost, it is narcissistic. The determiner "all" is semantically a binary restricted quantifier and in this case concerns only ghosts.) A similar point applies to translations of many natural-language conditionals into material conditionals in the formal language.
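The vacuous-truth behavior of the standard translation is easy to demonstrate. Here is a toy illustration of my own (the domain and predicates are obviously made up):

```python
# Under the translation ∀x(Gx ⊃ Nx), "All ghosts are narcissistic" comes out
# true whenever the domain contains no ghosts, because a material conditional
# with a false antecedent is true. Python's all() over an empty iterable
# behaves the same way.

domain = ["alice", "bob", "carol"]      # a ghost-free domain

def is_ghost(x):
    return False                        # nothing in the domain is a ghost

def is_narcissist(x):
    return False                        # ...and nothing is a narcissist

# First-order translation: for every x, if Gx then Nx.
translation_true = all(is_narcissist(x) for x in domain if is_ghost(x))
print(translation_true)  # True, vacuously -- which is the counterintuitive part
```

Students who balk at calling the sentence true in this scenario are tracking the presupposition, not failing to grasp the logic.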

In this text, I sidestep these issues entirely and still manage to give students a sense for how to evaluate deductive arguments using their logical forms (both sentential and predicate). So, for example, I avoid cases where the formal validity of an argument depends on the trivial truth of false-antecedent conditionals: happily, one can assume the validity of natural language instances of modus ponens and modus tollens without worrying about whether the English indicative is always true in false-antecedent cases.

3. Conflating formal, modal, and epistemic conceptions of validity. Many logic textbooks define "validity" modally or epistemically but then proceed as though this is equivalent to entailment in some formal system. As a result, students are tempted to assume that every good deductive argument must have a valid form in one of those languages. But this isn’t true of plenty of inferences that are both modally valid and a priori certain, like "Smith was killed, so Smith died," not to mention "Kaplan-valid" arguments like "She is happy, therefore a female is happy.”

None of this is to deny that a course focused on formal logic is critical for philosophy majors, extremely useful for those going on in math and CS, and potentially very useful for many other students. But I would submit that it's not as useful for as many students, even those at a high level, as a critical reasoning course done right. 


version 1.2