Reason Better: An Interdisciplinary Guide to Critical Thinking
Lead Author(s): David Manley
The most useful reasoning skills from philosophy and cognitive psychology, with a focus on cognitive bias, responding to evidence, and the logic of probability and good decisions.
Note to instructors
Welcome to Reason Better! I'm David Manley, Associate Professor of Philosophy at the University of Michigan, Ann Arbor. This book is the result of rethinking the standard playbook for critical thinking courses. It focuses on:
- a mindset that avoids systematic error, more than the ability to persuade others
- the logic of probability and decisions, more than the logic of deductive arguments
- a unified treatment of evidence, covering statistical, causal, and best-explanation inferences
This note provides an overview of the text and explains why I wrote it. But the best way to get a feel for it is to browse the chapters. (I recommend using the "full screen" button at the bottom right when you open a chapter. You'll see an outline of the chapter on the left side, and you can try out the questions, note-taking functionality, etc.) If you adopt the text, this page won't be visible to students: only chapters that you assign will be.
If, after looking over the text, you think you might like to use it for your course, please email me at firstname.lastname@example.org from your academic account. I have:
- lecture slides
- quizzes for in-class use
- discussion section materials (for in-class work)
- prompts for additional assignments
- midterms and final exams
I'm happy to share these with any instructor seriously considering using the text.
The text is already being used in the classroom, but I have some improvements in mind for future editions. I'm currently working on an additional chapter called "Sources", about social epistemology in a world of information overload: navigating science reporting, expertise, consensus, conformity, polarization, and conditions for skilled intuition.
Why I wrote the book
I've been teaching Intro Logic and Critical Thinking courses for years, and I started to get frustrated with the standard material. I felt that I faced a dilemma between two bad options:
- I could teach a rigorous Intro to Logic course, but this would be narrow and not as helpful as one might hope, especially for non-majors. (See more on this claim below.)
- I could teach Critical Thinking material that felt a bit remedial and woolly. Much of the curriculum seemed to survive out of sheer inertia, rather than an empirically-grounded conviction that it imparts the tools best suited to fixing our most prominent reasoning errors.
So I asked myself: what would it look like to start from the ground up and build a curriculum using only the most important things from the toolkits of philosophy, cognitive psychology, and behavioral economics?
And then I discovered that, in order to teach the resulting course, I had to write my own textbook.
I'm happy to report that there's no need to accept the false choice between a narrow Intro to Logic course and a remedial Critical Thinking course. The course at the University of Michigan, Ann Arbor that uses this text (Phil 183: Critical Reasoning) is rigorous but immensely practical. Students come away with a sense of how to weigh the strength of evidence for claims, and adjust their beliefs accordingly.
The account of evidence I introduce is broadly Bayesian, but without the daunting theorems. (Without knowing it, students actually end up using a gentle form of the Bayes factor to measure the strength of evidence and to update.) The text also shows how this framework illuminates aspects of the scientific method, such as the proper design of experiments.
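The odds form of Bayes' rule behind this approach can be sketched in a few lines. (This is my own illustration, not the text's notation; all the numbers are made up.)

```python
# A minimal sketch of odds-form Bayesian updating, where the
# "strength of evidence" is the Bayes factor (likelihood ratio).
# Illustrative numbers only.

def update_odds(prior_odds, bayes_factor):
    """Posterior odds = prior odds * Bayes factor."""
    return prior_odds * bayes_factor

def odds_to_prob(odds):
    """Convert odds (in favor) to a probability."""
    return odds / (1 + odds)

# Suppose a hypothesis starts at 1:3 odds (25% credence), and we then
# observe evidence that is 6x more likely if the hypothesis is true.
posterior_odds = update_odds(1 / 3, 6.0)
print(odds_to_prob(posterior_odds))  # 2:1 odds, i.e. about 0.667
```

The point of the odds form is that updating is just multiplication: students can gauge how strong a piece of evidence is (the Bayes factor) without ever writing down the full theorem.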
It's also worth saying why I (and my students) love the TopHat platform:
- embedded, auto-graded questions that ensure students are doing the readings
- a really nice UI with search and note-taking capabilities; students can read the text and answer questions on any device
- affordability: TopHat charges $45 for the textbook (with lifetime access for students) and the homework grading platform
- flexibility: any instructor who assigns the text can change it however they like: add, remove, or edit material from the text, add, remove or edit questions, post slides, create additional pages, etc.
This last point is hard to overstate: the text is completely customizable. Want the students to skip a section? Just cut it out. Don't like the wording of a question? Just change it. And you can do all of this as you go, since students only see items that have been assigned, first as homework and then as review.
One feature of the in-text questions that I really like is that you can see how well students are doing before the due date, and they can change their answers after submitting, right up until the assigned chapter is due. So if the students need help with a question or two, you can see it happening in real time and step in; anyone who has already turned in their work can fix it with no repercussions.
Why not more deductive logic?
I settled on having about a chapter and a half on deductive logic.
My goal is to impart the most useful general tools for reasoning. For example, after this course, students should be better able to notice errors in a politician's speech. But even in the unlikely event that the politician is using a deductive argument, none of us would assess that argument using De Morgan's laws, truth tables, the square of opposition, or Aristotle's taxonomy of categorical arguments. That's just not how human minds work. Instead, we'd likely ask ourselves whether the conclusion could be false even if the premises are true, and that's that.
In fact, when it comes to formal logic, a little learning can be a dangerous thing: it can introduce errors in reasoning that won't get corrected unless the students continue in philosophy. Here are three examples of the sort of problem I have in mind:
1. If all you have is a hammer, everything looks like a nail. This is the tendency to learn some deductive logic and then treat every argument as deductive. (You may have encountered this in some of your undergraduates.) But the vast majority of arguments in everyday life are not deductive, so this tendency leads to errors in assessing arguments. Suppose someone says that John is angry. You ask why they think that, and they say: "If John's angry, his face gets red. And his face is red." In real life, this is probably not the fallacy of affirming the consequent; it's more likely an inference to the best explanation (IBE).
2. Bad translations. Natural language is messy in ways that frequently make translations into (say) first-order predicate logic inadequate. For example, "All dogs bark" presupposes (or in some sense treats as not-at-issue) that there are dogs. The sentence doesn't get to be unproblematically true if there are no dogs, formal conventions notwithstanding. Nor does "All dogs bark" semantically assert, concerning everything in the universe, that, if it is a dog, it barks. The determiner "all" is semantically a binary restricted quantifier and in this case concerns only dogs. A similar point applies to natural-language conditionals and the material conditional of standard first-order logic. Students who resist these translations aren't being dense; they're right. (In this text, for example, I avoid cases where the formal validity of an argument depends on the trivial truth of false-antecedent conditionals. One can assume the validity of natural language instances of modus ponens and modus tollens without worrying about whether the English indicative is always true in false-antecedent cases.)
3. Conflating formal, modal, and epistemic conceptions of validity. Many logic textbooks define "validity" modally or epistemically but then proceed as though this is equivalent to entailment in some formal system. (If validity is defined modally, every argument with "No cats are dogs" as the conclusion will be valid.) As a result, students are tempted to assume that every good deductive argument must have a valid form in some such formal language. But this isn't true of plenty of inferences that are both modally valid and a priori certain, like "Smith was killed, so Smith died," not to mention "Kaplan-valid" arguments like "She is happy, therefore a female is happy."
None of this is to deny that a course focused on formal logic is critical for philosophy majors, extremely useful for those going on in math and CS, and potentially very useful for many other students. But I would submit that it's not as useful for as many students, even those at a high level, as a critical reasoning course done right.
Some oldies that didn’t make the cut
Making room for all the additional material sketched above required leaving out a number of items from the standard Critical Thinking curriculum. Here I'll explain the reasoning behind a few of the more obvious omissions.
First, I cut out the usual list of "informal fallacies". Some of these do survive among the cognitive pitfalls that are addressed, but others were too obvious (e.g., ad hominem and appeal to pity) or too rarely used in arguments (e.g., composition and division) to be worth the space.
More importantly, several standard “informal fallacies” are only fallacious in the context of deductive inference, and once we stop pretending that most arguments in real life are intended to be deductive, they’re not even helpful heuristics for identifying bad arguments. For example, there is nothing inherently “fallacious” about appealing to an authority. Neither is there anything fallacious in principle about the form of “slippery slope” arguments in an evidential context. Likewise, exactly what counts as a “causal fallacy” outside of a deductive context is a tricky matter, and there is no simple fallacious form. (This text has a whole chapter on causal reasoning.)
Relatedly, another item that shows up in the standard playbook is Mill’s methods for identifying causes. But we can do better. Our understanding of the scientific method has come a long way since the mid-nineteenth century. Any students who have taken a good stats course will either be embarrassed for us or take two steps back in understanding. Compare Chapter 7, which provides an overview of contemporary tests for misleading correlation, in the context of the text's broadly Bayesian picture of evidence.
I’ve also cut "enumerative induction”, at least as expressed in the standard way: "X percent of observed Fs are G; therefore, probably X percent of all Fs are G". This form of argument is indefensible even given the constraints of having a large unbiased sample. (You can observe a thousand randomly selected black ravens and still not be justified in thinking that all ravens are probably black, if you also know that almost all species have rare albino members.) Instead, this text treats statistical generalization as an instance of responsiveness to evidence; and thinking of evidence strength as a Bayes factor explains the need for large and random samples, rather than just stipulating it.
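To make the raven example concrete, here is a back-of-the-envelope Bayes-factor calculation. (The specific rates are my own illustrative assumptions, not figures from the text.)

```python
# Back-of-the-envelope Bayes factor for the raven example.
# Hypotheses (illustrative numbers only):
#   H1: all ravens are black
#   H2: ravens are black except for rare albinos (say 1 in 1000)
# Evidence E: 1000 randomly sampled ravens, all of them black.

p_e_given_h1 = 1.0            # H1 guarantees an all-black sample
p_e_given_h2 = 0.999 ** 1000  # roughly 0.37 under H2

bayes_factor = p_e_given_h1 / p_e_given_h2
print(round(bayes_factor, 2))  # about 2.72
```

So even a large, unbiased, all-black sample favors "all ravens are black" over the rare-albino hypothesis by less than 3 to 1: weak evidence, which is exactly why the generalization is unjustified for someone who knows albinism is common across species.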