Confidence and Publication: Comparing Russell and Wittgenstein

Many writers get stuck with doubts, while others plow through. How you respond to doubt as a writer—the confidence with which you approach the difficulties you face—has a crucial impact on your ability to write effectively. In this post, I want to briefly compare two writers of high quality who faced similar issues and responded very differently. I can’t say with certainty that the difference between the two was purely a matter of confidence, but I believe the comparison is instructive. Perhaps it’s a reflection on perfectionism, not confidence, but I think the two are related: the more confident person is able to say “eh, it ain’t perfect, but it is good enough to move forward.”

Russell and Wittgenstein

Bertrand Russell won the Nobel Prize in Literature for his voluminous writings and was extremely widely published as a leading 20th-century philosopher. Ludwig Wittgenstein, who was one of Russell’s students in the early 20th century, by contrast published only one book during his life, and that book (the Tractatus Logico-Philosophicus, for which Russell wrote an introduction) is not regarded as his most important work. In terms of their publication output during their lives, Russell was a giant, and Wittgenstein a shrimp. From the current moment in history, however, their prestige as philosophers is equal, or perhaps Wittgenstein is given more respect.

The Limits of Logic

In the 1910s, when Wittgenstein studied with Russell, their project was logic and, to some extent, the mathematization of logical thought.  The concern was how to prove (or disprove) the truth of a statement.

Russell’s The Philosophy of Logical Atomism, a series of lectures delivered in 1918, is roughly contemporary with Wittgenstein’s Tractatus, completed in 1918 and published in 1921, and their subject matter is quite similar—both are works of analytic philosophy discussing logical proof. The question of interest here is how they handle the boundaries of logic.

At the beginning of Logical Atomism, Russell acknowledges an inevitable and unavoidable subjectivity at the foundation of what he is doing. If we want to prove the truth of a statement, we need to have some starting place—some statements that we know are true. But how do we know if something is true without having proved it? And how can we start the project of proving the truth of any statement unless we have something that we have already proven true? His response is to say, approximately, “we start with something undeniable.” Not true, only undeniable. He discusses what he means by undeniable for a paragraph or two, and then he moves on to other issues. Essentially, he says, “well, we can’t follow the rules of proof for our first statement, so we’ll just ignore those rules and accept our first statement as true because it seems undeniable.”  Practically speaking, that makes perfect sense; logically speaking, it’s almost inexcusable. Emotionally speaking, I would say that this is the choice of a person who has confidence in the value of their work, despite some flaws.

In the penultimate sixth chapter of the Tractatus, Wittgenstein struggles with what is either the same problem or a very similar one: he sees the logician as existing within the system being examined, creating the same sort of unavoidable subjectivity that concerned Russell. His response, however, was quite different. In the sixth chapter, he discusses how one cannot get the necessary objectivity, and that, lacking it, one has no grounds on which to speak. And he concludes the book with his seventh chapter, which I reproduce in full here: “Whereof one cannot speak, thereof one must be silent.” That’s the whole seventh chapter. One sentence. And Wittgenstein never published another book of philosophy in his lifetime. Logically speaking, this is perfectly sound. Practically speaking, however, it leads to paralysis. Emotionally speaking, I would say this is the choice of a person who doubts the value of their work.

Perfectionism and Confidence

To me, this is a story about confidence and a willingness to accept a logical flaw.  Both Russell and Wittgenstein recognized a similar logical limit, but Russell said “I will still proceed” while Wittgenstein said “This project is meaningless.” To me, logically speaking, Wittgenstein is in the right here.  If you are interested in a system of building certain truth through proof, the whole structure of truth fails if it is built on something that is not provably true. Wittgenstein recognizes this and essentially says “this project isn’t worth the effort because it’s ultimately fruitless.”

Russell’s response is very different, and I view it as a manifestation of confidence or even arrogance. Russell says, “weak foundations be damned, I’m still going to pursue this project.”

I don’t know what emotions and thoughts swayed the two men, or whether the issue was really confidence. But as a lesson for struggling writers, I think it can be instructive: the writer who pushes forward, ignoring problems, produces work for publication, while the writer who takes those problems seriously gets stuck, and may even be blocked from publishing.

Getting projects finished and published simply takes a willingness to push ahead, despite problems and weaknesses in your research.

This is not to excuse shoddy work, but rather to acknowledge the impossibility of creating perfection, and to prefer flawed productivity over inactivity brought on by doubts and imperfections.

Hume’s Problem and the Weaponization of Doubt

In his History of Western Philosophy, Bertrand Russell wrote something to the effect of “With subjectivism in philosophy comes anarchism in politics.” (I’m too lazy to go hunt up the proper quote, so this may be way off base, but my getting the quote right doesn’t change the basic argument here.) As someone who rejects objectivism in philosophy, who recognizes inevitable subjective elements in all reasoning, and who wants political stability as well as some element of democratic rule, I found this sentence problematic, even wrong. Of course, Russell lived through the Nazi era, when big lies were spread to create an alternate reality that inspired horrific acts of violence. But he had not yet seen the weaponization of doubt employed by so many today, especially by big business interests and many political actors.

David Hume is perhaps most famous for his framing of the problem of induction. Induction is the process of making generalizations from specific examples. So, for example, suppose you are looking at trees, and every tree you see has green leaves. Induction takes those many observations and makes a general rule: “trees have green leaves.” The problem of induction is that there is no guarantee that future observations will resemble past observations. To Hume, this was mostly important in distinguishing what we know (with absolute certainty beyond doubt) from what we believe (with good reason, but only as a conclusion from experience, which is necessarily fraught with the problem that the future might not resemble the past). Unfortunately, this basic logical problem has been used to much effect (and, in my opinion, much harm).
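Hume’s point can be made concrete with a toy sketch (illustrative only; the trees and leaf colors are hypothetical data): no number of past observations can guarantee that an induced rule survives the next observation.

```python
def induce_rule(observations):
    """Naive induction: generalize 'all trees have green leaves'
    if and only if every tree observed so far has green leaves."""
    return all(leaves == "green" for leaves in observations)

# A thousand confirming observations...
seen = ["green"] * 1000
print(induce_rule(seen))   # → True: the generalization holds -- so far

# ...but nothing guarantees the next observation conforms:
seen.append("red")         # say, a copper beech
print(induce_rule(seen))   # → False: the rule is broken
```

The asymmetry is the point: confirmation never closes the question, while a single counterexample settles it—the logic that Popper later built on.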

Much of science proceeds, to some extent, on the basis of “the best-tested theory”—on theories that have been tested and passed those tests, but are subject to further testing. According to the theories of Karl Popper—a philosopher who developed a famous response to Hume’s problem—we can have some certain knowledge in science: we can know (with certainty) that things are false. Because we can know that things are false, we can test and disprove theories, and thus eliminate bad ideas.  This basic structure is common in many fields, where a “null hypothesis” is shown to be false (or at least highly improbable), and an “alternate hypothesis” is therefore accepted.
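The disprove-the-null structure can be sketched with an exact binomial test, using only the standard library (a minimal sketch; the coin-flip numbers are hypothetical):

```python
from math import comb

def binomial_p_value(k, n, p=0.5):
    """One-sided p-value: the probability of seeing k or more
    successes in n trials if the null hypothesis (true success
    probability p) holds."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Null hypothesis: the coin is fair (p = 0.5).
# Suppose we observe 58 heads in 60 flips.
p_val = binomial_p_value(58, 60)

# We never prove the coin is biased; we only show the data would
# be wildly improbable if the null were true, and so reject it.
print(p_val < 0.05)  # → True
```

Note that even rejecting the null rests on a judgment call: the 0.05 threshold is a convention, not a logical necessity—exactly the kind of unprovable starting point Russell was willing to accept.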

In the hands of reasonable people who are interested in discovering the truth, this basic structure allows for progress, and thus scholars develop a general consensus about the basic facts. It is not a fully determined consensus—there is debate, and there are those who reject some or most of the consensus—but there is a general acceptance of most basic ideas.

But in the hands of those who have some agenda other than truth, Hume’s problem becomes a weapon to paralyze an enemy and seize power.

It has been widely reported that in the 1970s, scientists at Exxon identified the problem of global warming, and, seeing that such knowledge might hurt their business, the company developed a strategy of questioning global warming science. (It should be noted that Exxon was not necessarily at the forefront of this. For example, check out this article from 1965.) Hume’s problem makes it possible to question any theory, no matter how much evidence supports it: “well,” you say, “that is suggestive, but it can’t be considered conclusive.” In many cases you can offer some alternative explanation. This has been happening less with climate change over the last several years, as the evidence becomes even more overwhelming, but it used to happen much more often. I remember one man asking me, “well, if it’s caused by humans, why are the polar caps on Mars melting?” (implying, I presume, that the Martian polar caps melting showed some solar-system-wide force was at work).

But the most extreme weaponization of doubt that I have ever seen is the current GOP assault on the credibility of the American election systems. Let’s start by admitting that the American election systems are imperfect: there are mistakes made, and some of those mistakes may even impact the outcome of votes. That is why, in addition to systems for gathering and tabulating votes, there are systems already in place for checking the results of an election. Of course, these systems, too, are imperfect. 

Like any knowledge based on observation, the election-checking systems can always be challenged by the question at the heart of Hume’s problem: just because we haven’t observed something (vote fraud) yet doesn’t mean we won’t observe it in the future (if we run another audit). What the GOP keeps doing, in calling for further investigation into the election, is relying on the basic logic of Hume’s problem. This is the argument that is driving the current audit of votes in Maricopa County: “sure, there were already multiple audits, but just because they didn’t find fraud doesn’t mean the fraud doesn’t exist; it just means that the audits didn’t look for the right things.” Whatever checks you might carry out, you can make up some new claim and say, “You haven’t proved this didn’t happen.” Case in point: the Maricopa County audit has apparently been looking for traces of bamboo to prove that fake ballots were introduced into the election count by sinister Asians. “Sure, you didn’t find any local interference, but what about the Chinese? They managed to inject thousands of fake ballots into the system to help Biden.” And, yes, I’m sure that no one has checked to see if the ballots were faked and forged in China. (And let’s just forget the fact that they did check that the counted ballots were from registered voters, and that no voter voted multiple times; so for the Chinese scheme to work, the Chinese must somehow have made fake ballots only for people who were registered but did not vote, without making any ballots for people who were not registered or who registered and voted.)

Hume correctly pointed out that we cannot prove with certainty the claims we make based on observation. But, while this is logically correct, when that doubt is used in bad faith to ignore the vast preponderance of evidence, there’s a big problem.

I’m no big fan of the Democrats; I think they have been too complicit in many of the worst failures of the USA during my lifetime. And, in my heart, I am conservative (not politically conservative, but actually conservative in that I would like to mostly keep things as they are—there are things that need changing, but let’s only change those things and keep all the rest). But in contrast to the Republican party, it can at least be said that the Democrats are apparently interested in truth, evidence, and data based on observations, all of which are really good things. During my adult life, it seems to me that the Republican party has consistently strayed farther and farther from the truth. I only remember Watergate from a child’s perspective, but obviously Nixon’s honesty was an issue. And the GOP—at least some members of it—called on the president to step down when it became clear he was a criminal (and apparently, those people were willing to vote to impeach). The Reagan administration at least tried to cloak its work in theory—the Laffer curve was at least an academic theory promulgated by an academic. Works like The Bell Curve by Herrnstein and Murray at least tried to give an intellectual defense of GOP perspectives. I don’t think much of the way Herrnstein and Murray handle data (I think they confuse correlation with causation), but I would give them the benefit of the doubt that it’s just bad data analysis, rather than intentionally deceitful analysis. The investigation of the Clintons was a preliminary weaponization of doubt—it started with a supposed real estate fraud (Whitewater), but reached out in any direction it could to find reason to attack Clinton, ultimately resulting in Clinton’s impeachment for lying about his relationship with Lewinsky. There were no real limits on an investigation that said, “we haven’t found any fraud yet, but that doesn’t mean it doesn’t exist.”
The second Bush administration lied about weapons of mass destruction and many other things, but at least it seemed to be trying to come up with realistic alternate hypotheses. The GOP since then, however, is all about baseless conspiracies that no evidence can set to rest. No matter how much evidence there was that Obama was born in the US, the birther conspiracy just rolled on.

GOP attempts to “find the truth” about the 2020 election are bad-faith arguments. They have nothing to do with finding the truth. They are a weaponization of doubt to gain political power. Whenever confronted with actual evidence that there wasn’t fraud (like recounts and audits of votes), the GOP answers, “well, you just haven’t found it yet.” It’s not a reasonable search for truth; it’s an attempt to gain political power by reducing people’s faith in the electoral system.

It should be noted that my primary interest is in finding the truth. The fact that I prefer Democrats to Republicans follows from my interest in the truth. I do not prefer Democratic policies because they are proposed by Democrats (indeed, I loathe many Democratic policies). Instead, I prefer policies that are based on the truth, and then prefer the party that shows closer adherence to the policies I would espouse. My objection that the GOP is arguing in bad faith is not based on my preference for Democrats, but rather on my observation that they are weaponizing doubt and engaging in intentional deflection, distraction, and disinformation. I believe in the truth. And that belief shapes my preference for politicians who respect and respond to the truth.

Dealing with writer’s block, tip 7: Don’t get stopped by uncertainty

Writer’s block—strong emotional responses that interfere with writing—grows from any number of doubts about the self: that one will be rejected, that one doesn’t work hard enough, that one isn’t smart enough. In this post, I am going to focus on philosophical doubt and on the place of certainty in scholarly work. Intellectual doubt can trigger emotional doubts: if you have unanswered questions, it’s natural to think “I don’t know enough.” It’s good to think you don’t know enough—doubt sparks growth and learning—but it shouldn’t stop you from sharing what you do know. All scholars work in the face of uncertainty, but too many let their doubts silence them.

The frustration of uncertainty and intellectual doubt

Uncertainty is emotionally draining. Each new question that arises can drain energy and enthusiasm, and every answer can inspire new questions. Research can feel like a treadmill, where no matter what you have done, you still continue to chase knowledge. You want somewhere solid to stand, and the never-ending doubt can make you feel like you’re sinking into a morass. And, if you’re self-critical, it’s easy to think that this constant doubt is a personal failure: “I wouldn’t have this problem if I were smarter/had worked harder.”

You can’t eliminate intellectual doubt

Doubt lies at the heart of research: if you already knew the answer, there would be no reason to research a subject. When you get into the details of any area of research, questions begin to arise: how do you define the terms of greatest concern or interest? What theories or models do you use to explain the phenomena of interest? What are the limits of your research? What are the limits of authorities on which you rely (any sources you cite for methods, theories, definitions)? 

The famous skeptic David Hume pointed out that one can never be certain that the future will resemble the past (or, at least, that future empirical observations will resemble past observations), leaving scientists a legacy of doubt so strong that many researchers don’t even try to prove that things are true; they simply attempt to prove things false, and then argue in favor of the alternative. The idea of a “null hypothesis” that is disproven in order to accept an alternative hypothesis (as often seen in inferential statistics) is a response to this problem, known as “the problem of induction” and often called “Hume’s problem.”

If you are a scholar and you have doubts and questions and uncertainty, it’s the nature of the work, not a failing on your part. A lot of writers get stuck on their projects because of intellectual doubt: “I don’t know enough,” they say, “I have to read this article/book/etc. I can’t write until I’ve done that reading.” But research doesn’t eliminate doubt.  Published research does not eliminate doubt.  Yes, there are authors who argue their cases confidently and claim certainty, but that certainty is emotional, not logical.

Show your work

Your research may be incomplete, uncertain, and built on dubious foundations, but it still contributes to greater understanding of the world. Indeed, your incomplete, uncertain, and dubiously founded work shares those characteristics with all research, so it is valuable to other researchers looking to explain the same phenomena as you.

Often, as you may recognize from your own experience, research can be valuable because of some specific aspect—for example, an author with weak results might offer a very good definition of a concept, or might offer an interesting methodological perspective, or might just ask a really good question (even if they do a poor job of trying to answer it).

A lot of research explicitly discusses its own limitations, its questions left unanswered, and the new questions it raises, because other researchers can use that discussion of limitations to develop complementary research or to otherwise address weaknesses in the original work.

While it can be emotionally unsettling to write about all the weaknesses in your research project, it is actually a valuable and useful part of the work—both for its role in helping you understand your own work better and clean up errors, and for its role in communicating with others. Instead of letting your doubt on some issue stop you from writing, write about those doubts, be willing to explore them all in writing. Show your readers the variety of issues you considered, the problems they created, and your responses. Show the depth and complexity of your thinking, including the contradictions and doubts. Put it all on the page.  It’s entirely possible that other researchers will find your processes of reasoning interesting and valuable.

Obviously, it can be intimidating to focus on the weaknesses of your work and to think about discussing those weaknesses with other people. In an ideal world, the people who see your work would be supportive and interested in helping you improve your work, and therefore you wouldn’t need to fear writing about the weaknesses of your work. But in the real world, of course, people can be quite aggressive and competitive. Of course, that doesn’t go away even for work of the highest quality—there’s almost always someone who is going to say you’re wrong, whatever you say—so you might as well just get it over with and share your work.

Filling the gaps

In academia, it is common to talk about how research “fills the gaps in the literature,” or addresses questions unanswered by previous scholarship.  If you are addressing such a gap—especially if it’s a gap that other scholars think is important—then your attempt to fill the gap is valuable to the community of scholars, regardless of whether it succeeds.  If your work does succeed, the gap is filled, and if your work doesn’t succeed, scholars who follow you may be able to use your attempt to avoid the problems you faced and try a different way of attempting to fill the gap.  In both cases, your work helps the larger community.

It is true that there is a publication bias toward successful work, but the issue is not that you wouldn’t prefer to have successful work; it is what you do when the work you have done has problems. And your work is going to have problems if, as I argued above, intellectual uncertainty cannot be eliminated. So the value in your work, for other scholars, lies not only in the conclusions that you draw, but in the whole fabric of your search—in all your theoretical and methodological choices, and how they shaped your research, and the insights they give not only into the questions asked, but into the ways that we try to answer those questions.


Intellectual uncertainty is unavoidable, and to try to capture any absolute ultimate truth in words may be impossible. As early as the 6th century BCE, Lao Tzu wrote in the very first verse of the Tao Te Ching, “The Tao that can be spoken is not the absolute Tao,” or, to take a little liberty, “the truth that can be put into words is not the absolute truth.” If you’re making a conscientious effort to do good scholarship, which means critically questioning your own work as well as the work of others, you will certainly find places to doubt your own work, where intellectual certainty is impossible, and all you’re left with is work that is intellectually uncertain. But intellectual uncertainty can be paired with emotional confidence—the confidence that you made responsible and reasonable choices as you tried to understand the world better, and that your work, though susceptible to doubt, is also worthy of consideration for its contribution to the communal discourse in search of understanding.

Intellectual certainty is denied all scholars. A lot of success in academia goes to those who have emotional confidence, despite the intellectual limits of their work. Instead of letting uncertainty stop you, show your audience how you tried to deal with the limits of your (and your research community’s) knowledge.

The tarot's fool steps blindly toward the edge of a cliff. Researchers also advance without a clear vision of what lies ahead.

The Fool

While a researcher ought not be blindly stepping off a cliff, like the fool from the tarot, they do have to be willing to step into the unknown and risk the fall. Choose the course of action that seems best to you, and risk it, because no course of action guarantees a perfect outcome. Fortunately, as a writer, you’re unlikely to die if you take a chance by sharing an imperfect draft.

Searching for Truth

As a philosopher, I have long since concluded that if there is such a thing as an absolute, completely objective truth, it is not something to which we humans have access. My fallback quotation on this point is from the first verse of the Tao Te Ching: “The Tao that can be spoken is not the absolute Tao,” which I interpret to mean something like “we can’t put it all into words (or other representations).”  

Despite this basic presumption, as a philosopher and teacher, I very strongly believe that there is a difference between truth and falsehood, and that the attempt to distinguish one from the other is the extremely valuable role of the scholar/teacher/student/researcher/journalist/analyst in modern societies, especially those founded on the idea that governments are elected by the people.

How do I reconcile these two competing views—that there is no ultimate Truth (at least that we can express) and that there is a difference between truth and falsehood?  I suppose my answer is that the question is resolved situationally: at times the question of what is true is difficult and elusive, and other times it is clear and distinct. If pressed into a close philosophical argument, I would take the position that truth is elusive and that the truth that can be put into words is not the absolute truth, but in many cases a close philosophical argument is not necessary or even useful.


My notions on this subject have been, I suppose, strongly influenced by my (very limited) understanding of the Pragmatic school of philosophy, sometimes called American Pragmatism, which is historically associated with Charles Sanders Peirce and William James, and more recently with C. West Churchman and Hilary Putnam, and which is generally associated with the notion that “truth is what works” and the idea of the “cash value” of an idea. These ideas seem to me important, though they do not in my mind comprehend all the issues related to the difference between truth and falsehood. But again, the more we try to engage in close philosophical argument, the more elusive the issues become. And this, perhaps, is the value in thinking that truth is what works: in many cases the question of truth is important because of how ideas of truth guide our actions (it is on this point that I wrote about how all knowledge is political). If there were an ultimate truth, it would be a valuable guide to our actions (plans work better when they take the facts into consideration), but in the absence of ultimate truth, there is still the value that we can get from a pragmatic view of “what works.”

One problem with viewing truth in terms of “what works,” or “cash value,” however, makes an appearance if we ask “for whom?” Who gets the cash value? Politicians and businesses have often uttered falsehoods that brought them all sorts of personal gain. The tobacco industry maintained for a long time that tobacco wasn’t unhealthy. Exxon (now ExxonMobil) long profited from denying that climate change was real. Politicians have lied about all sorts of things for their own personal gain.

The Problem with Individual Notions of Truth

But in these questions we can see part of the problem that can beset notions of “truth.” If truth is only what works for a given person, then there is great social danger, as some people will inevitably argue from purely personal biases and, if their intent is bad, can poison any possibility of cooperation or constructive compromise. Thus the desire for some objective standard—something that is true for everyone. And I believe that there are such things, even though the abstract search for an ultimate, objective truth will not lead to a certain end.

There are things that can be considered objective truths in simple, everyday actions. If I go to the supermarket, for example, and fill my shopping cart, there is a true and definitive answer to the question “do I have enough cash to make this purchase?” Either I am carrying sufficient cash or I am not, and that answer (whether I have enough cash) is true for me, for the cashier, for the store manager, and indeed for every human being. Admittedly, the question of whether I, Dave Harris, have enough cash to purchase the groceries in my cart is not one of interest to most people, but it is one example of a whole class of questions that are amenable to absolute true-false answers. Each successive shopper is asked the same question: do you have cash to pay for this? Each successive shopper either does or does not. There are many different questions that can be answered in such absolute and objective terms.
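The shopping-cart question is decidable precisely because it reduces to arithmetic and a comparison; a trivial sketch (all prices hypothetical):

```python
def can_pay_cash(cart_prices, cash_on_hand):
    """Is the cash carried at least the cart total? Every observer
    -- shopper, cashier, store manager -- computes the same answer."""
    return cash_on_hand >= sum(cart_prices)

print(can_pay_cash([4.50, 12.25, 3.00], 25.00))  # → True: enough cash
print(can_pay_cash([4.50, 12.25, 3.00], 15.00))  # → False: not enough
```

The answer does not depend on who asks; that observer-independence is what makes this class of questions objective.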

The Desirability of Truth

For practical reasons, it would be great if we could identify absolute, objective truths more frequently: there is little debate about possible courses of action when faced with an absolute truth. But such truths are too few and far between. For most questions, there is too much opportunity to question and doubt.  A general premise driving the skepticism of David Hume is that the future might differ from the past—no matter how many observations we make that agree with a premise, there is no certainty that future observations will match it. Other problems can arise for certainty, as well, when dealing with concepts that can be interpreted in a variety of ways.  We may all agree that one man killed another, but was it murder?  To answer the question of murder depends on how we define murder.  It is to deal with such questions that judicial systems are developed to make judgements about how to define and understand certain events that are not amenable to any abstract ultimate standard of truth.

Scholarly Truth and Legal Truth

Judicial systems and scholarship have a lot in common; they both seek confidence in the claims they make. They try to take into account evidence; they try to separate out those truths that can be ascertained (did he kill the man?) from those that cannot (did he murder the man?).  In both cases, those involved are ultimately forced to make decisions on the basis of best evidence or probability rather than ultimate certain truth. And in both cases, the decisions made are, in the long run, subject to dispute and revision as new evidence and knowledge come to light.

An Ongoing Search

The fact that these systems are fallible is a product of the nature of our human knowledge, but this does not mean that we ought not continue to seek the truth. There are those who act in bad faith, who try to deceive others for their own personal gain. To allow such people to use the unavoidable doubt about some questions to poison the well of scholarship or of legal systems and the larger social systems they represent is to abdicate responsibility to those who would lie for their personal gain. That is why it is important, first, to remember that there are questions that can be answered with ultimate objective truth, and, second, to strive to find such undeniable truths on which to base decisions. Just because truth is elusive does not mean we ought not seek it. Seeking truth is a process and a principle; it is, in my mind, one of the fundamental principles that should guide any moral system. Without a genuine commitment to seeking truth, societies fall into evil and dangerous patterns where malevolent actors visit harm on many to satisfy their own selfish aims.

Of course this last premise falls into an area of understanding that is much debated: the realm of right and wrong/good and evil.  In this realm, I make no claim to an ultimate truth, but I feel strongly that there are good and evil in the hearts of humans, and societies are most likely to prosper when the interest in the good outweighs the interest in evil. Regardless, the search for truth is, at least as I see the world and societies in the world, an act of good more than of evil (even if evil has been done in pursuit of truth).

The Basics of Logical Analysis 3: Concluding

I wanted to conclude the line of discussion I was following in my previous posts, with an eye toward the experience a researcher might have in beginning to define a new project, particularly those in areas where the researcher has not done a lot of previous research.  I also wanted to try to make my examples a little more detailed and academic in terms of focus. I’m still going to be working with an example from an area where I have little experience because it’s close to one of the concerns of a writer with whom I’m working.

Down the Rabbit Hole 4: Fractals

The previous post talked about “going down the rabbit hole” as a metaphor for the way a question can seem initially simple and small but take on detail and scope as it is examined more closely. Another parallel is the fractal: an image derived from a recursively defined mathematical operation, such that as you magnify the image, new detail continuously emerges. The Mandelbrot Set is one of the most famous fractals.
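To make the recursive character concrete, here is a minimal sketch (my own illustration in Python; the post itself contains no code) of the iteration that defines the Mandelbrot Set: a point c belongs to the set if repeatedly applying z → z² + c never escapes toward infinity. “Zooming in” just means sampling c values from an ever-smaller region, and new detail keeps appearing.

```python
def escape_count(c: complex, max_iter: int = 100) -> int:
    """Iterate z -> z**2 + c, returning the number of steps before |z| exceeds 2.

    Points that never escape within max_iter steps are treated as members
    of the Mandelbrot Set (the iteration stays bounded).
    """
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

print(escape_count(0j))     # 0 is inside the set: stays bounded, returns 100
print(escape_count(1 + 0j)) # 1 escapes after just a couple of iterations
```

The same tiny rule, applied over and over, generates the set’s famously inexhaustible boundary detail, which is the analogy to analysis: a simple repeated operation producing unbounded complexity.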

Research shares something of this characteristic. It may not be infinitely recursive (though some have argued that it is), but generally, if you examine any issue closely, it will lead to more questions.  This is due to the basic nature of analysis: if we analyze things into separate parts/aspects/issues, each of those separate parts can itself be analyzed into its own constituent parts.  Jorge Luis Borges wrote an essay titled “Avatars of the Tortoise,” in which he argues that infinite regressions “corrupt” reasoning. One of his examples: to define a word/concept, it is necessary to use other words, and each of those other words then needs to be defined, which requires other words, which in turn require their own definitions, and so on. I’m not sure that the pattern is infinite (there are, after all, only a finite number of words, so for definitions at least the regression can’t be infinite), but the multiplication of details can quickly become overwhelming.

The Nobel Prize-winning psychologist and economist Herbert Simon, who studied decision-making, coined the term “satisficing” to describe how some decisions must be made without a full logical analysis, because such analyses take too long and become too detailed.
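The contrast can be sketched as two search strategies (a toy example of my own, not Simon’s): the optimizer scores every option before choosing, while the satisficer stops at the first option that clears a “good enough” threshold.

```python
def optimize(options, score):
    """Exhaustive search: score every option, return the best one."""
    return max(options, key=score)

def satisfice(options, score, threshold):
    """Stop at the first option whose score clears the threshold."""
    for opt in options:
        if score(opt) >= threshold:
            return opt
    return None  # nothing was good enough

# Hypothetical restaurant ratings for illustration.
restaurants = {"diner": 6, "bistro": 9, "cafe": 7}
names = list(restaurants)

print(optimize(names, restaurants.get))      # "bistro": best overall, but scored everything
print(satisfice(names, restaurants.get, 5))  # "diner": first acceptable option, search ends early
```

The satisficer may miss the best answer, but it finishes quickly, which is exactly the trade-off a researcher makes when deciding to stop analyzing and move on.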

As my earlier examples of reviewing a restaurant or movie showed, it’s pretty natural to see different aspects in things: the restaurant has food and service and ambiance; the service has courtesy and competence; courtesy has all the different things that different people said and did. It may be simple to say whether you liked the restaurant, but to explain in detail all the different factors that contributed to that decision is another matter altogether.

Fractal: The Barnsley Fern
Each leaf, if expanded, shows structure and fine detail similar to the larger frond.
Image by DSP-user / CC BY-SA

A More Focused Example

So far, I’ve been giving pretty general examples; now let’s get more focused.

Let’s imagine a hypothetical student studying business management.  And let’s imagine that this student has what we can call “The Fruit Theory of Management,” which assumes that giving employees fruit improves performance. (I was going to call it “Apple Theory” but didn’t want it to be confused with a reference to the big corporation.)

The Fruit Theory

On its face, the fruit theory of management is ridiculous, but since I’m talking about a general structure of research, the precise theory in question is not so important (as will hopefully become obvious in a moment). Instead of “giving employees fruit” we could use “giving employees training in XYZ,” or, more generally, “instituting policies ABC.”  “Giving fruit” can stand in for any possible intervention. And instead of “employees,” we could substitute almost any group—students, parents, plumbers, etc.—and in each of these cases we could either find a suitable measure of performance or replace “performance” with some other construct to measure (e.g., happiness, health, etc.).

We can even generalize this to any basic causal pattern: “giving fruit leads to better performance” is a specific example of the general pattern “X causes Y.” Most research is concerned with causal relationships in some way or another, so although I’m going to focus on fruit theory, the same process applies to almost any research question.

Studying Fruit Theory

So, we have our business management student who wants to research fruit theory. Generally speaking, a starting point for fruit theory would be to define the theory.

So the student tries to write down a definition (or speaks a definition in conversation with someone). At this point, the process of analysis inevitably has already begun: the words used can themselves be examined individually.  So, if the theory is “giving fruit leads to better performance,” there are elements that can be defined individually. 

For starters, we can ask “what is fruit?”  In everyday conversation, we know what a fruit is and don’t need a definition. But if we’re developing research and examining causal relationships, we want to define things more closely and formally. (Research needs formality and detail so that others can check the research.)  For example, fruit theory might call for fresh, ripe, worm-free fruit that people would enjoy eating (a definition that is not identical with a more general understanding of fruit, which includes unripe or wormy or rotten fruit). That might lead to a whole set of questions about how to identify fruit that people would enjoy eating, which could lead to more general questions of what it means for people to enjoy eating. (Or maybe the real issue is that people enjoy receiving fruit as gifts—that would lead to a different definition of what “fruit” is.)

To study fruit theory, we also need to define what counts as “giving” and what counts as “better performance.”  As for “giving,” there is some question of the specific details of how the transfer is made and whether any conditions are placed on that transaction, including any potentially hidden costs. But defining giving is relatively simple compared to the question of “better performance.” Measuring performance raises a huge array of questions: Whose performance? Are we measuring the performance of the organization as a whole? Or of individuals in it? What kind of performance? What dimensions of performance are we measuring (speed? accuracy? gross sales? net sales? etc.) and over what time periods? Are we measuring cash flow of the business over a month? Or employee sick days taken over a year? Or profitability over a decade? There are any number of different ways to think about the general concept of performance.

To develop research, we might also need to specify further the causal mechanism by which fruit theory works. Does giving fruit work because fruit makes people healthier, and therefore better able to work hard (as the old saying goes “an apple a day keeps the doctor away”)? Is there a physiological causality? Is that physiological causal path one that gives people more energy? Or one that improves their strength? Or one that boosts their mood?  Or maybe the causality is not physiological but psychological: giving employees gifts makes them feel appreciated and they want to work harder as a result?

Answers lead to new questions

Whenever we make a choice of where to focus attention, we can find new questions to pursue. We may start pursuing a question of business, as in fruit theory, but that question might lead into other fields of study.  If we posit a physiological cause for fruit leading to better performance of employees, then we need to study physiology. That study might lead in a variety of directions: maybe fruit theory works because fruit improves health, reducing sick time lost—that would lead to the study of immunology: how and in what ways do apples improve immune response? Or maybe fruit theory works because of some other physiological effect: strength, endurance, mood. Since different foods and substances can impact strength, endurance, and mood, maybe fruit has such effects?  If one thinks that fruit has a physiological effect on mood, one might then be led into questions of which specific biological pathways lead to mood improvement, and perhaps, in studying that research, one sees that other researchers have identified different kinds of mood improvement, and perhaps debate the ways in which physiology affects mood.

New answers pretty much always suggest new questions.  

Preventing Over-analysis

You can take analysis too far. If you constantly analyze everything, you end up with a great mass of questions and no answers, and you can get swamped in doubt.  There is no rule for when to stop, beyond that at some point it is necessary to say “I’m satisfied with my answer to this question.”  Such statements close off one potential avenue of study to allow focus on another, and they set limits on what you need to study—limits that are necessary for the practical reason that it’s good to finish a project, even an imperfect one.

If you say “I’m satisfied that the reason Fruit Theory works is because fruit makes people healthier,” you don’t need to pursue questions of whether and how and how much fruit promotes health, and you can go on to focus on how improved health helps a business.  Or if you say, “I’m satisfied that fruit theory works,” you can go on to study details of implementing fruit theory.  Of course, it’s good to have reasons, and good to be able to explain those reasons: if you’re satisfied that fruit theory works, it’s useful to be able to give evidence and reasoning. In academia, that evidence often comes in the form of other research literature. If you can cite five articles from reputable sources that all say “fruit theory works,” then you can go on to your research in implementation without getting embroiled in any debate about whether fruit theory works—even if the five articles you cite are not yet accepted by all members of the scientific community.


Analysis itself isn’t really that hard at the small scale—we do it automatically to some extent. But it grows increasingly difficult as we invest more energy in it: the more detail we add to our analysis, the more opportunity there is to analyze further, which can lead to paralysis or to getting swamped. It is something that wants care; it wants attention to detail.

The basics of logical analysis 2: Down the rabbit hole

Continuing my discussion of analysis from my previous posts, I look at how analysis can lead to new questions and new perspectives. Just as Alice ducked into the small rabbit hole and found an entire world, so too can stepping into one small question open up a whole world of new questions and ideas.

If you look at things right and apply a bit of imagination, analysis quickly leads to new questions.  Even something that looks small and simple will open up to a vast array of interesting and difficult questions. 

The multiplication of questions that arises from analysis can be good or bad. New questions can be good because they can lead to all sorts of potentially interesting research. But having too many questions can be bad, both because it can interfere with focusing on one project, and because it leads to complexity that can be intimidating. Learning to deal with the expanding complexity that appears with close study is a valuable skill in any intelligence-based endeavor—whether scholar or professional, decisions must be made and action taken, and falling down a rabbit hole of analysis and exploration will sometimes interfere with those decisions and actions.

This post follows up on my previous one, in which I argued that we analyze automatically and that the work of a researcher includes making our analyses explicit so that we and others can check them.

In this post, in order to show the potential expansion of questions, I’ll look at a couple of examples in somewhat greater detail. While I won’t approach the level of detail that might be expected in a scholarly work meant for experts in a specific field—I want my examples to make sense to people who are not experts, and I’m not writing about fields in which I might reasonably be called an expert—I hope to at least show how the complexity that characterizes most academic work arises as a natural part of the kind of analysis that we all do automatically.

Looking more closely: Detail appears with new perspectives

In the previous post, I used the example of distinguishing the stem, seeds, skin, and flesh of an apple as a basic analysis (separation into parts), but it was quite simplistic. Now I want to examine how to get more detail in an analysis of this apple.

For starters, we can often see more detail simply by looking more closely (literally): In my previous post, I separated an apple into skin, flesh, seeds, core and stem.  But we could look at each of those in greater detail: the seed, for example, has a dark brown skin that covers it and a white interior.  With a microscope, the seed (and all the rest of the apple) can be seen to be made up of cells.  And with a strong enough microscope, we can see the internal parts of the cells (e.g., mitochondria, nucleus), or even parts of the parts (e.g., the nuclear envelope and nucleolus of the cell’s nucleus). This focus on literally seeing smaller and smaller pieces fails at some point (when the pieces are themselves about the same size as the wavelengths of visible light), but in theory this “looking” more closely leads to the realms of chemistry, atomic and molecular physics, and ultimately to quantum mechanics. Now we don’t necessarily need to know quantum mechanics or even cellular biology to study apples—you don’t necessarily visit all of Wonderland—but those paths are there and can be followed.

In this apple example, each new closer visual focus—each new perspective—revealed further detail that we naturally analyzed as part of what we saw.  But division into physical components is only one avenue of analysis, and others also lead down expansive and detailed courses of study.

So Many Things to See!

We can look at different kinds of apples in a number of different ways. (Not to go all meta here, but we can indeed separate—analyze—distinct ways in which we can analyze apples.)

At the most obvious, perhaps, we can separate apples according to their variety, as can be seen in markets: there are Granny Smiths, Pippins, etc., so that customers can choose apples according to their varied flavors and characters.  Some people like one variety and not another.  These distinctions are often made on the basis of identifying separate characteristics of apples (another analysis): “I like the flavor and smell, but it’s kind of mealy and dry;” or “It’s got crisp flesh and strong flavor; it’s not too sweet.” Flavor, texture, appearance (color, shape, etc.), and condition (e.g., ripe, overripe) are all distinct criteria that a shopper might consider with respect to an apple.  These aren’t exactly the kind of thing that would be the subject of academic study, but they could certainly lead to more academic questions.

The question of apple variety, for example, could be seen through the lens of biology. There are the questions of which genetic markers distinguish varieties and the ways in which those genetic markers tell us of the relationships between different types of apples and their heritages.  The question of heritage brings up another aspect of apples that could be a study for a biologist: How did a given strain develop? There are wild apples, which developed without human intervention; heirlooms, which develop through selective breeding; and hybrids, which grow from planned crossbreeding.  Combining these questions of genetics and heritage might lead a scholar to study the migration of a specific gene, for example to see if GMO commercial apple farms are spreading their modified genes to wild populations.

Another characteristic of an apple that a shopper might consider at the store is the price.  This is obviously not a matter for biologists, but rather for economists. And an economist might want to look at how apples get priced in different markets.  That might lead to questions of apple distribution and apple growing. Questions of apple growing might lead back to questions of biology, or to other fields of study like agronomy. Questions of distribution might lead to questions of transportation engineering (what’s the best means to transport apples?) or to questions of markets (who are potential producers/distributors/vendors/consumers? what products ‘compete’ with apples?) or questions of government policy (how did the new law affect apple prices?).

So Many Different Perspectives

Different analytical frameworks can be found by imagining different perspectives on apples. In the previous section, I already linked the study of apples into fields like biology and economics and more, but there is wide potential for study of apples in many areas. 

Think about university departments where apples might get studied. Biology, economics, and agronomy are three already suggested. But people in literature departments might study apples in literature—“The apple in literature: From the bible to the novel”. People in history departments could study the history of apples—“Apples on the Silk Road in the 14th century.”  Anthropology: “Apples and the formation of early human agricultural communities.” Ecology/Environmental Science: “Apples and Climate Change.”  

These example titles are a little strained because I have not made a study of apples in these contexts, and therefore I’m throwing out general ideas that are rather simplistic and free of real theoretical considerations.  More complexity would attend a real project.  The student of literature might look at different things that apples have symbolized because they want to make a point about changing cultural norms. Or they might look at how apples have been linked to misogynistic representations of women. Such studies, of course, are interested in more than just apples. As we combine an interest in apples with other interests, new potential ideas begin to arise.

Combining Perspectives

Most people have multiple interests and these interests can combine in myriad ways to create a vast array of different questions that could be asked about apples (or any other subject).

Pretty much any scholarly perspective has its own analytical frameworks that structure research. Biology analyzes according to genetic structure, for example. Business analyzes according to market and economic factors. When these frameworks start to overlap—a business analysis using genetic factors, or a genetic analysis driven by specific economic factors—multiple points of intersection appear. Each genetic structure (each type of apple) can be examined with respect to a variety of different economic factors (e.g., flavor, shelf life, durability, appearance). 

This multiplication of different ways of dividing things up (analytically, anyway) can be problematic because it creates a lot of complexity and because it can be confusing/overwhelming, but it can also present opportunities because each new perspective might have some valuable insight to add. 


What seems small and simple at first glance—a rabbit hole has a small and unassuming entrance—usually opens into a vast and expanding world of questions.

Analysis requires a bit of imagination—imagination to see a whole as composed of parts, imagination to consider different perspectives from which to view an issue, imagination to recognize the different aspects of things.  But a lot of this analysis is pretty automatic: little or no effort is required for the necessary imagination. Still, because it’s so easy and so natural, this process gets discounted—especially if you view “analysis” as something highly specialized that only experts do.

To develop a practice of analysis, all you really need to do is make a point of trying to make your different observations explicit.  Whether you’re judging an apple (taste, appearance, scent, etc.) or a theory (the various assumptions, conclusions, relationships to other theories), chances are good that you’ll pretty automatically respond to different aspects at different times. If you can formalize and record these different observations, you lay the foundation for developing your own analyses.

The Basics of Logical Analysis 1: Seeing Parts of Wholes

In this post, I revisit the general issue of analysis that I discussed in my previous post. There is a measure of overlap because I’m really searching for a way to communicate both the fundamental simplicity of analysis and all its potential complexity.  Maybe the general principle for this post is that analysis is, at its root, a simple intellectual action: dividing something into different parts. But this simple action inevitably leads to increasing complexity.

As with so many things in which analysis is involved, this post started out simpler and shorter than it has become. My original plan was to write one short post that just did a better job of explaining the ideas in the previous post. But then, as I thought more closely about it, I found issues that hadn’t been discussed in my previous post.  It’s now looking like this will be a series of posts—at least two: this one will discuss the big idea of analysis and relatively simple, everyday examples; the next will look at some examples more closely, in hopes that they feel more like academic examples. I suspect that may end up as two or more posts. In a way, this story encapsulates an aspect of analysis in practice that I want to emphasize here: the more you do it, the more complexity you see, and that leads to expanding projects, which must be reined in for purely practical reasons: basically, if you want to finish a project, you have to stop analyzing everything. (And as I write that, I wonder whether I haven’t sparked the foundation for a third post: how do you stop analyzing once you’ve started? It’s an idea that I touch on briefly in the second post, but maybe it deserves its own? I’ll have to think about that…)

What is “Analysis”?

At its root (its etymological foundations), “Analysis” is derived through medieval Latin from the Greek for “unloose” or “take apart.” (In contrast to “synthesis” whose roots lie in the Greek for “put together.”) This sense is generally in line with how the word might get used in a conversation. For example, after [a movie/a TV show/a meal at a restaurant], if one person is talking at length criticizing details of the [movie/etc.], the other might get exasperated and say “Stop picking it apart,” or “stop over-analyzing it.”

It is this basic “picking apart” that concerns me in these posts. It is a basic principle that can manifest informally (as a person might do with a movie/etc.) or one that can manifest as extremely detailed and formalized systems of analysis, as with psychoanalysis, or statistical analysis, or data analysis, or any other field that uses “analysis” in a title.

We Do It Automatically

The kind of analysis that is important in research (and other intellectual work) is something that humans do naturally and automatically—often without even noticing that we’re analyzing.

To apply it in research is to take an automatic, unconscious ability and work to make it conscious and explicit. Splitting things into pieces—into different parts or different aspects—is pretty easy. But making those divisions explicit is hard because of the complexity that tends to develop.

We all automatically split things up into different parts, which is reflected in our languages (including words like “parts,” “pieces,” “components,” “elements,” etc.) and much of our daily lives. We separate the world into all sorts of different categories. We eat food, which includes fruit, vegetables, meat, etc. We work, but have many different kinds of work: homework, housework, yard work, not to mention jobs, which are work. We separate the good from the bad. We divide people up into different groups: family, friends, acquaintances, people we don’t know, etc.

It’s true that many of these divisions are learned, but that doesn’t mean that we don’t naturally make divisions of some sort.

Analysis: Examples

Consider an apple.  It is a whole in itself, but we pretty naturally separate it into a few different parts: stem, skin, flesh, core, seeds.  Our basic sensory apparatus provides the distinguishing information: stem, seed, and flesh taste different, smell different, look different, and feel different.  Our senses are already giving us information about differences in the world, information that leads to analysis of the apple into its different parts.

Consider a movie.  It is a whole in itself, but we can easily divide it in many different ways familiar to cinephiles. We can say “The acting was pretty good, but the script was weak.” Or “The cinematography is great, the writing is great, the direction is ok, but the star annoys me, so I had trouble enjoying it.” We might like what we see (“great cinematography!”) but not what we hear (“poorly written dialogue”). We might like one actor and not another. Again, this is analysis in action, although few would think of this kind of thing as analysis, unless we really got into a lengthy discussion of different aspects of a movie, and then someone might say “stop analyzing it! You’re ruining it for me!”

Research and Analysis

Research takes this basic ability to distinguish between things and tries to make it explicit and formal. For the researcher, it’s not enough to say that it’s obvious that you have stem, seeds, and flesh, or acting, directing, writing, and cinematography. It’s necessary to begin to formalize.

Formalized analysis is crucial in research because it allows a research community to work together.  Researchers who don’t explicitly express their analyses can’t have their research reviewed or trusted by others. The need to share and to provide explanations and evidence that can be examined leads to detailed discussions (articles, books, etc.) that can themselves be analyzed (and will be, by other researchers looking for strengths on which to build and weaknesses to correct).

In practice, research communities develop different analytical frameworks and methods of analysis as a result of the attempt to explain and examine each others’ work. These become increasingly detailed and complex over time, as each successive generation of researchers turns their analytical abilities to the questions of interest. Sometimes entirely new analytical frameworks develop, but these, too, are subject to close examination that leads to complex formal analytical systems. 

Psychoanalysis, for example, depends on familiar analytical divisions: the id, ego, and super-ego represent parts of a larger whole. So, too, the conscious and unconscious.  Each different pathology is a part of the larger whole of “poor mental health.” And each pathology itself is distinguished by a number of different characteristics that are parts of the pathology. To become a psychoanalyst is to adopt a specific set of analytical frameworks regarding the psychology of individuals and the nature of psychotherapy as well.   Other theories of psychology and psychotherapy may not be called “psychoanalysis,” but they too adopt different analytical frameworks.

Mathematical analyses separate the world into different symbols that represent different parts of the world and distinct relationships between the parts. Physics, of course, presents the interactions of objects in the world as a set of symbols and mathematical equations. In a business setting, the large-scale system of a factory, for example, might get represented in mathematical equations that separate out machines that produce goods, goods that are produced, rates of production, costs of production, necessary workers, etc.


Analysis happens.  If you examine something closely—an object, an interaction, an idea—you will begin to distinguish different aspects or parts of it.  These distinctions are analysis. To move that analysis into an academic or research setting really only requires that you try to make your analyses explicit as you develop them, so that they can be examined for flaws (by you and by others).

Of course, making analyses explicit and then looking at those analyses with an eye for flaws may be a path to good research, but it is not a path to simplicity.

I’m going to close here and in my next post (or posts), I’ll look with greater detail at some examples to show different ways in which things can be analyzed and to discuss the expansion of complexity, which can be both good and bad.

Sophistry vs. Reason and Partisanship vs. Principle

In my previous blog post, I lamented the absence of logical certainty and the problem created by the absence of objective truth, where each person/group believes that they hold the truth and that therefore their political choices are necessary and correct while the choices made by others are based on falsehood or error.

I lament this unavailability of objective truth particularly because I believe there is a fundamental reality—that even if we cannot recognize or discover objective truth, there is a real difference between truth and falsehood.

Because of the political nature of knowledge—because people act on what they accept to be true—political actors have motivation to control knowledge that is disseminated in order to manipulate the behavior of other people.  This is obvious on the large scale: political propaganda is often deceptive. And on the small: people lie to shape the behavior of others (“I didn’t cheat on you, honey. I swear!” is meant to deflect anger, for example).  In this light, although we may not be able to find objective truth, we can certainly recognize at least one dimension on which we can differentiate truth from falsity.

Honesty vs. Deceit

Some people try to deceive, and I don’t want to focus my attention on them; that’s why I didn’t title this post “Honesty vs. Deceit.”  Some people willingly and knowingly try to obscure what they actually believe is the truth.  Take the tobacco industry, for example.  We know from the record that has been made public that tobacco companies actively and publicly promoted cigarette smoking as healthful, even while their internal documents clearly indicated their knowledge of the deleterious effects of their product. Or take Exxon, whose scientists internally agreed on the dangers of climate change in the 1970s, but whose public discourse promoted doubt about those very conclusions. In these cases, the companies presented ideas to the public that were at odds with information they held internally.

Cases of intentional deception are sadly too common. But I don’t really want to talk about intent so much as I want to discuss issues relevant to recognizing patterns of argumentation that are not based on reason or principle and hence are often used to avoid reason or principle.

Sophistry vs. Reason

This blog is mostly aimed at discussing ideas to help academics negotiate academia. In that context, I want to talk about the presentation of ideas and different things that one can look for as good or as problematic in the work of others, and things to avoid as a matter of principle.

There is a difference between arguments built on reason and arguments built on sophistry, and regardless of whether or not there is an objective truth, the difference between sophistry and reason can often be recognized. A good speaker or writer can often effectively hide sophistry, at least from a casual glance.  The art of rhetoric is often disparaged for its role as a tool for obfuscation—a matter of sophistry, not reason—but rhetoric can also be used in service of truth (or at least of the intention to tell the truth rather than to deceive). Even if you believe in the truth of your message, you may still struggle to get others to accept your ideas, and persuasion is valuable.  Understanding how to convince an audience is worthwhile.  But some tools of persuasion can be deployed to both honest and deceitful ends, while others are generally only deceitful.

One well-known tool of rhetoric that falls largely outside the bounds of reason is the ad hominem argument: focusing on the person who makes a claim rather than on the claim itself.  This can work in two ways: it can be used to attack a claim by arguing that the speaker is generally untruthful, or to support a claim by arguing that the speaker is generally truthful. The story of the boy who cried wolf offers insight into the problem with ad hominem arguments. Once people have decided that the boy is a liar, they do not check his claim that there is a wolf, even though there was, in the end, a real wolf.  A liar can make a claim that is true, and a generally honest person can make a claim that is false.  It is reasonable to consider the veracity of a speaker (or lack thereof) as one piece of evidence about the truth or falsity of a claim.  It is sophistry to treat the veracity of the speaker as the only indication of a claim’s truth, especially if there is other evidence that can be used to judge the claim in question. If someone avoids discussing the actual claim and instead focuses on a person, they’re probably engaged in deceitful sophistry. If a claim’s veracity is in doubt, the answer should not be sought solely in the person who made the claim.

Partisanship vs. Principle

It is well documented that in situations that should involve reasoned judgement, people show strong biases related to the people involved. This is known as reactive devaluation.[] So, for example, one study in the 1980s showed American participants an arms treaty between the US and USSR and asked them whether they approved of it. One group of participants was told the plan was proposed by Ronald Reagan, one group was told it was proposed by unnamed policy experts, and a final group was told it was proposed by Mikhail Gorbachev. All three groups saw the same plan. If people were making their decision based on the principles laid out in the treaty, then all three groups should have had similar approval rates. Instead, the results showed 90% support among those told it was proposed by Reagan, 80% support in the group told it was proposed by policy experts, and 44% support in the group told it was proposed by Gorbachev. The exact same plan got a vastly different reception on the basis of partisanship.

I don’t think decisions should be made on the basis of who your friends are. Or at least, I think that partisanship—supporting your friends and attacking your enemies—should not be the sole consideration when making plans. For all my concerns about the limitations of research and the general limits of human knowledge, I believe/wish/hope that decisions—mine, yours, and those made by groups, including political bodies—should rely heavily on actual principles, not on partisanship.

There are times when making a decision based on friendship is appropriate. If you decide to go to the restaurant your friend wants even though you read a really bad review, go for it; the ramifications are small. If you’re a researcher, however, and you ignore your data to support a friend’s work, that’s a very different thing altogether. And if you’re a policy-maker, and you reject actual evidence and your principles for the purpose of supporting your ally, that’s a gross violation of basic ethical behavior. If you tout the principles of honesty or fairness, but then put those principles aside for your friends, you are abdicating principle in favor of partisanship. This general observation is obviously applicable to politics, but it’s also true in research.

In research the motivation may not always be partisanship—the desire for fame and money may also prompt researchers to abandon principles—but whatever the motivation, it’s important to try to return research discussions to the principles that give research its foundation.
As I said in my previous post, I lament the absence of objective truth on which all can agree.  But I still believe that there are foundations on which people can build that will help ideas and discourse rise above the level of partisan sophistry while striving for the elusive fruits of principled reasoning.

Pursuing principles—and focusing on principles, like the principle of testing claims based on evidence rather than on the character of the person who made them—is a way both to move towards shared ideas and shared knowledge and to recognize when others are engaging in partisan sophistry rather than principled reasoning.

All Knowledge is Political: A Lament

When I was in graduate school, my advisor Jean-Pierre Protzen used to say that all knowledge is political. I think he attributed the idea to Horst Rittel, and I would nod my head in agreement, because there are many different sources from which that idea could have come, including Michel Foucault, whose arguments about the nature of systems of knowledge and the political roots of what gets to be considered knowledge I have found generally compelling. Logically speaking, I am generally convinced that all knowledge is shaped by politics.

Emotionally speaking, however, this is cause for despair. Without some objective standard to determine knowledge, all discourse devolves into whose voice is loudest and/or whose stick is largest. Logically, I see the problems with the idea of objective knowledge, but in my heart, not only do I long (desperately) for objective truth, I also believe that there is, in fact, a big difference between fact and fiction. As a philosopher, I recognize the many, many non-objective factors that shape any statement of fact, but still: there is a big difference between truth and fiction.

This view that all knowledge is shaped by political (or social/cultural/historical) forces captures only half of the idea that all knowledge is political, however. It looks at how knowledge is created and at how we can or can’t know things. This is very important, but there’s another side of the coin that I would like to examine, the active side of knowledge, if you will: the way in which knowledge shapes politics.

When people “know” something, that knowledge shapes how they behave. In this sense, “knowledge” may not be the right word—perhaps “belief” or “certainty” would be better—but I don’t want to get into a close debate about defining knowledge. If the foundations of knowledge are uncertain or contingent (as argued by my advisor, by Foucault, and by many others), then what is the difference between “knowledge” and “belief”? To say that our beliefs shape our politics is hardly surprising or interesting, I suppose, though I think it is a factor we can lose sight of.

This active dimension is crucial to understanding political discourse. Take, for example, a hypothetical university department. Different professors might compete for funding for their research and their students. This competition will at least partly grow out of differing ideas of what is true. To take a broad example, consider a Marxist economist and a Free Market economist. What may immediately spring to mind is the political difference: one might believe in Marxist communism as the best form of government and the other in some form of capitalism. But does that political difference drive the debate, or is it the result of something else? Given what I’ve already said, it should be clear that I think an idea of what constitutes knowledge is what shapes the debate.

Let us imagine, for a moment, that these two competing professors share the view that economic/political systems should treat people with justice, should protect the general welfare, and should reward the virtuous. Such agreement is, I believe, to be found between Marx and Adam Smith, on some level at least: both share an interest in the betterment of the overall polity. The differences lie in how they view the workings of societies and economic systems. To frame the difference broadly (yet, I believe, accurately): Marx argues that the betterment of the overall polity arises from communal action—from the action of the classes—while Smith argues that the betterment of society grows out of selfish, individual action (“By pursuing his own interest, [the individual] frequently promotes [the interest] of the society more effectually than when he really intends to promote it,” from The Wealth of Nations’ famous “invisible hand” passage; this is the second sentence following the “invisible hand” sentence).

In this example, the politics of the one professor are driven by a view of the importance of collective action. This professor will quite naturally align with political movements that valorize the community over the individual. And the politics of the other will be driven by the view that the individual must predominate.

We would see this on the level of national politics—in the choice of political party, for example—but it can have more local ramifications, in the form of departmental politics. Within this hypothetical department, the difference in view will result in differences of opinion about which students and job applicants are best suited to the department—decisions that have very real impacts on the department as a whole.

As I write this, I am reminded of a case in which underlying assumptions like these shaped action that wasn’t obviously political but could reinforce certain views and thus wind back to political impacts. In one of his books, George Lakoff analyzed a Chinese proverb: “cows run with the wind; horses run against it.” (I think this was in More Than Cool Reason, co-written with Mark Turner, but I’m not checking sources here—I’m going on my memory of a lecture by Lakoff that I attended. Since I’m using this example to illustrate how political beliefs influence people, it doesn’t require perfect accuracy.) The analysis he gave was that the proverb valorized the horses, recognizing their independence. But after publication, readers (at least one) informed him that in Chinese culture the proverb valorized the cows, showing their wisdom in working together. The proverb (at least in this translation) does not explicitly valorize either the horse or the cow; it merely differentiates. The value system adopted by the interpreter shapes the interpretation.

In our hypothetical example, the Marxist professor, preferring community action, might interpret the proverb in favor of the cow and thus view it as evidence confirming their view. At the same time, the Free Market professor would interpret it in favor of the horse and view it as evidence in favor of individual action. (And yes, the Free Market professor is factually inaccurate in the historical/ethnographic interpretation—the people who use the adage don’t interpret it this way—but that’s not really relevant in this discussion of how “knowledge” shapes actions that have political impact.) In both cases, these interpretations tend to shape further action and investigation, as well as future competition for department resources, with both professors feeling that their agenda is best for the department.

There is, to be sure, a feedback loop here: what one accepts as knowledge is shaped by political forces, and then goes on to shape political action, which in turn shapes what gets accepted as knowledge.

My lament is that this basic political nature of knowledge is, as far as I can see, tearing the world apart and endangering the future of humanity and most other species of life on earth. This statement, which is political in nature, is based on my understanding of the world—on my “knowledge.” It is not a statement driven by political partisanship, but by my best attempts to understand objective reality, even though I believe such understanding is problematic. To the extent that I prefer one political party over another, it is due to this understanding of the world: I prefer the party that recognizes the same situation in the world that I recognize. And I passionately believe that certain things should be done, as a result of being firmly convinced that my understanding of the world is basically accurate. The thing is, the folks who disagree with me would say the same. And they, too, are passionately committed.

Everybody is committed to their own view of the world. Few are willing to make the effort to learn and understand. Most are convinced of their own rightness. Hence the tremendous stress on the world, on the nations of the world, on the people of the world, and on all the creatures living in it.

“If only people understood the world with the clarity that I do,” I cry. That is my lament. And the lament of billions.  And if all the human race can do is compete over who is right and who is wrong, the future of humanity looks very bleak.