Principle and Partisanship: Team Truth

In my previous post, I considered the question of whether principle follows from partisan alignment, or, instead, partisan alignment follows from principle. And I said that if there is a “team” whose guiding principle is the search for truth, I want to be on that team.

Finding “team truth” would probably be easier if I believed in “truth.” Or perhaps “believe” isn’t quite the right word: “believe” can mean accepting that something is real or true even if its reality cannot be proven, and in that sense I do believe in “truth,” even though I also accept the strength and validity of many arguments against the existence of truth.

“Truth” doesn’t exist

Many have argued that objective truth is impossible. If we are hoping for “THE Truth,” using the capitalization from William James’s Pragmatism, and looking for “one system that is right and EVERY other wrong” (Pragmatism, Lecture VII: “Pragmatism and Humanism”)–what Hilary Putnam would call a “God’s-eye-view”–we run into problems.

American Pragmatists like James and Putnam are not alone in arguing this: post-modern philosophy (e.g., Foucault), theories of embodied cognition (e.g., George Lakoff), and skepticism (e.g., Hume) all reject objective truth. Philosophical results like Wittgenstein’s Tractatus and Gödel’s incompleteness theorems demonstrate the limits of any formal logical system: no consistent system rich enough to express arithmetic can prove every truth it can express. Even Karl Popper’s Objective Knowledge ends up describing a system in which the only objective knowledge we have concerns what is false; we never know that a hypothesis is true, only that it has not yet been proven false.

Jorge Luis Borges notes that infinity corrupts all ideas, including proof: any logical proof depends on the truth of its premises, but how do we know a premise is true? We have to prove its truth, of course, which requires other premises. And these premises must then be proven–thus proof is stuck in an infinite regression: it can never establish an absolutely “true” first premise. Bertrand Russell notes this same regression in his Logical Atomism and decides that you have to start with something that is not proven true but rather “undeniable.” But that only begs the question.

Each of these arguments has merits that I cannot deny or refute.

Team Truth

Although I believe that there is no such thing as provable, demonstrable objective truth, and even believe it is illogical to speak of things being “true,” still I believe in truth. Not only do I believe in it, I am an ardent advocate for it. I think the search for truth is both socially valuable and personally rewarding.

This is a cognitive dissonance that bedevils me. But humans manage cognitive dissonance all the time. Although “truth” eludes logical definition, there is a difference between what fiction writers do and what scholars do–at least a difference in purpose: the fiction writer is inventing things that definitely did not happen, while the scholar is trying to identify things that actually do (or can) happen.

Even though I lack a logically defensible “truth,” still I recognize that some things are real and true and others are not. For example, this morning, I walked the dog (that’s true), and I did not walk the cat.

I’m rooting for team truth.

Principle and Partisanship

My uncle and I were once debating a political issue when he said to me something like “You support that position because of your partisan alignment.” I responded that he had it backwards: to the extent that I had a partisan alignment, it was shaped by my position on specific issues.

Separately, I had a friendly acquaintance who would regularly argue that scholarly work that disagreed with Republican positions was biased because the authors were Democrats. In that situation, too, I argued for the possibility that the authors were Democrats because their scholarly conclusions disagreed with Republican claims.

Certainly, there are people whose biases affect what they do and say in both conscious and unconscious ways. An economist, for example, who supports the Unnamed Party, might conceivably agree with Unnamed policy because of partisan alignment, and that alignment might influence their research, results, and policy recommendations.

But is there only partisanship? My uncle and my acquaintance both argued that all views are shaped by partisan alignment. But that assumes that everyone feels allegiance to extant parties, when it’s pretty clear that many people don’t immediately choose an alignment–witness the millions of voters in the US who register without party affiliation. Additionally, it raises the question of why people choose a partisan alignment in the first place: we can’t assume that everyone simply accepts the partisan affiliation of their parents.

Doesn’t it make sense that people would choose a party affiliation because their principles align with the principles of the party? Imagine a school teacher given a choice between the Schools-are-terrible Party and the Schools-are-great Party. What about a member of a labor union given a choice between the Union-busting Party and the Union-supporting Party? Or an environmental biologist, someone who has dedicated their life to scientific study of the environment and, as a result of that study, concluded that anthropogenic climate change is real? Won’t that person be tempted to align themselves with a party that respects their work rather than ridiculing it?

My uncle said to me: “everyone wants to fit in on their own team.” I said to my uncle: “what about people who don’t feel they fit on any team?” Personally, I’ve never fit in well with groups. But when a team is dedicated to a principle that is important to me, I like them for that reason (even if there may be other reasons I dislike that team).

Confidence and Publication: Comparing Russell and Wittgenstein

Many writers get stuck with doubts, while others plow through. How you respond to doubt as a writer—the confidence with which you approach the difficulties that you face—has a crucial impact on your ability to write effectively. In this post, I want to briefly compare two writers of high quality who faced similar issues and responded very differently. I can’t say with certainty that the difference between the two was purely a matter of confidence, but I believe the comparison is instructive. Perhaps it’s a reflection on perfectionism, not confidence, but I think the two are related: the more confident person is able to say “eh, it ain’t perfect, but it is good enough to move forward.”

Russell and Wittgenstein

Bertrand Russell won the Nobel Prize in Literature for his voluminous writings and was extremely widely published as a leading 20th-century philosopher. Ludwig Wittgenstein, who was one of Russell’s students in the early 20th century, by contrast published only one book during his life, and that book (the Tractatus Logico-Philosophicus, for which Russell wrote the introduction) is not regarded as his most important work. In terms of their publication output during their lives, Russell was a giant, and Wittgenstein a shrimp. From the current moment in history, however, their prestige as philosophers is equal, or perhaps Wittgenstein is given more respect.

The Limits of Logic

In the 1910s, when Wittgenstein studied with Russell, their project was logic and, to some extent, the mathematization of logical thought.  The concern was how to prove (or disprove) the truth of a statement.

Russell’s The Philosophy of Logical Atomism, delivered as lectures in 1918, is roughly contemporary with Wittgenstein’s Tractatus, completed the same year and first published in 1921, and their subject matter is quite similar—both are works of analytic philosophy discussing logical proof. The question of interest here is how they handle the boundaries of logic.

At the beginning of Logical Atomism, Russell acknowledges an unavoidable subjectivity at the foundation of what he is doing. If we want to prove the truth of a statement, we need to have some starting place—some statements that we know are true. But how do we know something is true without having proved it? And how can we start the project of proving the truth of any statement unless we have something already proven true? His response is to say, approximately, “we start with something undeniable.” Not true, only undeniable. He discusses what he means by undeniable for a paragraph or two, and then he moves on to other issues. Essentially, he says, “well, we can’t follow the rules of proof for our first statement, so we’ll just ignore those rules and accept our first statement as true because it seems undeniable.” Practically speaking, that makes perfect sense; logically speaking, it’s almost inexcusable. Emotionally speaking, I would say that this is the choice of a person who has confidence in the value of their work, despite some flaws.

In the penultimate sixth chapter of the Tractatus, Wittgenstein struggles with the same, or at least a very similar, problem: he sees the logician as existing within the system being examined, creating the same sort of unavoidable subjectivity that concerned Russell. His response, however, is quite different. In the sixth chapter, he discusses how one cannot get the necessary objectivity, and how, lacking that, one has no grounds on which to speak. And he concludes the book with his seventh chapter, which I reproduce in full here: “Whereof one cannot speak, thereof one must be silent.” That’s the whole seventh chapter. One sentence. And Wittgenstein published almost nothing more in his lifetime. Logically speaking, this is perfectly sound. Practically speaking, however, it leads to paralysis. Emotionally speaking, I would say this is the choice of a person who doubts the value of their work.

Perfectionism and Confidence

To me, this is a story about confidence and a willingness to accept a logical flaw.  Both Russell and Wittgenstein recognized a similar logical limit, but Russell said “I will still proceed” while Wittgenstein said “This project is meaningless.” To me, logically speaking, Wittgenstein is in the right here.  If you are interested in a system of building certain truth through proof, the whole structure of truth fails if it is built on something that is not provably true. Wittgenstein recognizes this and essentially says “this project isn’t worth the effort because it’s ultimately fruitless.”

Russell’s response is very different, and I view it as a manifestation of confidence or even arrogance. Russell says, “weak foundations be damned, I’m still going to pursue this project.”

I don’t know what emotions and thoughts swayed the two men, or whether the issue was really confidence. But as a lesson for struggling writers, I think it can be instructive: the writer who pushes forward, ignoring problems, produces work for publication, while the writer who takes those problems seriously gets stuck, and may even be blocked from publishing.

Getting projects finished and published simply takes a willingness to push ahead, despite problems and weaknesses in your research.

This is not to excuse shoddy work, but rather to acknowledge the impossibility of creating perfection, and to prefer flawed productivity over inactivity brought on by doubts and imperfections.

Hume’s Problem and the Weaponization of Doubt

In his History of Western Philosophy, Bertrand Russell wrote something to the effect of “With subjectivism in philosophy comes anarchism in politics.” (I’m too lazy to go hunt up the proper quote, so this may be way off base, but my getting the quote right doesn’t change the basic argument here.) As someone who rejects objectivism in philosophy, who recognizes inevitable subjective elements in all reasoning, and who wants political stability as well as some element of democratic rule, I found this sentence problematic, even wrong. Of course, Russell lived through the Nazi era, when big lies were spread to create an alternate reality that inspired horrific acts of violence. I had not yet seen the weaponization of doubt that has since been employed by too many, especially big business interests and political actors.

David Hume is perhaps most famous for his framing of the problem of induction. Induction is the process of making generalizations from specific examples. So, for example, suppose you are looking at trees, and every tree you see has green leaves. Induction takes those many observations and makes a general rule: “trees have green leaves.” The problem of induction is that there is no guarantee that future observations will resemble past observations. To Hume, this was mostly important in distinguishing what we know (with absolute certainty beyond doubt) from what we believe (with good reason, but only as a conclusion from experience, which is necessarily fraught with the problem that the future might not resemble the past). Unfortunately, this basic logical problem has since been used to much effect (and, in my opinion, much harm).

Much of science proceeds, to some extent, on the basis of “the best-tested theory”—on theories that have been tested and passed those tests, but are subject to further testing. According to the theories of Karl Popper—a philosopher who developed a famous response to Hume’s problem—we can have some certain knowledge in science: we can know (with certainty) that things are false. Because we can know that things are false, we can test and disprove theories, and thus eliminate bad ideas.  This basic structure is common in many fields, where a “null hypothesis” is shown to be false (or at least highly improbable), and an “alternate hypothesis” is therefore accepted.
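
To make that structure concrete, here is a minimal sketch in Python. The numbers are invented for illustration, and the test shown (scipy’s standard two-sample t-test) is just one common way the null-hypothesis logic gets implemented:

```python
# A toy illustration of the null-hypothesis structure described above.
# The data are invented; the logic is the standard one: we never prove
# the alternative true, we only show the null is highly improbable.
from scipy import stats

control = [4.1, 3.8, 4.0, 4.3, 3.9, 4.2, 4.0, 3.7]
treated = [4.6, 4.9, 4.7, 5.1, 4.8, 4.5, 5.0, 4.7]

# Null hypothesis: the two groups have the same mean.
t_stat, p_value = stats.ttest_ind(treated, control)

if p_value < 0.05:
    print(f"p = {p_value:.4f}: reject the null hypothesis.")
else:
    print(f"p = {p_value:.4f}: the null has not been rejected.")
```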

In the hands of reasonable people who are interested in discovering the truth, this basic structure allows for progress, and thus scholars develop a general consensus agreement about the basic facts. It is not a fully-determined consensus—there is debate and there are those who reject some or most of the consensus, but there is a general acceptance of most basic ideas.

But in the hands of those who have some agenda other than truth, Hume’s problem becomes a weapon to paralyze an enemy and seize power.

It has been widely reported that in the 1970s, scientists at Exxon identified the problem of global warming, and, seeing that such knowledge might hurt their business, the company developed a strategy of questioning global warming science. (It should be noted that Exxon was not necessarily at the forefront of this. For example, check out this article from 1965.) Hume’s problem makes it possible to question every theory, no matter how much evidence supports it: “well,” you say, “that is suggestive, but it can’t be considered conclusive.” In many cases you can offer some alternative explanation. This has been happening less with climate change over the last several years, as evidence becomes even more overwhelming, but it used to happen much more often. I remember one man asking me “well, if it’s caused by humans, why are the polar caps on Mars melting?” (implying, I presume, that the melting of the Martian polar caps showed some solar-system-wide force was at work).

But the most extreme weaponization of doubt that I have ever seen is the current GOP assault on the credibility of the American election systems. Let’s start by admitting that the American election systems are imperfect: there are mistakes made, and some of those mistakes may even impact the outcome of votes. That is why, in addition to systems for gathering and tabulating votes, there are systems already in place for checking the results of an election. Of course, these systems, too, are imperfect. 

Like any knowledge based on observation, the election-checking systems can always be challenged by the question at the heart of Hume’s problem: just because we haven’t observed something (vote fraud) yet doesn’t mean we won’t observe it in the future (if we run another audit). What the GOP keeps doing, in calling for further investigation into the election, is relying on the basic logic of Hume’s problem. This is the argument driving the current audit of votes in Maricopa County: “sure, there were already multiple audits, but just because they didn’t find fraud doesn’t mean the fraud doesn’t exist; it just means that the audits didn’t look for the right things.” Whatever checks you might carry out, someone can make up a new claim and say “you haven’t proved this didn’t happen.” Case in point: the Maricopa County audit has apparently been looking for traces of bamboo to prove that fake ballots were introduced into the election count by sinister Asian actors. “Sure, you didn’t find any local interference, but what about the Chinese? They managed to inject thousands of fake ballots into the system to help Biden. And no one has checked whether the ballots were forged in China.” (Let’s just forget the fact that auditors did check that the counted ballots came from registered voters and that no voter voted multiple times; for the Chinese scheme to work, the forgers would have had to make fake ballots only for people who were registered but did not vote, without making any ballots for people who were unregistered or who had registered and voted.)

Hume correctly pointed out that we cannot prove with certainty the claims we make based on observation. But, while this is logically correct, when that doubt is used in bad faith to ignore the vast preponderance of evidence, there’s a big problem.

I’m no big fan of the Democrats; I think they have been too complicit in many of the worst failures of the USA during my lifetime. And, in my heart, I am conservative (not politically conservative, but actually conservative in that I would like to mostly keep things as they are—there are things that need changing, but let’s change only those things and keep all the rest). But in contrast to the Republican party, it can at least be said that the Democrats are apparently interested in truth, evidence, and data based on observations, all of which are really good things. During my adult life, it seems to me that the Republican party has consistently strayed farther and farther from the truth.

I only remember Watergate from a child’s perspective, but obviously Nixon’s honesty was an issue, and the GOP—at least some members of it—called on the president to step down when it became clear he was a criminal (apparently, some were willing to vote to impeach). The Reagan administration at least tried to cloak its work in theory—the Laffer curve was at least an academic theory promulgated by an academic. Works like The Bell Curve by Herrnstein and Murray at least tried to give an intellectual defense to GOP perspectives. I don’t think much of the way Herrnstein and Murray handle data (I think they confuse correlation with causation), but I would give them the benefit of the doubt that it’s just bad data analysis rather than intentionally deceitful analysis. The investigation of the Clintons was a preliminary weaponization of doubt—it started with a supposed real estate fraud (Whitewater), but reached out in any direction it could to find reason to attack Clinton, ultimately resulting in Clinton’s impeachment for lying about his relationship with Lewinsky. There were no real limits on an investigation that said “we haven’t found any fraud yet, but that doesn’t mean it doesn’t exist.” The second Bush administration lied about weapons of mass destruction and many other things, but it at least seemed to be trying to come up with realistic alternate hypotheses. The GOP since then, however, is all about baseless conspiracies that no evidence can set to rest. No matter how much evidence there was that Obama was born in the US, the birther conspiracy just rolled on.

GOP attempts to “find the truth” about the 2020 election are bad-faith arguments. They have nothing to do with finding the truth. They are the weaponization of doubt in pursuit of political power. Whenever confronted with actual evidence that there wasn’t fraud (like recounts and audits of votes), the GOP answers, “well, you just haven’t found it yet.” It’s not a reasonable search for truth; it’s an attempt to gain political power by reducing people’s faith in the electoral system.

It should be noted that my primary interest is in finding the truth. The fact that I prefer Democrats to Republicans follows from my interest in the truth. I do not prefer Democratic policies because they are proposed by Democrats (indeed, I loathe many Democratic policies). Instead, I prefer policies that are based on the truth, and then prefer the party that shows closer adherence to the policies I would espouse. My objection that the GOP is arguing in bad faith is not based on my preference for Democrats, but rather on my observation that they are weaponizing doubt and engaging in intentional deflection, distraction, and disinformation. I believe in the truth. And that belief shapes my preference for politicians who respect and respond to the truth.

Dealing with writer’s block, tip 7: Don’t get stopped by uncertainty

Writer’s block—strong emotional responses that interfere with writing—grows from any number of doubts about the self: that one will be rejected, that one doesn’t work hard enough, that one isn’t smart enough. In this post, I am going to focus on philosophical doubt and on the place of certainty in scholarly work. Intellectual doubt can trigger emotional doubts: if you have unanswered questions, it’s natural to think “I don’t know enough.” It’s good to think you don’t know enough—doubt sparks growth and learning—but it shouldn’t stop you from writing. All scholars work in the face of uncertainty, but too many let their doubts stop them from sharing what they do know.

The frustration of uncertainty and intellectual doubt

Uncertainty is emotionally draining. Each new question that arises can drain energy and enthusiasm, and every answer can inspire new questions. Research can feel like a treadmill, where no matter what you have done, you still continue to chase knowledge. You want somewhere solid to stand, and the never-ending doubt can make you feel like you’re sinking into a morass. And, if you’re self-critical, it’s easy to think that this constant doubt is a personal failure: “I wouldn’t have this problem if I were smarter/had worked harder.”

You can’t eliminate intellectual doubt

Doubt lies at the heart of research: if you already knew the answer, there would be no reason to research a subject. When you get into the details of any area of research, questions begin to arise: how do you define the terms of greatest concern or interest? What theories or models do you use to explain the phenomena of interest? What are the limits of your research? What are the limits of authorities on which you rely (any sources you cite for methods, theories, definitions)? 

The famous skeptic David Hume pointed out that one can never be certain that the future will resemble the past (or, at least, that future empirical observations will resemble past observations), leaving scientists a legacy of doubt so strong that many researchers don’t even try to prove that things are true; they simply attempt to prove things false, and then argue in favor of the alternative. The idea of a “null hypothesis” that is disproven in order to accept an alternative hypothesis (as often seen in inferential statistics) is a response to this problem, known as “the problem of induction” and often called “Hume’s problem.”

If you are a scholar and you have doubts and questions and uncertainty, it’s the nature of the work, not a failing on your part. A lot of writers get stuck on their projects because of intellectual doubt: “I don’t know enough,” they say, “I have to read this article/book/etc. I can’t write until I’ve done that reading.” But research doesn’t eliminate doubt.  Published research does not eliminate doubt.  Yes, there are authors who argue their cases confidently and claim certainty, but that certainty is emotional, not logical.

Show your work

Your research may be incomplete, uncertain, and built on dubious foundations, but it still contributes to greater understanding of the world. Indeed, your incomplete, uncertain, and dubiously founded work shares those characteristics with all research, so it is valuable to other researchers looking to explain the same phenomena as you.

Often, as you may recognize from your own experience, research can be valuable because of some specific aspect—for example, an author with weak results might offer a very good definition of a concept, or an interesting methodological perspective, or might just ask a really good question (even if they do a poor job of trying to answer it).

A lot of research explicitly discusses its own limitations, its questions left unanswered, and the new questions it raises, because other researchers can use that discussion of limitations to develop complementary research or to otherwise address weaknesses in the original work.

While it can be emotionally unsettling to write about all the weaknesses in your research project, it is actually a valuable and useful part of the work—both for its role in helping you understand your own work better and clean up errors, and for its role in communicating with others. Instead of letting your doubt on some issue stop you from writing, write about those doubts, be willing to explore them all in writing. Show your readers the variety of issues you considered, the problems they created, and your responses. Show the depth and complexity of your thinking, including the contradictions and doubts. Put it all on the page.  It’s entirely possible that other researchers will find your processes of reasoning interesting and valuable.

Obviously, it can be intimidating to focus on the weaknesses of your work and to think about discussing those weaknesses with other people. In an ideal world, the people who see your work would be supportive and interested in helping you improve your work, and therefore you wouldn’t need to fear writing about the weaknesses of your work. But in the real world, of course, people can be quite aggressive and competitive. Of course, that doesn’t go away even for work of the highest quality—there’s almost always someone who is going to say you’re wrong, whatever you say—so you might as well just get it over with and share your work.

Filling the gaps

In academia, it is common to talk about how research “fills the gaps in the literature,” or addresses questions unanswered by previous scholarship.  If you are addressing such a gap—especially if it’s a gap that other scholars think is important—then your attempt to fill the gap is valuable to the community of scholars, regardless of whether it succeeds.  If your work does succeed, the gap is filled, and if your work doesn’t succeed, scholars who follow you may be able to use your attempt to avoid the problems you faced and try a different way of attempting to fill the gap.  In both cases, your work helps the larger community.

It is true that there is a publication bias toward successful work, but the issue is not whether you would prefer successful work; the issue is what you do when the work you have done has problems. And your work is going to have problems: as I argued above, intellectual uncertainty cannot be eliminated. So the value in your work, for other scholars, lies not only in the conclusions that you draw, but in the whole fabric of your search—in all your theoretical and methodological choices, how they shaped your research, and the insights they give not only into the questions asked, but into the ways that we try to answer those questions.

Conclusion

Intellectual uncertainty is unavoidable, and to try to capture any absolute, ultimate truth in words may be impossible. As early as the 6th century BCE, Lao Tzu wrote in the very first verse of the Tao Te Ching, “The Tao that can be spoken is not the absolute Tao,” or, to take a little liberty, “the truth that can be put into words is not the absolute truth.” If you’re making a conscientious effort to do good scholarship, which means critically questioning your own work as well as the work of others, you will certainly find places to doubt your own work—places where intellectual certainty is impossible and all you’re left with is work that is intellectually uncertain. But intellectual uncertainty can be paired with emotional confidence—the confidence that you made responsible and reasonable choices as you tried to understand the world better, and that your work, though susceptible to doubt, is also worthy of consideration for its contribution to the communal discourse in search of understanding.

Intellectual certainty is denied all scholars. A lot of success in academia goes to those who have emotional confidence despite the intellectual limits of their work. Instead of letting uncertainty stop you, show your audience how you tried to deal with the limits of your (and your research community’s) knowledge.

The tarot's fool steps blindly toward the edge of a cliff. Researchers also advance without a clear vision of what lies ahead.

The Fool

While a researcher ought not be blindly stepping off a cliff, like the fool from the tarot, they do have to be willing to step into the unknown and risk the fall. Choose the course of action that seems best to you, and risk it, because no course of action guarantees a perfect outcome. Fortunately, as a writer, you’re unlikely to die if you take a chance by sharing an imperfect draft.

Searching for Truth

As a philosopher, I have long since concluded that if there is such a thing as an absolute, completely objective truth, it is not something to which we humans have access. My fallback quotation on this point is from the first verse of the Tao Te Ching: “The Tao that can be spoken is not the absolute Tao,” which I interpret to mean something like “we can’t put it all into words (or other representations).”  

Despite this basic presumption, as a philosopher and teacher, I very strongly believe that there is a difference between truth and falsehood, and believe that the attempt to distinguish one from the other is the extremely valuable role of the scholar/teacher/student/researcher/journalist/analyst in modern societies, especially those that are founded on the idea that governments are elected by the people.

How do I reconcile these two competing views—that there is no ultimate Truth (at least that we can express) and that there is a difference between truth and falsehood?  I suppose my answer is that the question is resolved situationally: at times the question of what is true is difficult and elusive, and other times it is clear and distinct. If pressed into a close philosophical argument, I would take the position that truth is elusive and that the truth that can be put into words is not the absolute truth, but in many cases a close philosophical argument is not necessary or even useful.

Pragmatism

My notions on this subject have been, I suppose, strongly influenced by my (very limited) understanding of the Pragmatic school of philosophy, sometimes called American Pragmatism, which is historically associated with Charles Sanders Peirce and William James, and more recently with C.W. Churchman and Hilary Putnam. The school is generally associated with the notion that “truth is what works” and the idea of the “cash value” of an idea. These ideas seem to me important, though they do not, in my mind, comprehend all the issues related to the difference between truth and falsehood. But again, the more we try to engage in close philosophical argument, the more elusive the issues become. And this, perhaps, is the value in thinking that truth is what works: in many cases the question of truth is important because of how ideas of truth guide our actions (it is on this point that I wrote about how all knowledge is political). If there were an ultimate truth, it would be a valuable guide to our actions (plans work better when they take the facts into consideration), but in the absence of ultimate truth, there is still the value that we can get from a pragmatic view of “what works.”

One problem with viewing truth in terms of “what works” or “cash value,” however, makes an appearance if we ask “for whom?” Who gets the cash value? Politicians and businesses have often uttered falsehoods that brought them all sorts of personal gain. For a long time, the tobacco industry maintained that tobacco wasn’t unhealthy. Exxon (now ExxonMobil) long profited by denying that climate change was real. Politicians have lied about all sorts of things for their own benefit.

The Problem with Individual Notions of Truth

But in these questions we can see part of the problem that besets notions of “truth.” If truth is only what works for a given person, then there is great social danger, as some people will inevitably argue from purely personal biases and, if their intent is bad, can poison any possibility of cooperation or constructive compromise. Thus the desire for some objective standard—something that is true for everyone. And I believe that there are such things, even though the abstract search for an ultimate, objective truth will not lead to a certain end.

There are things that can be considered objective truths in simple, everyday actions. If I go to the supermarket, for example, and fill my shopping cart, there is a true and definitive answer to the question “do I have enough cash to make this purchase?” Either I am carrying sufficient cash or I am not, and that answer (whether I have enough cash) is true for me, for the cashier, for the store manager, and indeed for every human being. Admittedly, the question of whether I, Dave Harris, have enough cash to purchase the groceries in my cart is not one of interest to most people, but it is one example of a whole class of questions that are amenable to absolute true-false answers. Each successive shopper faces the same question: do you have cash to pay for this? Each successive shopper either does or does not. There are many different questions that can be answered in such absolute and objective terms.

The Desirability of Truth

For practical reasons, it would be great if we could identify absolute, objective truths more frequently: there is little debate about possible courses of action when faced with an absolute truth. But such truths are few and far between. For most questions, there is too much opportunity to question and doubt. A general premise driving the skepticism of David Hume is that the future might differ from the past—no matter how many observations we make that agree with a premise, there is no certainty that future observations will match it. Other problems for certainty can arise as well when dealing with concepts that can be interpreted in a variety of ways. We may all agree that one man killed another, but was it murder? The answer depends on how we define murder. It is to deal with such questions that judicial systems are developed—to make judgements about how to define and understand events that are not amenable to any abstract, ultimate standard of truth.

Scholarly Truth and Legal Truth

Judicial systems and scholarship have a lot in common; they both seek confidence in the claims they make. They try to take into account evidence; they try to separate out those truths that can be ascertained (did he kill the man?) from those that cannot (did he murder the man?).  In both cases, those involved are ultimately forced to make decisions on the basis of best evidence or probability rather than ultimate certain truth. And in both cases, the decisions made are, in the long run, subject to dispute and revision as new evidence and knowledge come to light.

An Ongoing Search

The fact that these systems are fallible is a product of the nature of human knowledge, but this does not mean that we ought not continue to seek the truth. There are those who act in bad faith, who try to deceive others for their own personal gain. To allow such people to use the unavoidable doubt about some questions to poison the well of scholarship or of legal systems and the larger social systems they represent is to abdicate responsibility to those who would lie for their personal gain. That is why it is important, first, to remember that there are questions that can be answered with ultimate, objective truth, and, second, to strive to find such undeniable truths on which to base decisions. Just because truth is elusive does not mean we ought not seek it. Seeking truth is a process and a principle; it is, in my mind, one of the fundamental principles that should guide any moral system. Without a genuine commitment to seeking truth, societies fall into evil and dangerous patterns where malevolent actors visit harm on many to satisfy their own selfish aims.

Of course this last premise falls into an area of understanding that is much debated: the realm of right and wrong/good and evil.  In this realm, I make no claim to an ultimate truth, but I feel strongly that there are good and evil in the hearts of humans, and societies are most likely to prosper when the interest in the good outweighs the interest in evil. Regardless, the search for truth is, at least as I see the world and societies in the world, an act of good more than of evil (even if evil has been done in pursuit of truth).

The Basics of Logical Analysis 3: Concluding

I wanted to conclude the line of discussion I was following in my previous posts, with an eye toward the experience a researcher might have in beginning to define a new project, particularly in an area where the researcher has not done a lot of previous research. I also wanted to try to make my examples a little more detailed and academic in focus. I’m still going to be working with an example from an area where I have little experience because it’s close to one of the concerns of a writer with whom I’m working.

Down the Rabbit Hole 4: Fractals

The previous post talked about “going down the rabbit hole” as a figure for the way that a question can seem initially simple and small but takes on detail and scope as it is examined more closely. Another parallel would be fractals: patterns/images derived from recursively defined mathematical operations, in such a way that, as you magnify the image, new detail continuously emerges. The Mandelbrot Set is one of the most famous fractals.

Research shares something of this characteristic. It may not be infinitely recursive (though some have argued that it is), but generally, if you examine any issue closely, it will lead to more questions. This is due to the basic nature of analysis: if we analyze things into separate parts/aspects/issues, each of those separate parts can itself be analyzed into its own constituent parts. Jorge Luis Borges wrote an essay titled “Avatars of the Tortoise,” in which he argues that infinite regressions “corrupt” reasoning, giving examples like how, to define a word/concept, it is necessary to use other words, and each of those other words then needs to be defined, which requires other words, which then all require their own definitions, and so on. I’m not sure that the pattern is infinite (there are, after all, only a finite number of words, so for definitions at least the regression can’t be infinite), but the multiplication of details can quickly become overwhelming.
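
The finite-vocabulary point can be made concrete with a toy model. In the sketch below, the three-word “dictionary” is invented for illustration; since a finite vocabulary must eventually reuse a word, following definition links ends in a cycle rather than an infinite descent:

```python
# Toy model: a tiny invented "dictionary" mapping each word to a word
# used in its definition. With finitely many words, following definition
# links must eventually revisit a word (the pigeonhole principle), so
# the regression of definitions cycles rather than descending forever.
dictionary = {
    "large": "big",
    "big": "great",
    "great": "large",   # ...and we are back where we started
}

def trace_definitions(word):
    seen = []
    while word not in seen:
        seen.append(word)
        word = dictionary[word]   # follow the definition one step
    return seen + [word]          # the repeated word closes the cycle

print(" -> ".join(trace_definitions("large")))
# large -> big -> great -> large
```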

The Nobel Prize-winning psychologist and economist Herbert Simon, who studied decision-making, coined the term “satisficing” to describe how some decisions must be made without a full logical analysis, because such analyses take so long and become so detailed.

As my earlier examples of reviewing a restaurant or movie showed, it’s pretty natural to see different aspects in things: the restaurant has food and service and ambiance; the service has courtesy and competence; courtesy has all the different things that different people said and did. It may be simple to say whether you liked the restaurant, but to explain in detail all the different factors that contributed to that decision is another matter altogether.

Fractal: The Barnsley Fern
Each leaf, if expanded, will show similar structure and fine detail as the larger frond.
Image by: DSP-user / CC BY-SA (https://creativecommons.org/licenses/by-sa/3.0)
(from https://commons.wikimedia.org/wiki/File:Barnsley_fern_plotted_with_VisSim.PNG)
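
For readers who want to see this kind of recursion in action, here is a minimal sketch that plots the Barnsley fern. The four affine maps and their weights are the standard published coefficients; matplotlib is assumed for display:

```python
# The Barnsley fern: an iterated function system in which four affine
# maps, chosen at random with fixed weights, generate a shape whose
# every frond repeats the structure of the whole.
import random
import matplotlib.pyplot as plt

def barnsley_fern(n_points=50_000):
    x, y = 0.0, 0.0
    xs, ys = [], []
    for _ in range(n_points):
        r = random.random()
        if r < 0.01:                # stem
            x, y = 0.0, 0.16 * y
        elif r < 0.86:              # successively smaller leaflets
            x, y = 0.85 * x + 0.04 * y, -0.04 * x + 0.85 * y + 1.6
        elif r < 0.93:              # largest left leaflet
            x, y = 0.20 * x - 0.26 * y, 0.23 * x + 0.22 * y + 1.6
        else:                       # largest right leaflet
            x, y = -0.15 * x + 0.28 * y, 0.26 * x + 0.24 * y + 0.44
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = barnsley_fern()
plt.scatter(xs, ys, s=0.1, color="green")
plt.axis("off")
plt.show()
```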

A More Focused Example

So far, I have been giving pretty general examples; now let’s try to get more focused.

Let’s imagine a hypothetical student, studying business management.  And let’s imagine that this student has what we can call “The Fruit Theory of Management,” in which they assume that giving employees fruit improves performance. (I was going to call it “Apple Theory” but didn’t want this to be confused for a reference to the big corporation.)

The Fruit Theory

On its face, the fruit theory of management is ridiculous, but since I’m talking about a general structure of research, the precise theory in question is not so important (as will hopefully become obvious in a moment). Instead of “giving employees fruit” we could use “giving employees training in XYZ,” or, more generally, “instituting policies ABC.” “Giving fruit” can stand in for any possible intervention. And instead of “employees,” we could substitute almost any group—students, parents, plumbers, etc.—and in each of these cases we could either find a suitable measure of performance, or we could replace “performance” with some other construct to measure (e.g., happiness, health, etc.).

We can even generalize this to any basic causal pattern: “giving fruit leads to better performance” is a specific example of the general pattern “X causes Y.” Most research is concerned with causal relationships in some way or another, so although I’m going to focus on fruit theory, the same issues arise for almost any research question.

Studying Fruit Theory

So, we have our business management student who wants to research fruit theory. Generally speaking, a starting point for fruit theory would be to define the theory.

So the student tries to write down a definition (or speaks a definition in conversation with someone). At this point, the process of analysis inevitably has already begun: the words used can themselves be examined individually.  So, if the theory is “giving fruit leads to better performance,” there are elements that can be defined individually. 

For starters, we can ask “what is fruit?” In everyday conversation, we know what a fruit is and don’t need a definition. But if we’re talking about developing research and examining causal relationships, we want to define things more closely and formally. (Research needs formality and detail so that others can check the research.) For example, fruit theory might call for fresh, ripe, worm-free fruit that people would enjoy eating (a definition that is not identical with a more general understanding of fruit that includes unripe or wormy or rotten fruit). That might lead us to a whole set of questions of how to identify fruit that people would enjoy eating, which could lead to more general questions of what it means for people to enjoy eating. (Or maybe the real issue is that people enjoy receiving fruit as gifts—that would lead to a different definition of what “fruit” is.)

To study fruit theory, we also need to define what counts as “giving” and what counts as “better performance.” As for “giving,” there is some question of the specific details of how the transfer is made and whether any conditions are placed on that transaction, including any potentially hidden costs. But defining giving is relatively simple compared to the question of “better performance.” Measuring performance raises a huge array of questions: Whose performance? Are we measuring the performance of the organization as a whole, or of individuals in it? What kind of performance? What dimensions of performance are we measuring (speed? accuracy? gross sales? net sales? etc.), and over what time periods? Are we measuring the cash flow of the business over a month? Or the employee sick days taken over a year? Or profitability over a decade? There are any number of different ways to think about the general concept of performance.

To develop research, we might also need to specify further the causal mechanism by which fruit theory works. Does giving fruit work because fruit makes people healthier, and therefore better able to work hard (as the old saying goes “an apple a day keeps the doctor away”)? Is there a physiological causality? Is that physiological causal path one that gives people more energy? Or one that improves their strength? Or one that boosts their mood?  Or maybe the causality is not physiological but psychological: giving employees gifts makes them feel appreciated and they want to work harder as a result?

Answers lead to new questions

Whenever we make a choice of where to focus attention, we can find new questions to pursue. We may start pursuing a question of business, as in fruit theory, but that question might lead into other fields of study. If we posit a physiological cause for fruit leading to better performance of employees, then we need to study physiology. That study might lead in a variety of directions: maybe fruit theory works because fruit improves health, reducing sick time lost—that would lead to the study of immunology: how and in what ways do apples improve immune response? Or maybe fruit theory works because of some other physiological effect: strength, endurance, mood. Since different foods and substances can impact strength, endurance, and mood, maybe fruit has such effects? If one thinks that fruit has a physiological effect on mood, one might then be led into questions of which specific biological pathways lead to mood improvement, and perhaps, in studying that research, one sees that other researchers have identified different kinds of mood improvement, and perhaps debate the ways in which physiology affects mood.

New answers pretty much always suggest new questions.  

Preventing Over-analysis

You can take analysis too far. If you constantly analyze everything, you end up with a great mass of questions and no answers. It can lead to getting swamped in doubt. There is no rule for this, beyond that at some point it is necessary to say “I’m satisfied with my answer to this question.” Such statements close off one potential avenue of study to allow focus on another, and to set limits to what you need to study—limits that are necessary for the practical reason that it’s good to finish a project, even if that project is imperfect.

If you say “I’m satisfied that the reason Fruit Theory works is because fruit makes people healthier,” you don’t need to pursue questions of whether and how and how much fruit promotes health, and you can go on to focus on how improved health helps a business.  Or if you say, “I’m satisfied that fruit theory works,” you can go on to study details of implementing fruit theory.  Of course, it’s good to have reasons, and good to be able to explain those reasons: if you’re satisfied that fruit theory works, it’s useful to be able to give evidence and reasoning. In academia, that evidence often comes in the form of other research literature. If you can cite five articles from reputable sources that all say “fruit theory works,” then you can go on to your research in implementation without getting embroiled in any debate about whether fruit theory works—even if the five articles you cite are not yet accepted by all members of the scientific community.

Conclusion

Analysis itself isn’t really that hard on a small scale—we do it automatically to some extent. But it grows increasingly difficult as we invest more energy into it: the more detail we add to our analysis, the more opportunity there is to analyze further, which can lead to paralysis or to getting swamped. It is something that wants care; it wants attention to detail.

The Basics of Logical Analysis 2: Down the Rabbit Hole

Continuing my discussion of analysis from my previous posts, I look at how analysis can lead to new questions and new perspectives. Just as Alice ducked into the small rabbit hole and found an entire world, so too can stepping into one small question open up a whole world of new questions and ideas.

If you look at things right and apply a bit of imagination, analysis quickly leads to new questions.  Even something that looks small and simple will open up to a vast array of interesting and difficult questions. 

The multiplication of questions that arises from analysis can be good or bad. New questions can be good because they can lead to all sorts of potentially interesting research. But having too many questions can be bad, both because it can interfere with focusing on one project and because it leads to complexity that can be intimidating. Learning to deal with the expanding complexity that appears with close study is a valuable skill in any intelligence-based endeavor—whether you are a scholar or a professional, decisions must be made and actions taken, and falling down a rabbit hole of analysis and exploration will sometimes interfere with those decisions and actions.

This post follows up on my previous post, in which I argued that we analyze automatically and that the work of a researcher includes making our analyses explicit so that we and others can check them.

In this post, in order to show the potential expansion of questions, I’ll look at a couple of examples in somewhat greater detail. While I won’t approach the level of detail that might be expected in a scholarly work meant for experts in a specific field—I want my examples to make sense to people who are not experts, and I’m not writing about fields in which I might reasonably be called an expert—I hope to at least show how the complexity that characterizes most academic work arises as a natural part of the kind of analysis that we all do automatically.

Looking more closely: Detail appears with new perspectives

In the previous post, I used the example of distinguishing the stem, seeds, skin, and flesh of an apple as a basic analysis (separation into parts), but it was quite simplistic. Now I want to examine how to get more detail in an analysis of this apple.

For starters, we can often see more detail simply by looking more closely (literally): In my previous post, I separated an apple into skin, flesh, seeds, core and stem.  But we could look at each of those in greater detail: the seed, for example, has a dark brown skin that covers it and a white interior.  With a microscope, the seed (and all the rest of the apple) can be seen to be made up of cells.  And with a strong enough microscope, we can see the internal parts of the cells (e.g., mitochondria, nucleus), or even parts of the parts (e.g., the nuclear envelope and nucleolus of the cell’s nucleus). This focus on literally seeing smaller and smaller pieces fails at some point (when the pieces are themselves about the same size as the wavelengths of visible light), but in theory this “looking” more closely leads to the realms of chemistry, atomic and molecular physics, and ultimately to quantum mechanics. Now we don’t necessarily need to know quantum mechanics or even cellular biology to study apples—you don’t necessarily visit all of Wonderland—but those paths are there and can be followed.

In this apple example, each new closer visual focus—each new perspective—revealed further detail that we naturally analyzed as part of what we saw.  But division into physical components is only one avenue of analysis, and others also lead down expansive and detailed courses of study.

So Many Things to See!

We can look at different kinds of apples in a number of different ways. (Not to go all meta here, but we can indeed separate—analyze—distinct ways in which we can analyze apples.)

At the most obvious, perhaps, we can separate apples according to their variety, as can be seen in markets: there are Granny Smiths, Pippins, etc., so that customers can choose apples according to their varied flavors and characters. Some people like one variety and not another. These distinctions are often made on the basis of identifying separate characteristics of apples (another analysis): “I like the flavor and smell, but it’s kind of mealy and dry;” or “It’s got crisp flesh and strong flavor; it’s not too sweet.” Flavor, texture, appearance (color, shape, etc.), and condition (e.g., ripe, overripe) are all distinct criteria that a shopper might consider with respect to an apple. These aren’t exactly the kind of thing that would be the subject of academic study, but they could certainly lead to more academic questions.

The question of apple variety, for example, could be seen through the lens of biology. There are the questions of which genetic markers distinguish varieties and the ways in which those genetic markers tell us of the relationships between different types of apples and their heritages.  The question of heritage brings up another aspect of apples that could be a study for a biologist: How did a given strain develop? There are wild apples, which developed without human intervention; heirlooms, which develop through selective breeding; and hybrids, which grow from planned crossbreeding.  Combining these questions of genetics and heritage might lead a scholar to study the migration of a specific gene, for example to see if GMO commercial apple farms are spreading their modified genes to wild populations.

Another characteristic of an apple that a shopper might consider at the store is the price.  This is obviously not a matter for biologists, but rather for economists. And an economist might want to look at how apples get priced in different markets.  That might lead to questions of apple distribution and apple growing. Questions of apple growing might lead back to questions of biology, or to other fields of study like agronomy. Questions of distribution might lead to questions of transportation engineering (what’s the best means to transport apples?) or to questions of markets (who are potential producers/distributors/vendors/consumers? what products ‘compete’ with apples?) or questions of government policy (how did the new law affect apple prices?).

So Many Different Perspectives

Different analytical frameworks can be found by imagining different perspectives on apples. In the previous section, I already linked the study of apples into fields like biology and economics and more, but there is wide potential for study of apples in many areas. 

Think about university departments where apples might get studied. Biology, economics, and agronomy are three already suggested. But people in literature departments might study apples in literature—“The apple in literature: From the bible to the novel”. People in history departments could study the history of apples—“Apples on the Silk Road in the 14th century.”  Anthropology: “Apples and the formation of early human agricultural communities.” Ecology/Environmental Science: “Apples and Climate Change.”  

These example titles are a little strained because I have not made a study of apples in these contexts, and therefore I’m throwing out general ideas that are rather simplistic and free of real theoretical considerations. More complexity would attend a real project. The student of literature might be looking at the different things apples have symbolized because they want to make a point about changing cultural norms. Or they might look at how apples have been linked to misogynistic representations of women. Such studies, of course, are interested in more than just apples. As we combine an interest in apples with other interests, new potential ideas begin to arise.

Combining Perspectives

Most people have multiple interests and these interests can combine in myriad ways to create a vast array of different questions that could be asked about apples (or any other subject).

Pretty much any scholarly perspective has its own analytical frameworks that structure research. Biology analyzes according to genetic structure, for example. Business analyzes according to market and economic factors. When these frameworks start to overlap—a business analysis using genetic factors, or a genetic analysis driven by specific economic factors—multiple points of intersection appear. Each genetic structure (each type of apple) can be examined with respect to a variety of different economic factors (e.g., flavor, shelf life, durability, appearance). 
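
To see how quickly combined frameworks multiply questions, consider a toy enumeration (the lists below are invented for illustration): every apple variety crossed with every economic factor yields a distinct question a researcher could pursue.

```python
# A toy illustration of how overlapping frameworks multiply questions:
# each (variety, factor) pair is a distinct potential research question.
from itertools import product

varieties = ["Granny Smith", "Pippin", "Fuji"]
factors = ["flavor", "shelf life", "durability", "appearance"]

questions = [f"How does {v} compare on {f}?" for v, f in product(varieties, factors)]
print(len(questions))      # 3 varieties x 4 factors = 12 questions
print(questions[0])        # How does Granny Smith compare on flavor?
```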

This multiplication of different ways of dividing things up (analytically, anyway) can be problematic because it creates a lot of complexity and can become confusing or overwhelming, but it also presents opportunities because each new perspective might have some valuable insight to add.

Conclusion

What seems small and simple at first glance—a rabbit hole has a small and unassuming entrance—usually opens into a vast and expanding world of questions.

Analysis requires a bit of imagination—imagination to see a whole as composed of parts, imagination to consider different perspectives from which to view an issue, imagination to recognize the different aspects of things.  But a lot of this analysis is pretty automatic: little or no effort is required for the necessary imagination. Still, because it’s so easy and so natural, this process gets discounted—especially if you view “analysis” as something highly specialized that only experts do.

To develop a practice of analysis, all you really need to do is make your different observations explicit. Whether you’re judging an apple (taste, appearance, scent, etc.) or a theory (its assumptions, conclusions, and relationships to other theories), chances are good that you’ll respond to different aspects at different times more or less automatically. If you can formalize and record these different observations, you lay the foundation for developing your own analyses.
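As a concrete (and entirely hypothetical) illustration of what recording observations might look like, here is a minimal sketch in Python; the record structure and the sample notes are my own inventions, not a prescribed method.

```python
# A minimal sketch of "making observations explicit": logging judgments
# as structured notes rather than fleeting impressions. The fields here
# are hypothetical; a real note-taking scheme would grow out of your own use.
observations = []

def record(subject, aspect, note):
    """Store one observation about one aspect of a subject."""
    observations.append({"subject": subject, "aspect": aspect, "note": note})

record("apple", "taste", "tart, with a sweet finish")
record("apple", "appearance", "deep red, small bruise near the stem")
record("theory X", "assumptions", "presumes actors have complete information")

# Reviewing the notes side by side is a first step toward formal analysis.
for obs in observations:
    print(f"{obs['subject']:>10} | {obs['aspect']:<11} | {obs['note']}")
```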

The Basics of Logical Analysis 1: Seeing Parts of Wholes

In this post, I revisit the general issue of analysis that I discussed in my previous post. There is a measure of overlap because I’m searching for a way to communicate both the fundamental simplicity of analysis and all its potential complexity. Maybe the general principle for this post is that analysis is, at its root, a simple intellectual action: dividing something into different parts. That simple action, though, inevitably leads to increasing complexity.

As with so many things in which analysis is involved, this post started out simpler and shorter than it has become. My original plan was to write one short post that just did a better job of explaining the ideas in the previous post. But then, as I thought more closely about it, I found issues that hadn’t been discussed in my previous post. It’s now looking like this will be a series of at least two posts: this one will discuss the big idea of analysis along with relatively simple, everyday examples; the next will look at some examples more closely, in hopes that they feel more like academic examples. I suspect that second part may itself end up as two or more posts. In a way, this story encapsulates an aspect of analysis in practice that I want to emphasize here: the more you do it, the more complexity you see, and that leads to expanding projects that must be reined in for purely practical reasons. Basically, if you want to finish a project, you have to stop analyzing everything. (And as I write that, I wonder whether I haven’t sparked the foundation for a third post: how do you stop analyzing once you’ve started? It’s an idea that I touch on briefly in the second post, but maybe it deserves its own. I’ll have to think about that…)

What is “Analysis”?

At its root (its etymological foundation), “analysis” derives, through medieval Latin, from the Greek for “unloose” or “take apart” (in contrast to “synthesis,” whose roots lie in the Greek for “put together”). This sense is generally in line with how the word might get used in conversation. For example, after [a movie/a TV show/a meal at a restaurant], if one person talks at length criticizing details of the [movie/etc.], the other might get exasperated and say “Stop picking it apart,” or “Stop over-analyzing it.”

It is this basic “picking apart” that concerns me in these posts. It is a principle that can manifest informally (as a person might do with a movie) or as extremely detailed and formalized systems, as with psychoanalysis, statistical analysis, data analysis, or any other field that uses “analysis” in its title.

We Do It Automatically

The kind of analysis that is important in research (and other intellectual work) is something that humans do naturally and automatically—often without even noticing that we’re analyzing.

To apply it in research is to take an automatic, unconscious ability and work to make it conscious and explicit. Splitting things into pieces—into different parts or different aspects—is pretty easy. But making those divisions explicit is hard because of the complexity that tends to develop.

We all automatically split things up into different parts, which is reflected in our languages (in words like “parts,” “pieces,” “components,” “elements,” etc.) and in much of our daily lives. We separate the world into all sorts of categories. We eat food, which includes fruit, vegetables, meat, and so on. We work, but distinguish many kinds of work: homework, housework, yard work, not to mention the jobs we get paid for. We separate the good from the bad. We divide people into different groups: family, friends, acquaintances, people we don’t know, etc.

It’s true that many of these divisions are learned, but that doesn’t mean that we don’t naturally make divisions of some sort.

Analysis: Examples

Consider an apple. It is a whole in itself, but we pretty naturally separate it into a few different parts: stem, skin, flesh, core, seeds. Our basic sensory apparatus provides the distinguishing information: stem, seed, and flesh taste different, smell different, look different, and feel different. Our senses are already supplying the information about differences in the world that leads us to analyze the apple into its parts.

Consider a movie. It is a whole in itself, but we can easily divide it in many different ways familiar to cinephiles. We can say “The acting was pretty good, but the script was weak.” Or “The cinematography is great, the writing is great, the direction is OK, but the star annoys me, so I had trouble enjoying it.” We might like what we see (“great cinematography!”) but not what we hear (“poorly written dialogue”). We might like one actor and not another. Again, this is analysis in action, although few would think of it as analysis, unless the discussion runs long enough that someone says, “Stop analyzing it! You’re ruining it for me!”

Research and Analysis

Research takes this basic ability to distinguish between things and tries to make it explicit and formal. For the researcher, it’s not enough to say that it’s obvious that you have stem, seeds, and flesh, or acting, directing, writing, and cinematography. It’s necessary to begin to formalize those distinctions.

Formalized analysis is crucial in research because it allows a research community to work together. Researchers who don’t explicitly express their analyses can’t have their research reviewed or trusted by others. The need to share and provide explanations and evidence that can be examined leads to detailed discussions (articles, books, etc.) that can themselves be analyzed (and will be, by other researchers looking for strengths on which to build and weaknesses to correct).

In practice, research communities develop different analytical frameworks and methods of analysis through the attempt to explain and examine each other’s work. These become increasingly detailed and complex over time, as each successive generation of researchers turns its analytical abilities to the questions of interest. Sometimes entirely new analytical frameworks develop, but these, too, are subject to the close examination that leads to complex formal analytical systems.

Psychoanalysis, for example, depends on familiar analytical divisions: the id, ego, and super-ego represent parts of a larger whole, as do the conscious and the unconscious. Each pathology is a part of the larger whole of “poor mental health,” and each pathology is itself distinguished by a number of characteristics that are parts of it. To become a psychoanalyst is to adopt a specific set of analytical frameworks regarding the psychology of individuals and the nature of psychotherapy as well. Other theories of psychology and psychotherapy may not be called “psychoanalysis,” but they too adopt analytical frameworks of their own.

Mathematical analyses separate the world into symbols that represent distinct parts and the relationships between them. Physics, of course, presents the interactions of objects in the world as a set of symbols and mathematical equations. In a business setting, the large-scale system of a factory might get represented in equations that separate out the machines that produce goods, the goods produced, rates of production, costs of production, necessary workers, and so on.
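To make that factory example concrete, here is a toy sketch in Python. The variable names and the numbers are invented for illustration; a real operations analysis would separate out far more factors.

```python
# A toy model of the factory example. The numbers and variable names are
# invented for illustration; a real analysis would separate out far more.
machines = 4             # machines producing goods
rate_per_machine = 20    # goods each machine produces per day
cost_per_good = 1.50     # variable cost of producing one good
fixed_costs = 300.00     # daily costs independent of output (rent, wages, ...)

output = machines * rate_per_machine               # goods produced per day
total_cost = fixed_costs + cost_per_good * output  # total daily cost
cost_per_unit = total_cost / output

print(f"Output: {output} goods/day; cost per unit: ${cost_per_unit:.2f}")
```

Even this crude model shows the analytical move: the undifferentiated “factory” has been divided into machines, rates, and two kinds of cost, each of which could itself be analyzed further.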

Conclusion

Analysis happens.  If you examine something closely—an object, an interaction, an idea—you will begin to distinguish different aspects or parts of it.  These distinctions are analysis. To move that analysis into an academic or research setting really only requires that you try to make your analyses explicit as you develop them, so that they can be examined for flaws (by you and by others).

Of course, making analyses explicit and then looking at those analyses with an eye for flaws may be a path to good research, but it is not a path to simplicity.

I’m going to close here. In my next post (or posts), I’ll look in greater detail at some examples, to show different ways in which things can be analyzed and to discuss the expansion of complexity, which can be both good and bad.