Searching for Truth

As a philosopher, I have long since concluded that if there is such a thing as an absolute, completely objective truth, it is not something to which we humans have access. My fallback quotation on this point is from the first verse of the Tao Te Ching: “The Tao that can be spoken is not the absolute Tao,” which I interpret to mean something like “we can’t put it all into words (or other representations).”  

Despite this basic presumption, as a philosopher and teacher, I very strongly believe that there is a difference between truth and falsehood, and that the attempt to distinguish one from the other is the extremely valuable role of the scholar/teacher/student/researcher/journalist/analyst in modern societies, especially those founded on the idea that governments are elected by the people.

How do I reconcile these two competing views—that there is no ultimate Truth (at least that we can express) and that there is a difference between truth and falsehood?  I suppose my answer is that the question is resolved situationally: at times the question of what is true is difficult and elusive; at other times it is clear and distinct. If pressed into a close philosophical argument, I would take the position that truth is elusive and that the truth that can be put into words is not the absolute truth, but in many cases a close philosophical argument is not necessary or even useful.


My notions on this subject have been, I suppose, strongly influenced by my (very limited) understanding of the Pragmatic school of philosophy, sometimes called American Pragmatism, which is historically associated with Charles Sanders Peirce and William James, and more recently with C.W. Churchman and Hilary Putnam, who are generally associated with the notion that “truth is what works” and the idea of the “cash value” of an idea.  These ideas seem to me important, though they do not in my mind comprehend all the issues related to the difference between truth and falsehood. But again, the more we try to engage in close philosophical argument, the more elusive the issues become.  And this, perhaps, is the value in thinking that truth is what works: in many cases the question of truth is important because of how ideas of truth guide our actions (it is on this point that I wrote about how all knowledge is political). If there were an ultimate truth, it would be a valuable guide to our actions (plans work better when they take the facts into consideration), but in the absence of ultimate truth, there is still the value that we can get from a pragmatic view of “what works.”

One problem with viewing truth in terms of “what works,” or “cash value,” however, makes an appearance if we ask “for whom?”  Who gets the cash value?  Politicians and businesses have often uttered falsehoods that brought them all sorts of personal gain.  The tobacco industry long maintained that tobacco wasn’t unhealthy. Exxon (now ExxonMobil) long profited from denying that climate change was real. Politicians have lied about all sorts of things for their own personal gain.

The Problem with Individual Notions of Truth

But in these questions we can see part of the problem that besets notions of “truth.”  If truth is only what works for a given person, then there is great social danger: some people will inevitably argue from purely personal biases and, if the intent is bad, can poison any possibility of cooperation or constructive compromise.  Thus the desire for some objective standard—something that is true for everyone.  And I believe that there are such things, even though the abstract search for an ultimate, objective truth will not lead to a certain end.

There are things that can be considered objective truths in simple, everyday actions.  If I go to the supermarket, for example, and fill my shopping cart, there is a true and definitive answer to the question “do I have enough cash to make this purchase?”  Either I am carrying sufficient cash or I am not, and that answer (whether I have enough cash) is true for me, for the cashier, for the store manager, and indeed for every human being.  Admittedly, the question of whether I, Dave Harris, have enough cash to purchase the groceries in my cart is not one of interest to most people, but it is one example of a whole class of questions that are amenable to absolute true-false answers. Each successive shopper is asked the same question: do you have cash to pay for this? Each successive shopper either does or does not. There are many different questions that can be answered in such absolute and objective terms.

The Desirability of Truth

For practical reasons, it would be great if we could identify absolute, objective truths more frequently: there is little debate about possible courses of action when faced with an absolute truth. But such truths are too few and far between. For most questions, there is too much opportunity to question and doubt.  A general premise driving the skepticism of David Hume is that the future might differ from the past—no matter how many observations we make that agree with a premise, there is no certainty that future observations will match it. Other problems can arise for certainty, as well, when dealing with concepts that can be interpreted in a variety of ways.  We may all agree that one man killed another, but was it murder?  To answer the question of murder depends on how we define murder.  It is to deal with such questions that judicial systems are developed to make judgements about how to define and understand certain events that are not amenable to any abstract ultimate standard of truth.

Scholarly Truth and Legal Truth

Judicial systems and scholarship have a lot in common; they both seek confidence in the claims they make. They try to take into account evidence; they try to separate out those truths that can be ascertained (did he kill the man?) from those that cannot (did he murder the man?).  In both cases, those involved are ultimately forced to make decisions on the basis of best evidence or probability rather than ultimate certain truth. And in both cases, the decisions made are, in the long run, subject to dispute and revision as new evidence and knowledge come to light.

An Ongoing Search

The fact that these systems are fallible is a product of the nature of our human knowledge, but this does not mean that we ought not continue to seek the truth.  There are those who act in bad faith, who try to deceive others for their own personal gain. To allow such people to use the unavoidable doubt about some questions to poison the well of scholarship or of legal systems and the larger social systems they represent, is to abdicate responsibility to those who would lie for their personal gain.  That is why it is important, first, to remember that there are questions that can be answered with ultimate objective truth, and second, to strive to find such undeniable truths on which to base decisions.  Just because truth is elusive does not mean we ought not seek it. Seeking truth is a process and a principle; it is, in my mind, one of the fundamental principles that should guide any moral system.  Without a genuine commitment to seeking truth, societies fall into evil and dangerous patterns where malevolent actors visit harm on many to satisfy their own selfish aims.

Of course this last premise falls into an area of understanding that is much debated: the realm of right and wrong/good and evil.  In this realm, I make no claim to an ultimate truth, but I feel strongly that there are good and evil in the hearts of humans, and societies are most likely to prosper when the interest in the good outweighs the interest in evil. Regardless, the search for truth is, at least as I see the world and societies in the world, an act of good more than of evil (even if evil has been done in pursuit of truth).

The Basics of Logical Analysis 3: Concluding

I wanted to conclude the line of discussion I was following in my previous posts, with an eye toward the experience a researcher might have in beginning to define a new project, particularly researchers working in areas where they have not done a lot of previous research.  I also wanted to try to make my examples a little more detailed and academic in terms of focus. I’m still going to be working with an example from an area where I have little experience because it’s close to one of the concerns of a writer with whom I’m working.

Down the Rabbit Hole 4: Fractals

The previous post talked about “going down the rabbit hole” because of the way a question can seem initially simple and small, but takes on detail and scope as it is examined more closely. Another parallel is fractals: patterns/images derived from recursively defined mathematical operations, such that as you magnify the image, new detail continuously emerges. The Mandelbrot Set is one of the most famous fractals.
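Since the Mandelbrot Set comes up here, a minimal sketch of its standard “escape time” test may make the recursion concrete (this is the textbook formulation, not anything specific to this post): the set is defined by repeatedly applying z → z² + c, and the boundary between points that escape and points that don’t is where the endless detail lives.

```python
def escape_time(c: complex, max_iter: int = 100) -> int:
    """Iterate z -> z**2 + c from z = 0; return the step at which |z|
    exceeds 2 (proving escape), or max_iter if it stays bounded."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

# A point well inside the set never escapes within the iteration budget...
print(escape_time(0j))        # 100
# ...while a point far outside escapes almost immediately.
print(escape_time(2 + 2j))    # 1
```

Points near the boundary take arbitrarily many iterations to resolve, which is why zooming in on the boundary keeps revealing new structure, much as close examination of a research question keeps revealing new sub-questions.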

Research shares something of this characteristic. It may not be infinitely recursive (though some have argued that it is), but generally, if you examine any issue closely, it will lead to more questions.  This is due to the basic nature of analysis: if we analyze things into separate parts/aspects/issues, each of those separate parts can itself be analyzed into its own constituent parts.  Jorge Luis Borges wrote an essay titled “Avatars of the Tortoise,” in which he argues that infinite regressions “corrupt” reasoning. One of his examples is definition: to define a word/concept, it is necessary to use other words, and each of those other words then needs to be defined, which requires other words, which then all require their own definitions, and so on. I’m not sure that the pattern is infinite (there are, after all, only a finite number of words, so for definition at least the regression can’t be infinite), but the multiplication of details can quickly become overwhelming.

The Nobel Prize-winning psychologist and economist Herbert Simon, who studied decision-making, coined the term “satisficing” to describe how some decisions must be made without a full logical analysis because such analyses take so long and become so detailed.
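Simon’s idea can be caricatured in a few lines of code (a toy formulation of my own, not Simon’s): an optimizer must score every option before choosing, while a satisficer stops at the first option that clears an “aspiration level.”

```python
def optimize(options, score):
    """Examine every option and return the highest-scoring one."""
    return max(options, key=score)

def satisfice(options, score, aspiration):
    """Return the first option that is 'good enough'; if none clears
    the aspiration level, settle for the last one examined."""
    for opt in options:
        if score(opt) >= aspiration:
            return opt
    return options[-1]

options = [3, 7, 5, 9, 4]
score = lambda x: x

print(optimize(options, score))      # 9: the best, but all five were scored
print(satisfice(options, score, 6))  # 7: good enough, found after two checks
```

The satisficer gives up the guarantee of the best answer in exchange for a bounded amount of analysis, which is exactly the trade-off a researcher faces when deciding how far down the rabbit hole to go.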

As my earlier examples of reviewing a restaurant or movie showed, it’s pretty natural to see different aspects in things: the restaurant has food and service and ambiance; the service has courtesy and competence; courtesy has all the different things that different people said and did. It may be simple to say whether you liked the restaurant, but to explain in detail all the different factors that contributed to that decision is another matter altogether.

Fractal: The Barnsley Fern
Each leaf, if expanded, will show similar structure and fine detail to the larger frond.
Image by: DSP-user / CC BY-SA

A More Focused Example

So far, I have been giving pretty general examples; now let’s try to get more focused.

Let’s imagine a hypothetical student, studying business management.  And let’s imagine that this student has what we can call “The Fruit Theory of Management,” in which they assume that giving employees fruit improves performance. (I was going to call it “Apple Theory” but didn’t want this to be confused for a reference to the big corporation.)

The Fruit Theory

On its face, the fruit theory of management is ridiculous, but since I’m talking about a general structure of research, the precise theory in question is not so important (as will hopefully become obvious in a moment). Instead of “giving employees fruit” we could use “giving employees training in XYZ,” or, more generally, “instituting policies ABC.”  “Giving fruit” can stand in for any possible intervention. And instead of “employees,” we could substitute almost any group—students, parents, plumbers, etc.—and in each of these cases we could either find a suitable measure of performance, or we could replace “performance” with some other construct to measure (e.g., happiness, health, etc.).

We can even generalize this to any basic causal pattern: “giving fruit leads to better performance” is a specific example of the general pattern “X causes Y.” Most research is concerned with causal relationships in some way or another, so although I’m going to focus on fruit theory, the same analytical issues arise for almost any causal claim a researcher might study.

Studying Fruit Theory

So, we have our business management student who wants to research fruit theory. Generally speaking, a starting point for fruit theory would be to define the theory.

So the student tries to write down a definition (or speaks a definition in conversation with someone). At this point, the process of analysis inevitably has already begun: the words used can themselves be examined individually.  So, if the theory is “giving fruit leads to better performance,” there are elements that can be defined individually. 

For starters, we can ask “what is fruit?”  In everyday conversation, we know what a fruit is and don’t need a definition. But if we’re talking about developing research and examining causal relationships, we want to define things more closely and formally. (Research needs formality and detail so that others can check the research.)  For example, fruit theory might call for fresh, ripe, worm-free fruit that people would enjoy eating (a definition that is not identical with a more general understanding of fruit that includes unripe or wormy or rotten fruit). That might lead us to a whole set of questions of how to identify fruit that people would enjoy eating, which could lead to more general questions of what it means for people to enjoy eating. (Or maybe the real issue is that people enjoy receiving fruit as gifts—that would lead to a different definition of what “fruit” is.)

To study fruit theory, we also need to define what counts as “giving” and what counts as “better performance.”  As for “giving,” there is some question of the specific details of how the transfer is made and whether any conditions are placed on that transaction, including any potentially hidden costs of the transaction. But defining giving is relatively simple compared to the question of “better performance.” Measuring performance raises a huge array of questions: Whose performance? Are we measuring the performance of the organization as a whole? Or of individuals in it? What kind of performance? What dimensions of performance are we measuring (speed? accuracy? gross sales? net sales? etc.) and over what time periods? Are we measuring cash flow of the business over a month? Or the employee sick days taken over a year? Or are we measuring profitability over a decade? There are any number of different ways to think about the general concept of performance.

To develop research, we might also need to specify further the causal mechanism by which fruit theory works. Does giving fruit work because fruit makes people healthier, and therefore better able to work hard (as the old saying goes “an apple a day keeps the doctor away”)? Is there a physiological causality? Is that physiological causal path one that gives people more energy? Or one that improves their strength? Or one that boosts their mood?  Or maybe the causality is not physiological but psychological: giving employees gifts makes them feel appreciated and they want to work harder as a result?

Answers lead to new questions

Whenever we make a choice of where to focus attention, we can find new questions to pursue. We may start pursuing a question of business, as in fruit theory, but that question might lead into other fields of study.  If we posit a physiological cause for fruit leading to better performance of employees, then we need to study physiology. That study might lead in a variety of directions: maybe fruit theory works because fruit improves health, reducing sick-time lost—that would lead to study of immunology: how and in what ways do apples improve immune response? Or maybe fruit theory works because of some other physiological effect: strength, endurance, mood. Since different foods and substances can impact strength, endurance, and mood, maybe fruit has such effects?  If one thinks that fruit has a physiological effect on mood, one might then be led into questions of which specific biological pathways lead to mood improvement, and perhaps in studying that research, you see that other researchers have identified different kinds of mood improvement, and perhaps debate ways in which physiology affects mood.

New answers pretty much always suggest new questions.  

Preventing Over-analysis

You can take analysis too far. If you constantly analyze everything, you end up with a great mass of questions and no answers; it can lead to getting swamped in doubt.  There is no rule for avoiding this, beyond that at some point it is necessary to pick the point at which you say “I’m satisfied with my answer to this question.”  Such statements close off one potential avenue of study to allow focus on another, and set limits to what you need to study—limits that are necessary for the practical reason that it’s good to finish a project even if that project is imperfect.

If you say “I’m satisfied that the reason Fruit Theory works is because fruit makes people healthier,” you don’t need to pursue questions of whether and how and how much fruit promotes health, and you can go on to focus on how improved health helps a business.  Or if you say, “I’m satisfied that fruit theory works,” you can go on to study details of implementing fruit theory.  Of course, it’s good to have reasons, and good to be able to explain those reasons: if you’re satisfied that fruit theory works, it’s useful to be able to give evidence and reasoning. In academia, that evidence often comes in the form of other research literature. If you can cite five articles from reputable sources that all say “fruit theory works,” then you can go on to your research in implementation without getting embroiled in any debate about whether fruit theory works—even if the five articles you cite are not yet accepted by all members of the scientific community.


Analysis itself isn’t really that hard on the small scale—we do it automatically to some extent. But it grows increasingly difficult as we invest more energy into it: the more detail we add to our analysis, the more opportunity there is to analyze further, which can lead to paralysis or to getting swamped. It is something that wants care; it wants attention to detail.

The basics of logical analysis 2: Down the rabbit hole

Continuing my discussion of analysis from my previous posts, I look at how analysis can lead to new questions and new perspectives. Just as Alice ducked into the small rabbit hole and found an entire world, so too can stepping into one small question open up a whole world of new questions and ideas.

If you look at things right and apply a bit of imagination, analysis quickly leads to new questions.  Even something that looks small and simple will open up to a vast array of interesting and difficult questions. 

The multiplication of questions that arises from analysis can be good or bad. New questions can be good because they can lead to all sorts of potentially interesting research. But having too many questions can be bad, both because it can interfere with focusing on one project, and because it leads to complexity that can be intimidating. Learning to deal with the expanding complexity that appears with close study is a valuable skill in any intelligence-based endeavor—whether scholar or professional, decisions must be made and action taken, and falling down a rabbit hole of analysis and exploration will sometimes interfere with those decisions and actions.

This post follows up on my previous post, in which I argued that we analyze automatically and that the work of a researcher includes making our analyses explicit so that we and others can check them.

In this post, in order to show the potential expansion of questions, I’ll look at a couple of examples in somewhat greater detail. While I won’t approach the level of detail that might be expected in a scholarly work meant for experts in a specific field—I want my examples to make sense to people who are not experts, and I’m not writing about fields in which I might reasonably be called an expert—I hope to at least show how the complexity that characterizes most academic work arises as a natural part of the kind of analysis that we all do automatically.

Looking more closely: Detail appears with new perspectives

In the previous post, I used the example of distinguishing the stem, seeds, skin, core, and flesh of an apple as a basic analysis (separation into parts), but it was quite simplistic. Now I want to examine how to get more detail in an analysis of this apple.

For starters, we can often see more detail simply by looking more closely (literally): In my previous post, I separated an apple into skin, flesh, seeds, core and stem.  But we could look at each of those in greater detail: the seed, for example, has a dark brown skin that covers it and a white interior.  With a microscope, the seed (and all the rest of the apple) can be seen to be made up of cells.  And with a strong enough microscope, we can see the internal parts of the cells (e.g., mitochondria, nucleus), or even parts of the parts (e.g., the nuclear envelope and nucleolus of the cell’s nucleus). This focus on literally seeing smaller and smaller pieces fails at some point (when the pieces are themselves about the same size as the wavelengths of visible light), but in theory this “looking” more closely leads to the realms of chemistry, atomic and molecular physics, and ultimately to quantum mechanics. Now we don’t necessarily need to know quantum mechanics or even cellular biology to study apples—you don’t necessarily visit all of Wonderland—but those paths are there and can be followed.
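One way to make this “looking closer” concrete is to represent the analysis as a tree in which any part can itself be analyzed into finer parts. The sketch below is a toy of my own (the part names are illustrative, not botanically rigorous); the point is that each new level of detail adds to the total inventory of parts.

```python
# A toy part-whole analysis of the apple as nested dicts: each key is a
# part, and its value is that part's own (possibly empty) analysis.
apple = {
    "stem": {},
    "skin": {},
    "flesh": {"cells": {}},
    "core": {
        "seeds": {
            "seed coat": {},
            "interior": {"cells": {"nucleus": {}, "mitochondria": {}}},
        },
    },
}

def count_parts(node):
    """Count every part at every level of the analysis."""
    return len(node) + sum(count_parts(child) for child in node.values())

print(count_parts(apple))  # 11 parts so far; each closer look adds more
```

Five parts at the top level have already become eleven, and expanding any leaf (analyzing flesh cells into organelles, say) would grow the count again, which is exactly how the complexity of close study accumulates.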

In this apple example, each new closer visual focus—each new perspective—revealed further detail that we naturally analyzed as part of what we saw.  But division into physical components is only one avenue of analysis, and others also lead down expansive and detailed courses of study.

So Many Things to See!

We can look at different kinds of apples in a number of different ways. (Not to go all meta here, but we can indeed separate—analyze—distinct ways in which we can analyze apples.)

At the most obvious, perhaps, we can separate apples according to their variety, as can be seen in markets: there are Granny Smiths, Pippins, etc., so that customers can choose apples according to their varied flavors and characters.  Some people like one variety and not another.  These distinctions are often made on the basis of identifying separate characteristics of apples (another analysis): “I like the flavor and smell, but it’s kind of mealy and dry;” or “It’s got crisp flesh and strong flavor; it’s not too sweet.” Flavor, texture, appearance (color, shape, etc.), and condition (e.g., ripe, overripe) are all distinct criteria that a shopper might consider with respect to an apple.  These aren’t exactly the kind of thing that would be the subject of academic study, but they could certainly lead to more academic questions.

The question of apple variety, for example, could be seen through the lens of biology. There are the questions of which genetic markers distinguish varieties and the ways in which those genetic markers tell us of the relationships between different types of apples and their heritages.  The question of heritage brings up another aspect of apples that could be a study for a biologist: How did a given strain develop? There are wild apples, which developed without human intervention; heirlooms, which develop through selective breeding; and hybrids, which grow from planned crossbreeding.  Combining these questions of genetics and heritage might lead a scholar to study the migration of a specific gene, for example to see if GMO commercial apple farms are spreading their modified genes to wild populations.

Another characteristic of an apple that a shopper might consider at the store is the price.  This is obviously not a matter for biologists, but rather for economists. And an economist might want to look at how apples get priced in different markets.  That might lead to questions of apple distribution and apple growing. Questions of apple growing might lead back to questions of biology, or to other fields of study like agronomy. Questions of distribution might lead to questions of transportation engineering (what’s the best means to transport apples?) or to questions of markets (who are potential producers/distributors/vendors/consumers? what products ‘compete’ with apples?) or questions of government policy (how did the new law affect apple prices?).

So Many Different Perspectives

Different analytical frameworks can be found by imagining different perspectives on apples. In the previous section, I already linked the study of apples into fields like biology and economics and more, but there is wide potential for study of apples in many areas. 

Think about university departments where apples might get studied. Biology, economics, and agronomy are three already suggested. But people in literature departments might study apples in literature—“The apple in literature: From the bible to the novel”. People in history departments could study the history of apples—“Apples on the Silk Road in the 14th century.”  Anthropology: “Apples and the formation of early human agricultural communities.” Ecology/Environmental Science: “Apples and Climate Change.”  

These example titles are a little strained because I have not made a study of apples in these contexts, and therefore I’m throwing out general ideas that are rather simplistic and free of real theoretical considerations.  More complexity would attend a real project.  The student of literature might be looking at different things that apples have symbolized because they want to make a point about changing cultural norms. Or they might look at how apples have been linked to misogynistic representations of women. Such studies, of course, are interested in more than just apples. As we combine an interest in apples with other interests, new potential ideas begin to arise.

Combining Perspectives

Most people have multiple interests and these interests can combine in myriad ways to create a vast array of different questions that could be asked about apples (or any other subject).

Pretty much any scholarly perspective has its own analytical frameworks that structure research. Biology analyzes according to genetic structure, for example. Business analyzes according to market and economic factors. When these frameworks start to overlap—a business analysis using genetic factors, or a genetic analysis driven by specific economic factors—multiple points of intersection appear. Each genetic structure (each type of apple) can be examined with respect to a variety of different economic factors (e.g., flavor, shelf life, durability, appearance). 

This multiplication of different ways of dividing things up (analytically, anyway) can be problematic because it creates a lot of complexity and because it can be confusing/overwhelming, but it can also present opportunities because each new perspective might have some valuable insight to add. 


What seems small and simple at first glance—a rabbit hole has a small and unassuming entrance—usually opens into a vast and expanding world of questions.

Analysis requires a bit of imagination—imagination to see a whole as composed of parts, imagination to consider different perspectives from which to view an issue, imagination to recognize the different aspects of things.  But a lot of this analysis is pretty automatic: little or no effort is required for the necessary imagination. Still, because it’s so easy and so natural, this process gets discounted—especially if you view “analysis” as something highly specialized that only experts do.

To develop a practice of analysis, all you really need to do is make a point of trying to make your different observations explicit.  Whether you’re judging an apple (taste, appearance, scent, etc.) or a theory (the various assumptions, conclusions, relationships to other theories), chances are good that you’ll pretty automatically respond to different aspects at different times. If you can formalize and record these different observations, you lay the foundation for developing your own analyses.

The Basics of Logical Analysis 1: Seeing Parts of Wholes

In this post, I revisit the general issue of analysis that I discussed in my previous post. There is a measure of overlap because I’m really searching for a way to communicate both the fundamental simplicity of analysis and all its potential complexity.  Maybe the general principle for this post is that analysis is, at its root, a simple intellectual action: dividing something into different parts. But this simple action inevitably leads to increasing complexity.

As with so many things in which analysis is involved, this post started out simpler and shorter than it has become. My original plan was to write one short post that just did a better job of explaining the ideas in the previous post. But then, as I thought more closely about it, I found issues that hadn’t been discussed in my previous post.  It’s now looking like this will be a series of posts—at least two: this one will discuss the big idea of analysis and relatively simple, everyday examples; the next will look at some examples more closely, in hopes that they feel more like an academic example. I suspect that may end up as two or more posts. In a way, this story encapsulates an aspect of analysis in practice that I want to emphasize here: the more you do it, the more complexity you see, and that leads to expanding projects, which must be reined in for purely practical reasons: basically, if you want to finish a project, you have to stop analyzing everything. (And as I write that, I wonder whether I haven’t sparked the foundation for a third post: how do you stop analyzing once you’ve started? It’s an idea that I touch on briefly in the second post, but maybe it deserves its own? I’ll have to think about that…)

What is “Analysis”?

At its root (its etymological foundations), “Analysis” is derived through medieval Latin from the Greek for “unloose” or “take apart.” (In contrast to “synthesis” whose roots lie in the Greek for “put together.”) This sense is generally in line with how the word might get used in a conversation. For example, after [a movie/a TV show/a meal at a restaurant], if one person is talking at length criticizing details of the [movie/etc.], the other might get exasperated and say “Stop picking it apart,” or “stop over-analyzing it.”

It is this basic “picking apart” that concerns me in these posts. It is a basic principle that can manifest informally (as a person might do with a movie/etc.) or one that can manifest as extremely detailed and formalized systems of analysis, as with psychoanalysis, or statistical analysis, or data analysis, or any other field that uses “analysis” in a title.

We Do It Automatically

The kind of analysis that is important in research (and other intellectual work) is something that humans do naturally and automatically—often without even noticing that we’re analyzing.

To apply it in research is to take an automatic, unconscious ability and work to make it conscious and explicit. Splitting things into pieces—into different parts or different aspects—is pretty easy. But making those divisions explicit is hard because of the complexity that tends to develop.

We all automatically split things up into different parts, which is reflected in our languages (including words like “parts,” “pieces,” “components,” “elements,” etc.) and much of our daily lives. We separate the world into all sorts of different categories. We eat food, which includes fruit, vegetables, meat, etc. We work, but have many different kinds of work: homework, housework, yard work, not to mention jobs, which are work. We separate the good from the bad. We divide people up into different groups: family, friends, acquaintances, people we don’t know, etc.

It’s true that many of these divisions are learned, but that doesn’t mean that we don’t naturally make divisions of some sort.

Analysis: Examples

Consider an apple. It is a whole in itself, but we pretty naturally separate it into a few different parts: stem, skin, flesh, core, seeds. Our basic sensory apparatus provides distinguishing information: stem, seed, and flesh taste different, smell different, look different, and feel different. Our senses are already providing us information about differences in the world, information that leads to analysis of the apple into its different parts.

Consider a movie. It is a whole in itself, but we can easily divide it in many different ways that are familiar to cinephiles. We can say “The acting was pretty good, but the script was weak.” Or “The cinematography is great, the writing is great, the direction is ok, but the star annoys me, so I had trouble enjoying it.” We might like what we see (“great cinematography!”), but not what we hear (“poorly written dialogue”). We might like one actor and not another. Again, this is analysis in action, although few would think of this kind of thing as analysis. That is, unless we really get into a lengthy discussion of different aspects of the movie, at which point someone might say “stop analyzing it! You’re ruining it for me!”

Research and Analysis

Research takes this basic ability to distinguish between things and tries to make it explicit and formal. For the researcher, it’s not enough to say that it’s obvious that you have stem, seeds, and flesh, or acting, directing, writing, and cinematography. It’s necessary to begin to formalize.

Formalized analysis is crucial in research because it allows a research community to work together. Researchers who don’t explicitly express their analyses can’t have their research reviewed or trusted by others. The need to share and provide explanations and evidence that can be examined leads to detailed discussions (articles, books, etc.) that can themselves be analyzed (and will be by other researchers who will look for strengths on which to build and weaknesses to correct).

In practice, research communities develop different analytical frameworks and methods of analysis as a result of the attempt to explain and examine each other’s work. These become increasingly detailed and complex over time, as each successive generation of researchers turns their analytical abilities to the questions of interest. Sometimes entirely new analytical frameworks develop, but these, too, are subject to close examination that leads to complex formal analytical systems.

Psychoanalysis, for example, depends on familiar analytical divisions: the id, ego, and super-ego represent parts of a larger whole. So, too, the conscious and unconscious. Each different pathology is a part of the larger whole of “poor mental health.” And each pathology itself is distinguished by a number of different characteristics that are parts of the pathology. To become a psychoanalyst is to adopt a specific set of analytical frameworks regarding the psychology of individuals and the nature of psychotherapy as well. Other theories of psychology and psychotherapy may not be called “psychoanalysis,” but they too adopt different analytical frameworks.

Mathematical analyses separate the world into different symbols that represent different parts of the world and distinct relationships between the parts. Physics, of course, presents the interactions of objects in the world as a set of symbols and mathematical equations. In a business setting, the large-scale system of a factory, for example, might get represented in mathematical equations that separate out machines that produce goods, goods that are produced, rates of production, costs of production, necessary workers, etc.
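The factory example can be made concrete with a toy sketch. Everything in it is hypothetical; the machine counts, rates, and costs are invented purely to illustrate how such an analysis separates a system into symbolically represented parts and relates them with simple equations:

```python
# A toy analytical model of a factory: the "whole" is broken into parts
# (machines, production rates, hours, costs), each given its own symbol.
# All figures are hypothetical, chosen only to illustrate the idea.

machines = 4            # number of identical machines
rate_per_machine = 50   # goods produced per machine per hour
hours = 8               # length of one shift
cost_per_good = 2.5     # production cost per good, in dollars

# The relationships between the parts are expressed as equations:
goods_produced = machines * rate_per_machine * hours
total_cost = goods_produced * cost_per_good

print(goods_produced)  # 1600
print(total_cost)      # 4000.0
```

The point is not the arithmetic, which is trivial, but the analytical move: the factory has been picked apart into named quantities whose relationships can then be examined, questioned, and refined.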


Analysis happens.  If you examine something closely—an object, an interaction, an idea—you will begin to distinguish different aspects or parts of it.  These distinctions are analysis. To move that analysis into an academic or research setting really only requires that you try to make your analyses explicit as you develop them, so that they can be examined for flaws (by you and by others).

Of course, making analyses explicit and then looking at those analyses with an eye for flaws may be a path to good research, but it is not a path to simplicity.

I’m going to close here, and in my next post (or posts), I’ll look in greater detail at some examples to show different ways in which things can be analyzed and to discuss the expansion of complexity, which can be both good and bad.

Sophistry vs. Reason and Partisanship vs. Principle

In my previous blog post, I lamented the absence of logical certainty and the problem created by the absence of objective truth, where each person/group believes that they hold the truth and that therefore their political choices are necessary and correct while the choices made by others are based on falsehood or error.

I lament this unavailability of objective truth particularly because I believe there is a fundamental reality—that even if we cannot recognize or discover objective truth, there is a real difference between truth and falsehood.

Because of the political nature of knowledge—because people act on what they accept to be true—political actors have motivation to control knowledge that is disseminated in order to manipulate the behavior of other people.  This is obvious on the large scale: political propaganda is often deceptive. And on the small: people lie to shape the behavior of others (“I didn’t cheat on you, honey. I swear!” is meant to deflect anger, for example).  In this light, although we may not be able to find objective truth, we can certainly recognize at least one dimension on which we can differentiate truth from falsity.

Honesty vs. Deceit

Some people try to deceive, and I don’t want to focus my attention on them; that’s why I didn’t title this post “honesty vs. deceit.” Some people willingly and knowingly try to obscure what they actually believe is the truth. Take the tobacco industry, for example. We know from the record that has been made public that the tobacco companies actively and publicly promoted cigarette smoking as healthful, even while their internal documents clearly indicated their knowledge of the deleterious effects of their product. Or take Exxon, whose scientists internally agreed upon the dangers of climate change in the 1970s, but whose public discourse was to promote doubt about those very conclusions. In these cases, the companies involved presented ideas to the public that were at odds with information that they had internally.

Cases of intentional deception are sadly too common. But I don’t really want to talk about intent so much as I want to discuss issues relevant to recognizing patterns of argumentation that are not based on reason or principle and hence are often used to avoid reason or principle.

Sophistry vs. Reason

This blog is mostly aimed at discussing ideas to help academics negotiate academia. In that context, I want to talk about the presentation of ideas and different things that one can look for as good or as problematic in the work of others, and things to avoid as a matter of principle.

There is a difference between arguments built on reason and arguments built on sophistry, and regardless of whether or not there is an objective truth, the difference between sophistry and reason can often be recognized. A good speaker or writer can often effectively hide sophistry, at least from a casual glance. The art of rhetoric is often disparaged for its role as a tool for obfuscation—a matter of sophistry not reason—but rhetoric can also be used in service of truth (or at least the intention to tell the truth rather than to deceive). Even if you believe in the truth of your message, you may still struggle to get others to accept those ideas, and persuasion is valuable. Understanding how to convince an audience is worthwhile. But some of the tools of persuasion can be deployed to both honest and deceitful ends, while others are generally only deceitful.

One well-known tool of rhetoric that falls largely outside the bounds of reason is the ad hominem argument: focusing on the person who makes a claim rather than on the claim itself. This can work in two ways: it can be used to attack a claim by arguing that the speaker is generally untruthful, or it can be used to support a claim by arguing that the speaker is generally truthful. The story of the boy who cried wolf offers an insight into the issue of the ad hominem argument. Once people have decided that the boy is a liar, they do not check his claim that there is a wolf, even though there was, in the end, a real wolf. A liar can make a claim that is true. And a generally honest person can make a claim that is false. It is reasonable to consider the veracity of a speaker (or lack thereof) as an interesting piece of evidence indicative of the truth or falsity of a claim. It is sophistry to consider the veracity of a speaker as the only indication of a claim’s truth, especially if there is other evidence that can be used to judge the claim in question. If someone tries to avoid discussion of the actual claim and instead tries to focus on the person, they’re probably engaged in deceitful sophistry. If a claim’s veracity is in doubt, an answer should not be sought solely in reference to the person who made the claim.

Partisanship vs. Principle

It is well-documented that in situations that should involve reasoned judgement, people show strong biases related to the people involved. This is known as reactive devaluation.[] So, for example, one study in the 1980s showed American participants an arms treaty between the US and USSR and asked them whether they approved of it. One group of participants was told the plan was proposed by Ronald Reagan, one group was told it was proposed by unnamed policy experts, and a final group was told it was proposed by Mikhail Gorbachev. The same plan was shown to all groups. If people were making their decision based on the principles laid out in the treaty, then all three groups should have had similar approval rates. The results showed 90% support among those told it was proposed by Reagan, 80% support in the group told it was proposed by policy experts, and 44% support in the group told it was proposed by Gorbachev. The exact same plan got a vastly different reception on the basis of partisanship.

I don’t think decisions should be made on the basis of who your friends are. Or at least, I think that partisanship—supporting your friends, attacking your enemies—should not be the sole consideration when making plans. For all my concerns about the limitations of research and the general limits of human knowledge, I believe/wish/hope that decisions—mine, yours, and those made by groups, including political bodies—should rely heavily on actual principles, not on partisanship.

There are times when making a decision based on friendship is appropriate. If you decide to go to the restaurant your friend wants, even though you read a really bad review, go for it. The ramifications are small. If you’re a researcher, however, and you ignore your data to support some friend’s work, that’s a very different thing altogether. And if you’re a policy-maker, and you reject actual evidence and your principles for the purpose of supporting your ally, that’s a gross violation of basic ethical behavior. If you tout the principles of honesty or fairness, but then put aside those principles for your friends, then you are abdicating principle in favor of partisanship. This general observation is obviously applicable to politics, but it’s also true in research.

In research it may not always be partisanship—desire for fame and money may prompt researchers to abandon principles—but whatever the motivation, it’s important to try to return research discussions to the principles that provide a foundation for research.


As I said in my previous post, I lament the absence of objective truth on which all can agree.  But I still believe that there are foundations on which people can build that will help ideas and discourse rise above the level of partisan sophistry while striving for the elusive fruits of principled reasoning.

Pursuing principles—and focusing on principles, like the principle of testing claims based on evidence, not on the character of the person who made the claim—is both a way to move towards shared ideas and shared knowledge and a way to recognize when others are engaging in sophistry rather than reason.

All Knowledge is Political: A Lament

When I was in graduate school, my advisor Jean-Pierre Protzen used to say that all knowledge was political.  I think he might have attributed that idea to Horst Rittel, but I would nod my head in agreement because there are many different sources from which that idea could have come, including Michel Foucault, whose work I have found generally compelling in its arguments about the nature of systems of knowledge and the political roots of what gets to be considered knowledge.  Logically speaking, I am generally convinced that all knowledge is shaped by politics.

Emotionally speaking, however, this is cause for despair. Without some objective standard to determine knowledge, all discourse devolves into whose voice is loudest and/or whose stick is largest. Logically, I see the problems with the idea of objective knowledge, but in my heart, not only do I long (desperately) for objective truth, but I believe that there is, in fact, a big difference between fact and fiction. As a philosopher, I recognize the many, many non-objective factors that shape any statement of fact, but still… There is a big difference between truth and fiction.

This view that all knowledge is shaped by political (or social/cultural/historical) forces only captures one half of the idea that all knowledge is political, however.  It looks at how knowledge is created and how we can or can’t know things.  This is very important, but there’s another side of the coin that I would like to look at, and this is the active side of knowledge, if you will: this is the way in which knowledge shapes politics.  

When people “know” something, that shapes how they behave.  In this sense, it may be that “knowledge” is not necessarily the right word—perhaps “belief” or “certainty” would be better. But I don’t want to get into close debate about defining the idea of knowledge. If the foundations of knowledge are uncertain or contingent (as argued by my advisor, Foucault, and many others), then what is the difference between “knowledge” and “belief”?  To say that our beliefs shape our politics is hardly surprising or interesting, I suppose, though I think it can be a factor that we lose sight of, too.  

This active dimension is crucial in understanding political discourse. Take, for example, the case of a hypothetical university department.  Different professors might compete for funding for their research and their students.  This competition will at least partly grow out of differing ideas of what is true.  To take a broad example, consider a Marxist economist and a Free Market economist.  What may immediately spring to mind is the political difference—one might believe in Marxist communism as the best government and the other in some form of capitalism.  But does that political difference drive the debate, or is that a result of something else? Given what I’ve already said, it should be clear that I think that an idea of what constitutes knowledge is what shapes the debate.

Let us imagine, for a moment, that these two competing professors share the view that economic/political systems should treat people with justice, should protect the general welfare, and should reward the virtuous. Such agreement is, I believe, to be found between Marx and Adam Smith, on some level, at least: both share an interest in the betterment of the overall polity. The differences lie in how they view the workings of societies and economic systems. To frame the difference broadly (yet, I believe, accurately), Marx argues that the betterment of the overall polity arises from community action—from the action of the classes—while Smith argues that the betterment of society grows out of selfish, individual action (“By pursuing his own interest, [the individual] frequently promotes [the interest] of the society more effectually than when he really intends to promote it.” From the famous “invisible hand” passage of The Wealth of Nations; this is the second sentence following the “invisible hand” sentence).

In this example, the politics of the one professor are driven by the view of the importance of collective action. This professor will quite naturally align with political movements that also valorize the community over the individual. And the politics of the other will be driven by the view that the individual must predominate.

We would see this on the level of national politics—choice of political party, for example—but it can also have more local ramifications, in the form of departmental politics. Within this hypothetical department, this difference in view will result in differences of opinion about which applicants (students or job candidates) are best suited to the department—decisions that have very real impacts on the department as a whole.

As I write this, it reminds me of a case of how underlying assumptions like these shape action that isn’t obviously political, but might lead to reinforcing certain views in certain ways and thus wind back to political impacts. In one of his books, George Lakoff analyzed a Chinese proverb: “cows run with the wind; horses run against it.” (I think this was in More Than Cool Reason, co-written with Mark Turner, but I’m not checking sources here—I’m going on my memory of a lecture given by Lakoff that I attended. Since I’m using this example as a way to illustrate how political beliefs influence people, it doesn’t require perfect accuracy.) The analysis he gave was that the proverb valorized horses, recognizing their independence. But after publication, readers (at least one) informed him that in Chinese culture, the proverb valorized the cows, showing their wisdom in working together. The proverb (at least this translation of it) does not explicitly valorize either the horse or the cow; it merely differentiates. The value system adopted by the interpreter shapes the interpretation. In our hypothetical example, the Marxist professor, preferring community action, might interpret this in favor of the cow, and thus view it as evidence confirming their view. At the same time, the Free Market professor will interpret it in favor of the horse, and view it as evidence in favor of individual action (and yes, although the Free Market professor is wrong/factually inaccurate in the historical/ethnographic interpretation—the people who use the adage don’t interpret it this way—that’s not really relevant in this discussion of how “knowledge” shapes actions that have political impact). And in both cases, these interpretations tend to shape further action and investigation in the future, as well as future competition for department resources, with both professors feeling that their agenda is best for their department.

There is, to be sure, a feedback loop here: what one accepts as knowledge is shaped by political forces, and then goes on to shape political action, which leads to shaping what gets accepted as knowledge. 

My lament is that this basic political nature of knowledge is, as far as I can see, tearing the world apart and risks the future of humanity and most other species of life on earth. This statement, which is political in nature, is based on my understanding of the world—on my “knowledge.” It is not a statement driven by political partisanship, but by my best attempts to understand objective reality, even though I believe such understanding is problematic. To the extent that I prefer one political party over another, it is due to this understanding of the world: I prefer the party that recognizes the same situation in the world that I recognize. And I passionately believe that certain things should be done as a result of being firmly convinced that my understanding of the world is basically accurate. The thing is that the folks who disagree with me would say the same thing. And they, too, are passionately committed.

Everybody is committed to their own view of the world. Few are willing to make the effort to learn and understand. Most are convinced of their own rightness. Hence the tremendous stress on the world, on the nations of the world, on the people of the world, and on all the creatures living in it.

“If only people understood the world with the clarity that I do,” I cry. That is my lament. And the lament of billions.  And if all the human race can do is compete over who is right and who is wrong, the future of humanity looks very bleak.

Keep things as simple as you can; Complexity will arise

A writer recently expressed to me the concern about her work being too simple, a concern triggered by, among other things, being told that her work was pedestrian (which I discussed in my previous post). But for the great majority of scholarly work, if done carefully, complexity is almost unavoidable. The real world is not simple, and a scholar trying to document the real world is not documenting something simple. Analyzing data gathered in the process of documenting the real world is not simple, either.

My experience of writing blog posts often goes something like this: an idea formulates into a basic message and plan for what I will say; I start writing; I think of an example to use; I start to describe the example, and in so doing, I find complexity where I thought there was simplicity. No matter the clarity of my plan, once I start writing, I discover complexity.

It’s easy to find complexity if you are being careful and trying to focus on details. All you need do is be curious and careful.

Suppose, for example, you try to describe a simple household process like getting a glass of water. That’s simple, right? You get a glass; you hold the glass beneath the faucet; you turn on the water and the glass fills. But complexity lurks. Where do you get the glass, for example? In your own home, you know where the glasses are, but if you’re visiting somewhere, finding a glass may require extra steps, such as opening many cabinets or asking your host. Getting into details might lead to asking what criteria are used for choosing a glass: do you take the one closest to your hand? To which hand? Do you prefer a large glass or small? Do you look to make sure that there is no visible smudge or dirt on the glass? Do you prefer one material over another (glass vs. plastic, for example)? If a glass has a colored material or an image printed, does that matter? Beyond these practical questions of how to get a glass (we haven’t even started talking about locating or operating a faucet yet), if our aim is to describe the process, we might choose to try to define what we mean by “glass”—does, for example, a mug get included? A mug is not a glass, but it will be effective for drinking a “glass of water” if we interpret the phrase loosely. In many contexts, such an interpretation suffices: imagine asking a friend for a glass of water and them giving you a mug filled with water. Would you complain that they had failed because your water was served in a mug, not a glass? And beyond these questions relevant to getting a glass of water in practice, if we are describing the process of getting a glass of water, we might examine how or where the glasses (or mugs) were procured, and how they were made. Although they are not questions for the practical situation, for someone documenting or describing a process, those questions directly follow (even if we might decide that they are not sufficiently relevant to include in a description of getting a glass of water).
So trying to describe something simple quickly leads to complexity if you just ask questions.

Another way that complexity can arise for a writer is by trying to define terms. Suppose you want to write about [term/concept]. It’s good form as a scholar to define the crucial term for your audience, so you try to define [term/concept]. You may turn to a dictionary, where you find multiple different meanings of [term/concept]. You look at the literature in your field, and you find several different authors have all defined [term/concept] in their papers, and they have all done it differently. If the observed complexity of the use of the term hasn’t stymied you, you might sit down to try to write your own definition of the term. In that process you use [term2/concept2], and that leads to the question of whether you need to define [term2/concept2]. Defining terms is a rabbit hole of complexity, as every definition requires using terms that could themselves require definition. In his beautiful essay “Avatars of the Tortoise,” Jorge Luis Borges describes this as an infinite regression first identified by a Greek philosopher (whose name escapes me, and I don’t have the Borges text at hand). Defining terms/concepts is not simple, and scholarly writing requires definition.

Complexity arises in the process of argumentation/justification, and there is a similar regression of questions. Suppose, for example, I want to explain why I have chosen a specific research method—methodX.  Every statement I make in favor of methodX can be questioned. If I say I have chosen methodX because it’s appropriate to my research question, the natural question that follows is why (or whether) it is appropriate to the question. If I then offer two arguments—argument1 and argument2—for why the method is appropriate to the question, I have two new arguments that each require some defense. Logically speaking, any argument can be questioned, and each answer offers new arguments that can be questioned.  

It is exactly this kind of logical path from one question to the next that leads many writers down discursive rabbit holes that can inhibit the writing process. And it is one reason that citation is so valuable for the scholarly writer: you can end the string of questions by saying “because FamousAuthor said so.”  It’s not a logically perfect foundation, but what the heck…we all need to find a foundation, and even the greats rely on the foundation of the scholars who have come before—Newton said “If I have seen farther, it was by standing on the shoulders of giants.”

If you want to describe something, and you are careful about it, complexity will arise. If you are a scholar, you’re supposed to be careful, and, in my experience, that leads to what most might consider a surprising result: good scholars almost always have too much to say. I’ve known lots of writers who worried that they had nothing to say, and I’ve known lots of writers who wrote very little for fear that they had nothing to say. But I can’t remember any writer who, once writing, wasn’t able to say enough. The far more common (and more difficult) problem for writers is to have to cut material to get their article or book down to a word limit. (Because of the difficulty of cutting down a draft, I strongly recommend writing first drafts that are short!)

So, don’t worry that your ideas are too simple; embrace that simplicity. Try to capture that simplicity in writing. If you’re careful and attentive to detail, complexity will arise. Indeed, so much complexity arises that there is great danger in getting lost in it, and the writer needs to learn to say “here’s where I stop asking questions.”