Imagination and Analysis: Refining a research question (part 2)

In the previous post, I tried to illustrate how I approached a question, detailing my ideas and the process through which my thinking developed.  The purpose was to highlight the importance of using one’s own imagination and judgement in developing a research question. Many graduate students with whom I have worked have struggled because they didn’t develop their own ideas, but rather tried to follow the ideas of other people. To contribute to research in a field, however, requires the confidence to challenge old ideas and develop or refine theories. As I have argued elsewhere, it doesn’t take vast brilliance, just the willingness to use natural basic abilities with care and attention to detail.  The example of reasoning presented in these two posts illustrates how someone (me) who is naive in a subject (politics) can develop a complex, detailed range of issues that are potential subjects of research while trying to answer a fairly simple question (simple in the sense that it came from my initial, offhand intuition/curiosity, not any detailed analysis), with complexity arising as I imagined possibilities.

Wild imagining

One intellectual exercise for a scholar is to try to imagine what other possibilities could be added to those lists of possible issues affecting voting results. In this area, imagination is the key factor. This imagination of possibilities could also be called “generation of hypotheses,” if I wanted to frame it in more formal (and perhaps intimidating) terms.

It can be useful to suspend reason and logic to help imagination flow: for example, I can imagine that maybe ballots are being improperly counted because extra-terrestrials are tampering with the ballots.  That’s kind of ridiculous, but it is an explanation, and maybe there are even a few people who would believe it. Maybe there are some Satan-worshipping pedophiles who are working to actively miscount. That also seems ridiculous, though in this case, it seems likely that many people would believe it. Or maybe the election system has been tampered with by the Russians.  This seems less ridiculous, given what is known about Russian cyberactivity, or, for that matter, hostile cyberactivity from a range of sources.  To research the possibility of cyberattacks affecting the vote counts, we would want to understand Georgia’s election protocols: at what point could hostile actors affect the vote counts? Does Georgia use electronic voting machines where individual votes can be changed (e.g., I click on “Ossoff,” but the machine registers “Loeffler”)? Does Georgia have a centralized computer tally that could be hacked and altered? Diving deeper, we could ask about technologies used by different groups of potential attackers—maybe Russian cyberattacks use different techniques than Chinese cyberattacks. (And maybe there’s a nefarious actor who pretends to be the friend of the US but also carries out cyberattacks?) Not only could we dive down those technical holes of whether and how the voting system could be compromised, we also are led to ask why: why would someone want to help Perdue but not Loeffler, or help Warnock but not Ossoff?

Anyway, my general point is that as I ask questions—sometimes ridiculous questions—it’s possible for new ideas to arise that might be worth some investigation. Generating ideas for what could be researched—research hypotheses—is an act of the imagination. Therefore it’s useful for imagination to operate freely, to be able to propose the absurd as well as the “reasonable,” in order to generate hypotheses for investigation.

Yet more possibilities

So far, all the questions I have been asking were generated from looking for an explanation that allowed me to retain the assumption that people would not split tickets in this election.  Once I start to entertain the notion that people might split tickets, a variety of new questions arise: who would do so, and why? (Also note that these considerations do not rule out any of the previous considerations—in addition to ballot errors and tampering, people could also split their ticket—the observed vote totals could be influenced by all of these factors.)

And just in asking this, I realize that there are multiple ways to “split” a ticket: you could vote for one R and one D, or you could vote for one R (or one D) but not vote at all for the other, or you could vote for one R (or one D) and a third-party candidate (well, actually not in this election because it was a run-off with only two candidates, but if that weren’t true, you could get people saying, e.g., “I’m not voting for Ossoff because he’s not progressive enough, so I voted Green party”).

Now, again, it’s necessary to start to use imagination: why would people split the ticket and cross party lines? Maybe:

  • 1. Democratic women (or feminists) crossed party lines to vote for Loeffler (a woman).
  • 2. Black Republicans (or anti-racist Republicans) crossed party lines to vote for Warnock.
  • 3. Democrats who voted for Warnock didn’t vote for Ossoff because
    • he’s not progressive enough
    • he’s a white man
    • he’s too young
    • he holds a specific position on a specific issue to which they object (I don’t know a ton about his campaign, so…)
    • he was the kid they hated most in elementary school (I’m reaching for the absurd here—we wouldn’t expect this kind of explanation to affect large numbers of people, but it is a possible, if silly, reason that someone might choose not to vote for Ossoff—again, I’m exercising my imagination)
  • 4. Republicans who voted for Perdue didn’t vote for Loeffler because
    • she’s too much/too little like Trump
    • they didn’t like her for some position on some specific issue.

Again, these lists are probably not exhaustive—there are probably many other reasons that Dems might vote for Warnock but not Ossoff (and vice versa) and that Reps might vote for Perdue but not Loeffler (and vice versa).

How does imagination match up with the real world?

As I start to lay out these different possibilities, questions arise about how these hypotheses might be reflected in the data.

If, for example, the differences are caused by damaged or incomplete ballots, what kind of data patterns would we see? To answer this question, we can look for old empirical data: what does previous election data show, with respect to damaged/incomplete ballots? Given the standards set by the historical data, we could compare to see if the damaged/incomplete data would predict the data that we’re seeing—would we see the kinds of discrepancies we see, on the basis of that kind of problem? Have past elections had enough damaged ballots that we could see the differences that we see in this election?  Alternatively, we can use imagination—what would we expect if there were a lot of damaged ballots? Would we expect the ballot errors to be distributed evenly across all candidates (i.e., the number of votes lost by Warnock due to damaged ballots would be equal to the number of votes lost by Ossoff)? What if damaged ballots were coming from one specific location—because of a damaged machine, perhaps, one that failed to read both races and only read one of the two?  (We would expect an error like this to be quickly discovered in checking ballots—someone at the precinct would notice that they weren’t getting votes for a single candidate.)
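One way to exercise that imagination concretely is a toy simulation (all the ballot counts and the damage rate below are made-up placeholders, not real data): if damage knocks out one of a ballot’s two races at random, the resulting gap between two same-party candidates is just coin-flip noise.

```python
import random

random.seed(1)

def damaged_ballot_gap(n_party_ballots, damage_rate):
    # Each damaged ballot randomly loses its vote in one of the two races,
    # so the gap it induces between same-party candidates is coin-flip noise.
    damaged = int(n_party_ballots * damage_rate)
    lost_in_race_1 = sum(random.random() < 0.5 for _ in range(damaged))
    lost_in_race_2 = damaged - lost_in_race_1
    return lost_in_race_1 - lost_in_race_2

# 2,000,000 party-line ballots with 1% damaged (placeholder numbers)
gaps = [damaged_ballot_gap(2_000_000, 0.01) for _ in range(5)]
```

With these inputs, the simulated gaps come out in the tens to low hundreds of votes; comparing that scale against an observed discrepancy is one way a hypothesis like “it’s just damaged ballots” could start to be tested.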

Even simple questions get complicated very quickly

I’ve gone through all this detail to show 1) how imagination plays a key role in finding hypotheses for research, and 2) how quickly a question can branch out into many questions—even this brief, informal analysis identified a number of different concerns that could lead to further research. I didn’t even begin to ask questions about how I might gather any supplemental data that could support inferences about the vote totals.

My final steps with the voting question

I’m not researching the question of why Warnock got more votes in any formal way. It was mostly a passing curiosity, and I wouldn’t be able to put an answer to use in any way. So I didn’t go far, but I’m going to briefly mention my final steps in my “research,” just to give an angle on yet more details that crop up in research.

Before I decided to start working on this blog post, here’s what I did: I compared the number of votes received by Warnock and Ossoff—Warnock received about 19,000 more votes when I looked. And I compared the difference between the votes received by Perdue and Loeffler—Perdue had about 19,000 more.  This similarity of numbers was highly suggestive of people splitting their ticket, because each person who splits their ticket, voting for Warnock and Perdue, adds one to both their totals while giving nothing to Ossoff and Loeffler—a mirroring.  The similarity in numbers could be coincidence, of course (it would require further analysis to study), but it is suggestive of a group of about 19,000 people who split their ticket, voting D for Warnock and R for Perdue.  If the difference were caused by errors in reading or filling ballots, or by people voting for Warnock while leaving the other vote blank, we wouldn’t expect that mirroring. Again, my interpretation of these basic numbers requires imagining how different voting patterns would be reflected in numbers. My imagination may be wrong, but having written out my premises, I can begin to test them, and other people can check me and, if necessary, correct me.
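The mirroring logic can be written out as a toy tally (the straight-ticket counts below are made-up placeholders; only the differences matter):

```python
def runoff_tallies(d_straight, r_straight, warnock_perdue_splitters):
    # Straight-ticket D voters choose Warnock and Ossoff; straight-ticket R
    # voters choose Perdue and Loeffler; splitters choose Warnock and Perdue.
    warnock = d_straight + warnock_perdue_splitters
    ossoff = d_straight
    perdue = r_straight + warnock_perdue_splitters
    loeffler = r_straight
    return warnock, ossoff, perdue, loeffler

w, o, p, l = runoff_tallies(2_200_000, 2_100_000, 19_000)
# Both gaps mirror the number of splitters: w - o and p - l both equal 19,000
```

Whatever the underlying straight-ticket totals, every Warnock/Perdue splitter adds one to both gaps at once, which is why matching differences point toward splitting rather than random ballot errors.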

Why did Warnock get more votes than Ossoff?

Here’s my guess at a simple explanation for those numbers: there is one group that seems most likely to explain people splitting a ticket between Warnock and Perdue: Black Republicans. It seems plausible that some Black Republicans would cross party lines to vote for a fellow Black person. Doing some rough numbers just as estimates: about 5,000,000 votes were cast in the GA election; GA is roughly one-third Black—estimate that as roughly 1,500,000 Black voters; roughly 12% of Black voters in GA are Republican (according to Pew Research Center), so that’s roughly 180,000 Black Republicans in GA—far more than the 19,000 in the Warnock/Ossoff difference. If one in ten Black Republicans decided to cross party lines for Warnock, that would explain the observed difference. It’s also worth noting that because the Democrats needed to win both seats to win control of the Senate, it’s possible that a Republican voter might reason that a vote for Perdue alone would be enough to preserve control of the Senate (“As long as Perdue wins, we keep control, so I can vote for Warnock”). Let me reiterate that this is a simplistic conclusion that probably misses real-world truth, but at least offers an easily understood explanation. (Real-world explanations might include differences in specific policy positions held by Warnock and Ossoff, but I have not studied them closely enough to do any analysis based on their policy recommendations.)
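The back-of-envelope arithmetic in that paragraph can be laid out explicitly (every input is one of the rough estimates above, not precise data):

```python
total_votes = 5_000_000        # rough ballots cast in the GA runoff
black_voters = 1_500_000       # roughly one-third of the electorate
gop_share = 0.12               # Pew's rough share of Black voters who are Republican

black_republicans = black_voters * gop_share        # roughly 180,000
crossover_rate_needed = 19_000 / black_republicans  # just over 0.10, about 1 in 10
```

Writing the estimate out this way makes each assumption visible and easy to swap: change any input and the required crossover rate updates accordingly.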

Another group might also explain the same pattern of data: misogynist Republicans, who might vote for Perdue but against Loeffler because she is a woman. This seems less likely, just on the basis of how many women have previously been elected by GOP voters. (Continuing down the path of imagination, we can conjure up a group of racist Democrats who vote for Ossoff but not Warnock, or feminist Democrats who vote for Loeffler over Warnock because she is a woman. But these groups would give more votes to Ossoff than Warnock, so they don’t help explain the observed data.)


On many levels, what I have offered above is a simplistic analysis. Even in this quick analysis, the various considerations and possible questions proliferated. I didn’t do any research beyond looking up the numbers of votes cast.  I could have looked more deeply. I could have looked at different details (what if I had looked at county-by-county breakdowns? Those might provide some counter to the ideas I used above).

Imagination and Analysis: Refining a research question (part 1)

A lot of dissertation writers with whom I have worked struggled to generate their own research, not because they weren’t smart enough or hard working enough, but because, due to self-doubt or humility, they didn’t rely on their own ability to think—they did not work to develop their own theoretical vision or scholarly voice, but rather tried to adhere to the ideas of others. They would get bogged down in debates of theory and method, not because they failed to understand theory or method, but because they didn’t trust themselves to question, examine, or doubt the theories and methods they read in the literature.  They didn’t trust themselves to make choices based on their own speculation. 

I have previously written about the role of confidence, as well as about the ways in which analysis causes proliferation of details and considerations. In this two-post series, I offer a sort of case study of how I approached a question about which I knew little, to show a process of questioning, how that questioning required my judgement and my imagination to shape the direction of my research, and how that approach naturally opened up several different courses of potential investigation. It’s offered as a view into ways of thinking about questions, and about all the places where imagination can enter.  It’s not scholarly, and so it doesn’t incorporate dealing with scholarly literature, which is an important part of actual academic research. But I hope that the detail might encourage self-doubting scholars to engage their own judgement and imagination.

The need for imagination and judgement

Research is complicated, and there’s lots of doubt involved: if we knew the answers, we wouldn’t need to do research. Because of this difficulty, and because of how smart some researchers appear, lots of people get intimidated, and they look to others for answers without developing their own ideas. I have often worked with scholars desperately looking for answers outside themselves without first developing their own thinking.  Basically, one of the key skills of a researcher is to develop your own theories—your own ideas about how the world works.  Too many scholars struggle because they don’t trust or develop their own ideas or sense of critical judgment.

Key to the work of the researcher is imagination and story-telling.  The researcher gathers data from the world, of course, but the researcher also tries to create a coherent story of how something in the world works. To be a scholar or researcher requires being able to address an unknown, make hypotheses about possible explanations, and then look for evidence that might support or counter those hypotheses. Some research is more exploratory and descriptive—Grounded Theory, for example—but even then the goal is to develop explanations and stories—to take pieces of evidence and start to move towards a coherent story.  The researcher uses imagination to help build an explanatory and descriptive  story of the world.

An additional role for imagination, for critical judgment, and also for natural analysis (see my previous posts), is the process of developing general questions and turning them into questions that could drive actual research projects. In this post, I develop an example that shows the repeated role of imagination, the rapid proliferation of different questions that can overwhelm some researchers, and the need to make choices about which possibilities to pursue.

The example I have chosen came from the political news, and it’s not in an area that I have explicitly studied, so I lack the sophistication that a researcher in politics would have. That lack of sophistication, however, is useful when speaking about how someone might develop a research project or question from a starting place of relative naiveté. The post below tries to reflect both my initial thinking and the extra details that cropped up as I wrote about it. (Writing is a useful tool for developing ideas—I learn as I write. But a discussion of that dynamic is outside the scope of this post.)

The question that sparked this

On the morning of January 6, I noticed that in the Georgia special election, Warnock’s win was called early, but Ossoff’s was not. My first reaction was: “Who would vote for Warnock but not Ossoff? Why wouldn’t they both get the same votes? Who would split their votes between R and D?” My strong assumption was that, given the context—with control of the US Senate hanging in the balance—everyone who wanted Warnock would naturally want Ossoff. 

Beginning the process of exploration and analysis

Just starting to think through that, however, forced me to shift: as soon as I seriously started thinking about large groups of people, I had to abandon the silly idea that all people will behave similarly.  It’s pretty safe to assume that if you have a group of millions, there will be significant variety among them. And it also occurred to me that the discrepancy in votes might not be because of the behavior of voters, but due to some other factor.

My question became: What is the explanation for the difference in votes? I have not explicitly studied politics or voting behavior, so what follows is just my naive attempt to think through the issues at hand—what are possibilities? What are different dimensions of the issue?

Proliferating questions

I started using my imagination: what are possible causes of the difference? One potential cause was that people might split the ticket—they might vote for one D and one R—but this seemed so unlikely to me that I wondered about whether there might be other explanations. Were there people who voted for only one candidate but not the other? Were there ballots that were damaged in some way so that only one vote was readable?

These are pretty much first-level hypotheses/analyses, in which I try to imagine different potential causes.  This is where research really starts: it starts with a question about how things work, and some ideas about how they might work.

I wondered whether there were factors that might explain the vote differences while allowing me to retain my “everyone who votes for one D (or R) will also vote for the other D (or R)” hypothesis. As mentioned above, I hypothesized that some ballots would have been damaged so that only one vote tallied, and as I write about it now, I realize a second, related possibility: improperly completed ballots (i.e., with the vote entered properly for one candidate, but not for the other).

Imagining possibilities; proliferation of possibilities

Notice the crucial role of imagination, how it aids me as a researcher by generating hypotheses that could be studied, but also adds complexity. I started with a question about something I saw in the world (Warnock getting more votes than Ossoff) that contradicted what I would have assumed (people would vote for both candidates of the same party). And then, I started making stuff up.  And as I started making stuff up, the complexity of the question grew.

To have a coherent story, I would need an explanation for why voters in this election would not split their ticket, even though split-ticket voting is not historically all that unusual. My general assumption along those lines is that voting has become so polarized in America that split-ticket voting is far less common than previously, and that sticking with one’s party would feel especially crucial in this election, where control of the senate hung in the balance.

To recap so far:

  • 1. I saw the discrepancy in votes and wondered why 
  • 2. I expected no split-ticket voting in this election (despite the fact that split-ticket voting was not rare in the past) because of
    • increased polarization
    • the stakes of this specific election
  • 3. I sought explanations to allow me to retain the no-split-ticket hypothesis
    • damaged ballots
    • improperly completed ballots

Each individual line in that little list above potentially leads to questions and research angles—information that would help me answer my question or understand the situation better. I could look for information on:

  • 1. split-ticket voting over time (historical data)
  • 2. potential causes of non-split-ticket voting (generation of explanatory hypotheses for voter behavior)
    • polarization
    • political context (in this case because control of the senate hangs in the balance)
  • 3. potential causes that ballots might not be counted correctly (generation of explanatory hypotheses for non-voter causes)
    • damage to ballot
    • improper completion of ballot

Already there are several different issues that can usefully contribute to the examination of the question at hand, but I’m really just getting started because I haven’t answered any question or found any reason to accept or reject any of the stories I have imagined so far.  And, it should also be noted that while I have listed potential causes for split-ticket voting or mis-counted ballots, those are hardly exhaustive lists—it’s entirely possible that there are factors that I have not considered. In the lists above, I could add “other” lines to emphasize the likelihood that the list is not complete.

Conclusion (Part 1)

This discussion will be continued in a following post. I’m going to break here to keep this from getting too long, and also to reiterate my main aims for this post and the one that follows: to offer an example of how imagination plays a role in the development of research questions. In particular, it emphasizes the ability to look at some event in the world, to imagine a variety of different stories that explain the situation, and then to try to explore those different possibilities in order to judge which might be most likely.

To briefly recap: I saw a question of interest (why does Warnock have more votes than Ossoff?), sparked by an assumption (that they would have the same number of votes), and I started imagining possible explanations for the observed discrepancy, splitting those explanations into two groups: those that would allow me to retain my original assumption (e.g., ballot-reading errors caused the discrepancy, not split-ticket voting) and those that would force me to reject my original idea.

In the next post, I will continue my illustration of this process, showing yet more use of imagination and more proliferation of complexity.

Outlines in the writing process, part 1

Outlines are extremely useful for a writer.  But they are a limited tool.  

Recently, I got email from a philosopher with whom I’m working, which said, approximately (I’ve paraphrased a good deal): 

I’m having a hard time writing due to lack of formal organization of the theory and how the writing should reflect it, especially since recent changes in my plans, so I’m reworking my outline! Just started this today and it’s already taken me from frustrated to optimistic and excited about engaging these ideas. . . . My eventual goal is to establish a more detailed ToC before tackling the main content so that I can write with greater ease and efficiency instead of anxiously winging it.

What this writer expressed here reflects a general pattern that I have seen in other writers, and personally experienced, many times. It indicates the advantages of outlining—clarity of concepts and how to present them—and also hints at some of the problems: redoing an outline means changing plans that you laid earlier. In this post, I’m going to discuss outlines and the benefits and dangers of working with them.

Outlines are good

Outlines are excellent for trying to get a vision of the whole project, and having a vision of the whole project is really valuable for a writer: the better your sense of purpose for the whole, the easier it is to see the purpose of each part. And the easier it is to see the purpose of each part, the easier it is to write it effectively. If you see the large scale, then you can see how the pieces work together.  Without that large vision, it’s hard to write individual pieces that mesh with and support the rest of the work.  

Outlining is such a good tool for exploring that larger vision because it can be done so easily: it only takes a few minutes to write down a sketchy outline of the main sections of a work.  A sketchy outline can be written and rewritten many times in the course of 15 minutes.  Admittedly, you can’t get into lots of detail in an outline that you rewrite several times in 15 minutes, but that’s just as well, in a way, because an outline’s help in clarifying the larger vision and the flow of ideas is possibly its most valuable aspect.

Outlines become multi-level hierarchies

In planning any large written work (at least of non-fiction), there is a pretty clear hierarchy of at least two levels that governs the work: there are the chapter divisions, and each chapter itself has some internal divisions (and the internal divisions might have internal divisions).  An overall outline of a work, therefore, can be described with a detailed two-level outline: a list of chapters, each with its own list of internal sections.

Having a detailed outline like this is useful in that it gives a sense of the scope of overall work, and a road map to follow. However, while this much detail is good for a table of contents, it may not necessarily be good for a writer in the process of writing because I think the detail can be distracting, especially in the early drafts. If you’re trying to get an overall sense of some project, it’s easier if there aren’t too many parts to keep in mind.

Do one-level outlines

Instead of making a full, multi-level outline, I like to think in terms of working on multiple one-level outlines, each suited to and created for the piece of the project on which you’re working at the moment. You do a high-level main outline of the whole work, showing division into chapters and giving a clear sense of the work as a whole and how the chapters relate to each other. At a different time, you work on the outline for a single chapter, in which you look at the purpose of the chapter and think about how each part contributes to that purpose (which was earlier defined, of course, by the overall outline that identified the chapter’s place in the larger work).  Then, if you’re working on a major section of the chapter, you can do a one-level outline of the section to see its purpose and the main parts of the section.

This process of making different one-level outlines will produce a multi-level outline—as you nest each one-level outline, you generate a multi-level hierarchy. But it is psychologically different because your focus is generally turned towards the main purpose of the work (or the chapter, or section of the chapter) rather than to trying to manage all the details of the large work at once. Making one-level outlines, there are fewer distracting details, allowing greater focus on the sense of purpose for the main point of each section. Each new one-level outline is just a few pieces, which means they can all be kept in your head (short-term memory is commonly considered to hold about 5 to 7 items).

As you work with each one-level outline, you’re continually focusing on the main point (for either the whole work, or the part of the work), and the main sense of purpose, which should drive the work. This can help maintain motivation: when you get drowned in detail, in addition to the danger of being overwhelmed, there is the danger that the larger motivation is lost. Scholars who think of their work as too narrowly focused, or as too small/limited, often start doubting the value of their work, while those who see the larger purpose that motivated the work see value in it, even if it is highly specific in some way. This sense of motivation is true at all levels of detail: it’s motivating to see the sense of purpose of each piece of writing, so it’s valuable to work in a way so that every piece of writing is given purpose by its larger context.

Are detailed, multi-level outlines ever useful?

In this post so far, I have focused on how the attempt to produce a detailed outline can hinder writing, both by demanding an investment of energy, and by getting a writer bogged down in detail before the writing has even been done. (Well, that’s not quite fair: an outline is a form of writing, but it’s not a book or an essay, and for someone who is planning to write an essay or longer piece of narrative writing, an outline is not the goal.)  In order to start writing, you need a sense of direction, a few landmarks along the way, and willingness to start, but you don’t need a complete, detailed outline of chapter X unless you’re working on chapter X (and even then, a clear and strong sense of purpose is more important than a good detailed outline).

As a work develops and matures, a detailed outline can be very useful as a reflection of the current state of the text or a plan for revision. It’s an excellent tool for when you’re trying to review and revise a completed, or nearly completed whole.  It’s for later drafts and later in the process, when the issue is keeping myriad details in order, rather than early in the process, when the task is to get the big, important ideas into order.

Outlines and confidence

Early in the process a detailed outline can be a tool of avoidance and an expression of lack of confidence. If you’re not feeling sure of where you’re going, and you’re not feeling confident in your ability as a writer, an outline can feel like a great way of proceeding (“I’ll know where I’m going”), which it is until the outline becomes detailed enough to bog down the large-scale thinking.

Outlines don’t guarantee confidence, however.  An outline can provide a sense of direction and confidence, but the best, most detailed outline in the world won’t prevent self-doubt from creeping in.  Sometimes a detailed outline can cause doubt when a new insight suggests a different approach (and therefore a different structure/outline for the writing).  The more details an outline includes, the greater the chance for new insights to suggest an alternative—and, as long as you can learn, you’ll get new insights as you write. A sparse one-level outline, by contrast, offers space for improvisation and revision of details while retaining focus on the big issues and main arc of the narrative.  


Outlines can be helpful, but they can also provide a distraction.  My recommendation for writers near the beginning of writing—especially those who have not yet written a complete draft—is to stick to writing one-level outlines of parts of the work, allowing focus on how a few parts relate to a larger whole.  Don’t try to capture all the details in an early outline; do use a simple outline to help keep focused on the main purpose of your writing.

I had planned only a single post on outlines, but as I wrote, I kept finding more that I wanted to say, so I’m going to follow this with a second post that discusses outlines further: it discusses some of the limits of outlines (which are linear and hierarchical, unlike the ideas a writer tries to express) and how to try to write about non-linear ideas.

For sufferers of imposter syndrome: Trust your natural analytical abilities

Last spring, I wrote a series of posts about analysis. In this post, I take a different approach to the same ideas.  Two threads contributed to this essay: one sprang from my series of posts on writer’s block, and the other from a conversation with a professor whose students weren’t enthusiastic about analyzing a text. This post focuses on the question of analysis with respect to writer’s block caused by self-doubt.

In my series on dealing with writing blocks, I most recently wrote a post related to the anxiety-causing doubt about having sufficient intelligence (a primary aspect of what is called “impostor syndrome”).  The basic argument there is that scholars should focus on using and developing the skills that got them where they are, rather than worrying about whether they have enough innate ability.

While I was working on that post, I had a conversation with a professor who felt uncomfortable explaining to her students the value of analyzing a text, and that drove me down a tangent of thinking about analysis. It seemed to me that for both the intimidated scholar and the unenthusiastic student, the general problem is the same: the task seems either intimidating or unimportant because they think what is needed is something special and wildly unusual, rather than commonplace and everyday. For both, part of the problem lies in the language more than in the actual difficulty or value of the task, and some doubts can be eliminated with an appropriate perspective on the nature of analysis. To say you're going to "analyze" something gives an intimidating appearance of formality to what is, in fact, a basic skill. If I ask you to "analyze" something, and you're not entirely sure what "analyze" means, then, naturally, you'll have some doubt about whether you can do it and whether it's worth it to try. Understanding analysis makes it easier to see its value and to believe you can do it.

What is analysis?

Analysis is, at its heart, a basic, everyday ability possessed by all humans. It is something we all do automatically.  Of course, “analysis” is also something done by highly educated, highly specialized experts using complex and abstruse systems.  The word “analysis” covers a lot of territory.

Basically, "analysis" is examination to understand something better, particularly characterized by distinguishing different issues, aspects, contexts, or perspectives relevant to some main idea. (For example, psychoanalysis identifies different symptoms and causal factors in a patient; DNA analysis identifies different genes within a DNA strand; chemical analysis identifies different ingredients in a substance.)

Etymologically, “analysis” means “separate” or “unloose;” it can be viewed as a process of intellectually breaking larger wholes into component parts.   Such thinking is something humans naturally do all the time. Our visual system separates the colors, detects edges, and otherwise divides our visual input into meaningful groups. Our sense of smell (floral vs. fetid, etc.), taste (sweet vs. sour, etc.), touch (smooth vs. rough, etc.), and hearing (high pitch vs. low pitch, etc.) all discriminate. Our experiences and education teach us to discriminate in countless ways to guide us through the world. We “separate” the world into different categories, a process reflected in language, with different words for different aspects of the world and our experiences in it. In short, we all do analysis all the time. 

People analyze for decision making.

Analysis is a basic aspect of learning about the world and decision making. A child eating dinner analyzes, separating things they like from those they don’t. That child might “analyze” a meal, physically separating foods they like from those they don’t on their plate. They might analyze a specific food, distinguishing flavor from texture: “I don’t like okra because it’s slimy. It tastes ok, but it’s still gross!” We wouldn’t expect a child to offer a sophisticated analysis, but they do analyze in a meaningful way.

Decisions rely on basic analysis. If you’re trying to decide what movie or show to watch, you might consider the genre (drama, comedy, action, etc.), run time (do I want to watch for 45 minutes or 90 or 180, etc.?), actors (who do you like or dislike?), director/producer (did you like their other work?), and more.

If you’re trying to decide where to eat dinner, you might consider cost, atmosphere, quality of food, quality of service, etc. If you’re trying to buy a car, you consider cost, gas mileage, comfort, room, power, handling, etc.

We naturally analyze to understand better: we look at the different aspects of the issue in question, trying to get a better understanding of the issue at hand.

Analysis is a skill that can be developed

Analysis is also a task that we can learn to do better. It is a skill that can be developed, improved, and refined. The child's analysis of okra imagined earlier is a simple analogue to the gourmet's refined critique of a meal based on a trained and discriminating palate. The difference between the two is largely a matter of experience: the gourmet has a larger vocabulary and ability to make finer distinctions than the child largely because the gourmet has eaten more different foods and given a lot of thought and interest to foods. The child eating okra for the first time has limited context in which to judge the experience. The gourmet who has eaten okra many times, on the other hand, has extensive experience for making comparisons: one okra dish is overcooked, another undercooked, one over-spiced, another under-spiced, etc. 

A scholar beginning study of some specific subject may fail to notice issues that they would notice with more experience.  If you’re performing your first close analysis of a [Dickens/Melville/DeLillo/etc.] novel using [psychoanalytic/Marxist/queer] critical theory, you may not notice the same issues as if you had previously analyzed other works by the same author or using the same theory.  These differences in what you notice might be entirely caused by lack of experience rather than any lack of innate ability.

Imagine a pair of identical twins. One takes a job in a wine bar, and the other takes a job in a bookstore. Their innate abilities are presumably identical, but the one working in a wine bar learns to distinguish different flavors and aromas, while the one working in a book store learns about marketing books and issues that affect the marketing of books.  If the two are asked to taste (and analyze) a wine, the one will provide a detailed, complex assessment, while the other will offer a much more simplistic analysis. And if the two are asked to read a book, the one might just respond to the story, while the other would provide a more sophisticated analysis that includes not only the story itself, but the book design, and issues of context in the book market. Each sibling might be surprised at the detail noticed (or not noticed) by the other, but those differences would be entirely explained in terms of experience, not ability.

The scholar doubting their own ability needs to trust that their own abilities will grow with practice.

Specialized analyses

In academic (and professional) settings, analysis becomes formalized because the scholar or professional needs to be able to explain their decisions. The formality involved in academic settings makes the process appear unfamiliar and intimidating, but, in fact, much of the formal detail of academic analyses is the product of persistent, careful attention rather than any special innate ability of discernment.  Simply put, if you study something, you learn more about it.   The gourmet is able to make sophisticated culinary judgements in part as a result of having eaten many different foods and many of those foods many times. Someone who has tasted 100 different wines and carefully attended to the characteristics of wines and who has cared enough to learn the language of wines will produce a more detailed analysis of a wine than someone who has not—the difference has little to do with any innate ability, and a great deal to do with the time invested. Complex scholarly analyses arise out of careful attention to detail more than out of any innate brilliance in a scholar.

For those doubting their intelligence, it’s important to remember what’s at stake when dealing with some complicated analysis: it’s just a more complex approach to doing something that everyone does.  Inability to use one system of analysis does not preclude using other systems of analysis to good effect. Not every scholar will be able to use every specialized analytical system, but a careful and attentive scholar will pretty naturally develop increasingly sophisticated analyses on subjects of interest.


Analysis is something that we do naturally.  It’s at the heart of what academia does, and although academic analyses are often highly formalized, the basic mechanics are still the natural process of distinguishing differences. For those who worry that they’re not smart enough, it’s important to remember that although academic analyses can become complex, they do not necessarily demand more “intelligence” than other analyses, but rather more attention to detail.  

It would be foolish and naive to ignore the reality of intellectual differences: not everyone has the same perceptual, intellectual, and imaginative abilities.  Most of us are not going to get groundbreaking insights on a par with Einstein’s development of relativity, but that doesn’t mean that we can’t do good, important work. Indeed, the vast majority of published scholarship doesn’t include groundbreaking insights. The vast majority of scholarship, however, does make a positive contribution. If you are doubting your ability, it’s ok to admit that you might not be an Einstein, but don’t forget or make light of the abilities that you do have.  If you are in graduate school or have already completed an advanced degree, trust the abilities that you do have and look to build them through careful work.

The Basics of Logical Analysis 3: Concluding

I wanted to conclude the line of discussion I was following in my previous posts, with an eye toward the experience a researcher might have in beginning to define a new project, particularly in areas where the researcher has not done much previous research.  I also wanted to try to make my examples a little more detailed and academic in terms of focus. I'm still going to be working with an example from an area where I have little experience because it's close to one of the concerns of a writer with whom I'm working.

Down the Rabbit Hole 4: Fractals

The previous post was talking about “going down the rabbit hole” for the way that a question can seem initially simple and small, but takes on detail and scope as it is examined more closely. Another parallel would be fractals, which are patterns/images derived from mathematical operations that are recursively defined in such a way that as you magnify the image, new detail continuously emerges. The Mandelbrot Set is one of the most famous of the fractals. 
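The Mandelbrot Set illustrates the point compactly: a trivially simple rule, iterated, produces unbounded detail. As a minimal sketch (the function name and iteration cap here are my own illustrative choices, not any standard API), a point c belongs to the set when iterating z → z² + c never escapes:

```python
# Illustrative sketch: membership in the Mandelbrot set.
# Iterate z -> z**2 + c; if |z| ever exceeds 2, the point escapes
# and is outside the set. A simple rule, endlessly detailed boundary.
def in_mandelbrot(c, max_iter=100):
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # escaped: c is not in the set
            return False
    return True  # stayed bounded (within our iteration budget)

print(in_mandelbrot(0))    # 0 stays at 0 forever: in the set
print(in_mandelbrot(1))    # 1 -> 2 -> 5 -> ...: escapes
```

The analogy to research is that zooming in (testing a finer grid of values of c near the boundary) never exhausts the detail: every magnification reveals new structure, just as every closer look at a research question reveals new sub-questions.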

Research shares something of this characteristic. It may not be infinitely recursive (though some have argued that it is), but generally, if you examine any issue closely, it will lead to more questions.  This is due to the basic nature of analysis: if we analyze things into separate parts/aspects/issues, each of those separate parts can itself be analyzed into its own constituent parts.  Jorge Luis Borges wrote an essay titled "Avatars of the Tortoise," in which he argues that infinite regressions "corrupt" reasoning, giving examples such as definition: to define a word/concept, it is necessary to use other words, and each of those other words then needs to be defined, which requires still other words, which in turn require their own definitions, and so on. I'm not sure that the pattern is infinite (there are, after all, only a finite number of words, so for definition at least the regression can't be infinite), but the multiplication of details can quickly become overwhelming. 

The Nobel Prize-winning psychologist and economist Herbert Simon, who studied decision-making, coined the term "satisficing" to describe how some decisions must be made without a full logical analysis because such analyses take too long and become too detailed.

As my earlier examples of reviewing a restaurant or movie showed, it’s pretty natural to see different aspects in things: the restaurant has food and service and ambiance; the service has courtesy and competence; courtesy has all the different things that different people said and did. It may be simple to say whether you liked the restaurant, but to explain in detail all the different factors that contributed to that decision is another matter altogether.

Fractal: The Barnsley Fern. Each leaf, if expanded, shows structure and fine detail similar to the larger frond. (Image by DSP-user / CC BY-SA)

A More Focused Example

So far, I have been giving pretty general examples; now let's try to get more focused.

Let’s imagine a hypothetical student, studying business management.  And let’s imagine that this student has what we can call “The Fruit Theory of Management,” in which they assume that giving employees fruit improves performance. (I was going to call it “Apple Theory” but didn’t want this to be confused for a reference to the big corporation.)

The Fruit Theory

On its face, the fruit theory of management is ridiculous, but since I'm talking about a general structure of research, the precise theory in question is not so important (as will hopefully become obvious in a moment). Instead of "giving employees fruit" we could use "giving employees training in XYZ," or, more generally, "instituting policies ABC."  "Giving fruit" can stand in for any possible intervention. And instead of "employees," we could substitute almost any group—students, parents, plumbers, etc.—and in each of these cases we could either find a suitable measure of performance, or we could replace "performance" with some other construct to measure (e.g., happiness, health, etc.).  

We can even generalize this to any basic causal pattern: "giving fruit leads to better performance" is a specific example of the general pattern "X causes Y." Most research is concerned with causal relationships in some way or another, so although I'm going to focus on fruit theory, the same pattern of reasoning applies far more broadly.

Studying Fruit Theory

So, we have our business management student who wants to research fruit theory. Generally speaking, a starting point for fruit theory would be to define the theory.

So the student tries to write down a definition (or speaks a definition in conversation with someone). At this point, the process of analysis inevitably has already begun: the words used can themselves be examined individually.  So, if the theory is “giving fruit leads to better performance,” there are elements that can be defined individually. 

For starters, we can ask "what is fruit?"  In everyday conversation, we know what a fruit is and don't need a definition. But if we're talking about developing research and examining causal relationships, we want to define things more closely and formally. (Research needs formality and detail so that others can check the research.)  For example, fruit theory might call for fresh, ripe, worm-free fruit that people would enjoy eating (a definition that is not identical with a more general understanding of fruit that includes unripe or wormy or rotten fruit). That might lead us to a whole set of questions of how to identify fruit that people would enjoy eating, which could lead to more general questions of what it means for people to enjoy eating. (Or maybe the real issue is that people enjoy receiving fruit as gifts—that would lead to a different definition of what "fruit" is.)

To study fruit theory, we also need to define what counts as "giving" and what counts as "better performance."  As for "giving," there is some question of the specific details of how the transfer is made and whether any conditions are placed on that transaction, including any potentially hidden costs of the transaction. But defining giving is relatively simple compared to the question of "better performance." Measuring performance raises a huge array of questions: Whose performance? Are we measuring the performance of the organization as a whole? Or of individuals in it? What kind of performance? What dimensions of performance are we measuring (speed? accuracy? gross sales? net sales? etc.) and over what time periods? Are we measuring cash flow of the business over a month? Or the employee sick days taken over a year? Or are we measuring profitability over a decade? There are any number of different ways to think about the general concept of performance.

To develop research, we might also need to specify further the causal mechanism by which fruit theory works. Does giving fruit work because fruit makes people healthier, and therefore better able to work hard (as the old saying goes “an apple a day keeps the doctor away”)? Is there a physiological causality? Is that physiological causal path one that gives people more energy? Or one that improves their strength? Or one that boosts their mood?  Or maybe the causality is not physiological but psychological: giving employees gifts makes them feel appreciated and they want to work harder as a result?

Answers lead to new questions

Whenever we make a choice of where to focus attention, we can find new questions to pursue. We may start pursuing a question of business, as in fruit theory, but that question might lead into other fields of study.  If we posit a physiological cause for fruit leading to better performance of employees, then we need to study physiology. That study might lead in a variety of directions: maybe fruit theory works because fruit improves health, reducing sick-time lost—that would lead to study of immunology: how and in what ways do apples improve immune response? Or maybe fruit theory works because of some other physiological effect: strength, endurance, mood. Since different foods and substances can impact strength, endurance, and mood, maybe fruit has such effects?  If one thinks that fruit has a physiological effect on mood, one might then be led into questions of which specific biological pathways lead to mood improvement, and perhaps in studying that research, you see that other researchers have identified different kinds of mood improvement, and perhaps debate the ways in which physiology affects mood.

New answers pretty much always suggest new questions.  

Preventing Over-analysis

You can take analysis too far. If you constantly analyze everything, you end up with a great mass of questions and no answers.  It can lead to getting swamped in doubt.  There is no rule for this; at some point it is simply necessary to say "I'm satisfied with my answer to this question."  Such statements close off one potential avenue of study to allow focus on another, and set limits to what you need to study—limits that are necessary for the practical reason that it's good to finish a project even if that project is imperfect.

If you say “I’m satisfied that the reason Fruit Theory works is because fruit makes people healthier,” you don’t need to pursue questions of whether and how and how much fruit promotes health, and you can go on to focus on how improved health helps a business.  Or if you say, “I’m satisfied that fruit theory works,” you can go on to study details of implementing fruit theory.  Of course, it’s good to have reasons, and good to be able to explain those reasons: if you’re satisfied that fruit theory works, it’s useful to be able to give evidence and reasoning. In academia, that evidence often comes in the form of other research literature. If you can cite five articles from reputable sources that all say “fruit theory works,” then you can go on to your research in implementation without getting embroiled in any debate about whether fruit theory works—even if the five articles you cite are not yet accepted by all members of the scientific community.


Analysis itself isn’t really that hard in the small-scale—we do it automatically to some extent. But it is something that grows increasingly difficult as we invest more energy into it: the more detail we add to our analysis, the more there is an opportunity to analyze further, which can lead to paralysis or to getting swamped. It is something that wants care; it wants attention to detail. 

The basics of logical analysis 2: Down the rabbit hole

Continuing my discussion of analysis from my previous posts, I look at how analysis can lead to new questions and new perspectives. Just as Alice ducked into the small rabbit hole and found an entire world, so too can stepping into one small question open up a whole world of new questions and ideas.

If you look at things right and apply a bit of imagination, analysis quickly leads to new questions.  Even something that looks small and simple will open up to a vast array of interesting and difficult questions. 

The multiplication of questions that arises from analysis can be good or bad. New questions can be good because they can lead to all sorts of potentially interesting research. But having too many questions can be bad, both because it can interfere with focusing on one project, and because it leads to complexity that can be intimidating. Learning to deal with the expanding complexity that appears with close study is a valuable skill in any intelligence-based endeavor—whether scholar or professional, decisions must be made and action taken, and falling down a rabbit hole of analysis and exploration will sometimes interfere with those decisions and actions.

This post follows up on my previous post, in which I argued that we analyze automatically and that the work of a researcher includes making our analyses explicit so that we and others can check them.

In this post, in order to show the potential expansion of questions, I'll look at a couple of examples in somewhat greater detail. While I won't approach the level of detail that might be expected in a scholarly work meant for experts in a specific field—I want my examples to make sense to people who are not experts, and I'm not writing about fields in which I might reasonably be called an expert—I hope to at least show how the complexity that characterizes most academic work arises as a natural part of the kind of analysis that we all do automatically.

Looking more closely: Detail appears with new perspectives

In the previous post, I used the example of distinguishing the stem, seeds, skin, and flesh of an apple as a basic analysis (separation into parts), but it was quite simplistic. Now I want to examine how to get more detail in an analysis of this apple.

For starters, we can often see more detail simply by looking more closely (literally): In my previous post, I separated an apple into skin, flesh, seeds, core and stem.  But we could look at each of those in greater detail: the seed, for example, has a dark brown skin that covers it and a white interior.  With a microscope, the seed (and all the rest of the apple) can be seen to be made up of cells.  And with a strong enough microscope, we can see the internal parts of the cells (e.g., mitochondria, nucleus), or even parts of the parts (e.g., the nuclear envelope and nucleolus of the cell’s nucleus). This focus on literally seeing smaller and smaller pieces fails at some point (when the pieces are themselves about the same size as the wavelengths of visible light), but in theory this “looking” more closely leads to the realms of chemistry, atomic and molecular physics, and ultimately to quantum mechanics. Now we don’t necessarily need to know quantum mechanics or even cellular biology to study apples—you don’t necessarily visit all of Wonderland—but those paths are there and can be followed.

In this apple example, each new closer visual focus—each new perspective—revealed further detail that we naturally analyzed as part of what we saw.  But division into physical components is only one avenue of analysis, and others also lead down expansive and detailed courses of study.

So Many Things to See!

We can look at different kinds of apples in a number of different ways. (Not to go all meta here, but we can indeed separate—analyze—distinct ways in which we can analyze apples.)

At the most obvious, perhaps, we can separate apples according to their variety, as can be seen in markets: there are Granny Smiths, Pippins, etc., so that customers can choose apples according to their varied flavors and characters.  Some people like one variety and not another.  These distinctions are often made on the basis of identifying separate characteristics of apples (another analysis): "I like the flavor and smell, but it's kind of mealy and dry;" or "It's got crisp flesh and strong flavor; it's not too sweet." Flavor, texture, appearance (color, shape, etc.), and condition (e.g., ripe, overripe) are all distinct criteria that a shopper might consider with respect to an apple.  These aren't exactly the kind of thing that would be the subject of academic study, but they could certainly lead to more academic questions.

The question of apple variety, for example, could be seen through the lens of biology. There are the questions of which genetic markers distinguish varieties and the ways in which those genetic markers tell us of the relationships between different types of apples and their heritages.  The question of heritage brings up another aspect of apples that could be a study for a biologist: How did a given strain develop? There are wild apples, which developed without human intervention; heirlooms, which develop through selective breeding; and hybrids, which grow from planned crossbreeding.  Combining these questions of genetics and heritage might lead a scholar to study the migration of a specific gene, for example to see if GMO commercial apple farms are spreading their modified genes to wild populations.

Another characteristic of an apple that a shopper might consider at the store is the price.  This is obviously not a matter for biologists, but rather for economists. And an economist might want to look at how apples get priced in different markets.  That might lead to questions of apple distribution and apple growing. Questions of apple growing might lead back to questions of biology, or to other fields of study like agronomy. Questions of distribution might lead to questions of transportation engineering (what’s the best means to transport apples?) or to questions of markets (who are potential producers/distributors/vendors/consumers? what products ‘compete’ with apples?) or questions of government policy (how did the new law affect apple prices?).

So Many Different Perspectives

Different analytical frameworks can be found by imagining different perspectives on apples. In the previous section, I already linked the study of apples into fields like biology and economics and more, but there is wide potential for study of apples in many areas. 

Think about university departments where apples might get studied. Biology, economics, and agronomy are three already suggested. But people in literature departments might study apples in literature—“The apple in literature: From the bible to the novel”. People in history departments could study the history of apples—“Apples on the Silk Road in the 14th century.”  Anthropology: “Apples and the formation of early human agricultural communities.” Ecology/Environmental Science: “Apples and Climate Change.”  

These example titles are a little strained because I have not made a study of apples in these contexts, and therefore I'm throwing out general ideas that are rather simplistic and free of real theoretical considerations.  More complexity would attend a real project.  The student of literature might be looking at different things that apples have symbolized because they want to make a point about changing cultural norms. Or they might look at how apples have been linked to misogynistic representations of women. Such studies, of course, are interested in more than just apples. As we combine an interest in apples with other interests, new potential ideas begin to arise.

Combining Perspectives

Most people have multiple interests and these interests can combine in myriad ways to create a vast array of different questions that could be asked about apples (or any other subject).

Pretty much any scholarly perspective has its own analytical frameworks that structure research. Biology analyzes according to genetic structure, for example. Business analyzes according to market and economic factors. When these frameworks start to overlap—a business analysis using genetic factors, or a genetic analysis driven by specific economic factors—multiple points of intersection appear. Each genetic structure (each type of apple) can be examined with respect to a variety of different economic factors (e.g., flavor, shelf life, durability, appearance). 

This multiplication of different ways of dividing things up (analytically, anyway) can be problematic because it creates a lot of complexity and because it can be confusing/overwhelming, but it can also present opportunities because each new perspective might have some valuable insight to add. 


What seems small and simple to a first glance—a rabbit hole has a small and unassuming entrance—usually opens into a vast and expanding world of questions.

Analysis requires a bit of imagination—imagination to see a whole as composed of parts, imagination to consider different perspectives from which to view an issue, imagination to recognize the different aspects of things.  But a lot of this analysis is pretty automatic: little or no effort is required for the necessary imagination. Still, because it’s so easy and so natural, this process gets discounted—especially if you view “analysis” as something highly specialized that only experts do.

To develop a practice of analysis, all you really need to do is make a point of trying to make your different observations explicit.  Whether you’re judging an apple (taste, appearance, scent, etc.) or a theory (the various assumptions, conclusions, relationships to other theories), chances are good that you’ll pretty automatically respond to different aspects at different times. If you can formalize and record these different observations, you lay the foundation for developing your own analyses.

The Basics of Logical Analysis 1: Seeing Parts of Wholes

In this post, I revisit the general issue of analysis that I discussed in my previous post. There is a measure of overlap because I'm really searching for a way to communicate both the fundamental simplicity of analysis and all its potential complexity.  Maybe the general principle for this post is that analysis is, at its roots, a simple intellectual action—dividing something into different parts—but that this simple action inevitably leads to increasing complexity.

As with so many things in which analysis is involved, this post started out simpler and shorter than it has become. My original plan was to write one short post that just did a better job of explaining the ideas in the previous post. But then, as I thought more closely about it, I found issues that hadn't been discussed in my previous post.  It's now looking like this will be a series of posts—at least two: this one will discuss the big idea of analysis and relatively simple, everyday examples; the next will look at some examples more closely, in hopes that they feel more like academic examples. I suspect that may end up as two or more posts. In a way, this story encapsulates an aspect of analysis in practice that I want to emphasize here: the more you do it, the more complexity you see, and that leads to expanding projects, which must be reined in for purely practical reasons: basically, if you want to finish a project, you have to stop analyzing everything. (And as I write that, I wonder whether I haven't sparked the foundation for a third post: how do you stop analyzing once you've started? It's an idea that I touch on briefly in the second post, but maybe it deserves its own? I'll have to think about that…)

What is “Analysis”

At its root (its etymological foundations), “Analysis” is derived through medieval Latin from the Greek for “unloose” or “take apart.” (In contrast to “synthesis” whose roots lie in the Greek for “put together.”) This sense is generally in line with how the word might get used in a conversation. For example, after [a movie/a TV show/a meal at a restaurant], if one person is talking at length criticizing details of the [movie/etc.], the other might get exasperated and say “Stop picking it apart,” or “stop over-analyzing it.”

It is this basic “picking apart” that concerns me in these posts. It is a basic principle that can manifest informally (as a person might do with a movie/etc.) or as an extremely detailed and formalized system, as with psychoanalysis, statistical analysis, data analysis, or any other field that uses “analysis” in its name.

We Do It Automatically

The kind of analysis that is important in research (and other intellectual work) is something that humans do naturally and automatically—often without even noticing that we’re analyzing.

To apply it in research is to take an automatic, unconscious ability and work to make it conscious and explicit. Splitting things into pieces—into different parts or different aspects—is pretty easy. But making those divisions explicit is hard because of the complexity that tends to develop.

We all automatically split things up into different parts, which is reflected in our languages (including words like “parts,” “pieces,” “components,” “elements,” etc.) and much of our daily lives. We separate the world into all sorts of different categories. We eat food, which includes fruit, vegetables, meat, etc. We work, but have many different kinds of work: homework, housework, yard work, not to mention jobs, which are work. We separate the good from the bad. We divide people up into different groups: family, friends, acquaintances, people we don’t know, etc.

It’s true that many of these divisions are learned, but that doesn’t mean that we don’t naturally make divisions of some sort.

Analysis: Examples

Consider an apple. It is a whole in itself, but we pretty naturally separate it into a few different parts: stem, skin, flesh, core, seeds. Our basic sensory apparatus provides the distinguishing information: stem, seed, and flesh taste different, smell different, look different, and feel different. Our senses are already providing us information about differences in the world, and that information leads to analysis of the apple into its different parts.

Consider a movie. It is a whole in itself, but we can easily divide it in many different ways that are familiar to cinephiles. We can say “The acting was pretty good, but the script was weak.” Or “The cinematography is great, the writing is great, the direction is ok, but the star annoys me, so I had trouble enjoying it.” We might like what we see (“great cinematography!”), but not what we hear (“poorly written dialogue”). We might like one actor and not another. Again, this is analysis in action, although few would think of this kind of thing as analysis unless we really got into a lengthy discussion of the different aspects of a movie, at which point someone might say “Stop analyzing it! You’re ruining it for me!”

Research and Analysis

Research takes this basic ability to distinguish between things and tries to make it explicit and formal. For the researcher, it’s not enough to say that it’s obvious that you have stem, seeds, and flesh, or acting, directing, writing, and cinematography. It’s necessary to begin to formalize.

Formalized analysis is crucial in research because it allows a research community to work together. Researchers who don’t explicitly express their analyses can’t have their research reviewed or trusted by others. The need to share and provide explanations and evidence that can be examined leads to detailed discussions (articles, books, etc.) that can themselves be analyzed (and will be, by other researchers who will look for strengths on which to build and weaknesses to correct).

In practice, research communities develop different analytical frameworks and methods of analysis as a result of the attempt to explain and examine each other’s work. These become increasingly detailed and complex over time, as each successive generation of researchers turns its analytical abilities to the questions of interest. Sometimes entirely new analytical frameworks develop, but these, too, are subject to close examination that leads to complex formal analytical systems.

Psychoanalysis, for example, depends on familiar analytical divisions: the id, ego, and super-ego represent parts of a larger whole. So, too, the conscious and unconscious. Each different pathology is a part of the larger whole of “poor mental health.” And each pathology is itself distinguished by a number of different characteristics that are parts of the pathology. To become a psychoanalyst is to adopt a specific set of analytical frameworks regarding the psychology of individuals and the nature of psychotherapy as well. Other theories of psychology and psychotherapy may not be called “psychoanalysis,” but they, too, adopt their own analytical frameworks.

Mathematical analyses separate the world into symbols that represent distinct parts of the world and the relationships between those parts. Physics, of course, presents the interactions of objects in the world as a set of symbols and mathematical equations. In a business setting, the large-scale system of a factory, for example, might be represented in mathematical equations that separate out the machines that produce goods, the goods that are produced, rates of production, costs of production, necessary workers, etc.


Analysis happens.  If you examine something closely—an object, an interaction, an idea—you will begin to distinguish different aspects or parts of it.  These distinctions are analysis. To move that analysis into an academic or research setting really only requires that you try to make your analyses explicit as you develop them, so that they can be examined for flaws (by you and by others).

Of course, making analyses explicit and then looking at those analyses with an eye for flaws may be a path to good research, but it is not a path to simplicity.

I’m going to close here and in my next post (or posts), I’ll look with greater detail at some examples to show different ways in which things can be analyzed and to discuss the expansion of complexity, which can be both good and bad.

The Basics of Logical Analysis: Making Judgments

A writer recently expressed doubts to me about making judgments, which is a pretty common reservation: there are good reasons that we don’t want to be overly or inappropriately judgmental.  At the same time, however, life is filled with judgments that we have to make, and we want to make them well.

Life is filled with choices and each choice is a judgment. Are you going to get out of bed now, or will you roll over and pull up the covers? Are you going to go out or stay home? What will you wear? How will you prepare to go out? What will you bring with you? What will you eat? Where will you go? etc. etc.

Positions that require experience and/or expertise do so because of the difference between people who can make good judgments and people who make bad ones. You want your doctor/dentist/teacher/lawyer/accountant/auto mechanic/public transit driver/etc. to make good judgments when serving you, for example. You want people making policy, whether for a business, an organization, or a government, to make good judgments. And if you aspire to fill any such role yourself, then you need to be able to make good judgments yourself.

Most judgments are complicated, and that’s why people use analysis. The word “analysis” carries a certain mystery or awe (for many, at least), but “analysis” is something that we all do pretty commonly. At its root, the word “analysis” means “to take apart” (in contrast to “synthesis,” to put together), and at its root, this is what most forms of analysis do: they start to “take apart” the factors that make up a situation where judgment is required.

Consider a really simple example of analysis that most of us have experienced: you go to a store and there are two similar items that might satisfy your basic needs. Let’s say it’s a food item. Most people will perform an informal analysis that might be quite detailed: they compare prices (dimension 1), sizes (dimension 2), ingredients (dimension 3), as well as, perhaps, reputation of producer (dimension 4), aesthetics of packaging (dimension 5).  We could accurately call that comparison a “multi-dimensional analysis,” and it’s one that people do all the time.

This kind of analysis might continue with many of these factors. With respect to ingredients, we might say “I’m glad they use X in this product, but I’m allergic to Y.” And then we’re analyzing. An ingredient list, of course, itemizes the different constituent parts of a product, so it’s already an analysis of the product. But we could do the same with the packaging. Indeed, I said “aesthetics of packaging” above, but that’s only one dimension of an evaluative analysis of packaging: in addition to appearance, we might consider materials (paper vs. plastic, for example, is an aspect of packaging that producers absolutely care about; consumers might not be as concerned) and protection of contents. And protection of contents might itself have different dimensions, for example, preservation of freshness and preservation of form (a cardboard box, for example, will protect the shape of brittle foods—e.g., chips, cookies—better than a plastic bag, but the plastic bag might preserve freshness better). And if we started studying preservation of freshness, we might start to see different dimensions, again carrying out analysis. I have not studied preservation of freshness, but in my informal, off-the-cuff analysis right here, we might consider freshness over weeks, over months, and over years as different dimensions of preservation. We can imagine packaging that is inferior in the short run but superior in the long run. For example, a loaf of bread stored in a paper bag will go stale faster than a loaf stored in a plastic bag, but storing a loaf in plastic can make its crust less crunchy, which for some breads is a bad thing. (This basic analysis stems from an analysis of desirable qualities of bread: I like crunchy crusts and I like bread that is not stale.)
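To see what formalizing this everyday comparison might look like, here is a minimal sketch in Python. The products, dimension scores, and weights are all hypothetical, invented for illustration; the point is only that an informal, multi-dimensional judgment can be written down explicitly, where it can then be examined (and challenged) by others.

```python
# A hypothetical formalization of the grocery-store comparison:
# each product gets a score (0-10) on each dimension named above,
# and weights make explicit how much each dimension matters to this shopper.
products = {
    "Brand A": {"price": 7, "size": 5, "ingredients": 8, "reputation": 6, "packaging": 4},
    "Brand B": {"price": 5, "size": 8, "ingredients": 6, "reputation": 9, "packaging": 7},
}

weights = {"price": 0.35, "size": 0.15, "ingredients": 0.30,
           "reputation": 0.15, "packaging": 0.05}

def overall(scores):
    """Combine the analysis back into a single judgment via a weighted sum."""
    return sum(weights[dim] * score for dim, score in scores.items())

for name, scores in products.items():
    print(name, round(overall(scores), 2))
```

Writing the comparison out this way also exposes the choices hidden in the informal version: which dimensions count, how they are scored, and how much each one matters. Changing the weights can change the verdict, which is exactly the kind of thing explicit analysis lets us see and debate.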

We analyze almost automatically: we see a movie and like something about it (“I liked the star; I liked the cinematography; etc.”), and we have begun the process of analysis. We go to a restaurant and we analyze: “I loved the food, and the service was great, too!” However, in situations where there is more formality—in educational settings, or when writing and imagining the response of critics—we often don’t think to apply the same basic skills that we would apply automatically in our daily life.

Reasons People Hesitate to Analyze

1. We may not feel qualified. As I’ve described it, analysis is a really basic process that we all do, pretty much all the time. But “analysis” is a term typically associated with high levels of expertise. Things like statistical analysis or psychoanalysis or systems analysis are all tasks for experts. If you doubt yourself—as many do (cf. imposter syndrome)—then it is easy to put the tasks of experts outside your own set of abilities. But, in fact, these formal systems of analysis are no more than extensions of the basic analysis I have described. The formal details are an outgrowth of repeated attempts to use analysis productively and the recognition that formal systems of analysis are useful. But those formal systems all start with the basic willingness to look at something and respond to the complexity that you see. Psychoanalysis, for example, looks at different components of a person’s psychology: id, ego, and super-ego is one analytical axis; conscious and unconscious is another; identification of distinct life-shaping events is another. Such formal systems of analysis may be detailed and complex, but their use is acquired through practice that starts with trying to identify different issues of significance. To be an expert in analysis requires practicing analysis, and that means practicing analysis while not yet an expert. Our analyses, after all, need not commit us to anything. If we feel that an analysis has not helped us, we are perfectly free to ignore it or redo it as we wish.

2. We may feel it is inappropriate. There are at least two reasons for this (in addition to the fear of being unqualified). First, analysis is often tied to evaluation and negative criticism, which can lead people to avoid it out of a desire to avoid being judgmental. The unfortunate conflation of analysis and negative criticism places analysis in a negative light that it doesn’t deserve. Second, analysis can be overdone: not all analysis is useful. Sometimes analysis can be paralyzing: instead of making a decision, we can get stuck thinking more analysis is necessary. And often analysis will focus our attention on negative aspects that we might not have given much consideration. This is not necessarily bad, but it can be an unnecessary damper on enthusiasm. If you enjoy a movie, for example, does it necessarily help you if you suddenly notice a flaw? Analysis can take attention away from holistic concerns, too. But these “problems” with analysis are not so much inherent in analysis as they are inherent in its misuse. As with drugs or guns, use need not be misuse. There are valuable uses for drugs, for guns, and for analysis. It lies with the practitioner to use analysis with care.

Research and Analysis

Analysis comes naturally in research. Every choice of topic starts with separating a focal topic from the rest of the world. If we study “education,” we’re focusing on one part of the world (and leaving out others); if we study “business,” or “history,” or “biology,” again, we’re choosing to separate one aspect of the world from others. This is not to say that we need to imagine any of these ideas as completely distinct from the rest of the world, but only that, for various reasons, we are separating out one thing we want to focus on from others that we do not wish to consider. (Or we might have a more sophisticated analysis that separates the world into three general classes: the focal issue, closely related issues, and issues of little direct relevance.) Choices like this are the basis of research, so you want to make them well.

If we ask “how does X affect Y?”, a starting place is to literally break out and examine each piece of that sentence: what is X? What is Y? What kinds of effects are we imagining? That is to say, we look at the sentence and separate out different aspects, with each word representing an aspect of the situation in question. The very language that we use reflects our analytical tendencies. Defining the different terms used in research is a fundamental part of analyzing a situation into its component parts.

Suppose, for example, that we are looking at Montessori education’s (X) effects on students (Y). We would naturally want to explain what Montessori education is and who Montessori students are. We would also want to consider what “affects” means in this context, and with a little thought, we can probably find a number of different things that could be relevant to this discussion: maybe Montessori education affects students’ overall success as students, or maybe it affects their emotional health as children, or their ability to make friends, or their long-term success in school, or their success in college, or their success as students of STEM subjects, or their success as students of language arts. Each of these possible effects of an educational system on its students is one of the factors identified by the very informal process of analysis that I have undertaken here.

Or, suppose that we are interested in management practices (X) and business performance (Y). First, we need to look at what we mean by management practices: what counts as a management practice? And if many things do, will we choose to study all of them? Then, separately, we can look at the different dimensions of business performance, starting, perhaps, with profitability, but also including such things as employee morale.

Research Starts with Casual Analysis

Research depends on analysis in many different forms—from finding different aspects of situations to examine to finding different perspectives from which to analyze a situation.  All of these forms essentially spring from the observations that you have as a researcher and your interest in and attention to detail.

In the course of your research, you will probably be motivated to move beyond the initial steps of casual analysis that you would carry out in everyday life—you don’t need to exhaustively list all the different possible characteristics of a movie to decide whether you want to see it (or whether or why you enjoyed it). But don’t be afraid of those first steps: analysis is not something inappropriate or reserved for some special class of analyst. It is one of the foundations of critical thinking, and if you want to come up with original research, your observations of the world, the way that you organize your observations, and the analyses that you come up with are the roots of original research.

So look closely, don’t be afraid to identify specific details, and then see what you can learn from those observations.  At its root, analysis is something we all do. Research is just a move to try to formalize this common practice.