Thanks to Daniel Shea for inviting me! In the podcast we discuss my book, Literature Review and Research Design, especially focusing on the scholar’s interactions with other scholars in their field.
A few days ago, I was asked for advice on taking notes on readings, and I realized I didn’t have any clear advice to offer beyond: take notes! But that’s not very insightful and doesn’t add much to the discussion—the querent already knew that she wanted to take notes and wanted something more specific about how to do it. So I’m going to commit myself to some suggestions in writing (and if you think I’m wrong, feel free to let me know!).
First, however, as is my wont, I will take a step back from the specifics of “how,” to look at the question of “why?”
Why take notes?
Notes help you remember; they help you organize your thoughts; they help you focus. All of these things are themselves contextually dependent. What you want to remember depends on the context. There may be times when you’re trying to do a wide review of literature in a field; at other times you may be interested in summarizing a specific work in some detail; and at yet other times, you may have a more specific focus on some theoretical or methodological issue. There are times when you are reading a work for the first time and are trying to sort out the basic points, and other times when you are re-reading a work from a new perspective. In each case, what you want to get out of the reading, and therefore how to take notes, changes. (I’ll note that there are some times when you might read without “taking notes,” such as if you’re looking for a specific quote to use.)
Contextual efficiency concerns
Taking notes takes time and effort. As with all tasks, we have limited time and effort available, so it’s very important to use available resources efficiently. Taking notes is valuable—so valuable that it’s usually worth dedicating time to it. Taking notes, however, is not so valuable that it should interfere with other, more important tasks. In particular, it’s crucial that time spent taking notes does not take away from time spent writing. Therefore, to achieve efficiency, it’s necessary to suit your note-taking to the context. To illustrate this point, I want to discuss three examples from my recent experience, each raising different concerns.
1. Writing a book review.
Recently, I posted a book review on this blog (Write More, Publish More, Stress Less), and that post basically developed from the notes I took after a first pass through the book led me to want to write a review. I started just by skimming the book and looking more closely at a few parts. I didn’t take any notes then, but I did think that I might write a review. Then I went through the book—again pretty quickly—taking notes to mark the places that I liked best and wanted to discuss in my review. Those notes gave me a skeleton of the details that I would cover in my review. In this case, the notes could almost be viewed as a rough first draft or outline of my review, and, indeed, they got revised into the review, so I no longer have a record of those notes. They have served their purpose and I no longer need them.
2. Preparing for a lecture
A professor preparing for a lecture might take a similar approach to the one I used in writing a review, in which the notes become the outline for a draft of the lecture. The professor would want to focus the notes on their relation to the main points of the course. Rather than simply trying to get the main ideas of the reading, the professor might want to take notes about how the different parts of the reading agree or disagree with points made in other lectures or by other readings. This kind of note-taking is not just reflecting what’s in a text but also analyzing and interpreting that work for the specific context of the course. If, for example, a course is on Victorian Literature, it might still be appropriate to engage with a reading that is about Modern Lit, but that uses an analytical method that could be applied to the Victorian work.
3. Reviewing a body of unfamiliar literature
A scholar reviewing a body of literature in an unfamiliar field would have different note-taking and reading strategies. In the first two examples, I talk about making notes about one specific work to which there is already some level of commitment. The dynamic changes when trying to manage a whole bunch of work. In this case, there is less of a preconceived focus guiding the note taking. There is some preconceived focus—whatever reason led you to choose this body of literature should influence the note taking (and the reading)—but mostly the point is to get a sense of the different arguments and ideas being used, rather than to make the readings serve a specific purpose in the same way that I was focused on writing a review or the professor was preparing for a specific lecture. In this case there is, on the one hand, greater range to what might go into notes about any one reading (because the focus hasn’t been limited by a specific purpose), but also greater need to be concise and highlight the most important parts because of the greater number of different sources on which notes are to be taken. In a general review of material, it would be ideal if your notes could capture all the important issues in any one reading (which takes more time), but at the same time, it would be ideal if you could review a lot of different readings in a short time. Therefore, a balancing act is necessary. My suggestion in this case is to go through a process of review and focus: first, just review all the titles and authors, to get a sense of the general field (reading a title offers a lot of information, at least if the title is well written); second, select some that get a closer review, reading the abstract or introduction, perhaps; finally, only a few are selected for full reading. At each step you can take notes. Even the titles you don’t pursue might get a brief note of what you saw and why you didn’t pursue it.
The same is true at the abstract/intro stage: even the ones you choose to set aside get a brief note. Only the ones that get close reading get more substantial notes. And even with the more substantial notes of the readings you examine most closely, there must be a careful parcelling out of time. After all, even if you were trying to do an unbiased review of material in a field, you still had a purpose that guided that review, and to achieve that purpose you probably need to do more than just do the review.
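The staged triage described above can be sketched schematically. This is only an illustration: the scoring fields, thresholds, and note texts below are invented, and in practice the “scores” are the scholar’s own judgments, not numbers.

```python
# A schematic sketch of the staged review process: titles -> abstracts -> full
# reading. Every reading gets at least a brief note; only the survivors of each
# stage get progressively more substantial notes. The field names and limits
# are hypothetical.

def triage(readings, skim_limit=4, close_limit=2):
    """Filter a list of readings through three stages, noting each one."""
    notes = {}

    # Stage 1: review every title; even readings set aside get a brief note.
    ranked = sorted(readings, key=lambda r: r["title_relevance"], reverse=True)
    for r in ranked[skim_limit:]:
        notes[r["title"]] = "title only: set aside (low apparent relevance)"

    # Stage 2: skim abstracts/introductions of the most promising few.
    skimmed = sorted(ranked[:skim_limit],
                     key=lambda r: r["abstract_relevance"], reverse=True)
    for r in skimmed[close_limit:]:
        notes[r["title"]] = "abstract skimmed: brief note, not pursued"

    # Stage 3: full reading and substantial notes for the final few.
    for r in skimmed[:close_limit]:
        notes[r["title"]] = "read in full: substantial notes"
    return notes
```

The point of the structure is that effort narrows as commitment grows: a dozen titles might cost a minute each, while the two or three survivors of the filter justify real note-taking time.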
Notes on readings can help you learn, but notes also take time away from other tasks. While it is important (indispensable) to read the literature, it’s more important to actually write the papers for which you were doing the reading. Learning from others is necessary, but is no substitute for developing your own vision of how the world works and what is important. Spending too much time with readings takes time away from developing your own ideas.
Suggestions for taking notes
- 1. Be careful of the time you spend! Taking notes takes extra time. Be careful not to fall into the trap of saying “I can’t start my own project until I finish reading and taking notes.”
- 2. Focus on the big picture.
- 3. Skim the whole and maybe look closely at the introduction and conclusion before you start taking notes.
- 4. Be careful not to get sucked into details. Details are fascinating, but they distract from the big picture. Still, a detail can be useful if it helps you contextualize and remember the main message or themes. Also, details take time.
- 5. Note the major theories and scholars who receive the most attention, to get a sense of how the reading fits into the larger discourse.
- 6. Limit your notes; don’t attempt to get everything. Most scholarship is very dense and filled with crucial details (good scholarly authors try to leave out stuff that isn’t important). If you look closely, you can get sucked into any number of different rabbit holes of value: looking at methods, for example, you can examine and explore variations and alternatives, and the discussion of how different methods suit a specific research question is neither trivial nor insignificant.
- 7. Note the greatest strengths.
- 8. Note the greatest weaknesses.
- 9. Write briefly about how it helps your own work.
Notes help improve the quality of your reading, but they can be a trap. They can distract you with details and divert your attention from your purposes. A scholar needs to be able to read and make sense of the literature in their field, and remember what they’ve read. But more importantly, a scholar needs to keep producing and developing their own perspectives and insights. It’s good to read and take notes on your reading, but it’s far more important to be developing your own work. Therefore, treat note-taking as an opportunity to refine your own ideas. Notes are not taken only to help you absorb the ideas of other people, but also to use those ideas to help you with your own work. Consider the purpose for which you are taking notes: why are you taking those notes? How will they help you? If you lose sight of your own purpose and your own projects, notes can be a big distraction and delay your own work. If you take notes with a specific purpose in mind, those notes will be more useful in achieving that purpose.
Back in January, I posted about my thought process in response to a question I had about the Georgia runoff elections. My intention was to illustrate both the crucial role that imagination plays in developing research and ways that imagination combines with analysis to generate myriad hypothetical explanations, each of which could be examined and researched. This post is partly following in those footsteps, but pays more attention to the emotional elements in this process, especially with respect to questions that arise in the process.
As you can see from the title, I’m trying to pack a lot of different ideas in here, but the basic message is that the process of research moves forward by generating a variety of possible avenues of exploration and by choosing one of those avenues. If you recognize that dynamic, you can benefit from it. A crucial part of that approach is the element of confidence needed to make choices in the face of uncertainty. The researcher needs to be able to see a wide array of different possible questions in order to develop a robust argument that can withstand reasonable criticism, but also needs to be able to choose specific limits to each project without any objective logic to determine them. Such limits can be frustrating—they can feel almost arbitrary, or at least arbitrary with respect to purely intellectual issues—but, from a practical perspective, they allow completion. From the perspective of a long-term research practice that wants to produce multiple publications (as expected of professors), these different limits suggest new projects; each acknowledged limit indicates a project that tries to move past that limit.
A “trivial” question
This post was partly triggered by the recent Super Bowl, in which Tom Brady won again. Brady has won the championship an amazing 7 times in 19 seasons, more than 1/3 of the possible titles in those years. According to much of the sports media, Brady is the GOAT (Greatest Of All Time) in football—the best player ever at the game’s most important position. When I first started thinking about that, I wondered where he stood amongst the wider realm of GOATs: which GOATs are the greatest GOATs? Is Tom Brady greater than Babe Ruth? greater than Serena Williams? It’s a question that could be viewed as trivial: how important is it, really, to identify the GOAT? At the same time, it’s the kind of loose question that could spark research by leading to more detailed questions about how to measure greatness across fields. That question depends on having some idea of how to measure greatness within one field. Which sparks more questions: for example, how do we weigh the peak performance of a player vs. the long-term contribution? How does a player who was really, really great for a relatively short time (Sandy Koufax, Penny Hardaway, Kurt Warner) compare to someone who was just very good for a very long time (Warren Spahn, Jason Kidd, Eli Manning)? And more questions: how do you measure peak greatness? How do you measure long-term greatness? Drilling into any of these questions leads to yet more questions. For the researcher, this can be a bonanza of different potential projects. Or at least could be, if the emotional element is in place: if you tell yourself that a question is trivial, you’re not going to work on it. (I’m not going to be doing significant research on sports GOATs because the questions aren’t sufficiently important to me.) Some questions are trivial, but assuming that a question is trivial too soon can mean ignoring potential courses of research.
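The peak-vs-longevity question can be made concrete with a toy model. The players, seasons, and “value” numbers below are entirely invented; the point is only that two reasonable measures can rank the same pair of careers in opposite orders, which is exactly why the question multiplies into further questions.

```python
# Toy illustration: two ways of aggregating hypothetical season-by-season
# "value" scores disagree about which career was "greater."

def peak_value(seasons, window=3):
    """Best consecutive stretch of `window` seasons: rewards a high peak."""
    return max(sum(seasons[i:i + window])
               for i in range(len(seasons) - window + 1))

def career_value(seasons):
    """Simple career total: rewards longevity."""
    return sum(seasons)

# Invented numbers, not real statistics.
meteor = [9, 10, 9, 3]              # brilliant but brief
steady = [6, 6, 6, 6, 6, 6, 6, 6]   # very good for a very long time

# The two measures point in opposite directions:
# peak_value(meteor) = 28 > peak_value(steady) = 18
# career_value(steady) = 48 > career_value(meteor) = 31
```

Deciding how to weight the two measures against each other is itself a research question, which is the dynamic the post describes: each answer opens further questions.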
Partly this post was sparked by feedback a writer received on a paper, particularly the phrases “It might have been useful to further discuss…” and “it would have been great to further explore…” Phrases like this are reflections of the process of discovering additional questions: every time we commit ourselves to a new sentence on the page, we offer a target to criticism (but wait…is that true? Every time I commit to a sentence? Are there exceptions?…). Whether or when to answer such questions is largely a negotiation between the author and the audience, taking into account the specific context. It means different things to a student receiving a grade on a paper (that will not be revised) and an author responding to a revise-and-resubmit.
I titled this section “confidence” because the key factor, in a way, is in having the confidence to make decisions of whether and when to pursue these further questions. At any moment in time, there is a limit to what you can do. And in writing, there is almost always a word count limit, sometimes formally stated, sometimes implicit. Therefore, choices must be made: which avenues do you explore and when? Confidence is a necessary guide: without confidence to make your own decisions, you wander aimlessly in response to the most recent stimulus; with confidence, you pursue your own goals and are not swayed by others telling you that your work is trivial. If a question is important to you—if you’re passionate about seeking an answer—that may lead you to ideas that are important.
When I was in college, the whole idea of sports analytics was still relatively obscure. Sports teams didn’t have entire analytics departments; there were no sports analysis websites; there weren’t sports analytics conferences hosted by prestigious universities. Such questions weren’t viewed as particularly important by most involved in sports, and those who didn’t care about sports quite naturally didn’t view those questions as important. Today, of course, sports analytics are a huge industry, and therefore consequential to the many who are involved in sports (though people who don’t think sports are important probably still think that sports analytics aren’t important).
But, when I was in college, sports analytics was just beginning to burst on the scene. The baseball writer Bill James, for example, who started self-publishing his analytics work a few years before I entered college to reach out to a relatively small group of statistical analysts, was starting to gain popularity. James had had the confidence to pursue his work despite the extensive scorn it generated (especially in the early years). James, his colleagues, and those who followed, built sports analytics into a huge industry simply by pursuing questions they found interesting. James would ask simple things like “is batting average the best way to judge a player?” or, more generally, “how do we identify good players?” And he explored those ideas, exploring and developing different analytical methods, revising and refining or even redefining his theories and techniques. He, and the many others who joined that pursuit, simply kept saying “it would be great to further explore…” Indeed, James’s writing often included statements like “when I have time, I want to do a better analysis of X.”
Choosing to pursue a question takes confidence, particularly if others doubt or ask questions. It’s harder to maintain motivation if someone tells you that your work is worthless or uninteresting, and it’s pretty much guaranteed that if you tell enough people about your work, some will find it worthless and uninteresting. It’s not all that uncommon for scholars (particularly graduate students, I think, though I have no empirical evidence) to start to think that their work is so narrowly focused that it is essentially worthless. Questions are inevitable. The question is what to do with them, and confidence is key.
Practically speaking, the scholar faced with a question can do one of three things: they can ignore it; they can pursue it; or they can get bogged down by it. The third can be a big problem. Pursuing questions can take a lot of time and effort. Ignoring questions can actually be good because it allows continued focus on one specific project.
Putting off questions and building a list of projects
When I speak of “ignoring” a question, I don’t necessarily mean entirely ignoring it, but rather temporarily putting it to the side so that it doesn’t derail or delay a current project. Often such a delay can be explicitly acknowledged in writing: every “it would be interesting to explore…” question can be treated with a sort of promissory note by writing “It would be interesting to explore __X__, but that is outside the scope of this project.” Such a response can deflect reasonable concerns by presenting them as the practical choice of a scholar with limited resources (and all scholars have limited resources, especially time): it’s not that you ignored the problem, but that you made the choice to set that concern aside for a time.
From the long-term perspective of a scholar, each of those deferred questions can serve as the seed for a new project. If your system of evaluating football greatness has trouble comparing value across different positions, that’s a project. If your system has trouble comparing across eras (“How do we compare Unitas to Staubach to Montana to Brady when the rules of the game were different?”), that’s a project.
Imagination is a double-edged sword for a researcher: it offers so many questions for further exploration that paralysis can set in. It takes confidence to choose to pursue questions that others view as unimportant, and to set aside questions that others view as important. Still, all questions provide the potential seeds for future projects. Every limit you put on your current project suggests a future project. For someone considering a career as a researcher, it’s valuable to see that dynamic: the question you ask yourself and the questions of others can be viewed as possible future projects rather than flaws in your current work. Every research project has limits or it never gets finished, so it’s crucial to be able to accept questions as limits to the current project even if those questions are obviously important and relevant. I often say that one of the most important phrases for scholarly writing is “but that’s outside the scope of this project,” but at a deeper level, it’s not just about the phrase but about the perspective that it represents. Projects do have limits; it takes confidence to move forward despite limits and doubts. Building a list of future projects from current questions can help build confidence that you are acting responsibly as a scholar or researcher.
Last spring, I wrote a series of posts about analysis. In this post, I take a different approach to the same ideas. Two threads contributed to this essay: one sprang from my series of posts on writer’s block, and the other from a conversation with a professor whose students weren’t enthusiastic about analyzing a text. This post focuses on the question of analysis with respect to writer’s block caused by self-doubt.
In my series on dealing with writing blocks, I most recently wrote a post related to the anxiety-causing doubt about having sufficient intelligence (a primary aspect of what is called “impostor syndrome”). The basic argument there is that scholars should focus on using and developing the skills that got them where they are, rather than worrying about whether they have enough innate ability.
While I was working on that post, I had a conversation with a professor who felt uncomfortable explaining to her students the value of analyzing a text. That drove me down a tangent of thinking about analysis, and it seemed to me that for both intimidated scholars and unenthusiastic students, the general problem was the same: the task seems either intimidating or unimportant because they think what is needed is something special and wildly unusual, rather than commonplace and everyday. For both the scholar experiencing anxiety due to impostor syndrome and the student doubting the value of analysis, some doubts can be eliminated with an appropriate perspective on the nature of analysis. For both, part of the problem lies in the language more than in the actual difficulty or value of the task. To say you’re going to “analyze” something gives an intimidating appearance of formality to what is, in fact, a basic skill. If I ask you to “analyze” something, and you’re not entirely sure what “analyze” means, then, naturally, you’ll have some doubt about whether you can do it and whether it’s worth it to try. Understanding analysis makes it easier to see its value and to believe you can do it.
What is analysis?
Analysis is, at its heart, a basic, everyday ability possessed by all humans. It is something we all do automatically. Of course, “analysis” is also something done by highly educated, highly specialized experts using complex and abstruse systems. The word “analysis” covers a lot of territory.
Basically, “analysis” is examination to understand something better, particularly characterized by distinguishing different issues, aspects, contexts, or perspectives relevant to some main idea. (For example, psychoanalysis identifies different symptoms and causal factors in a patient; DNA analysis identifies different genes within a DNA strand; chemical analysis identifies the different ingredients in a substance.)
Etymologically, “analysis” means “separate” or “unloose;” it can be viewed as a process of intellectually breaking larger wholes into component parts. Such thinking is something humans naturally do all the time. Our visual system separates the colors, detects edges, and otherwise divides our visual input into meaningful groups. Our sense of smell (floral vs. fetid, etc.), taste (sweet vs. sour, etc.), touch (smooth vs. rough, etc.), and hearing (high pitch vs. low pitch, etc.) all discriminate. Our experiences and education teach us to discriminate in countless ways to guide us through the world. We “separate” the world into different categories, a process reflected in language, with different words for different aspects of the world and our experiences in it. In short, we all do analysis all the time.
People analyze for decision making.
Analysis is a basic aspect of learning about the world and decision making. A child eating dinner analyzes, separating things they like from those they don’t. That child might “analyze” a meal, physically separating foods they like from those they don’t on their plate. They might analyze a specific food, distinguishing flavor from texture: “I don’t like okra because it’s slimy. It tastes ok, but it’s still gross!” We wouldn’t expect a child to offer a sophisticated analysis, but they do analyze in a meaningful way.
Decisions rely on basic analysis. If you’re trying to decide what movie or show to watch, you might consider the genre (drama, comedy, action, etc.), run time (do I want to watch for 45 minutes or 90 or 180, etc.?), actors (who do you like or dislike?), director/producer (did you like their other work?), and more.
If you’re trying to decide where to eat dinner, you might consider cost, atmosphere, quality of food, quality of service, etc. If you’re trying to buy a car, you consider cost, gas mileage, comfort, room, power, handling, etc.
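The everyday decision-making described above has a simple structure: separate the decision into criteria, judge each one, then recombine. As a hedged sketch of that structure, here is a minimal weighted-scoring example; the criteria, weights, and ratings are invented for illustration, not a recommendation for how to choose restaurants.

```python
# Minimal sketch of everyday multi-criteria analysis: break a decision into
# separate aspects, judge each, then recombine into one comparable number.
# All weights and ratings below are hypothetical.

def weighted_score(option, weights):
    """Combine per-criterion ratings into a single weighted total."""
    return sum(weights[c] * option[c] for c in weights)

# How much each aspect matters (weights sum to 1.0).
weights = {"cost": 0.4, "atmosphere": 0.2, "food": 0.3, "service": 0.1}

# Ratings on a 0-10 scale (higher is better; for cost, higher means cheaper).
diner  = {"cost": 9, "atmosphere": 4, "food": 6,  "service": 5}
bistro = {"cost": 4, "atmosphere": 8, "food": 10, "service": 8}

scores = {name: weighted_score(option, weights)
          for name, option in [("diner", diner), ("bistro", bistro)]}
```

No one does this arithmetic explicitly at dinnertime, of course; the sketch just makes visible the separation into aspects that the informal judgment already performs.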
We naturally analyze to understand better: we look at the different aspects of the issue in question, trying to get a better understanding of the issue at hand.
Analysis is a skill that can be developed
Analysis is also a task that we can learn to do better. It is a skill that can be developed, improved, and refined. The child’s analysis of okra imagined earlier is a simple analogue to the gourmet’s refined critique of a meal based on a trained and discriminating palate. The difference between the two is largely a matter of experience: the gourmet has a larger vocabulary and ability to make finer distinctions than the child, largely because the gourmet has eaten more different foods and given a lot of thought and interest to food. The child eating okra for the first time has limited context in which to judge the experience. The gourmet who has eaten okra many times, on the other hand, has extensive experience for making comparisons: one okra dish is overcooked, another undercooked, one over-spiced, another under-spiced, etc.
A scholar beginning study of some specific subject may fail to notice issues that they would notice with more experience. If you’re performing your first close analysis of a [Dickens/Melville/DeLillo/etc.] novel using [psychoanalytic/Marxist/queer] critical theory, you may not notice the same issues as if you had previously analyzed other works by the same author or using the same theory. These differences in what you notice might be entirely caused by lack of experience rather than any lack of innate ability.
Imagine a pair of identical twins. One takes a job in a wine bar, and the other takes a job in a bookstore. Their innate abilities are presumably identical, but the one working in a wine bar learns to distinguish different flavors and aromas, while the one working in a book store learns about marketing books and issues that affect the marketing of books. If the two are asked to taste (and analyze) a wine, the one will provide a detailed, complex assessment, while the other will offer a much more simplistic analysis. And if the two are asked to read a book, the one might just respond to the story, while the other would provide a more sophisticated analysis that includes not only the story itself, but the book design, and issues of context in the book market. Each sibling might be surprised at the detail noticed (or not noticed) by the other, but those differences would be entirely explained in terms of experience, not ability.
The scholar doubting their own ability needs to trust that their own abilities will grow with practice.
In academic (and professional) settings, analysis becomes formalized because the scholar or professional needs to be able to explain their decisions. The formality involved in academic settings makes the process appear unfamiliar and intimidating, but, in fact, much of the formal detail of academic analyses is the product of persistent, careful attention rather than any special innate ability of discernment. Simply put, if you study something, you learn more about it. The gourmet is able to make sophisticated culinary judgements in part as a result of having eaten many different foods and many of those foods many times. Someone who has tasted 100 different wines and carefully attended to the characteristics of wines and who has cared enough to learn the language of wines will produce a more detailed analysis of a wine than someone who has not—the difference has little to do with any innate ability, and a great deal to do with the time invested. Complex scholarly analyses arise out of careful attention to detail more than out of any innate brilliance in a scholar.
For those doubting their intelligence, it’s important to remember what’s at stake when dealing with some complicated analysis: it’s just a more complex approach to doing something that everyone does. Inability to use one system of analysis does not preclude using other systems of analysis to good effect. Not every scholar will be able to use every specialized analytical system, but a careful and attentive scholar will pretty naturally develop increasingly sophisticated analyses on subjects of interest.
Analysis is something that we do naturally. It’s at the heart of what academia does, and although academic analyses are often highly formalized, the basic mechanics are still the natural process of distinguishing differences. For those who worry that they’re not smart enough, it’s important to remember that although academic analyses can become complex, they do not necessarily demand more “intelligence” than other analyses, but rather more attention to detail.
It would be foolish and naive to ignore the reality of intellectual differences: not everyone has the same perceptual, intellectual, and imaginative abilities. Most of us are not going to get groundbreaking insights on a par with Einstein’s development of relativity, but that doesn’t mean that we can’t do good, important work. Indeed, the vast majority of published scholarship doesn’t include groundbreaking insights. The vast majority of scholarship, however, does make a positive contribution. If you are doubting your ability, it’s ok to admit that you might not be an Einstein, but don’t forget or make light of the abilities that you do have. If you are in graduate school or have already completed an advanced degree, trust the abilities that you do have and look to build them through careful work.
Continuing my discussion of analysis from my previous posts, I look at how analysis can lead to new questions and new perspectives. Just as Alice ducked into the small rabbit hole and found an entire world, so too can stepping into one small question open up a whole world of new questions and ideas.
If you look at things right and apply a bit of imagination, analysis quickly leads to new questions. Even something that looks small and simple will open up to a vast array of interesting and difficult questions.
The multiplication of questions that arises from analysis can be good or bad. New questions can be good because they can lead to all sorts of potentially interesting research. But having too many questions can be bad, both because it can interfere with focusing on one project, and because it leads to complexity that can be intimidating. Learning to deal with the expanding complexity that appears with close study is a valuable skill in any intelligence-based endeavor—whether scholar or professional, decisions must be made and action taken, and falling down a rabbit hole of analysis and exploration will sometimes interfere with those decisions and actions.
This post follows up on my previous in which I argued that we analyze automatically and that the work of a researcher includes making our analyses explicit so that we and others can check them.
In this post, in order to show the potential expansion of questions, I’ll look at a couple of examples in somewhat greater detail. While I won’t approach the level of detail that might be expected in a scholarly work meant for experts in a specific field—I want my examples to make sense to people who are not experts, and I’m not writing about fields in which I might reasonably be called an expert—I hope to at least show how the complexity that characterizes most academic work arises as a natural part of the kind of analysis that we all do automatically.
Looking more closely: Detail appears with new perspectives
In the previous post, I used the example of distinguishing the stem, seeds, skin, and flesh of an apple as a basic analysis (separation into parts), but it was quite simplistic. Now I want to examine how to get more detail in an analysis of this apple.
For starters, we can often see more detail simply by looking more closely (literally): In my previous post, I separated an apple into skin, flesh, seeds, core and stem. But we could look at each of those in greater detail: the seed, for example, has a dark brown skin that covers it and a white interior. With a microscope, the seed (and all the rest of the apple) can be seen to be made up of cells. And with a strong enough microscope, we can see the internal parts of the cells (e.g., mitochondria, nucleus), or even parts of the parts (e.g., the nuclear envelope and nucleolus of the cell’s nucleus). This focus on literally seeing smaller and smaller pieces fails at some point (when the pieces are themselves about the same size as the wavelengths of visible light), but in theory this “looking” more closely leads to the realms of chemistry, atomic and molecular physics, and ultimately to quantum mechanics. Now we don’t necessarily need to know quantum mechanics or even cellular biology to study apples—you don’t necessarily visit all of Wonderland—but those paths are there and can be followed.
In this apple example, each new closer visual focus—each new perspective—revealed further detail that we naturally analyzed as part of what we saw. But division into physical components is only one avenue of analysis, and others also lead down expansive and detailed courses of study.
So Many Things to See!
We can look at different kinds of apples in a number of different ways. (Not to go all meta here, but we can indeed separate—analyze—distinct ways in which we can analyze apples.)
At the most obvious, perhaps, we can separate apples according to their variety, as can be seen in markets: there are Granny Smiths, Pippins, etc., so that customers can choose apples according to their varied flavors and characters. Some people like one variety and not another. These distinctions are often made on the basis of identifying separate characteristics of apples (another analysis): “I like the flavor and smell, but it’s kind of mealy and dry;” or “It’s got crisp flesh and strong flavor; it’s not too sweet.” Flavor, texture, appearance (color, shape, etc.), and condition (e.g., ripe, overripe) are all distinct criteria that a shopper might consider with respect to an apple. These aren’t exactly the kind of thing that would be the subject of academic study, but they could certainly lead to more academic questions.
The question of apple variety, for example, could be seen through the lens of biology. There are the questions of which genetic markers distinguish varieties and the ways in which those genetic markers tell us of the relationships between different types of apples and their heritages. The question of heritage brings up another aspect of apples that could be a study for a biologist: How did a given strain develop? There are wild apples, which developed without human intervention; heirlooms, which develop through selective breeding; and hybrids, which grow from planned crossbreeding. Combining these questions of genetics and heritage might lead a scholar to study the migration of a specific gene, for example to see if GMO commercial apple farms are spreading their modified genes to wild populations.
Another characteristic of an apple that a shopper might consider at the store is the price. This is obviously not a matter for biologists, but rather for economists. And an economist might want to look at how apples get priced in different markets. That might lead to questions of apple distribution and apple growing. Questions of apple growing might lead back to questions of biology, or to other fields of study like agronomy. Questions of distribution might lead to questions of transportation engineering (what’s the best means to transport apples?) or to questions of markets (who are potential producers/distributors/vendors/consumers? what products ‘compete’ with apples?) or questions of government policy (how did the new law affect apple prices?).
So Many Different Perspectives
Different analytical frameworks can be found by imagining different perspectives on apples. In the previous section, I already linked the study of apples into fields like biology and economics and more, but there is wide potential for study of apples in many areas.
Think about university departments where apples might get studied. Biology, economics, and agronomy are three already suggested. But people in literature departments might study apples in literature—“The apple in literature: From the Bible to the novel.” People in history departments could study the history of apples—“Apples on the Silk Road in the 14th century.” Anthropology: “Apples and the formation of early human agricultural communities.” Ecology/Environmental Science: “Apples and Climate Change.”
These example titles are a little strained because I have not made a study of apples in these contexts, and therefore I’m throwing out general ideas that are rather simplistic and free of real theoretical considerations. More complexity would attend a real project. The student of literature might be looking at different things that apples have symbolized because they want to make a point about changing cultural norms. Or they might look at how apples have been linked to misogynistic representations of women. Such studies, of course, are interested in more than just apples. As we combine an interest in apples with other interests, new potential ideas begin to arise.
Most people have multiple interests and these interests can combine in myriad ways to create a vast array of different questions that could be asked about apples (or any other subject).
Pretty much any scholarly perspective has its own analytical frameworks that structure research. Biology analyzes according to genetic structure, for example. Business analyzes according to market and economic factors. When these frameworks start to overlap—a business analysis using genetic factors, or a genetic analysis driven by specific economic factors—multiple points of intersection appear. Each genetic structure (each type of apple) can be examined with respect to a variety of different economic factors (e.g., flavor, shelf life, durability, appearance).
This multiplication of different ways of dividing things up (analytically, anyway) can be problematic because it creates a lot of complexity and because it can be confusing/overwhelming, but it can also present opportunities because each new perspective might have some valuable insight to add.
What seems small and simple to a first glance—a rabbit hole has a small and unassuming entrance—usually opens into a vast and expanding world of questions.
Analysis requires a bit of imagination—imagination to see a whole as composed of parts, imagination to consider different perspectives from which to view an issue, imagination to recognize the different aspects of things. But a lot of this analysis is pretty automatic: little or no effort is required for the necessary imagination. Still, because it’s so easy and so natural, this process gets discounted—especially if you view “analysis” as something highly specialized that only experts do.
To develop a practice of analysis, all you really need to do is make a point of trying to make your different observations explicit. Whether you’re judging an apple (taste, appearance, scent, etc.) or a theory (the various assumptions, conclusions, relationships to other theories), chances are good that you’ll pretty automatically respond to different aspects at different times. If you can formalize and record these different observations, you lay the foundation for developing your own analyses.
A writer recently expressed doubts to me about making judgments, which is a pretty common reservation: there are good reasons that we don’t want to be overly or inappropriately judgmental. At the same time, however, life is filled with judgments that we have to make, and we want to make them well.
Life is filled with choices and each choice is a judgment. Are you going to get out of bed now, or will you roll over and pull up the covers? Are you going to go out or stay home? What will you wear? How will you prepare to go out? What will you bring with you? What will you eat? Where will you go? etc. etc.
Positions that require experience and/or expertise do so because of the difference between people who can make good judgments and people who make bad ones. You want your doctor/dentist/teacher/lawyer/accountant/auto mechanic/public transit driver/etc. to make good judgments when serving you, for example. You want people making policy, whether for business or organization or government, to make good judgments. And if you aspire to fill any such role yourself, then you need to be able to make judgments yourself.
Most judgments are complicated, and that’s why people use analysis. The word “analysis” is fraught with a certain mystery or awe—for many, at least—but “analysis” is something that we all do pretty commonly. At its root, the word “analysis” means “to take apart” (in contrast to “synthesis,” to put together), and at its root, this is what most forms of analysis do: they start to “take apart” the factors that make up a situation where judgment is required.
Consider a really simple example of analysis that most of us have experienced: you go to a store and there are two similar items that might satisfy your basic needs. Let’s say it’s a food item. Most people will perform an informal analysis that might be quite detailed: they compare prices (dimension 1), sizes (dimension 2), and ingredients (dimension 3), as well as, perhaps, reputation of producer (dimension 4) and aesthetics of packaging (dimension 5). We could accurately call that comparison a “multi-dimensional analysis,” and it’s one that people do all the time.
This kind of analysis might continue with many of these factors. With respect to ingredients, we might say “I’m glad they use X in this product, but I’m allergic to Y.” And then we’re analyzing. An ingredient list, of course, itemizes different constituent parts of a product, so it’s already an analysis of the product. But we could do the same with the packaging. Indeed, I said “aesthetics of packaging” above, but that’s only one dimension of an evaluative analysis of packaging—in addition to appearance, we might consider materials (paper vs. plastic, for example, is an aspect of packaging that producers absolutely care about; consumers might not be as concerned) and protection of contents. And protection of contents might itself have different dimensions—for example, preservation of freshness and preservation of form (a cardboard box, for example, will protect the shape of brittle foods—e.g., chips, cookies—better than a plastic bag, but the plastic bag might preserve freshness better). And if we started studying preservation of freshness, we might start to see different dimensions, again carrying out analysis. I have not studied preservation of freshness, but in my informal off-the-cuff analysis right here, we might consider freshness over weeks, over months, and over years as being different dimensions of preservation. We can imagine packaging that is inferior in the short run, but superior in the long run. For example, a loaf of bread stored in a paper bag will go stale faster than a loaf stored in a plastic bag, but storing a loaf in plastic can make a loaf’s crust less crunchy, which for some breads is a bad thing. (This basic analysis stems from an analysis of desirable qualities of bread—I like crunchy crusts and I like bread that is not stale.)
We analyze almost automatically: we see a movie and like something about it—“I liked the star; I liked the cinematography; etc.”—and we have begun the process of analysis. We go to a restaurant and we analyze: “I loved the food, and the service was great, too!” However, in situations where there is more formality—in educational settings, or when writing and imagining the response of critics—we don’t think of applying the same basic skills that we would apply automatically in our daily life.
Reasons people hesitate to analyze
1. We may not feel qualified. As I’ve described it, analysis is a really basic process that we all do, pretty much all the time. But “analysis” is a term typically associated with high levels of expertise. Things like statistical analysis or psychoanalysis or systems analysis are all tasks for experts. If you doubt yourself—as many do (cf. imposter syndrome https://en.wikipedia.org/wiki/Impostor_syndrome)—then it is easy to put the tasks of experts outside your own set of abilities. But, in fact, these formal systems of analysis are no more than extensions of the basic analysis I have described. The formal details are an outgrowth of repeated attempts to use analysis productively and the recognition that formal systems of analysis are useful. But those formal systems of analysis all start with the basic willingness to look at something and respond to the complexity that you see. Psychoanalysis, for example, looks at different components of a person’s psychology—id, ego, and super-ego is one analytical axis; conscious and unconscious is another; identification of distinct life-shaping events is another. Such formal systems of analysis may be detailed and complex, but their use is acquired through practice that starts with trying to identify different issues of significance. To be an expert in analysis requires practicing analysis, and that means practicing analysis while not yet an expert. Our analyses, after all, need not commit us to anything. If we feel that an analysis has not helped us, we are perfectly free to ignore it or redo it as we wish.
2. We may feel it is inappropriate. There are at least two reasons (in addition to the fear of being unqualified): 1. Analysis is often tied to evaluation and negative criticism, which can lead people to avoid it out of a desire to avoid being judgmental. The unfortunate conflation of analysis and negative criticism places analysis in a negative light that it doesn’t deserve. 2. Analysis can also be overdone: not all analysis is useful. Sometimes analysis can be paralyzing: instead of making a decision, we can get stuck thinking more analysis is necessary. And often analysis will focus our attention on negative aspects that we might not have given much consideration. This is not necessarily bad, but it can be an unnecessary damper on enthusiasm. If you enjoy a movie, for example, does it necessarily help you if you suddenly notice a flaw? Analysis can take attention away from holistic concerns, too. But these “problems” with analysis are not so much inherent in analysis as they are inherent in misuse. As with drugs or guns, use need not be misuse. There are valuable uses for drugs, for guns, and for analysis. It lies with the practitioner to use with care.
Research and Analysis
Analysis comes naturally in research. Every choice of topic starts with separating a focal topic from the rest of the world. If we study “education”, we’re focusing on one part of the world (and leaving out others); if we study “business,” or “history,” or “biology,” again, we’re choosing to separate one aspect of the world from others. This is not to say that we need imagine any of these ideas as completely distinct from the rest of the world, but only that for various reasons, we are separating out one thing we want to focus on from others that we do not wish to consider. (Or we might have a more sophisticated analysis that separates the world into three general classes—the focal issue, closely related issues, and issues of little direct relevance.) Choices like this are the basis of research, so you want to make them well.
If we ask “how does X affect Y”, a starting place is to literally break out and examine each piece of that sentence: what is X? what is Y? what kind of effects are you imagining? That is to say that we look at the sentence and separate out different aspects, with each word representing an aspect of the situation in question. The very language that we use reflects our analytical tendencies. Defining different terms used in research is a fundamental process of analyzing a situation into component parts.
Suppose, for example, that we are looking at Montessori education’s (X) effects on students (Y), then we would naturally want to explain what Montessori education is and who Montessori students are. We would also want to consider what “affects” means in this context, and with a little thought, we can probably find a number of different things that could be relevant to this discussion: maybe Montessori education affects students’ overall success as students, or maybe it affects their emotional health as children, or their ability to make friends, or their long-term success in school, or their success in college, or their success as students of STEM subjects, or their success as students of language arts. Each of these possible effects of an educational system on its students is one of the factors identified by this very informal process of analysis that I have undertaken.
Or, suppose that we are interested in management practices (X) and business performance (Y). First we need to look at what we mean by management practices—what counts as a management practice? And if many things do, will we choose to study all of them? Then, separately, we can look at different dimensions of business performance, starting, perhaps, with profitability, but also including such things as employee morale.
Research starts with casual analysis
Research depends on analysis in many different forms—from finding different aspects of situations to examine to finding different perspectives from which to analyze a situation. All of these forms essentially spring from the observations that you have as a researcher and your interest in and attention to detail.
In the course of your research, you will probably be motivated to move beyond the initial steps of casual analysis that you would carry out in everyday life—you don’t need to exhaustively list all the different possible characteristics of a movie to decide whether you want to see it (or whether or why you enjoyed it). But don’t be afraid of those first steps: analysis is not something inappropriate or reserved for some special class of analyst. It is one of the foundations of critical thinking, and if you want to come up with original research, your observations of the world, the way that you organize your observations, and the analyses that you come up with are the roots of original research.
So look closely, don’t be afraid to identify specific details, and then see what you can learn from those observations. At its root, analysis is something we all do. Research is just a move to try to formalize this common practice.