Lecture: Research Design

CEP 822: Approaches to Educational Research

These lecture notes are modified from notes created by Dr. Nick Sheltrown


Introduction to Research Design

This unit introduces you to the basic taxonomy of research design. You'll likely find that research is a larger, more diverse collection of methodologies and designs than you may have first anticipated. One hope is that by reviewing some of the basic ways researchers organize and categorize their work, you'll have a better understanding of your own project's design and be better equipped to review others' work.

This is a reading-intensive unit.  Here is my suggestion - grab a stack of index cards or post-it notes (or, if you're more high-tech, a word document, blog, or note-taking program like Tiny Pad), or whatever type of study hack works best for you. We are going to walk through Trochim's chapters on Experimental Design and Quasi-Experimental Design, but before we do, I want to lay the foundation with a very helpful chapter on research design.

To follow what we'll be studying in this unit, I suggest printing out the second chapter from James McMillan and Sally Schumacher's (2006) book Research in Education: Evidence-Based Inquiry.  This chapter gives an overview of different research designs and offers practical advice on reading research reports.  For this unit, you only need to read pages 21-28.

The chapter begins with a helpful diagram:

[Image: McMillan and Schumacher's diagram dividing research into two activities, designing research and reading research]

As you can see, the diagram divides research into two fundamental activities: designing and reading research. Let's focus on designing research (which is the overarching task for this module).  McMillan and Schumacher suggest that research design "summarizes the procedures for conducting a study, including when, from whom, and under what conditions the data will be obtained."  The purpose of a design is to "specify a plan for generating empirical evidence that will be used to answer the research questions."  (If you are taking notes at home, these would be good things to record).

Your research question is your goal; it's your mission statement --- the finish line for your project (not that you are expected to actually conduct this research).  The next step in the research process is to develop a research design, specifically one that supports your question by providing you with empirical evidence for making legitimate conclusions.  Notice I bolded a key term (again, good note-taking fodder).  Empirical evidence is information gained through observation, experience, or experiment.  In our world, it's data --- which, again, doesn't just mean numbers (and often doesn't mean numbers).  It can be words, interviews, historical artifacts, ideas, etc.  Empirical evidence can help us make conclusions or draw inferences that are reasonable, but our research design is what protects us from making claims that violate certain validity thresholds.  In other words, if you want to make legitimate claims that can't be easily dismissed, you need a design that matches your research question. The purpose of your design, as mentioned in the previous paragraph, is to supply you with data related to your research question; however, the important point is that a good research design doesn't just give you data (it's easy to get data).  It also provides a structure for how data are produced, a structure that controls for possible problems with the validity of your work. Recall that validity is an expression of faith in our work, and the more questions or potential problems in our research design, the less faith we have that our findings are authoritative. Four major areas of validity were described in unit 1, as represented in Trochim's yin-yang diagram.

[Image: Trochim's yin-yang diagram of the four major types of validity]

I can't emphasize enough how important the concept of validity is in research design, so you may want to review these terms in unit 1 or read Trochim's very good descriptions.

Now, let's take a look at your research design options.

[Image: McMillan and Schumacher's diagram of research design options: quantitative, mixed methods, and qualitative designs]

Focusing on the top half of the diagram for now, you will notice that there are many types of research design, but most designs fall into one of three broad categories: quantitative, mixed methods, and qualitative.  Each design category contains a number of specific designs, each fitting a specific scenario.  Use Table 2.1 in the article as a reference guide for the options that exist for your own research project, but be sure to familiarize yourself with each of the designs mentioned in this article.  The article clearly describes the defining characteristics of each design, so as you read, think about where your question may fit. 

An example from Dr. Nick Sheltrown, instructor for CEP 822

It may be instructive for you to see how one's work might be classified.  In my work, I do a fair amount of quantitative analysis, specifically single and multiple regression analysis, transformations, and descriptive statistics.  As I read these designs for research, I realized that much of my work falls into the category of secondary data analysis (though I also do correlational studies and use ex post facto designs).  For example, one of my primary responsibilities is to make sense of school achievement data so as to evaluate program efficacy.  That's just fancy talk for saying that I need to use test scores and context data (socioeconomic data, racial/ethnic data, etc.) to make decisions about how effective schools are in teaching and learning.  It's difficult work because there are, as you well know, many factors that influence test scores.  As such, I find the need to contextualize test scores in order to make 'high fidelity' comparisons.  This may include correlating the percentage of students in a school who qualify for federally subsidized lunch with their test scores, and it always includes making use of large data sets published by my state's department of education.  Because I rely so heavily on large data sets published by government agencies, most of what I do qualifies as secondary data analysis, though the techniques I employ are drawn from comparative, correlational, and ex post facto designs.  In this way, I hope you see that designs bleed together; their boundaries are less clearly defined than Table 2.1 would suggest.
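If you're curious what that kind of secondary data analysis can look like in practice, here is a minimal sketch in Python.  To be clear, this is not Dr. Sheltrown's actual analysis: the data values and the column names (pct_free_reduced_lunch, mean_test_score) are invented placeholders standing in for whatever a state department of education actually publishes.  It simply illustrates the sequence described above --- descriptive statistics, a correlation, and a simple regression.

```python
# A minimal, hypothetical sketch of secondary data analysis: correlating the
# percentage of students who qualify for subsidized lunch with school-level
# test scores.  All values and column names below are invented for illustration.

import numpy as np
import pandas as pd

# Hypothetical state-published data set, one row per school.
schools = pd.DataFrame({
    "pct_free_reduced_lunch": [12.0, 35.5, 48.2, 61.0, 74.3, 88.1],
    "mean_test_score":        [482.0, 455.0, 447.0, 431.0, 420.0, 402.0],
})

# Descriptive statistics: the usual first step with secondary data.
print(schools.describe())

# Correlational piece: Pearson r between the poverty proxy and achievement.
r = schools["pct_free_reduced_lunch"].corr(schools["mean_test_score"])
print(f"Pearson r = {r:.2f}")

# Simple (single) regression: score as a linear function of lunch percentage,
# i.e., the "contextualizing test scores" step described above.
slope, intercept = np.polyfit(
    schools["pct_free_reduced_lunch"], schools["mean_test_score"], deg=1
)
print(f"predicted score = {intercept:.1f} + {slope:.2f} * pct_free_reduced_lunch")
```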

This kind of research, while nonexperimental, is quantitative and, as such, likely conforms to your past understanding of educational research.  But as we've emphasized, there are other rich traditions in research.  I am, as you likely have guessed, more of a 'numbers person,' but one area of research that I have found quite compelling is noninteractive qualitative research.  On page 27 of the chapter, noninteractive methods of research are described.  You may have been surprised when I wrote earlier in this lesson that ideas can be data, and in fact, ideas can be the fodder of very exciting, powerful research methods and designs.  A noninteractive qualitative design "investigates concepts and events through an analysis of documents."  As the authors suggest, we most often see examples of this research design in historiography, but it actually is quite popular in education as well.  People who study various educational concepts --- differentiated instruction, data-driven instruction, school choice theory, school law --- by reviewing what others have said (i.e., their ideas), then synthesizing, expanding, or critiquing those ideas, are employing concept analysis.  We often don't think of that as "research" because there aren't subjects, interviews, numbers, and other traditional elements of research.  Yet, as these authors argue (and I agree), noninteractive qualitative research is an important thread in the evolving world of research.  For an example, I can point you to my own dissertation, which was an extended concept analysis.  In my research, I didn't have subjects, conduct surveys, collect test scores, etc.  Rather, I spent two years at the library reading, writing, synthesizing, critiquing, constructing, and deconstructing how we understand the Internet.  Believe it or not, that's research, and in many ways it's a more difficult design than the fancy algorithms with which I currently work.  More recently, I wrote a chapter for a book edited by MSU's Elizabeth Heilman (whose views on philosophy were featured in unit 1).  The book's title is Critical Perspectives on Harry Potter, which suggests that each chapter analyzes Harry Potter through a different theoretical "lens."  The chapter I contributed, Harry Potter's World as a Morality Tale of Technology and Media, examines themes of technology and media in Harry Potter.  Such work could be classified as "analytical research" as it "investigates concepts and events through an analysis of documents" (p. 27).  In this case, the documents analyzed were the 7 installments of the Harry Potter series.  Such "work" may seem trivial, but Heilman argues that researchers should strive to understand Harry Potter because of the important position this series has held in the lives of school-aged children and adults.  She begins her book by arguing that "Harry Potter has become more than just a book; it has become an icon, a Michael Jordan, a Coca-Cola, a Pop-Tart, in modern pop culture. The Potter books are now ubiquitous early texts for children, and are also a popular choice for many adults. As the most commercialized books in recollection the phenomenon deserves multidisciplinary analysis" (p. 1).

These personal examples are included to reveal how wide the world of research can be.  As you think about your own attitudes toward and experience with research, know that the world is full of questions waiting for answers.  This course will focus specifically on questions in education, but certainly social science research is much larger than one field.  That said, when it comes to influencing policy, attitudes, values, and beliefs in education, not all forms of research are treated equally.  Or, to borrow from Orwell, all forms of research are equal, but some are more equal than others.  What's "more equal" in educational research?  Answer: experimental designs.


Experimental Designs

In 2002 the National Research Council published Scientific Research in Education.  This book gave evidence of a growing shift in research, as the pendulum in the United States moved toward rigorous, scientifically based research.  As Trochim states:

Experimental designs are often touted as the most "rigorous" of all research designs or, as the "gold standard" against which all other designs are judged. In one sense, they probably are. If you can implement an experimental design well (and that is a big "if" indeed), then the experiment is probably the strongest design with respect to internal validity.

Experimental designs require that the researcher be able to exert control over what happens to the subjects in the study.  Often, this translates to subjects receiving different interventions (a reading program, technology, etc.), or to one group receiving an intervention while another serves as a control.  The purpose of experimental design is to make statements about cause-and-effect relationships between what we do (interventions) and our "measured outcome" (as represented by test scores or some other instrument).  As you'll find in reading McMillan and Schumacher's chapter, there are several categories of experimental design.  They vary by how the subjects for the study are selected, and when we think about experimental design, we often assume that the subjects are selected randomly.  Randomization controls for various threats to validity (internal validity, construct validity, external validity, etc.).  Picking subjects randomly should (in theory) remove bias from the sample and control for outside factors influencing measured outcomes.  Experimental randomization is widely admired; recall that even David Berliner appreciates it, writing that randomized experiments are "a method of research with which I too am much enamored."
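To make the idea of random assignment concrete, here is a toy sketch in Python.  The roster and group sizes are invented for illustration; the point is simply that every subject has an equal chance of landing in either group, which is what (in theory) removes bias from the sample.

```python
# A toy sketch of random assignment to treatment and control groups.
# The student roster below is hypothetical; only the mechanism matters.

import random

random.seed(822)  # fixed seed so the example is reproducible

students = [f"student_{i:02d}" for i in range(1, 21)]  # hypothetical roster of 20
random.shuffle(students)                               # equal chance for every student

treatment = students[:10]   # receives the intervention (e.g., a reading program)
control   = students[10:]   # receives instruction as usual

print("Treatment group:", treatment)
print("Control group:  ", control)
```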

If experimental randomization is so powerful, why isn't all experimental research done this way?  In education, the limiting reagent is found in the structure of schools.  It's not that easy to randomly assign students to treatment groups, largely because students are already organized into classes and disrupting their schedules can be problematic.  True experimental randomization would require reconfiguring classes, and because that may impact other dimensions of the students' experiences, it's often viewed as impossible or undesirable. Secondly, the experimental design introduces certain ethical questions, such as: if we think a treatment is beneficial to students, is it ethical to intentionally deprive certain students of that treatment just to collect better data?  I suppose someone may argue for the greater good -- that the research will impact more schools if the experimental design is preserved and not all students receive the intervention -- but when it's your child that isn't part of the intervention group, the benefit for all becomes less convincing.  While I'm no expert, I imagine the same holds true for medical research.

As you likely realize, within experimental designs there are a number of different specific methodologies that one could employ.  Trochim lists the following subcategories of experimental design:

* Two-Group Experimental Designs
* Classifying Experimental Designs
* Factorial Designs
* Randomized Block Designs
* Covariance Designs
* Hybrid Experimental Designs

Each offers advantages and disadvantages, perks and trade-offs.  In doing your own research at your school (or other institution), true experimental design may not be feasible.  In such cases, other options emerge.


Quasi-Experimental

A quasi-experimental design is one that looks a bit like an experimental design but lacks the key ingredient -- random assignment.  "A common situation for implementing quasi-experimental research involves several classes or schools that can be used to determine the effect of curricular materials or teaching methods," write McMillan and Schumacher (p. 24).  As I mentioned earlier, randomization is not a viable option in most classroom settings, and as such, quasi-experimental designs offer some of the benefits of true experimental designs without the complications of randomization.  Some of the projects in this course will likely require quasi-experimental designs.  If you are interested in researching an educational intervention, you may want to design a research project that uses several classes in your district --- assigning some classes as intervention groups and some as controls.
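Here is a minimal sketch, again in Python, of what analyzing that kind of nonequivalent-groups setup might look like: intact classes are assigned (not randomly) to conditions, a pretest is used to gauge how comparable the groups were to begin with, and gains are compared.  All the scores below are invented for illustration; this is one common way such a design is analyzed, not a prescribed procedure from the readings.

```python
# A toy sketch of a nonequivalent groups, pretest-posttest comparison.
# Two intact classes are assigned (not randomly) to conditions; scores are invented.

def mean(xs):
    return sum(xs) / len(xs)

intervention = {"pre": [61, 58, 70, 66, 63], "post": [74, 71, 80, 78, 72]}
control      = {"pre": [60, 64, 67, 59, 62], "post": [65, 69, 70, 63, 66]}

for label, cls in [("Intervention", intervention), ("Control", control)]:
    gain = mean(cls["post"]) - mean(cls["pre"])
    # The pretest means let us check how similar the nonequivalent groups were at the start.
    print(f"{label}: pretest mean = {mean(cls['pre']):.1f}, "
          f"posttest mean = {mean(cls['post']):.1f}, gain = {gain:.1f}")
```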

Trochim provides more specific guidance on quasi-experimental designs, offering three subcategories for this branch of research.

* The Nonequivalent Groups Design
* The Regression-Discontinuity Design
* Other Quasi-Experimental Designs


Reality Check

If you are feeling overwhelmed, you are not alone.  Let's boil this down to practical uses.  As professionals, you are often handed research or asked to implement programs based upon research.  Knowing the terminology above will help you see things from a different perspective.  Recently, two links came my way that caught my attention. The first was a flier that came across my email - NCTQ Research Competition - Call for Proposals. The second was a link to the US Department of Education's Educational Technology Grant programs.  Take a quick glance through these calls for proposals - do you see a subtext of research-oriented language emerging?