Practice-based research in educational technology

Dede, C. (2005). Why design-based research is both important and difficult. Educational Technology, 45(1), 5-8.
Why Design-Based Research is Both Important and Difficult
Chris Dede
Harvard Graduate School of Education
August 2004
This issue of Educational Technology describes advances in design-based research
(DBR), an important addition to the methodological repertoire of educational scholars, practitioners, and policymakers. These articles show why DBR is a useful complement to
traditional research strategies such as laboratory studies and randomized clinical trials. The case
studies they provide also illustrate why DBR is most effectively conducted through partnerships
of researchers with educators immersed in the crucible of practice. However, DBR is difficult to
do well, and this issue delineates the considerable challenges of using this methodology to
develop effective, scalable, and sustainable educational innovations.
What is Design-Based Research?
Collins, Joseph, and Bielaczyc (2004) define DBR thus:
Design experiments bring together two critical pieces in order to guide us to better
educational refinement: a design focus and assessment of critical design elements.
Ethnography provides qualitative methods for looking carefully at how a design
plays out in practice, and how social and contextual variables interact with
cognitive variables. Large-scale studies provide quantitative methods for evaluating the effects of independent variables on the dependent variables. Design experiments
are contextualized in educational settings, but with a focus on generalizing from
those settings to guide the design process. They fill a niche in the array of
experimental methods that is needed to improve educational practices.

In his article immediately following this introduction, Kurt Squire presents a more detailed exposition of DBR and describes examples of its various aspects.
Recently, both the special DBR issue of The Journal of the Learning Sciences (Vol. 13, No. 1, 2004) and the special DBR issue of Educational Researcher (Vol. 32, No. 1, 2003) provided detailed, research-oriented expositions of this methodology’s theoretical,
conceptual, and analytic foundations. In contrast, this special issue of Educational Technology
focuses on more applied perspectives about DBR, illustrating these with case studies of
exemplary work using this method. But why is DBR important enough to merit special issues of
all these journals?
Design-Based Research Deals with Important Issues, Sizable Effects, and Significant Results
Numerous researchers, practitioners, and policymakers have criticized many of the
findings from educational research as having little impact on practice, or even on the evolution of
theory (Lagemann, 2002; Haertel & Means, 2003). In part, this is because the priorities of scholars often diverge from those of people immersed in policy and practice. At times,
researchers select problems to study because of a desire to resolve some point of theory only loosely connected to educational practice, out of curiosity, or perhaps primarily because the
situation invites the use of a favorite methodology. The practical importance of a study too often carries little weight; given just the titles of more than a few published educational studies, many
practitioners could write an abstract of the conclusions section without any further information.
The results of these studies are simple “common sense” for anyone with experience in
educational settings. In contrast, as Stokes (1997) describes, DBR resembles the scholarly
strategy chosen by the scientist Pasteur, in which investigation of difficult, applied, practice-driven questions demands and fosters studies of fundamental theoretical issues.
Also, scholars sometimes value statistical validation over sizable effects. Researchers report findings that reach the 0.05 or better level of statistical significance, meaning that results this extreme would arise by chance less than one time in twenty if no true effect existed. However, at times the
findings themselves are trivial, showing only a small “effect size” (Thalheimer & Cook, 2002)
from an intervention that consumes much time and resources. Practitioners and policymakers, in
contrast, have greater interest in findings that reveal large effect sizes, backed by plausible
evidence of likely causation -- even if the statistical significance of these results might be
difficult to measure or below the typical standard of scholarly proof.
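To make this distinction concrete, consider Cohen’s d, one common standardized measure of effect size (the formula below is an illustrative sketch; Thalheimer and Cook, 2002, describe how to derive such measures from published statistics):

\[
d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
\]

With large samples, even a trivial effect clears the conventional significance bar: for two groups of 1,000 students each, a standardized difference of d = 0.10 yields z = d√(n/2) ≈ 2.24 (p ≈ 0.025), statistically significant yet of little practical consequence.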
Too often, articles in practice-oriented publications describe innovations that address
important issues and have sizable effects, but provide only thin evidence for their worth and generalizability. At the other extreme, articles in research journals frequently document
statistically significant outcomes with low effect sizes for trivial problems. In contrast, DBR
attempts to create important, theory-based educational interventions of sizable effect and
reasonable plausibility and generalizability.
Unlike traditional research methods, in DBR studies many variables are deliberately
and appropriately not controlled, the “treatment” may evolve considerably over time, and even
the research methodologies utilized may shift to fit the morphing intervention (Dede, 2004).
Further, to aid with interpretation under these difficult circumstances, in DBR large qualitative
and quantitative datasets of various types are often collected by many different participants,
introducing substantial problems of alignment, coordination, and analysis. At a time when some
are clamoring for greater “scientific rigor” in education (Coalition for Evidence-Based Policy,
2003), DBR’s methods can seem strange and even alarming. DBR advocates appropriately
respond to these concerns from conservative research methodologists with metaphors reminiscent of the apocryphal story of the drunk looking for his lost keys under the streetlight
where he can see, rather than in the dark alley where they were dropped. But isn’t combining design and research a “forced fit,” given the almost opposite characteristics of these two approaches?

Why Mix Design and Research?
The skills of creative designers and the attributes of rigorous scholars have limited
overlap. Even theory-based design generally does not follow recipes, but rather draws heavily
on imagination and instinct. When designers receive formative feedback, their intuition often
leads to changes that may be neither grounded in theory nor bounded enough to enable comparative research across time. For example, the research literature on innovation shows frequent “design
creep” (e.g., a curriculum intervention escalating into a full-scale systemic reform initiative),
with investigators responding to every implementation difficulty with ever more sweeping designs rather than providing bounded research on a particular type of potential
advance. Also, innovators fascinated by a particular type of design approach (such as wireless
mobile devices as a means of ubiquitous access to information) often start with a predetermined
“solution” and seek educational problems to which it can be applied (to a person with a hammer,
everything looks like a nail), a dubious basis for DBR.
Researchers tend to have an opposite set of weaknesses. Their temptation is to produce
designs in which all variables lend themselves to easy collection and analysis. In this situation,
when an intervention does not fare well in its initial implementation, changes are made more to
preserve the analytic framework and methods than to increase effectiveness. Also, scholars may
believe that any possible flaws in a design must stem from inadequate implementation rather than faulty conceptions; this stifles the evolution of theory. Such “design constipation” is no
better a foundation for effective, sustainable, and scalable innovation than “design creep.”
Effective DBR groups have a complex “cognitive ecology” with contradictory tensions:
freewheeling, “whatever works” innovation versus controlled, principled variation. Collins et al.
(op. cit.) describe DBR as similar to Simon’s design sciences (e.g., aeronautics, acoustics), as
opposed to analytically oriented natural sciences such as physics and biology. Conceptualizing
DBR as a form of “interventionist ethnography,” in which research studies perturb typical
learning settings by introducing evocative, theory-influenced designs and then draw out implications for new theories of teaching, learning, and schooling, illustrates a potential advantage of this
method. Certainly, as the articles in this issue document, implementing well-formulated designs
has led to surprising findings about the quality of thought, motivation, and action that children can attain at various developmental levels, results that inform both theory and practice.
Design-Based Research and Scalability
Another key way in which DBR differs from both conventional design and traditional
research is its emphasis on adapting a design to its local context, a vital attribute for scaling up
an innovation successful in one place to many other venues with dissimilar characteristics (Dede,
op. cit.). In making judgments about the promise of an intervention, differentiating its design
from its “conditions for success” is important. The effective use of antibiotics illustrates the
concept of “conditions for success”: Antibiotics are a powerful “design,” but worshiping the vial that holds them, rubbing the ground-up pills all over one’s body, or taking all the pills at once
are ineffective strategies for usage – only administering pills at specified intervals works as an
implementation strategy. A huge challenge we face in education, and one of the reasons our
field makes slower progress than venues like medicine, is the complexity of conditions for success required in effective interventions; nothing powerful in facilitating learning is as simple
as an inoculation in medicine.
When these conditions for success are absent, major intended aspects of an innovation’s
design may not be enacted as planned (Means & Penuel, in press); developers can expect parts of
their design to be “defenestrated” (thrown out the window). Planning for successful
implementation in such contexts involves design akin to the “egg drop” experiment that is part of
many science curricula (Dede, in press). Students are given raw eggs and a few basic materials,
such as dry pasta or pipe cleaners. The learners are asked to construct some sort of “packaging”
for an egg that will cushion it from breakage, even when dropped from a considerable height.
Researchers similarly attempt to develop aspects of their design package that help its
effectiveness to survive even when parts of its intended enactment are defenestrated. This is not
an easy task; oftentimes, scholars fail to identify the key features that lead to small-scale
successes, resulting in failure when their recommended adaptation-and-transfer strategies are implemented at large scale.
DBR findings typically show substantial influence of contextual variables in shaping the
desirability, practicality, and effectiveness of designs. For example, the articles in this issue
frequently depict “conditions for success” challenges related to teacher professional
development, a common issue in many types of educational interventions. Resolving
implementation problems such as this presents choices about alternative approaches to the
iterative evolution of a design. In this particular case, alternative strategies include changing the
design so that the intervention is more “teacher-proof,” expanding the design so that extensive
teacher professional development is now part of the “treatment,” or abandoning the design as
unpromising because its effective use requires a level of knowledge and skill likely unattainable in the typical teaching population for the foreseeable future. This is not an easy dilemma to
resolve and illustrates the ways that DBR, in contrast to many types of conventional research,
intrinsically confronts scalability issues of great interest to practitioners and policymakers.
(Readers interested in the many challenges involved in scaling up technology-based educational innovations may wish to visit the website of a conference recently held at Harvard on this topic
[http://www.gse.harvard.edu/scalingup/].)
This is not to say that the goals of design-based researchers are identical with those of teachers, administrators, or state and federal decision makers. For example, in the shadow of the
No Child Left Behind legislation, practitioners struggling with very demanding, narrow criteria
to which they are held accountable (e.g., students classified as “mildly mentally retarded” are
expected to perform at grade level on high stakes tests) are primarily interested in promising
interventions that could reach subpopulations for whom conventional instructional practices are
ineffective. They are typically and understandably less interested in contributing to an iterative
process of theory-based development for interventions with complex conditions for success,
intensive data collection strategies, and no guarantee of effectiveness.
This divergence between the motives of DBR investigators and the priorities of the contexts in which they must operate often leads to problems. Do scholars start with what practitioners identify as
problems (e.g., how can unqualified reading teachers be trained to use a didactic teacher-proof
approach to compensate for their weaknesses), or with what researchers believe are theoretically
promising interventions that would require transforming the current system to implement effectively? What choice should design-based researchers make when faced with the alternatives
of modifying a design in ways that pose interesting issues for the refinement of theory, versus
altering the intervention to increase its usability, scalability, and sustainability? Defining a middle ground that starts with practitioners’ issues, but then helps them evolve their thinking towards transformative approaches requires sophistication and patience; yet this is the climate within which DBR must now function.
The articles in this special issue illustrate many aspects of all these themes and more.
Kurt Squire provides a more extensive definition of DBR, illustrating this with two case studies
from his research on educational games. These cases show how DBR sequences and
integrates various research methodologies to fully understand learning and teaching in authentic
educational contexts, as well as how DBR can provide insights into better theories about
pedagogy. Sasha Barab and his colleagues then describe how their Quest Atlantis project is
developing Learning Engagement Theory to deepen understanding of the relationships among
learning, playing, and helping. They have used DBR to evolve the initial version of their
designed learning experience to become a broader, socially responsive context of participation—
more akin to a “brand” than a singular technology.
My research team at Harvard follows with a description of our River City multi-user
virtual environment studies, showing how DBR has helped improve our design’s
effectiveness, practicality, sustainability, and scalability. We are also gaining insight into the
pedagogical theories of situated learning and guided social constructivism that underlie our
design. Yasmin Kafai and her colleagues then portray the phases and findings of a four-year
longitudinal study using design-based research within a classroom as a living laboratory. Her
group is using DBR to evaluate and evolve a pedagogical approach called “learning science by
design” by keeping several variables constant (such as the teacher, pedagogy, and students) while varying key aspects, such as collaborative arrangements.
Ken Hay and his colleagues describe how DBR aided the evolution of a particular
learning tool within a larger suite of studies about the educational potential of virtual reality.
They delineate how DBR aided in the refinement of the tool, pedagogical theory, curriculum
theory, and research methods. Chris Hoadley portrays how, through two stages of tool
development and evaluation, DBR enabled the evolution and refinement of a theory about
socially relevant representations. He describes the ways in which DBR is a powerful method for
testing theories about issues that really matter in real-world contexts. Tom Reeves concludes with a synthesis of how all these articles interrelate and complement one another in their approaches to DBR.
Overall, each of the articles provides a different perspective on the “elephant” of DBR. I
hope you will find these ideas and methods intriguing and worthy of securing DBR an important place in the pantheon of educational scholarship.
References
Coalition for Evidence-Based Policy. (2003). Identifying and implementing educational
practices supported by rigorous evidence: A user-friendly guide. Washington, DC: Institute of
Education Sciences, U.S. Department of Education.
Collins, A., Joseph, D., & Bielaczyc, K. (2004). Design research: Theoretical and methodological issues. Journal of the Learning Sciences, 13(1), 15-42.
Dede, C. (in press). Design for defenestration: A strategy for scaling up promising research-based innovations. Chicago, IL: NORC.
Dede, C. (2004). If design-based research is the answer, what is the question? Journal of the Learning Sciences, 13(1), 105-114.
Haertel, G. D., & Means, B. (2003). Evaluating educational technology: Effective research
designs for improving learning. New York: Teachers College Press.
Lagemann, E. C. (2002). Usable knowledge in education. Chicago: Spencer Foundation.
Retrieved August 22, 2004 from www.spencer.org/publications/index.htm
Means, B., & Penuel, W. R. (in press). In C. Dede, J. Honan, & L. Peters (Eds.), Scaling up success: Lessons learned from technology-based educational innovation. New York: Jossey-Bass.
Stokes, D. E. (1997). Pasteur’s quadrant. Washington, DC: Brookings Institution Press.
Thalheimer, W., & Cook, S. (2002, August). How to calculate effect sizes from published research articles: A simplified methodology. Retrieved August 22, 2004 from http://work-learning.com/effect_sizes.htm