I am an idealist who believes that knowledge-hoarding is a form of injustice. My two roles, leading an education research and evaluation consultancy and a data-equity-focused nonprofit, often put me in the position of advising others on how to measure their own initiatives and programs. As a result, I’ve been spending almost all of my time these days thinking about what, how, and why we measure - mostly in the K12 education sector, but also in the social sector more broadly. This experience, coupled with my focus on racial and social justice in measurement and evidence-building, keeps bringing me back to the question, “What if we could measure the unmeasurable, and know the unknowable?” How does what we can measure influence how we conceptualize knowledge itself?
When I first became aware of knowledge-hoarding, I was a researcher based at a large university, confronted daily with the vast gap between what is known about how people, especially children, learn (i.e., the “evidence” around learning) and what is actually done to support learning in K12 classrooms (i.e., the “practice” of teaching). Back then, to me, knowledge-hoarding meant withholding evidence, or failing to disseminate it to practitioners in understandable and useful ways. As my thinking evolved, it became apparent that knowledge-hoarding was bigger than that - it encompassed the axioms we hold to be true about who is an “expert”, whose data matters, and why we even generate evidence to begin with.
For example, I calculated that in our corpus of the most rigorous studies of K12 instructional interventions – those that have met the research standards of the What Works Clearinghouse – white students are, perhaps unsurprisingly, somewhat overrepresented (their share of the study samples is about 15% higher than their share of public school enrollment), while Black students are substantially underrepresented (their share of the study samples is about 40% lower than their share of public school enrollment). This is unsurprising given the racial and ethnic makeup of education researchers themselves, and the fact that the vast majority of education research studies are conducted to answer theoretical questions about the drivers of academic achievement, not to inform the decisions that educators, learners, and their families face in their day-to-day lives.
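For readers who want to see the arithmetic behind figures like those, here is a minimal sketch of a relative-representation calculation. The shares in the example are illustrative placeholders I chose for this sketch, not the actual figures from my analysis of the What Works Clearinghouse corpus; the point is only to show that "15% more" and "40% less" are relative to a group's share of public school enrollment, not percentage-point gaps.

```python
# Minimal sketch of a relative-representation calculation.
# The shares below are illustrative placeholders only, NOT the actual
# figures from the What Works Clearinghouse analysis described above.

def relative_representation(share_in_studies: float, share_in_enrollment: float) -> float:
    """Return how much larger (positive) or smaller (negative) a group's share
    of study samples is, relative to its share of public school enrollment."""
    return share_in_studies / share_in_enrollment - 1

# Hypothetical example: a group that is 40% of enrollment but 46% of study
# samples is overrepresented by about +15%; a group that is 15% of enrollment
# but 9% of study samples is underrepresented by about -40%.
print(f"{relative_representation(0.46, 0.40):+.0%}")  # roughly +15%
print(f"{relative_representation(0.09, 0.15):+.0%}")  # roughly -40%
```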
It was from that perspective that I conceptualized a research design process that would be centered on, and led by, communities, and that would make space for other ways of knowing. From this process emerged 10 Just Research Design Principles that could shape a new type of research and design centered in people, rather than in randomization, objectification, and quantification. These principles, however, also illuminated the limitations of how we currently measure, and continued to raise the question: what if the questions our communities are asking are unanswerable with data?
More recently, I had the opportunity to speak with several learners and their families, as well as educators - teachers, administrators, and researchers - about how they know whether a learning environment is working for their learners. Perhaps unsurprisingly, educators largely talked about things they could - and did - measure, from academic competencies like math and reading, to social-emotional learning (SEL) and other non-academic competencies such as self-confidence, student engagement, and attendance. However, learners and their families answered with the very types of “unmeasurable” answers the researcher in me hoped they wouldn’t. Learners talked about the ability to “be themselves” and not having to worry about fitting in. Parents talked about their learners being “more relaxed” and “identifying more as a learner”. Both learners and their families talked about a sense of belonging, being a part of a community, and not having to justify their continued presence in the learning environment.
I mentioned this experience to a group of measurement experts who were considering what measurement justice could and should look like, and what the implications of justice are for how we measure in the social sciences. As we puzzled through these questions, new ones arose for me. How in the world, the researcher in me thought, do we measure, synthesize, and share the things that learners and families value? How do we document and codify these metrics into something meaningful and yet comparable year after year, or environment to environment? But as aggravated as my logical brain was, my idealist brain was inspired and excited, and my mind turned once again - as it had periodically in the past - to quantum physics.
The history of quantum physics is not something I am well versed in, but I do have a fair bit of high school and college level physics training - more, I dare say, than the average person. In my understanding, the key difference between classical (or mechanical) physics and quantum physics is one of scale. Classical physics “works” on a human scale - objects and units of time that we can experience and perceive - while quantum physics describes the extremely small scales of atoms and subatomic particles, far beyond what we humans can directly experience and perceive. The early scholars of quantum physics, then, must have been seen as heretics or simply charlatans in their day - talking about a world too small to observe directly, in ways that went against the prevailing evidence of the time. But what gave quantum physics legitimacy was its ability to explain phenomena that mechanical physics could not, and to make predictions, later borne out by observation, that mechanical physics did not support.
How does this apply to education measurement and research? As a quantitatively trained education researcher who practices mixed-methods (that is, combined quantitative and qualitative) research, I’m well aware of the limitations of our current approaches, especially our current measures. The learners and families I mentioned above are, in their own way, measuring real variables that are meaningful to them and that help them answer their own questions about the efficacy of a learning environment, but we do not have the formal means to measure those variables within our current paradigm and approach to knowledge- or evidence-building. In my mind, it is entirely possible that we are at the horizon of a whole new conception of knowledge, arrived at via a new approach to measurement itself.
What could we know, and what questions could we answer, if we could measure experiences themselves through a more sensory approach to data? How would communities themselves define competencies and micro-competencies, and what would they use as indicators of these? What can we learn from Indigenous frameworks about what knowledge and knowing truly are, and how they are organized in the world and in our experiences? How lucky I am to learn about and meet other researchers who are even further down this path than I am - researchers who are pioneering and reclaiming community-led research for their communities, their traditions, and their ways of being in the world. How exciting it is to live in this moment, where we are at the cusp of research for, by, and of communities becoming the norm - centering deep wisdom rather than generalizable or scalable “truth”, and meaningful solutions rather than “causal claims”. From where I am, yes, the time has come.