
How can we better match research supply and decision-makers’ demand?

The Matching Q-M tool!

Published on Nov 28, 2024

Today’s post introduces another type of post that you’ll be seeing occasionally in 2025. Because much of this living literature review focuses on research methods, I frequently learn about new approaches that Good Questions Review readers might find useful. These posts will usually be a little more succinct, seek to situate a new method, approach, or tool in learning from previous posts, and show how it might contribute to knowledge going forward. This is one such post.

Researchers focused on evidence use find that program and policy decision-makers are increasingly recognising the value of academic research (see Bragge, 2019; Mansilla et al., 2024). However, many factors shape whether research evidence outputs and decision-makers’ knowledge needs are aligned. A few that I’ve engaged with in previous posts on Good Questions Review are the timeliness of research delivery; whether decision-makers are involved in, and “buy in” to, the research being undertaken; and whether the research is communicated at the right phase of the policy-making process and in the right format. A few other barriers worth mentioning are decision-makers’ level of research literacy; researchers’ understanding of, and ability to adapt to, decision-makers’ constraints; and researchers’ and decision-makers’ skill in communicating in the language of the other sector.

I’m a researcher who primarily works on demand-driven projects commissioned by government or civil society organisations, and I’m enthusiastic about research methods. From my experience, balancing all of the above considerations sometimes feels like more of an art than a science. Learning how to overcome some of these barriers has taken time and practice, and teams like the one I work on have developed robust scripts and approaches for understanding decision-makers’ research needs, identifying the right questions and methods, and delivering research or reviews that are as valuable as possible within the time, budget, and other constraints of a project. However, my experience also tells me that this field would benefit from more easily navigated, evidence-supported, plain-language tools to facilitate conversations between researchers and decision-makers.

The Matching Q-M Tool

The Matching Q-M tool, released earlier this year by the Global Commission on Evidence to Address Societal Challenges (2024) and described further in Mansilla et al. (2024), will meaningfully push forward the science of matching decision-maker demand with researcher supply. The tool contains a comprehensive and easily navigated typology of questions that draws a clear line between a decision-maker’s knowledge needs, the types of questions that could address those needs, and suitable methodologies for supplying useful evidence. My colleagues and I are unaware of any other question typology that is as robust and usable as the Matching Q-M tool.

At this point, a short primer on the key components of the tool is in order, before I unpack the reasons for my optimism about the ways the Matching Q-M tool might help overcome the previously mentioned barriers to supply and demand matching. It might be worth your while to open the tool in another tab and explore the interactive image before reading on.

The tool builds on the Global Commission on Evidence to Address Societal Challenges (2022) report, which puts forward four stages of decision-making processes and related goals of research. The four stages are:

  1. “Understanding a problem and its causes

  2. Selecting options to address the problem

  3. Identifying implementation considerations

  4. Monitoring implementation and evaluating impacts”

Several potential goals that a decision-maker may have for a research project are laid out under each stage. For instance, under Stage 2 (Selecting options to address the problem), the goals are:

  • “Finding and understanding potential options

  • Assessing the expected impacts of options

  • Maximising the expected impact of options

  • Contributing to prioritize options”

From there, you can navigate to a specific goal, and the tool provides example questions under each goal as well as potential methodologies for responding to those questions.

At this point, it is important to note that the example questions and methods are rooted in evidence, not merely sound logic. Mansilla et al. (2024) extended the Commission’s work by using a Delphi study with global experts to:

  • rank the appropriateness of specific types of questions for achieving a goal, and

  • rank the appropriateness of methodologies to respond to each question type.

The results of the Delphi study provide a foundation for the Matching Q-M tool.
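For readers who find structure easier to see in code, the tool’s logic can be sketched as a nested mapping from stages to goals to question types to methodologies ranked by appropriateness. The sketch below is a toy illustration in Python under my own assumptions: the stage and goal labels echo the Commission’s wording above, but the example question, methods, and ranks are hypothetical placeholders, not content taken from the Matching Q-M tool or the Delphi results.

```python
# Illustrative sketch only: stages map to goals, goals to question types,
# and question types to methodologies ranked by appropriateness.
# All entries are hypothetical placeholders, not the tool's actual content.

MATCHING_QM_SKETCH = {
    "Selecting options to address the problem": {
        "Assessing the expected impacts of options": {
            "What impact does option X have on outcome Y?": [
                # (methodology, hypothetical appropriateness rank; 1 = most appropriate)
                ("Systematic review of effectiveness studies", 1),
                ("Randomised controlled trial", 2),
                ("Quasi-experimental study", 3),
            ],
        },
    },
}


def suggest_methods(stage: str, goal: str, question: str) -> list[str]:
    """Return methodologies for a question, ordered by appropriateness rank."""
    methods = MATCHING_QM_SKETCH[stage][goal][question]
    return [name for name, _rank in sorted(methods, key=lambda m: m[1])]


if __name__ == "__main__":
    print(suggest_methods(
        "Selecting options to address the problem",
        "Assessing the expected impacts of options",
        "What impact does option X have on outcome Y?",
    ))
```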

How the Matching Q-M tool might help to overcome barriers to evidence use

I’m quite keen to use this tool as a guide in collaborative research commissioning conversations with decision-makers, in part because I see the Matching Q-M tool as a useful way to overcome several of the barriers mentioned earlier. By foregrounding a discussion of the decision-maker’s goals for commissioning a research project, it keeps the process centred on the ultimate purpose of the project and its ideal outputs. Moreover, by providing a clear list of potential methods associated with different question types, it creates an opportunity to rapidly identify the methods that are likely to provide the most robust evidence within the practical and contextual constraints of the project. I can see this tool helping researchers to efficiently understand the core purpose of the project and its context, and then to explore a wide range of follow-on questions that flow from the proposed methods, for example:

  • Which approach can result in a sufficient level of confidence for all stakeholders involved?

  • Which approach can deliver evidence within the relevant time frame or window of opportunity?

  • How complex is it to implement each of the methods? Does the research team have the skills? Are the methods feasible?

  • Which methods’ outputs are likely to be best for communicating with senior decision-makers? (e.g. Will the reader be able to interpret complex statistics? Do decision-makers prefer evidence communicated in a certain format, at a certain length, etc.?)

  • How well can various methods complement other sources of data that will be used for decision-making?

  • How much do each of the methods cost? Which methods are feasible within the budget?

All of these matters can be addressed without the tool. However, my experience tells me that sharing this well-designed, interactive, plain-language, evidence-supported tool with decision-makers and navigating it collaboratively will likely support a large step forward in the speed and effectiveness of identifying research questions with the most potential for impact. Beyond the speed and efficiency of identifying the right question, the fact that the tool’s landing page orients the user to all the stages of decision-making where evidence could be used may help decision-makers and researchers anticipate future questions and evidence needs related to the project at hand[i].

I hope to have the chance to pilot this tool in our research commissioning process over the coming month, and possibly to document it[ii]. All going well, I may have some provisional reflections on the claims I’ve made in this post in the second half of 2025. In any case, I hope that knowing about this tool might benefit your work as well.

 Note: this essay is continuously updated as relevant articles are added to Good Questions Review.


[i] Anticipating future evidence needs has been found to result in increased evidence uptake (see Rose et al., 2020).

[ii] As a side note: When the initial application for support of Good Questions Review was sent to Open Philanthropy in 2023, one of three goals that I initially put forward for the site was to “…help to build a nuanced typology of policy-focused questions in the social sciences that provides insights into their use cases, strengths/weaknesses, and implications for research and policy decision-making”. When I learned about the Matching Q-M tool, it was clear that the goal should be revised to “help to test and further develop recent substantial work on policy-relevant questions, like the Matching Q-M tool”.


Documents cited

Bragge, P. (2019). Ten ways to optimize evidence-based policy. Journal of Comparative Effectiveness Research, 8(15), 1253–1256. https://doi.org/10.2217/cer-2019-0132  

Global Commission on Evidence to Address Societal Challenges. (2022). The Evidence Commission report: A wake-up call and path forward for decision-makers, evidence intermediaries, and impact-oriented evidence producers. McMaster Health Forum. https://www.mcmasterforum.org/networks/evidence-commission/report/english

Global Commission on Evidence to Address Societal Challenges. (2024). The Matching Q-M tool. McMaster Health Forum. https://www.mcmasterforum.org/networks/evidence-commission/global-evidence-architecture/matching-qm-tool

Mansilla, C., Guyatt, G., Sweetman, A., & Lavis, J. N. (2024). Matching the right study design to decision-maker questions: Results from a Delphi study. PLOS Global Public Health, 4(2), e0002752. https://doi.org/10.1371/journal.pgph.0002752

Rose, D., Burgman, M., & Sutherland, W. (2020, January 28). The civil service doesn’t just need more scientists – it needs a decision-making revolution. Impact of Social Sciences. https://blogs.lse.ac.uk/impactofsocialsciences/2020/01/28/the-civil-service-doesnt-just-need-more-scientists-it-needs-a-decision-making-revolution/


 
