
Choosing policy-relevant research questions

What might work and why is it important?

Published on May 21, 2024

For the last several decades, experts have argued that the social sciences have not realised their potential for shaping important policy decisions. In 2002, William Julius Wilson, former president of the American Sociological Association and MacArthur Fellow (the ‘genius’ grant), said “social science has not engaged sufficiently with policy-makers to make it possible to take forward ‘evidence’ based policies”. A decade later, Christopher Whitty, who would go on to be the UK’s Chief Medical Officer during COVID-19, said there remained “a wide-open goal for timely, relevant, rigorous and readable qualitative and quantitative social science addressing practical questions in policymaking.” Fast forward one more decade and one can find researchers making similar points, from Singapore to South Africa, about how social science research was used during the pandemic.

One logical way to address this concern would be to ensure that social scientists choose policy-relevant research questions. Doing so would not only mean that research might be ready when a policymaker’s need for knowledge arises, but also that researchers and policymakers might engage with each other more regularly to better coordinate their work. This post describes a few key methods that might help social scientists develop more policy-relevant questions.

User engagement and cocreation – involving policymakers in the research process

Phoenix et al. (2019) provides an example of engaging potential research users (policymakers) in a shared process of research question development. Their project sought to understand farmers’ priorities and, through targeted communications of the findings, to ensure that farmers’ perspectives are considered when new agricultural policies are made in the UK. The researchers in this case were employees of a government department, and much of the article is framed around their playing a ‘knowledge broker’ role between research and policy communities.

The first step in their research process was to work with senior management and policy teams from their department to co-design the project’s research questions and ensure their relevance. In the next phase of research, which focused on gathering farmers’ perspectives, the researchers invited policymakers to join focus group discussions so that the policymakers could ask follow-up questions directly. After the research was finished, the team developed tailored communications about their findings for several audiences: emails, briefing sessions, slides, and strategic meetings for the policy teams; strategic meetings for senior management; emails and slides for farmers; and slides for ministers, among others.

After the project, the researchers concluded that their methods were valuable for ensuring their research questions and findings were policy-relevant, for reasons beyond the mere fact that policymakers were actively involved in drafting the research questions. They observed that having policymakers visit farming communities to participate in focus groups made the policymakers more likely to consider the practicalities of farming and the challenges farmers may face as a result of their policy decisions. This direct engagement was also perceived to build farmers’ trust in the process and in the eventual outcomes of the research, because they had the experience of being heard directly by people who would make policy decisions. Lastly, the researchers suggest that the relationships built through the project might start a feedback loop that encourages more policy-relevant research questions in the future.

The researchers also found that these methods seemed to open new avenues for decision-makers to take up evidence in their work. To this end, the paper cites a senior manager saying that they wanted more colleagues to learn about the research and to engage with the researchers working on this topic in the future. This finding is consistent with other research in the area (discussed later in this post).

Cocreation of research questions, as in Phoenix et al. (2019), can also work in complex organisations on complex topics. Sienkiewicz (2020) discusses learning from their work with the Joint Research Centre (JRC), an organisation that provides independent research to support evidence-based policy within the European Commission (EC).

The JRC set up interdisciplinary ‘Knowledge Centres’ on several priority policy areas to manage knowledge for the EC and develop new questions to meet policymaker needs. Each Centre is overseen by a diverse group of participants encompassing both researcher and policymaker perspectives from across different agencies. The chapter highlights the Knowledge Centre on Migration and Demography (KCMD) as a good example of using cocreation to respond to knowledge gaps.

Across the EU, data about migration have historically been inconsistent in quality, incomplete, and/or out of date. In many cases, countries may attempt to deal with these issues on their own, coalitions of similar agencies from several countries (e.g. border control) might cooperate on one aspect of the issue, or the JRC might try to fill the gaps independently by commissioning research that would ultimately be supplied to the EC. Instead of taking these more siloed approaches, the KCMD developed a regular series of workshops to discuss the challenges of migration data with a range of relevant stakeholders from agencies responsible for migration and home affairs, international development, and civil protection, among others. Through regular meetings and deliberation, the group was able to identify which data gaps mattered most across the full gamut of KCMD perspectives, not just the perspective of researchers or a single agency. Based on this more comprehensive understanding of the issue, the KCMD could then develop research questions, and select research methods, best suited to meeting its policy needs. As in Phoenix et al. (2019), involving policymakers and other relevant perspectives on a complex issue resulted in more fit-for-purpose research questions. And because the process involved participants from several agencies and organisations, each with a unique part to play in migration issues, it offers the potential for the KCMD’s research agenda to be better aligned with strategy and research planning across the EC.

Surveys, voting, and consensus-building workshops

Moving on, there is a well-established set of methods in this field that usually combines a survey to compile a comprehensive long list of questions, workshops to refine the list, and voting to prioritise a final list of the most important or relevant questions. A great example of this approach is Petrokofsky et al. (2010), which describes the Top Ten Questions in Forestry (T10Q) project, an ambitious effort to engage a large number of people in developing policy-relevant questions for forestry across the whole of the UK. (There are several other great examples of this approach, most building on the work of William Sutherland and colleagues; see, e.g., Sutherland et al., 2011.)

The T10Q project started with a survey, sent to 1,600 researchers, policymakers, and woodland owners, to gather a long list of candidate questions along with data about the participants’ backgrounds. The survey received 481 responses containing more than 1,500 potential questions. The researchers then categorised and sorted each question into one of 14 general themes and selected 10 representative questions per theme, thereby cutting the 1,500+ questions down to 140.

The next phase involved a two-day workshop with 43 people from the research, public, NGO, and woodland management sectors, who were tasked with filtering the questions and ultimately voting to determine a top-ten list. The participants were broken into small groups and first asked to reduce each theme’s 10 questions to 5, taking the list from 140 questions down to 70. The small groups then combined questions and removed duplicates where possible, getting the list down to 47 questions. Finally, the workshop participants voted by secret ballot to identify the Top Ten Forestry Questions to guide research in the UK.
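To make the arithmetic of this winnowing concrete, below is a minimal Python sketch of a T10Q-style pipeline: themed shortlisting, group reduction, and a final secret ballot. All of the questions, groupings, and votes are hypothetical placeholders, not the project’s actual data or procedure.

```python
import random
from collections import Counter

# Stage 1: survey questions sorted into 14 themes, 10 kept per theme (-> 140).
# Placeholder question labels stand in for the real survey responses.
themes = {f"theme_{i}": [f"Q{i}.{j}" for j in range(1, 11)] for i in range(1, 15)}

# Stage 2: small groups halve each theme's list (140 -> 70); in the actual
# project, merging near-duplicates then brought the list down to 47.
candidates = [q for qs in themes.values() for q in qs[:5]]

# Stage 3: secret ballot -- each participant votes for 10 questions, and the
# 10 most-voted questions become the final list.
def tally_top_ten(ballots):
    counts = Counter(q for ballot in ballots for q in ballot)
    return [q for q, _ in counts.most_common(10)]

ballots = [random.sample(candidates, 10) for _ in range(43)]  # 43 participants
print(tally_top_ten(ballots))
```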

Although the resulting questions did not perfectly reflect the perspectives of all 43 workshop participants, every participant had voted for at least 1 of the 10 final questions. This process, and those like it, clearly provides a road map for a comprehensive, systematic, and relatively transparent way of identifying questions for a particular field. The orderly, step-wise reduction made sense of a set of questions (1,500+) that would otherwise have been unmanageable. It was also so well described that it would be feasible to reproduce the approach, and possibly even to attempt it at a larger scale. Additionally, by reaching out to a wide swath of stakeholders, the project gathered many perspectives on what issues can and should be considered policy-relevant. On this point, however, although the project did collect data on which sectors were represented (e.g. research, public sector, NGO), a key limitation was who did and did not choose to participate. Most notably, the authors observed that the devolved governments of Wales, Northern Ireland, and Scotland were not represented at the workshop that selected the Top Ten Questions. These governments have a key role in making forestry policy in the UK, so their absence means the final result may have lacked some critical voices in the discussion. This said, the authors reflect that outcomes of processes like the T10Q can provide additional input to existing government decision-making and deliberation, and they believe such insights may be particularly useful when funding resources are limited and hard decisions need to be made.

Akerlof et al. (2020) applied research methods similar to those of Petrokofsky et al. (2010) in their work to build an international research agenda on how to provide science advice to legislators. I won’t describe most of their methods in detail, which, just like Petrokofsky et al. (2010), involved a survey, workshops, and voting. However, there is one key point of methodological departure relevant to the topic of this post. In the final step of the research, Akerlof et al. (2020) used a method more oriented towards understanding and building consensus than the voting applied in Petrokofsky et al. (2010): a technique called Q methodology, which identifies consensus within a group by gathering participants’ perspectives, processing the data from those perspectives, and finally ranking a series of statements or questions that result from the previous steps. Without going into too much detail, this method asks expert participants to prioritise questions or statements, but it also systematically explores the values, beliefs, and perspectives that shape participants’ areas of agreement and disagreement[1] and identifies commonalities among them. This gives researchers firmer footing to navigate and build consensus across diverse groups of people. However, as with the methods of Petrokofsky et al. (2010), Q methodology is only as good as who actually joins the process and how they are involved. In this case, the authors found that participants from some regions of the world (e.g. Southeast Asia) were underrepresented, and that the comprehensiveness of the process may have been limited by the fact that research materials were provided only in English.
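For readers curious about the mechanics, below is a minimal Python sketch of the core statistical step behind Q methodology: by-person factor analysis, in which participants’ rankings are correlated with one another and the resulting correlation matrix is factored, so that each factor represents a shared perspective. The data here are random placeholders, and the plain eigendecomposition is a stand-in for the rotated factor analysis that a real Q study, such as Akerlof et al. (2020), would run with dedicated software.

```python
import numpy as np

rng = np.random.default_rng(0)
n_participants, n_statements = 20, 30

# Each row is one participant's Q-sort: a forced ranking of all statements.
# (Random permutations here; a real study would use actual sorting data.)
sorts = np.array([rng.permutation(n_statements) for _ in range(n_participants)])

# Correlate participants with each other -- in Q methodology, people, not
# items, play the role of variables.
person_corr = np.corrcoef(sorts)

# Factor the person-correlation matrix: each large eigenvalue corresponds to
# a cluster of participants who sorted the statements in similar ways,
# i.e. a shared viewpoint.
eigenvalues, eigenvectors = np.linalg.eigh(person_corr)
order = np.argsort(eigenvalues)[::-1]
loadings = eigenvectors[:, order[:2]]  # loadings on the two largest factors

# Participants loading strongly on the same factor share a perspective;
# comparing factors shows where the group agrees and where it diverges.
for p, (f1, f2) in enumerate(loadings):
    print(f"participant {p}: factor1={f1:+.2f} factor2={f2:+.2f}")
```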

But does any of this result in research getting used in decision-making?

The short answer is: yes, some of it! Langer et al. (2016) provides wide-ranging insights relevant to the papers already mentioned. Their work looked into what might work to increase research evidence use by decision-makers, including findings about what may work to build agreement about policy-relevant, fit-for-purpose research questions. The authors undertook two related literature review projects: one systematically identified and analysed existing review papers (a ‘systematic review of reviews’), and the other broadly explored the social science literature on the subject. Through this thorough consideration of previously published papers, the authors found evidence to support a few types of approaches to asking more policy-relevant questions. They included:

  • user engagement [e.g., developing formal processes and lines of communication to support decision-makers’ involvement in the research process],

  • journal clubs [e.g., giving researchers and policymakers opportunities to communicate about the value of various research findings], and

  • Delphi panels [a well-established method that asks experts to engage in several rounds of surveys to reveal consensus; see the sketch below].
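As a concrete illustration of the Delphi idea, here is a minimal Python sketch of one common way to track convergence across survey rounds, using the interquartile range (IQR) of expert ratings. The questions, ratings, and consensus threshold are all hypothetical placeholders.

```python
import statistics

# Hypothetical Delphi panel: experts rate each candidate question (1-9) in
# successive rounds, and an item counts as reaching consensus once the
# interquartile range (IQR) of its ratings falls below a threshold.

def iqr(ratings):
    q1, _, q3 = statistics.quantiles(ratings, n=4)
    return q3 - q1

# question -> (round 1 ratings, round 2 ratings); all values are made up.
rounds = {
    "Q1: data sharing": ([2, 5, 8, 9, 3, 7], [5, 6, 7, 6, 5, 7]),
    "Q2: timeliness":   ([1, 9, 2, 8, 5, 4], [2, 8, 3, 7, 5, 4]),
}

CONSENSUS_IQR = 2.0  # illustrative cut-off, not a standard value
for question, (r1, r2) in rounds.items():
    status = "consensus" if iqr(r2) <= CONSENSUS_IQR else "needs another round"
    print(f"{question}: IQR {iqr(r1):.1f} -> {iqr(r2):.1f} ({status})")
```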

Phoenix et al. (2019) and Sienkiewicz (2020) clearly describe examples of methods that engage policymakers in the research process in ways that were perceived as valuable. Moreover, Phoenix et al. (2019) shows the success of tailored communications in supporting policymakers’ interest in findings, and potentially their uptake into decision-making. Finally, Akerlof et al. (2020) uses Q methodology, an approach that has some similarities to Delphi panels and is designed specifically to identify consensus among participants.

Langer et al. (2016) were unable to identify research to support the effectiveness of strategies like:

  • using communities of practice to target the creation of policy-relevant research,

  • encouraging participation and inclusion of multiple voices on fit-for-purpose evidence and relevant questions, and

  • professional education that links people from different backgrounds to develop more policy-relevant studies.

A lack of evidence about these and other strategies does not necessarily mean that they do not work, but rather that the authors were unable to identify studies clearly demonstrating that they do. Petrokofsky et al. (2010) and Akerlof et al. (2020) clearly show that it is possible to engage a wide range of people in large-scale processes to find the most policy-relevant research questions on a complex topic, but it is less clear whether these processes result in research being taken up into policy decisions.

Looking forward

Good Questions Review will continue to return to the topic of choosing policy-relevant questions, as well as to other substantial questions, such as whether social science should seek to be policy-relevant at all. For now, though, this post is designed to set the scene for related posts on:

  • uncertainty about how research gets used in policy,

  • whether delivering research in a timely way shapes whether it gets used in policy, and

  • a deep dive about a new tool for aligning policymakers’ knowledge needs with research questions and methods.

Note: this essay is continuously updated as relevant articles are added to Good Questions Review.


[1] One could learn this kind of information by recording and analysing data about the workshop deliberation process in Petrokofsky et al. (2010), but it is unclear whether this occurred.


Articles cited

Akerlof, K., Allegra, A., Webler, T., Heath, E., Cloyd, E. T., Washbourne, C.-L., & Tyler, C. (2020). New Methods in Creating Transdisciplinary Science Policy Research Agendas: The Case of Legislative Science Advice. Science and Public Policy, 47(4), 536–547. https://doi.org/10.1093/scipol/scaa033

Daros, N., Lam, I., & Woon, K. K. (2024). Singapore’s Response to the COVID-19 Pandemic. Sojourn: Journal of Social Issues in Southeast Asia, 38(3), 409–428. https://www.jstor.org/stable/27266436

Langer, L., Tripney, J., & Gough, D. (2016). The science of using science: Researching the use of research evidence in decision-making. UCL Institute of Education, EPPI-Centre.

Mathew, T. H., Louis, N., Mabila, T. E., & Mugambiwa, S. S. (2021). Implications of the Marginalisation of Social Sciences in the Fight against the Covid 19 Pandemic: A Humanities Perspective. International Journal of Criminology and Sociology, 10, 1533–1541. https://doi.org/10.6000/1929-4409.2021.10.175

Petrokofsky, G., Brown, N. D., Hemery, G. E., Woodward, S., Wilson, E., Weatherall, A., Stokes, V., Smithers, R. J., Sangster, M., Russell, K., Pullin, A. S., Price, C., Morecroft, M., Malins, M., Lawrence, A., Kirby, K. J., Godbold, D., Charman, E., Boshier, D., … Arnold, J. E. M. (2010). A participatory process for identifying and prioritizing policy-relevant research questions in natural resource management: A case study from the UK forestry sector. Forestry, 83(4), 357–367. https://doi.org/10.1093/forestry/cpq018

Phoenix, J. H., Atkinson, L. G., & Baker, H. (2019). Creating and communicating social research for policymakers in government. Palgrave Communications, 5(1), 98. https://doi.org/10.1057/s41599-019-0310-1

Sienkiewicz, M. (2020). From a Policy Problem to a Research Question. In Science for Policy Handbook (pp. 52–61). Joint Research Centre (JRC), European Commission. https://doi.org/10.1016/B978-0-12-822596-7.00006-1

Sutherland, W. J., Fleishman, E., Mascia, M. B., Pretty, J., & Rudd, M. A. (2011). Methods for collaboratively identifying research priorities and emerging issues in science and policy. Methods in Ecology and Evolution, 2(3), 238–247. https://doi.org/10.1111/j.2041-210X.2010.00083.x

Whitty, C. J. M. (2015). What makes an academic paper useful for health policy? BMC Medicine, 13(1), 301. https://doi.org/10.1186/s12916-015-0544-8

Wilson, W. J. (2002). Expanding the Domain of Policy-Relevant Scholarship in the Social Sciences. PS: Political Science & Politics, 35(1), 1–4. https://doi.org/10.1017/S104909650200001X
