Current and future approaches
This post picks up where Good Questions Review left off earlier this year. Apologies for the radio silence; my team has been handling a deluge of demand-driven evidence review projects.
The first two posts on Good Questions Review considered some high-quality methods for selecting policy-relevant questions and the potential importance of timeliness in ensuring that research findings are used in decision-making. This post is intended as a companion to those posts: it discusses the strengths and weaknesses of one family of methods – analysis of policymaking documents – that can be used to verify whether social science research findings are influencing policy and program decision-making. These methods can also give us a partial view of when, how, and by whom research findings are being used. However, the ultimate goals of this entry in the Good Questions Review living literature review are to start a continually updated post that tracks developments in this growing field and to scope some of the ways we can address the weaknesses of these methods.
A few articles lay out the challenges well. A review by Sheila Siar (2023) explains that the policymaking process is neither orderly nor linear and that several different people and/or organisations are involved in the process. These factors, among others, make it difficult to establish what role social science research played in decision-making, if any. Research and expert opinion provide good indicators that these challenges are common. For instance, on the point that policymaking is not orderly or linear, the Cherney et al. (2015) paper referenced in our timeliness post is instructive: in their survey of more than 2000 Australian public officials on how social research gets used, about 35% of respondents said that their department had no formal process for translating research into policy.
An indicator of Siar’s second point, about the diverse actors that influence the policymaking process, is that experts in the field often call for better relationships between knowledge producers, consumers, and intermediaries. Adam Gamoran (2018), President of the William T. Grant Foundation and an expert in this field, argues that we need more and stronger partnerships that “offer standing structure for supporting relationships of trust and shared goals between university-based researchers and government decision-makers” (pg 187). Through these relationships it is possible to better identify if and how evidence is getting used in policymaking. Gamoran’s article underlines that it can be quite difficult to establish whether social science research is getting used in decision-making and that much of the measurement relies on self-report methods like interviews and surveys. Both the Geddes et al. (2017) and Meisel et al. (2019) articles from the timeliness post describe the sorts of work you’ll find in this field – rich insights built on interviews and workshops with key people in the policymaking process. These sorts of studies are hugely useful for understanding, from the perspectives of policymaking participants, what knowledge has been used, when it was used, and how it was used. However, such methods are subject to the quality of interviewees’ memories, their personal perspectives on the process, and the researchers’ interpretation of the information. For these and other reasons, Gamoran (2018) argues that “we greatly need more work on measurement, whether validating subjective measures or developing objective measures for new domains” (pg 186).
Today’s post focuses on a few measures that can get us a little closer to an objective understanding of whether social science research is being used in decision-making – using citation, content, and other forms of analysis of published documents to observe if, how, when, and by whom evidence is getting used.
Before I provide a few examples of citation and content analysis studies, it may be useful to situate them in a framework. A framework will also help as the Good Questions Review develops posts on other families of methods for measuring if and how research is used in decision-making. Sarah Morton (2015) distils the existing approaches to measuring evidence use and impact in policy and practice in a useful way: forward tracking, backward tracking, and evaluation of mechanisms to increase use. Forward tracking approaches start with a research output and move forward in time to explore the ways in which that research shaped decision-making. Backward tracking places the focus on a policy or practice outcome (e.g. a law, regulation, or inquiry report) and explores what evidence has been used to create or shape it. Evaluation of mechanisms places the focus on the activities undertaken to increase the uptake and impact of the research.
The methods in today’s post are backward tracking approaches.
Both examples below are drawn from national policymaking processes, but similar studies exist at other levels of government and in other contexts.
Ray et al. (2022) analysed citations in Australian federal parliamentary reports to explore what role academic sources play in government decision-making processes. Their analysis focused on parliamentary inquiry reports, chosen in part for the strength of insights that could be gained given the reports’ role in policymaking, but also because they use a consistent citation format. They analysed the citations contained in 89 majority opinion and dissenting reports – more than 12,000 citations in total. They found that very few citations – only 44 of the more than 12,000 – referred to peer-reviewed journal articles. However, they also observed substantially more “academic” citations attributed to academic experts providing oral testimony or written submissions to an inquiry. On the basis of this finding, the authors conclude that academic researchers who are keen to have their findings taken up by policymakers in an inquiry process – and thus influence any regulations or laws that follow from it – should make time to respond to calls for inquiry submissions in their areas of expertise and, if they are lucky enough to be invited to provide oral testimony to an inquiry, should accept the invitation. Another interesting insight from Ray et al. (2022) is that the most frequently cited knowledge producers in their document set were either Australian organisations or Australian government departments. This may mean that academics who undertake commissioned research for civil society organisations or government departments have a better chance of influencing policy than those who seek to publish their work only in traditional peer-reviewed academic outlets.
Ray et al. (2022) also note some key limitations of their citation analysis that are likely relevant for similar studies. The first relates to whether the document set is a good representation of the policymaking process as a whole. Many internal government documents are not easily accessible to the public, so the research used in those documents is absent from the analysis. Moreover, researchers may not be able to gain a strong sense of the totality of inputs into a given policy decision, making it difficult to understand how the publicly accessible documents fit into the wider decision-making picture. The culture of evidence use within the department leading a policymaking process may also meaningfully shape the types of evidence and total number of citations used. Because the Ray et al. (2022) study analysed documents from several different departments, careful consideration may need to be given to the comparability of the documents. Lastly, depending on the subject, the knowledge system or academic discipline relevant to the inquiry may be important context for understanding why certain types of evidence are used more than others. For instance, in an inquiry about Indigenous Affairs, one might expect more evidence to be supplied by Indigenous communities, some of it in the forms most valued by those communities.
In sum, this study demonstrates that citation analysis can provide a number of potentially valuable insights into what evidence is getting used, and when, in the policymaking process, but it is also subject to a few important limitations that may be difficult to overcome without a different, complementary method.
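To make the descriptive side of citation analysis concrete, here is a minimal sketch of the counting step. The citation records and categories below are invented for illustration – a real study like Ray et al. (2022) would parse thousands of citations from the reports themselves – but the tallying logic is the same.

```python
from collections import Counter

# Hypothetical, simplified citation records extracted from inquiry reports.
# In a real study these would be parsed from the reports' reference lists.
citations = [
    {"source_type": "oral testimony", "producer": "university"},
    {"source_type": "written submission", "producer": "government department"},
    {"source_type": "journal article", "producer": "university"},
    {"source_type": "written submission", "producer": "civil society organisation"},
    {"source_type": "written submission", "producer": "government department"},
]

# Tally how often each source type and each knowledge producer appears,
# mirroring the descriptive counts reported in citation analyses.
by_type = Counter(c["source_type"] for c in citations)
by_producer = Counter(c["producer"] for c in citations)

print(by_type.most_common())
print(by_producer.most_common())
```

With a full document set, the same counts can be broken down per report or per department to surface patterns like the scarcity of journal-article citations that Ray et al. observed.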
Yanovitzky & Weber (2020) provide a great example of an approach that may give us greater insight into how evidence is being used. They used a content and thematic analysis approach, which involved reading, coding, and analysing text. Their study focused on bills and committee hearing transcripts from the United States Congress and sought to demonstrate how an approach like theirs might better capture the complexity of evidence use through a rigorous and reproducible method. The content analysis tool developed for the study captured who supplied evidence, what type of evidence it was, the context in which it was used, and the timing of use. They also used thematic analysis to better understand how arguments based on evidence were made (e.g. conceptual use to support a claim, instrumental use to reframe the issue, or tactical use to justify a pre-existing belief). Without going into too much detail, several aspects of their methods make their approach transparent, rigorous, and reproducible: the authors describe their document search and selection process in enough detail for others to reproduce it, they trained the coders on a shared codebook to ensure consistent use of concepts when coding the text, and they used a statistical test to confirm that coders applied their codes consistently and for quality control throughout the study.
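That coder-consistency check relies on an inter-coder reliability statistic. As an illustration (the authors' exact test may differ), here is a minimal pure-Python implementation of Cohen's kappa, one of the most common measures of agreement between two coders, applied to hypothetical evidence-use labels:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: agreement between two coders, corrected for chance."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: share of items where the two coders match.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: sum over labels of the product of each coder's
    # marginal label frequencies.
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    expected = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two coders labelling the same ten text excerpts with evidence-use categories.
a = ["conceptual", "instrumental", "conceptual", "tactical", "conceptual",
     "instrumental", "conceptual", "conceptual", "tactical", "conceptual"]
b = ["conceptual", "instrumental", "conceptual", "conceptual", "conceptual",
     "instrumental", "tactical", "conceptual", "tactical", "conceptual"]

print(round(cohens_kappa(a, b), 2))  # → 0.64
```

Kappa discounts the raw agreement rate (here 0.8) by the agreement expected by chance; values above roughly 0.6 are often treated as acceptable, though the threshold is a judgment call.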
In total, the researchers analysed 224 bills and 190 committee hearing transcripts, covering more than 4,500 text excerpts. Unsurprisingly, they found that the amount of evidence used on a particular subject was broadly aligned with the overall policymaking activity on that subject. Additionally, they found that evidence was used in different ways depending on the context. For instance, instrumental use of evidence to reframe an issue was much more common in committee hearings than in finished bills, and 94% of all evidence used in bills was conceptual use to support a claim.
Although these findings are useful and interesting to researchers in this field, Yanovitzky & Weber (2020) also intended this study as a proof of concept for using content and thematic analysis to focus on how decision-makers deploy evidence towards a specific goal. Learning this ‘how’ can round out the critical if, when, and by whom that can be understood through methods like citation tracking (see Ray et al. (2022)). Moreover, it could be used to partially verify the more subjective accounts of the ‘how’ that can be gained through interviews (see Geddes et al. (2017)).
Yanovitzky & Weber (2020) mention some critical limitations as well. Their approach requires substantial human resources and time, and potentially financial resources too. However, they speculate that natural language processing may allow some of their methods to be automated in the future. Additionally, they reflect that the method requires researchers to infer decision-makers’ motivations for using evidence from the text, rather than confirming them with the people themselves. Lastly, similar to Ray et al. (2022), the documents used in this process may be subject to a number of biases due to a preference for publicly available or easily accessed documents.
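To give a feel for what that speculated automation might look like, here is a deliberately naive sketch of a first step: keyword rules that flag sentences appearing to reference evidence, so that human coders only read the flagged excerpts. The cue list and example transcript are invented; real NLP pipelines would use trained classifiers rather than keyword matching.

```python
import re

# Hypothetical keyword cues for sentences that may reference evidence.
# A real pipeline would replace this with a trained text classifier.
EVIDENCE_CUES = re.compile(
    r"\b(stud(y|ies)|research|data|survey|evaluation|findings?)\b",
    re.IGNORECASE,
)

def flag_evidence_sentences(text):
    """Split text into sentences and keep those matching an evidence cue."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if EVIDENCE_CUES.search(s)]

transcript = (
    "The committee thanks the witnesses. "
    "A recent survey of 2000 officials found low uptake of research. "
    "We now move to the next agenda item."
)
print(flag_evidence_sentences(transcript))
# → ['A recent survey of 2000 officials found low uptake of research.']
```

Even this crude filter hints at why automation is attractive: it could cut the volume of text that trained coders must read, though it cannot by itself infer why the evidence was deployed.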
In sum, Yanovitzky & Weber (2020) provide a useful tool to complement the insights that can be gained from other established methods. However, the benefits of this method need to be weighed against its limitations, notably the need to invest significant resources.
The studies featured in this post show that it is possible to use relatively objective methods to assess if, when, by whom, and how social science research is being used in policymaking processes. Practically and methodologically, some of these methods are straightforward to implement because the relevant documents can be easily accessed and may even be presented in consistent formats. Although some methods are more time- and labour-intensive than others, it is clearly possible to search for and select documents transparently and to apply several analytical methods in a consistent, rigorous way that could be reproduced by others. Thus, these ‘backward tracking’ methods are viable means of partially verifying the ways that research is being used in decision-making. However, the limitations of the data used in these studies – publicly available documents – should not be understated. Given the complex, relational, iterative nature of many policymaking processes, these tools likely fail to capture a number of critical moments when evidence shapes decision-making. Luckily, we have other methods to account for this.
Based on the learnings from this post, I plan to monitor newly published literature for two types of studies. First, those that combine document-based analysis with interviews or focus groups to provide an account of evidence use in a policymaking process from multiple perspectives. Studies like this might show how best to combine methods to give the most holistic and useful insights into the ways that good research can influence decision-making. Second, papers that describe the current state of the art in using natural language processing tools to automate these methods. Developments in this space may give researchers the ability to work with larger datasets or to use complementary methods.
Stay tuned!
Note: this essay is continuously updated as relevant articles are added to Good Questions Review.
Gamoran, A. (2018). Evidence-Based Policy in the Real World: A Cautionary View. The ANNALS of the American Academy of Political and Social Science, 678(1), 180–191. https://doi.org/10.1177/0002716218770138
Geddes, M., Dommett, K., & Prosser, B. (2017). A recipe for impact? Exploring knowledge requirements in the UK Parliament and beyond. Evidence and Policy, 14(02), 259–276. https://doi.org/10.1332/174426417X14945838375115
Meisel, Z. F., Mitchell, J., Polsky, D., Boualam, N., McGeoch, E., Weiner, J., Miclette, M., Purtle, J., Schackman, B., & Cannuscio, C. C. (2019). Strengthening partnerships between substance use researchers and policy makers to take advantage of a window of opportunity. Substance Abuse Treatment, Prevention, and Policy, 14(1), 12. https://doi.org/10.1186/s13011-019-0199-0
Morton, S. (2015). Progressing research impact assessment: A ‘contributions’ approach. Research Evaluation, 24(4), 405–419. https://doi.org/10.1093/reseval/rvv016
Ray, A., Young, A., & Grant, W. J. (2022). Analysing the types of evidence used by Australian federal parliamentary committees. Australian Journal of Public Administration, 81(2), 279–302. https://doi.org/10.1111/1467-8500.12503
Siar, S. (2023). The challenges and approaches of measuring research impact and influence on public policy making. Public Administration and Policy, 26(2), 169–183. https://doi.org/10.1108/PAP-05-2022-0046
Yanovitzky, I., & Weber, M. (2020). Analysing use of evidence in public policymaking processes: A theory-grounded content analysis methodology. Evidence & Policy, 16(1), 65–82. https://doi.org/10.1332/174426418X15378680726175