Thought experiments with ‘fake’ research abstracts help policy makers visualise actions to be taken on evidence



TYPE Prevention Centre News

Ours is a tricky situation, politically speaking. A health department is undertaking Australia’s largest ever scale-up of evidence-based childhood obesity programs into every school and childcare centre across the state.[1] It costs $45m. They have an electronic data monitoring system in place. It’s already telling them that targets are being met. But rather than just rest on their success, they invite a team of researchers to do a behind-the-scenes, no-holds-barred ethnography. It could reveal the ‘real’ story of what goes on at the ground level.[2]

Foolhardy or brilliant?

I’m opting for brilliant. But let me put this in context. New South Wales Health is renowned for being research-savvy. They invest heavily in research capacity building. I’m talking in-house research, strategic investment, partnership research and peer-reviewed publications.[3] The CVs of some of the policy makers in our partnership frankly put some of us at universities to shame.

So it was no surprise to me when our policy-maker co-investigators stretched themselves further.

Still, sending in observers on the ground was risky. Who knows what they might find?

But then we thought, ‘OK, let’s imagine it. Let’s imagine the results now. Let’s imagine a range of outcomes and insights, good and bad, and think ourselves out of the situations we could be placed in.’ So that is what we did.

The purpose of fake abstracts

Our Evidence & Policy article, ‘Mock abstracts with mock findings: a device to catalyse production, interpretation and use of knowledge outputs in a university-policy-practice research partnership’, describes how we designed and wrote ten fake abstracts for ten ‘pretend’ papers we might publish together after the data had been fully analysed.[4] The abstracts were written as a thought experiment for use in-house. What they enabled us to do was not simply picture what the results might be; others had done that before.[5] Imagining the whole ‘fake’ article enabled us to write those vital ‘so what’ sentences at the end. Some of these were about what we would do if the ethnography revealed something grim. Writing the abstract enabled us to visualise a pathway out of any sticky situation we could think of. Plus there was something about seeing one’s name on ten papers that made it more engaging. Human vanity perhaps!

A list of the many purposes of fake abstracts

  • Make more concrete the types of insights that a seemingly vast and amorphous project could produce
  • Illustrate the relationship between particular data collection methods in the study protocol and what the outputs might be
  • Illustrate how theory could be used to guide inquiry and interpret results
  • Interrogate the value of particular insights and what we might learn from them
  • Identify different interpretations
  • Understand which insights are considered more important or more interesting (and why)
  • Prioritise what the order of analysis should be
  • Demonstrate the conventional format for writing abstracts and papers for analysts in the research team who came from disciplinary traditions outside of public health
  • Anticipate what insights and quandaries might be identified by the research and how they might be addressed.

And the real findings?

The real findings are coming out now. The risk taking is being rewarded with new insights about how practitioners orchestrate change processes.[6]

I’d recommend that other teams try what we did. The rehearsal of ideas in advance quickly increased trust; when we started out we were mostly strangers. And the good news is: we are on track for more than ten real papers. So that is a delight.


[1] Green A, Innes-Hughes C, Rissel C, Mitchell J, Milat A, Williams M, Persson L, Thackway S, Lewis N, Wiggers J. (2018) Co-design of the Population Health Information Management System to measure reach and practice change of childhood obesity programs. Public Health Research and Practice, 28(3): e2831822.

[2] Conte K, Groen S, Loblay V, Green A, Innes-Hughes C, Mitchell J, Milat A, Persson L, Thackway S, Williams M, Hawe P. (2017) Dynamics behind the scale up of evidence-based obesity prevention: protocol for a multi-site case study of an electronic implementation monitoring system in health promotion practice. Implementation Science, 12: 146.


[4] Hawe P, Conte K, Groen S, Loblay V, Green A, Innes-Hughes C, Mitchell J, Milat A, Persson L, Thackway S, Williams M. (2019) Mock abstracts with mock findings: a device to catalyse production, interpretation and use of knowledge outputs in a university-policy-practice research partnership. Evidence and Policy. Online in advance of print.

[5] Wutchiett R, Egan D, Kohaut S, Markman HJ, Pargament KI. (1984) Assessing the need for needs assessment. Journal of Community Psychology, 12: 53–60.

[6] Knowledge generated from practice shows how to bring about system change for better health

Professor Hawe’s blog first appeared in Evidence and Policy Blog: A journal of research, debate and practice and is republished in the Prevention Centre’s blog under Creative Commons.