Keep a ‘human in the loop’ when using big data for health policy
Prevention Centre News
On a visit to Australia later this month, Professor Osgood will discuss with Australian policy makers how big data technology can revolutionise all levels of health decision-making, informing everything from ad-hoc and long-term surveillance to public policy assessment and nimble responses to health crises.
However, the key is in how the data are used – whether there is a “human in the loop” to make decisions, said Professor Osgood, of the University of Saskatchewan in Canada.
“Much of the current loss of trust in big data use has to do with inferencing and automatically undertaking action based on cross-linked data as part of routine processes,” he said.
“The techniques I’m proposing are focused on providing sources of evidence that are far more varied and, critically, are used to enable human decision-making.”
Professor Osgood said missteps in government processes and decision-making were nothing new – for example, early experiments with artificial intelligence in the 1980s and the early Internet in the 1990s had fallen short. However, key elements of these visions were now central to many spheres of life.
Now technology was available to use big data to deliver a dramatically enhanced body of evidence to inform decision-making. It could provide a much improved understanding of health behaviours and exposures and help to improve insights into patient intentions and decision-making.
He said modelling, including dynamic simulation modelling, could help make sense of the cacophony of big data available and provide insights into the implications of new policies.
A key principle for governments in Australia was to invest in big data sufficiently that different evidence sources could be cross-checked against each other and corroborated – a step crucially neglected in the posting of Centrelink letters, Professor Osgood said.
“Another recommendation is to recognise the need to train IT staff in government agencies so that they are capable of overseeing and leveraging the new technologies. Failing to make such investments – and relying instead on contractors – threatens to be penny wise and pound foolish.”
He said it would also be important to proactively manage privacy standards during the transition to big data in health.
What are big data?
Professor Osgood says “big data” are characterised by:
- A large Volume of information. For a given study participant or patient, this might be tens of millions of records.
- High Velocity – the data come in very quickly. For example, a participant in a study might have their physical activity and location sampled every minute to sense how movement varies with exposure to different environments and pollutant levels (as measured by municipal sensors).
- A wide Variety of data. Interlinked data are collected from many sources for each person over time, such as information on exposure to tobacco- or alcohol-related messaging while browsing, SMS messages and tweets, self-reports of health-related knowledge, attitudes and behaviour, information from activity trackers and data on purchases. In many cases, such data can be linked to data from the clinical environment, such as lab test results, electronic health records or patient charts, and health service use data.
- Higher Veracity. The evidence collected is often more reliable than that secured using traditional means of data collection, which rely heavily on a limited resource (such as clinical staff) or on patients self-reporting information. Combining data from different sources permits us to assess health behaviours, exposures and attitudes with far greater confidence than has previously been possible.