Identifying relevant studies for systematic reviews and health technology assessments using text mining
Introduction
Systematic reviews are a widely used method to bring together the findings from multiple studies in a reliable way, and are often used to inform policy and practice (such as guideline development). A critical feature of a systematic review is the application of scientific method to uncover and minimise bias and error in the selection and treatment of studies. However, the large and growing number of published studies, and their increasing rate of publication, makes the task of identifying relevant studies in an unbiased way both complex and time consuming.
Unfortunately, the specificity of sensitive electronic searches of bibliographic databases is low. Reviewers often need to look manually through many thousands of irrelevant titles and abstracts in order to identify the much smaller number of relevant ones; a process known as 'screening'. Given that an experienced reviewer can take between 30 seconds and several minutes to evaluate a citation, the work involved in screening 10,000 citations is considerable (and the burden of screening is sometimes considerably higher than this).
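To put this in concrete terms, using an assumed mid-range figure: at roughly one minute per citation, screening 10,000 citations amounts to about 167 hours of work, or more than four working weeks of full-time effort for a single reviewer.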
The obvious way to save time in reviews is simply to screen fewer studies. Currently, this is usually accomplished by reducing the number of citations retrieved through electronic searches by developing more specific search strategies, thereby reducing the number of irrelevant citations found. However, limiting the sensitivity of a search may undermine one of the most important principles of a systematic review: that its results are based on an unbiased set of studies.
Project aims
We aim to develop new text mining methods to assist with screening in systematic reviews. Two methods will be developed:
- Screening prioritisation. The list of items for manual screening will be prioritised automatically, so that studies at the top of the list are those that are most likely to be relevant.
- Automatic classification. Existing reviews contain documents that have already been manually classified as "include" or "exclude". Using a retrospective analysis of these data, we will apply machine learning techniques to train systems to apply these categorisations automatically (a minimal illustrative sketch of both approaches follows this list).
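As a simple illustration of how the two methods relate, the sketch below (not the project's actual system) trains a text classifier on citations already labelled "include" or "exclude" and then ranks unscreened citations by their predicted probability of relevance. The library (scikit-learn), the model choice, and the example abstracts are all assumptions made purely for illustration.

```python
# A minimal sketch, assuming a set of citations already labelled
# "include" (1) / "exclude" (0) from an existing review.
# The texts, labels, and model choice are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical manually screened abstracts from a completed review
labelled_texts = [
    "Randomised controlled trial of a school-based smoking cessation programme.",
    "Cohort study of dietary intake and cardiovascular outcomes in adults.",
    "Editorial commentary on trends in public health funding.",
    "Case report of an adverse drug reaction in a single patient.",
]
labels = [1, 1, 0, 0]

# Unscreened citations retrieved by an electronic search
unscreened_texts = [
    "A cluster randomised trial evaluating a community smoking intervention.",
    "Letter to the editor regarding journal peer-review policy.",
]

# Automatic classification: train a simple bag-of-words classifier
# on the manually screened data
vectoriser = TfidfVectorizer(stop_words="english")
X_train = vectoriser.fit_transform(labelled_texts)
classifier = LogisticRegression()
classifier.fit(X_train, labels)

# Screening prioritisation: rank unscreened citations by predicted
# probability of relevance, so likely includes are screened first
X_new = vectoriser.transform(unscreened_texts)
scores = classifier.predict_proba(X_new)[:, 1]
for score, text in sorted(zip(scores, unscreened_texts), reverse=True):
    print(f"{score:.2f}  {text}")
```

In practice such a ranking could either reorder the manual screening queue (prioritisation) or, with a chosen probability threshold, exclude low-scoring citations automatically (classification).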
By reducing the burden of screening in reviews, new text mining methodologies may enable systematic reviews both to be completed more quickly (meeting exacting policy and practice timescales and increasing their cost efficiency) and to minimise the impact of publication bias, reducing the chances that relevant research will be missed (by allowing the sensitivity of searches to be increased). In turn, by facilitating more timely and reliable reviews, this methodology has the potential to improve decision-making across the health sector and beyond.
Project team
Principal Investigator: Prof. James Thomas, Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre), Institute of Education, University of London
Co-Investigators:
Prof. Sophia Ananiadou (NaCTeM)
Mr. John McNaught (NaCTeM)
Dr. Alison O'Mara-Eves (EPPI-Centre)
Researcher: Mr. William Black (NaCTeM)
Related Publications
O'Mara-Eves, A., Thomas, J., McNaught, J., Miwa, M. and Ananiadou, S. (2015). Using text mining for study identification in systematic reviews: A systematic review of current approaches. Systematic Reviews 4:5 (Highly Accessed)
Miwa, M., Thomas, J., O'Mara-Eves, A. and Ananiadou, S. (2014). Reducing systematic review workload through certainty-based screening. Journal of Biomedical Informatics
Mihaila, C., Kontonatsios, G., Batista-Navarro, R. T. B., Thompson, P., Korkontzelos, I. and Ananiadou, S. (2013). Towards a Better Understanding of Discourse: Integrating Multiple Discourse Annotation Perspectives Using UIMA. In: Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, Association for Computational Linguistics, Sofia, Bulgaria, pp. 79-88 (LAW Challenge Award)
Rak, R. and Ananiadou, S. (2013). Making UIMA Truly Interoperable with SPARQL. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, Sofia, Bulgaria, pp. 88-97
Rak, R., Rowley, A., Carter, J. and Ananiadou, S. (2013). Development and Analysis of NLP Pipelines in Argo. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, Association for Computational Linguistics, Sofia, Bulgaria, pp. 115-120
Funding
This project is being funded by the Medical Research Council (MRC). Grant number: MR/J005037/1