Artificial Intelligence in Medical Research

Appropriately using and citing AI tools, and the risks involved in doing so

What does the evidence say about AI and literature reviews?

A recent scoping review ("Large language models for conducting systematic reviews: on the rise, but not yet ready for use - a scoping review") found that AI tools:

  • Had the most evidence to support involvement in data extraction and screening in systematic and other reviews.
  • Did not have supporting evidence for involvement in search development. 
  • Had promising evidence for question development.

Another study found that AI methods could supplement traditional review methods and proved useful for finding "studies with missing keywords, unusual phrasing, or limited indexing."

Finally, this guide and its accompanying chart have good examples of when and when not to use AI tools in reviews, with case studies highlighting instances of hallucinations.

How can you use this in practice?

Covidence, available to MGHers and BWHers, offers machine learning-powered assistance during screening and data extraction (both optional), with plans to expand. These tools will:

  • remove papers that are classified as not being RCTs
  • sort the citations you're screening by relevance (a conceptual sketch of this kind of ranking appears after this list)
  • show highlighted suggestions for the various data fields
  • identify the interventions in the papers you're extracting
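
For a concrete sense of what relevance sorting involves, the short Python sketch below ranks unscreened citations by their TF-IDF similarity to records already marked for inclusion. This is a generic illustration only, not how Covidence implements its sorting; the example abstracts and the scikit-learn approach are assumptions made purely for the demonstration.

```python
# Conceptual sketch only (not Covidence's actual implementation): rank
# unscreened citations by TF-IDF similarity to records already marked
# "include", so likely-relevant records surface first.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical abstracts; in practice these would come from your screening set.
included = [
    "Randomized controlled trial of telehealth for diabetes self-management.",
    "RCT comparing remote glucose monitoring with usual care in type 2 diabetes.",
]
unscreened = [
    "Qualitative study of nurse attitudes toward electronic health records.",
    "Randomised trial of a mobile app for glycaemic control in diabetes.",
    "Narrative review of hospital architecture and patient wayfinding.",
]

# Vectorize all abstracts together, then score each unscreened citation by its
# highest similarity to any already-included citation.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(included + unscreened)
similarities = cosine_similarity(tfidf[len(included):], tfidf[:len(included)])
scores = similarities.max(axis=1)

# Show the most relevant-looking citations first.
for score, abstract in sorted(zip(scores, unscreened), reverse=True):
    print(f"{score:.2f}  {abstract}")
```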

What are signs of quality in an AI tool?

✔ The developers of the tool are committed to the Responsible use of AI in evidence SynthEsis (RAISE) guidelines.

✔ You have read the Terms of Use for the tool and confirmed that it will not retain your content or use it for training.

✔ The tool does not have a funding source that would create a conflict of interest for your paper.

✔ The tool has been trained on the kind of data that would be used in your paper.

✔ The tool has impartial third-party evaluations that you can cite.

✔ The tool is supported by evidence in PubMed or another database. Want to find literature to support a particular AI tool? Ask Us!

✔ The tool has been validated by a high-quality study.

References

Information on this page was adapted from the RAISE (Responsible use of AI in Evidence Synthesis) guidelines webinar hosted by the EAHIL Evidence-Based Information SIG, presented by Ella Flemyng, James Thomas, and Anna Noel-Storr.