A recent scoping review, "Large language models for conducting systematic reviews: on the rise, but not yet ready for use - a scoping review" (Figure 3), found that AI tools:
Another study found that AI methods can supplement traditional review methods, proving useful for finding "studies with missing keywords, unusual phrasing, or limited indexing."
Finally, this guide and its accompanying chart have great examples of when and when not to use AI tools in reviews, with case studies highlighting instances of hallucinations.
Covidence, available to MGHers and BWHers, offers machine-learning-powered assistance during screening and data extraction (both optional), with plans to expand. These tools will:
What are signs of quality in an AI tool?
✔ The developers of the tool are committed to the Responsible use of AI in Evidence SynthEsis (RAISE) guidelines.
✔ You have read the tool's Terms of Use, and the tool will not retain your content or use it for training.
✔ The tool does not have a funding source that poses a conflict of interest for your paper.
✔ The tool has been trained on data relevant to the data used in your paper.
✔ The tool has impartial third-party evaluations that you can cite.
✔ The tool has evidence in PubMed or another database to support it. Want to find literature to support a certain AI tool? Ask Us!
✔ The tool has been validated by a high quality study.
Information on this page was adapted from the RAISE (Responsible AI in Evidence Synthesis) Guidelines webinar hosted by the EAHIL Evidence-Based Information SIG, presented by Ella Flemyng, James Thomas, and Anna Noel-Storr.