The University of Edinburgh
Information Services

Guidance for systematic reviews

This guide rounds up the most frequently asked questions (FAQs) about conducting systematic reviews.

Managing search results

Once you have finished developing the searches for each resource, run the saved searches and export the results to a place you control. This gives your data a clean cut-off date: the exported results are the ones you will use for the rest of the process.

  • Use bulk export where possible, as it saves time and reduces error.
  • Select export options that include DOIs and other identifiers, which can speed up finding the full text.
  • Look for the option to export results as a RIS file; RIS files are accepted by all reference manager tools and by Covidence, a web-based tool. A quick way to sanity-check an exported RIS file is sketched after this list.
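
If you want a quick, tool-independent sanity check of an exported file, the short Python sketch below counts the records in a RIS export and reports how many carry a DOI. It is a rough aid rather than a substitute for your reference manager's import report: the filename is just the example used further down this page, and the tag handling is deliberately simple (it relies on the two-letter RIS tag at the start of each line).

from pathlib import Path

def summarise_ris(path: str) -> None:
    # Count records ("ER" tag ends a record) and how many carry a DOI ("DO" tag).
    records = 0
    with_doi = 0
    has_doi = False
    for line in Path(path).read_text(encoding="utf-8-sig").splitlines():
        tag = line[:2]
        if tag == "DO":
            has_doi = True
        elif tag == "ER":
            records += 1
            with_doi += int(has_doi)
            has_doi = False
    print(f"{path}: {records} records, {with_doi} with a DOI")

# Example filename only; use the name of your own exported file.
summarise_ris("ResilienceTraumaSysRev_PsycINFOEmbaseMEDLINE_16thAug21.ris")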

Resist screening live results in abstracting and indexing databases:  

  • It is difficult to do in one sitting, and repeating the search later is likely to return changed results, including records added to the database since the search was last run.
  • Results from databases searched on different platforms will contain duplicates (as well as unique results), so you would end up screening the same record multiple times unnecessarily.

Keep the files of the exported results somewhere safe to allow back-tracking in case of questions/data loss. 

  • Use a reference manager to keep a “golden copy” and a “working copy” of the search results.
  • Consider saving the exported file itself, with a filename that helps you remember what it is, e.g. ResilienceTraumaSysRev_PsycINFOEmbaseMEDLINE_16thAug21.ris


Using a reference manager or Covidence also helps when you come to write the methods section: both can be used to record decisions and the reasons for choices made during screening, and they provide a record of how the number of records changed through the screening stage.

All the results need to be in the same place for the screening stage. Before screening, though, duplicate records should be removed.

If using EndNote, do not delete the duplicate records but move them to a separate library instead. 

Mendeley “helps” by not importing duplicates, so remember to note the total number of results at the time of export.

Platforms for managing systematic review processes, such as Covidence and Rayyan, can also identify duplicates.

If the deduplication process has been automatic, make a manual check as well. Beware of multiple, distinct publication items about the same study population; these are best identified manually and merged.
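
If the automatic step feels like a black box, a rough stand-alone check along the following lines shows the kind of matching involved. This Python sketch groups a combined RIS export by DOI, falling back to a normalised title, and lists any group with more than one member for manual review. The filename is a placeholder and the matching is deliberately crude (for example, it will not recognise the same DOI written with and without a https://doi.org/ prefix), so treat its output as a prompt for manual checking, not a verdict.

import re
from collections import defaultdict
from pathlib import Path

def load_records(path):
    # Read a RIS file, keeping only the DOI (DO) and title (TI/T1) tags per record.
    records, current = [], {}
    for line in Path(path).read_text(encoding="utf-8-sig").splitlines():
        tag, value = line[:2], line[6:].strip()
        if tag == "ER":
            records.append(current)
            current = {}
        elif tag in ("DO", "TI", "T1"):
            current.setdefault(tag, value)
    return records

def normalise(title):
    # Lower-case and strip punctuation so near-identical titles compare equal.
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def flag_possible_duplicates(records):
    groups = defaultdict(list)
    for i, rec in enumerate(records):
        key = rec.get("DO") or normalise(rec.get("TI") or rec.get("T1", "")) or f"record-{i}"
        groups[key].append(i)
    for key, members in groups.items():
        if len(members) > 1:
            print(f"Possible duplicates ({key}): records {members}")

flag_possible_duplicates(load_records("combined_results.ris"))  # placeholder filename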

Once you have developed a draft search strategy, you will need to look for evidence of its effectiveness. This can be done by testing the ability of your search to retrieve known relevant articles that represent your search question. This requires you to have a set of relevant articles that you are sure your search should pick up. The steps to test a set of benchmark papers are described below:

  • Once you have run your search strategy and obtained a set of results, the goal is to ensure that this set of benchmark articles is included in those results (a rough way to run this check against an exported file is sketched after this list).

  • First, check which of the benchmark articles are included in the database you are using. If you are testing an Ovid MEDLINE search strategy, the benchmark set can only contain articles indexed in Ovid MEDLINE. The easiest way to check whether an article is present is to search for its title in the Title field.
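
If you have exported the results of the draft search, the benchmark check can also be run outside the database interface, as sketched below in Python. The sketch compares a list of benchmark titles against the titles in an exported RIS file and reports which ones the search missed. The benchmark titles and the filename are placeholders, and exact-title matching is only a rough check: records whose titles differ slightly between databases may be reported as missed even though they were retrieved.

import re
from pathlib import Path

def normalise(text):
    # Lower-case and strip punctuation so titles compare reliably.
    return re.sub(r"[^a-z0-9]+", " ", text.lower()).strip()

def titles_in_export(path):
    # Collect normalised titles (RIS tags TI/T1) from an exported results file.
    titles = set()
    for line in Path(path).read_text(encoding="utf-8-sig").splitlines():
        if line[:2] in ("TI", "T1"):
            titles.add(normalise(line[6:]))
    return titles

# Placeholder benchmark titles: replace with your own known relevant articles.
benchmark_titles = [
    "Example benchmark article title one",
    "Example benchmark article title two",
]

retrieved = titles_in_export("combined_results.ris")  # placeholder filename
for title in benchmark_titles:
    status = "retrieved" if normalise(title) in retrieved else "MISSED - consider revising the strategy"
    print(f"{title}: {status}")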

The checklist is presented in Table 1 of the article, and Table 2 provides some helpful explanations of the main sections of the checklist.