This guide aims to round up the most frequently asked questions (FAQs) on conducting systematic reviews.
Once you have finished developing the searches for each resource, run the saved searches and export the results to a place you control. This provides a clean cut-off date for your results, and this exported set is the one you will use for the rest of the process.
Resist screening live results in abstracting and indexing databases: their content is updated continuously, so the numbers you report would not be reproducible.
Keep the exported results files somewhere safe to allow back-tracking in case of questions or data loss.
Using a reference manager or Covidence helps when you come to write the methods section, as they can be used to record decisions and the reasons for choices made during the screening stage. They also provide a record of how the number of records changed through screening.
All the results need to be in the same place for the screening stage. Before screening, however, duplicate records should be removed.
If using EndNote, do not delete the duplicate records but move them to a separate library instead.
Mendeley “helps” by not importing duplicates, so remember to note the total number of results at the time of export.
Platforms for managing systematic review processes, such as Covidence and Rayyan, can also identify duplicates.
If the deduplication process has been automatic, make a manual check as well. Beware of multiple, distinct publication items about the same study population – these are best identified manually and merged.
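For readers comfortable with scripting, the duplicate-flagging step above can be sketched in a few lines. This is a minimal illustration, not part of EndNote, Mendeley, or Covidence: it assumes you have exported records with a title and (where available) a DOI, and it flags records that share a DOI or a normalised title so you can review them manually. The record data below is hypothetical.

```python
import re

def normalise(title):
    # Lower-case and strip punctuation/whitespace so trivial formatting
    # differences do not hide duplicates.
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def flag_duplicates(records):
    """Return (duplicate_index, original_index) pairs for records that
    share a DOI or a normalised title."""
    seen = {}          # key -> index of first record with that key
    duplicates = []
    for i, rec in enumerate(records):
        keys = []
        if rec.get("doi"):
            keys.append(("doi", rec["doi"].lower()))
        if rec.get("title"):
            keys.append(("title", normalise(rec["title"])))
        match = next((seen[k] for k in keys if k in seen), None)
        if match is not None:
            duplicates.append((i, match))
        else:
            for k in keys:
                seen[k] = i
    return duplicates

# Hypothetical exported records; the first and third are the same paper.
records = [
    {"title": "Exercise and Depression: A Review", "doi": "10.1000/x1"},
    {"title": "Diet in Older Adults", "doi": "10.1000/x2"},
    {"title": "Exercise and depression - a review.", "doi": "10.1000/x1"},
]
print(flag_duplicates(records))  # the third record matches the first
```

Note that this only flags candidates; the decision to merge should still be made by a person, since distinct publications from the same study can look near-identical.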
Once you've developed a draft search strategy, you'll need to look for evidence of search effectiveness. This can be done by testing your search's ability to retrieve known articles relevant to your question. This test requires a set of relevant articles (benchmark papers) that you are confident should be picked up by your search. The steps to test a set of benchmark papers are described below:
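The benchmark check above amounts to comparing two lists of identifiers. As a hedged sketch (an assumed workflow, not a prescribed tool), the snippet below takes a set of benchmark paper identifiers, such as PubMed IDs, and the identifiers retrieved by the draft search, then reports which benchmark papers were found, which were missed, and the resulting recall. All identifiers shown are hypothetical.

```python
def benchmark_recall(benchmark_ids, retrieved_ids):
    """Return the found/missed benchmark identifiers and the recall fraction."""
    retrieved = set(retrieved_ids)
    found = [b for b in benchmark_ids if b in retrieved]
    missed = [b for b in benchmark_ids if b not in retrieved]
    recall = len(found) / len(benchmark_ids) if benchmark_ids else 0.0
    return found, missed, recall

# Hypothetical PubMed IDs for the benchmark set and the search export.
benchmark = ["30000001", "30000002", "30000003", "30000004"]
search_results = ["30000001", "30000003", "31234567", "29876543"]

found, missed, recall = benchmark_recall(benchmark, search_results)
print(f"Recall {recall:.0%}; missed: {missed}")
```

Missed benchmark papers are the useful output: examining why each one was not retrieved (a missing synonym, an unindexed term, the wrong field searched) tells you how to revise the draft strategy.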
The checklist is presented in Table 1 of the article, and Table 2 provides some helpful explanations of the main sections of the checklist.