Robinson KA, Akinyede O, Dutta T, et al. Framework for Determining Research Gaps During Systematic Review: Evaluation [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2013 Feb.
Review and Revise Framework and Develop Detailed Instructions
The team reviewed and discussed the original framework and instructions. Revisions were made iteratively and by consensus. The initial revised framework and instructions are provided in Appendix C. The primary revision to the framework was the addition of sub-categories for the reasons for the gap. The team felt that further granularity within the categories of reasons for gaps would make completing the framework more straightforward for review teams and would ease the translation of research gaps into specific research questions, with guidance on the studies needed to address those questions.
Definitions for each subcode were added to the instructions.
The specific reasons for gaps are listed in the footnote of the worksheet and described below (an illustrative encoding of these codes follows the list):
- Insufficient or imprecise information: Information is insufficient or imprecise if data are sparse and thus uninformative, and/or confidence intervals are wide and thus can include conflicting results or conclusions.
- A1 – This reason should be selected if no studies are identified.
- A2 – This reason should be selected if a limited number of studies are identified.
- A3 – This reason should be selected if the sample sizes or event rates in the available studies are too small to allow conclusions.
- A4 – This reason should be selected if the estimate of the effect (such as that obtained from a meta-analysis) is imprecise; that is, if the width of the confidence interval is such that the conclusion could be either benefit or harm.
- Information at risk of bias: The aggregate risk of bias is contingent upon the risk of bias of the individual studies.
- B1 – This reason should be selected if the study design(s) are inappropriate to address the question of interest (e.g., non-randomized studies for a question where randomized studies are more appropriate).
- B2 – This reason should be selected if there are major methodological limitations to the available studies leading to high risk of bias or limited internal validity.
- Inconsistency or unknown consistency: Consistency is the degree to which results from included studies appear to be similar or in concordance.
- C1 – This reason should be selected if only one study is identified. If there is only one available study, even one with a large sample size, the consistency of results is unknown.
- C2 – This reason should be selected if the results from available studies are inconsistent. Elements to consider include whether effect sizes vary widely, whether the range of effect sizes is wide, whether there is limited or no overlap of confidence intervals, and, as appropriate, whether statistical tests, such as I², indicate heterogeneity.
- Not the right information: There are a number of reasons why identified studies might not provide the right information to make conclusions about the review question.
- D1 – This reason should be selected if the results from studies might not be applicable to the population of interest.
- D2 – This reason should be selected if the duration of the interventions and/or comparisons is considered too short.
- D3 – This reason should be selected if participants are not followed for a sufficient duration in the included studies.
- D4 – This reason should be selected if the optimal and/or most important outcomes are not assessed in the included studies. This reason also covers instances where only data on surrogate outcomes are available while data on more clinically meaningful and/or patient-important outcomes are needed.
- D5 – This reason should be selected if the results from studies might not be applicable to the setting of interest. This would include cases where the interventions assessed in the studies are not applicable or available in the setting of interest.
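To make the scheme concrete, here is a minimal sketch in Python of how the reason codes and sub-codes above might be encoded for a simple abstraction tool. The labels are condensed from the worksheet definitions; the data structure and the describe helper are hypothetical illustrations, not part of the framework itself.

```python
# Hypothetical encoding of the gap-reason taxonomy described above.
# Labels are condensed from the worksheet definitions.
GAP_REASONS = {
    "A": ("Insufficient or imprecise information", {
        "A1": "No studies identified",
        "A2": "Limited number of studies identified",
        "A3": "Sample sizes or event rates too small to allow conclusions",
        "A4": "Estimate of effect (e.g., from meta-analysis) is imprecise",
    }),
    "B": ("Information at risk of bias", {
        "B1": "Study design(s) inappropriate for the question of interest",
        "B2": "Major methodological limitations; high risk of bias",
    }),
    "C": ("Inconsistency or unknown consistency", {
        "C1": "Only one study identified; consistency unknown",
        "C2": "Results from available studies are inconsistent",
    }),
    "D": ("Not the right information", {
        "D1": "Results may not apply to the population of interest",
        "D2": "Duration of interventions/comparisons too short",
        "D3": "Participants not followed for a sufficient duration",
        "D4": "Optimal/most important outcomes not assessed",
        "D5": "Results may not apply to the setting of interest",
    }),
}

def describe(code: str) -> str:
    """Expand a sub-code such as 'A3' into a readable label."""
    category, subcodes = GAP_REASONS[code[0]]
    return f"{code} ({category}): {subcodes[code]}"

print(describe("A3"))
# A3 (Insufficient or imprecise information): Sample sizes or event
# rates too small to allow conclusions
```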
Test Framework and Instructions Through Application to Existing Systematic Reviews
There were 23 EPC reports published on the Effective Health Care Program Web site from January 1, 2009, to December 12, 2011. During screening, four were deemed ineligible for the following reasons: “not an effectiveness review” (n=2) and “not a clinical topic” (n=2).
There were 19 eligible EPC reports; therefore, 31 Cochrane reviews were randomly selected for initial consideration against the eligibility criteria to bring the total sample of systematic reviews to 50. There were 6,967 records for January 1, 2009, to December 12, 2011, in the Cochrane Database of Systematic Reviews. After removing protocols, 4,269 records remained. After random sorting and selection of 31 reviews, 6 were determined to be ineligible for the following reasons: “no RCTs [randomized controlled trials] included” (n=3) and “not a clinical topic” (n=3). After random selection of an additional six reviews, all six were deemed eligible. A listing of the reviews used in this project is provided in Appendix D. See Figure 1 for a flow diagram of the identification and selection of systematic reviews.
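As an illustration of this selection procedure, the sketch below randomly orders the candidate records and screens them in order until the eligibility quota is met, so that ineligible draws are simply replaced by the next randomly ordered record. This is a hypothetical rendering; the report's screening was performed by hand, and the names cochrane_records and screen are stand-ins.

```python
import random

def select_reviews(records, quota, is_eligible, seed=0):
    """Randomly order records, then screen in order until `quota`
    eligible reviews have been selected."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    shuffled = records[:]
    rng.shuffle(shuffled)              # random sort of the candidates
    selected = []
    for record in shuffled:
        if is_eligible(record):        # manual screening stands in here
            selected.append(record)
        if len(selected) == quota:
            break
    return selected

# e.g., select_reviews(cochrane_records, quota=31, is_eligible=screen),
# where `cochrane_records` and `screen` are stand-ins for the 4,269
# non-protocol records and the eligibility screening step.
```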
There were 144 review questions included in the 50 systematic reviews. Of the 31 Cochrane reviews, 23 had one review question and 8 had two (an average of 1.3 questions per review). The EPC reports were quite different: the number of review questions per report ranged from 4 to 7, with an average of 5.5. The estimated time for each reviewer to complete full gaps abstraction was about 7.5 hours for an EPC report and about 3 hours for a Cochrane review. Our four reviewers, two per systematic review, took approximately 11 weeks in total to complete gaps abstraction for the 50 systematic reviews.
The total number of gaps abstracted, counting those abstracted by each reviewer separately, was 1,830. The number of gaps per Key Question per reviewer ranged from 1 to 165. The average number of gaps abstracted by each reviewer per Key Question was 8.5 (95% confidence interval [CI]: 6.23 to 10.32) for the Cochrane reviews and 14.3 (95% CI: 9.80 to 18.87) for the EPC reports. The overall mean number of gaps that each reviewer abstracted per Key Question was 12.7 (95% CI: 9.35 to 16.05).
However, in reviewing the abstracted information we noted that one reviewer abstracted 165 gaps for one question while the other reviewer abstracted 5 gaps for the same review question. This large discrepancy arose because one abstractor listed each gap separately while the other grouped interventions, comparators, and outcomes together. After removing this outlier, the number of gaps per Key Question per reviewer ranged from 1 to 99. The average number of gaps abstracted by each reviewer per Key Question was 8.5 (95% CI: 6.23 to 10.32) for the Cochrane reviews and 12.75 (95% CI: 9.31 to 16.19) for the EPC reports. The overall mean number of gaps that each reviewer abstracted per Key Question was 11.6 (95% CI: 8.94 to 14.07). Based on these revised averages, there were about 264 gaps identified from the Cochrane reviews (31 reviews × 8.5 gaps per review) and about 242 gaps identified from the EPC reviews (19 reviews × 12.75 gaps per review). We estimate that if full adjudication were completed there would be about 600 unique research gaps identified.
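For readers who want to reproduce this kind of summary, a minimal sketch of the mean-and-CI calculation follows. The sample counts are invented and do not reproduce the report's data, and the normal-approximation interval (mean ± 1.96 × SEM) is an assumption; the report does not state which interval method was used.

```python
import math
import statistics

# Invented gaps-per-Key-Question counts for one reviewer; illustrative only.
gaps_per_question = [3, 7, 12, 5, 9, 21, 4, 8]

n = len(gaps_per_question)
mean = statistics.mean(gaps_per_question)
sem = statistics.stdev(gaps_per_question) / math.sqrt(n)  # standard error
low, high = mean - 1.96 * sem, mean + 1.96 * sem          # normal approx.

print(f"mean {mean:.1f} (95% CI: {low:.2f} to {high:.2f})")
```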
Insufficient or imprecise information (Gap Reason A) was the most frequent reason that prevented the original systematic reviewers from reaching conclusions (Gap Reason A was selected 1,716 times). Inconsistency or unknown consistency among studies (Gap Reason C) was the next most common reason (selected 462 times). “Not the right information” (Gap Reason D) was chosen 273 times, and information at risk of bias (Gap Reason B) was selected 227 times. In 18 instances reviewers thought that a gap existed for another reason (one that did not fit Gap Reason code A, B, C, or D). Table 1 provides a breakdown by reason code. Note that multiple reasons could be selected for each gap, and these totals count both reviewers' abstractions.
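Because each gap can carry multiple reason codes, the counts in Table 1 tally code selections rather than gaps. A minimal sketch of that tally, over invented records, might look like this:

```python
from collections import Counter

# Invented abstraction records; each gap may carry several reason codes.
abstracted_gaps = [
    {"question": "KQ1", "reasons": ["A2", "C2"]},
    {"question": "KQ1", "reasons": ["A1"]},
    {"question": "KQ2", "reasons": ["B2", "D4"]},
]

# Collapse sub-codes (e.g., "A2" -> "A") and count selections, not gaps.
tally = Counter(code[0] for gap in abstracted_gaps for code in gap["reasons"])
print(tally)  # Counter({'A': 2, 'C': 1, 'B': 1, 'D': 1})
```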
Two trained team members independently applied the framework retrospectively to each existing systematic review. A third team member reviewed all abstractions and brought forward to the team apparent discrepancies in the number and type of gaps, as well as in the reasons for gaps, abstracted from the same review question. This iterative adjudication process identified a number of issues. The key issues, and our responses, are outlined in Table 2. We did not consider an analysis of correlation between the reviewers necessary or appropriate, as we would not expect complete agreement, nor is there a reference standard for this task. Completing full adjudication was considered beyond the scope of this report but is planned as future work.
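One way such discrepancies could be surfaced automatically is sketched below. The ratio threshold is arbitrary, and the per-question counts echo the 165-versus-5 example above; the report's adjudication was done by a team member, not by code.

```python
def flag_discrepancies(reviewer_a, reviewer_b, ratio=3.0):
    """Yield questions where one reviewer abstracted far more gaps
    than the other (ratio threshold is arbitrary)."""
    for question in sorted(reviewer_a.keys() & reviewer_b.keys()):
        a, b = reviewer_a[question], reviewer_b[question]
        if max(a, b) >= ratio * max(min(a, b), 1):
            yield question, a, b

counts_a = {"KQ1": 165, "KQ2": 6}  # gaps per Key Question, reviewer A
counts_b = {"KQ1": 5, "KQ2": 7}    # gaps per Key Question, reviewer B
for q, a, b in flag_discrepancies(counts_a, counts_b):
    print(f"{q}: reviewer A abstracted {a} gaps, reviewer B abstracted {b}")
```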
Evaluate Implementation of Framework
Of the 14 EPCs, three did not respond to invitations and two declined to participate. Nine EPCs initially agreed to participate and, after several reminders, seven EPCs submitted eight evaluations (one EPC submitted evaluation forms from two different project teams). Most evaluation forms were submitted in June, with the last form submitted July 7, 2012. Detailed results are provided in Appendix E.
Five respondents (63%) used the framework during the completion of a Future Research Needs (FRN) report. The remainder applied the framework as part of a systematic review. Because there may be differences in how the framework works when applied retrospectively rather than during a systematic review, we have noted next to the feedback comments whether the framework was applied during a systematic review or as part of an FRN.
All eight respondents indicated that they had previously identified gaps from systematic reviews. However, only one provided a description of the methods that their EPC had used to identify gaps. The other respondents listed titles of prior FRN topics rather than describing any methods they had used for the identification of gaps.
Respondents noted a number of advantages to using the framework. The primary advantage was that the framework facilitated a structured and systematic approach. The structured approach required EPC team members to consider all areas, helped them to see areas of redundancy, and kept them focused on the scope of the project. Respondents highlighted that this systematic approach contrasted with the somewhat arbitrary process typically used, and that use of the framework may limit the potential influence of the particular priorities of the research team.
Each respondent provided feedback on disadvantages and problems, as well as suggestions for the framework and instructions. Some of the issues raised were very similar to those we encountered in applying the framework to the existing systematic reviews. We have provided a detailed response to each comment in Appendix E. We summarize here some of the common issues and our responses:
- Issue: Applying the framework to reviews or questions with very limited evidence is cumbersome. Response: We agree that the framework may be too specific to use for questions for which, essentially, the entire question is a gap. This may lead to an unmanageable number of gaps and an overly cumbersome process. We revised the instructions to suggest that team members meet before starting to identify gaps to decide how to handle questions with very limited evidence. We think that in such cases it would still be useful to follow an explicit process, but the framework may be completed for the entire question rather than characterizing specific gaps within the question.
- Issue: Applying the framework to questions for which strength of evidence was not assessed was challenging; in other cases, applying the framework replicated work already completed in grading strength of evidence. Response: We developed the framework to leverage work already being completed by the EPCs in assessing strength of evidence. However, it was clear from the responses that the efficiency of this process depends on when the framework is applied and for which specific questions. We had previously suggested that the optimal time for application may be during the writing of the results. We revised the instructions to also suggest that teams consider using the framework for the questions and outcomes that were included in the strength of evidence assessments.
- Issue: Completing the worksheet when gaps comprise multiple comparisons and/or outcomes was cumbersome. Response: We revised the instructions to include the need to discuss and decide whether to lump or split. For instance, it may be more manageable and useful to abstract gaps by class of intervention or comparison.
Revise and Finalize Framework and Instructions
We added or revised text, and included examples, to provide clarification and further guidance within the instructions. These changes were based on results of the retrospective application of the framework and on feedback from the EPCs, as detailed earlier. The final framework worksheet and instructions are provided in Appendix F.