Key Findings
- Initial investigator review of the original framework resulted in a revision that provides more specific coding of the reason for each research gap. The team felt that the framework would be easier to apply, and would provide more useful information, if the categories of reasons for gaps were more detailed.
- Each of the EPC respondents indicated that they had previously identified research gaps from systematic reviews. However, only one described the methods used to identify gaps. This finding is in line with results from a prior EPC project showing that EPCs and other systematic reviewers do not use formal methods or frameworks for identifying gaps from systematic reviews.
- Key issues emerged from our application of the framework to existing systematic reviews and through the evaluation of the use of the framework by EPCs. Common issues included challenges related to the point at which the framework is applied (during a systematic review or retrospectively to an existing systematic review) and the level of detail needed when characterizing gaps (i.e., lumping versus splitting). We modified the instructions to provide guidance for addressing these challenges, and highlighted areas that should be discussed by team members before identifying research gaps using the framework.
- Key advantages to using the framework were also noted by the EPCs. The primary advantage noted was that use of the framework facilitated a structured and systematic approach that helped team members to consider all areas within the scope of the project. The EPCs' perception, which needs further evaluation, is that use of the framework provided a more comprehensive process that was less open to bias.
Limitations
We chose to apply the framework to 50 systematic reviews, a number that could be accomplished within our timeframe yet was large enough to include systematic reviews across a range of topics. We further limited our application to systematic reviews of randomized controlled trials of clinical topics from two well-known organizations that produce systematic reviews. We imposed this restriction to obtain a more homogeneous set of systematic reviews, so that we could be more certain that differences we saw during application of the framework were due to potential issues with the framework rather than to differences in the study designs included, the topics addressed by the systematic reviews (i.e., clinical versus other), or the quality of the systematic reviews. Future testing of the framework, including use with reviews of other study designs and different sorts of questions, may lead to further revisions of the framework or instructions.
We chose to include Cochrane reviews because these reviews follow a clear and explicit method and were likely to meet our eligibility criteria (i.e., include RCTs and address a clinical topic). This was seen as preferable to conducting a search and screen for eligible systematic reviews.
We did not analyze the correlation between the reviewers applying the framework retrospectively, as this was not felt to be necessary or appropriate. We would not expect complete agreement, nor is there a reference standard. The process of applying the framework, retrospectively or prospectively, is a task of interpretation and judgment similar to grading the strength of evidence. As with grading, appropriate guidance, such as through the instructions developed, and meetings to train and calibrate team members are advised.
We were able to solicit feedback from 8 different EPC teams; however, only 3 of these applied the framework to an ongoing systematic review (and one of these applied the framework after completing the results section). Further use during a systematic review may identify issues or challenges requiring additional revisions to the framework or instructions.
We did not ask EPCs to track the time it took them to apply the framework. We discussed this in detail and ultimately felt that it was not a matter of simply completing the framework worksheet. The time to complete the process, similar to grading strength of evidence, depends heavily on the specific review questions, the team structure and process, and other factors. Similarly, the process is inherently iterative, so it is not clear at what point one would start and stop completing the worksheet. How use of the framework fits into a systematic review project, including consideration of any additional time needed, is an area for future research.
On a related note, we did not assess the best process for application of the framework. We feel that the same team process should be used as in completing the strength of evidence assessments. This would suggest a need for individuals with methodological and domain expertise, but this was not assessed. As with the strength of evidence assessments, there is judgment involved in identifying and characterizing gaps. This suggests a need for team orientation and pilot testing, followed by team discussions after the completion of the process.
While we asked EPCs to try using the framework as part of one of their projects, we have limited information about how the EPCs applied the framework. To that end, we do not know whether the EPCs applied the framework as an academic exercise (therefore providing information on usability) or integrated completion of the framework with a current project (which might provide a better idea of usefulness). Similarly, we do not know how, or if, EPCs used the results of applying the framework in their project(s).
Because we do not have a sense of how the use of the framework could fit within the production of an EPC report, including the time needed to complete the process, we cannot make recommendations about the feasibility of using the process. Further, as noted below, future research is needed to assess the potential value of using the framework in order to weigh the potential benefits versus potential costs.
Future Research
There are several outstanding questions, and areas of research, that may further this work:
- Do the changes made to the framework and instructions improve usability? As review teams use the framework, additional challenges may be identified. Further testing across different types of questions, and with reviews including different study designs, may be warranted.
- What is the best process for using the framework? Further evaluation is needed as to whether the process could be conducted by reviewers independently or sequentially. This could include assessments of reliability, specifically whether two reviewers identify the same gaps and reasons for gaps. In addition, a training packet and process could be developed. A set of examples could be provided to illustrate common issues, such as how to use the framework to capture methodological gaps.
- What is the most efficient and appropriate way to integrate this process into the conduct of a systematic review or FRN project? Is there an optimal time during a systematic review or FRN at which to complete the framework?
- In our previous report, we had proposed a format for presenting research gaps based on the results from the framework.5 Future research could assess if the use of the framework facilitates the use and presentation of identified gaps. This research could be specific to the different uses of the identified research gaps including (a) to develop FRN sections for systematic reviews, or (b) to solicit input from stakeholders in developing FRN documents.
- Similar to the assessment of strength of evidence, the identification of gaps and the reasons for gaps is based on interpretation and judgment. We outlined in the instructions some issues that should be discussed by a team before starting to identify research gaps. These include the often arbitrary decisions about which reason(s) is most important in limiting the ability to draw conclusions. Future research could determine whether a decision system, such as a hierarchy, could be established to aid these decisions. Such a ranking might be based on the extent of influence in limiting conclusions and/or the ability to ameliorate the reason(s) through future studies.
- The framework facilitates a more systematic approach to the identification of research gaps, but there is little research on how this information may be used and by whom, and whether gaps identified through the framework are more useful. As with other methods of conducting systematic reviews, we think that implementing a more explicit process provides a more comprehensive product with less bias, but, also as with other methods, we do not know whether this is true. Does using a formal method to identify gaps, such as the framework, provide value for systematic review authors and for users of the systematic review? Is the benefit similar, greater, or smaller when the framework is used as part of an FRN project? A comparison to other methods would answer questions such as whether use of the framework identifies more research gaps, whether gaps are characterized more completely, and whether gaps identified in this way provide a more useful basis for the development of research agendas.
As noted earlier, we also plan to adjudicate all of the gaps and reasons for gaps abstracted during this project, with the goal of quantitatively and qualitatively describing the characteristics of the gaps and the relative proportions of research gaps that are due to different types of limitations in the evidence. This will provide an evidential basis upon which to improve the design of future RCTs to better address comparative effectiveness questions.
Implications for Practice
We provide here some guidance about the use of the framework, based on the results of this project and our experiences. As noted above, many of the specifics of integrating the framework within the work of the EPCs represent areas for future research. The first question in deciding whether and how to use the framework is the purpose of identifying gaps, which determines the level of granularity needed for characterizing the research gaps. The second question relates to the systematic review being used to identify gaps. For instance, if the team feels that “the entire systematic review is a gap,” it may not be worthwhile going through the process of using the framework. However, we feel that even in that case the elements of the framework may help to ensure an explicit process.
We recognize that there are different structures for systematic review teams. We suggest that the framework be applied by the same team members and process as employed in completing the strength of evidence grading, ideally at the time of completing the synthesis and grading. We make this suggestion based on our findings that there are different challenges in applying the framework retrospectively, and to increase the potential for leveraging the work completed in assessing the strength of evidence.
If completing the identification of research gaps as part of an FRN, or otherwise using the framework retrospectively with existing systematic reviews, we suggest the following:
- Restrict abstraction of gaps and reason(s) for gaps to explicit statements made by the review authors. Do not review and interpret the specific results to identify gaps or reasons for gaps; abstract only the gaps and reasons for gaps that are specifically noted by the systematic review authors.
- The team completing the abstraction retrospectively should meet to discuss and agree on sections to be reviewed (text, tables, etc.) as well as what to do if there are apparent discrepancies between sections of the systematic review.
- Inserting the section name and page number (in the Notes field of the framework worksheet) used to identify a gap might be helpful for adjudication and review; a minimal illustration of such a worksheet record is shown after this list.
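To make the idea of a worksheet record concrete, the sketch below shows one possible way to represent an abstracted gap in code. This is a minimal illustration only; the field names (gap_id, question_element, reasons, source_location, notes) and the example reason label are assumptions for illustration, not the official worksheet columns or coding categories.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GapRecord:
    """One row of a hypothetical research-gap worksheet.

    Field names are illustrative assumptions, not the official
    framework worksheet headings.
    """
    gap_id: str                 # identifier for the gap, e.g., keyed to a review question
    question_element: str       # element of the question the gap concerns (hypothetical label)
    description: str            # the gap as explicitly stated by the review authors
    reasons: List[str] = field(default_factory=list)  # coded reason(s) for the gap
    source_location: str = ""   # section name and page number, to aid adjudication
    notes: str = ""             # free-text notes

# Example record abstracted retrospectively from a review's explicit statements
example = GapRecord(
    gap_id="KQ1-01",
    question_element="outcome",
    description="No included trials reported long-term functional outcomes.",
    reasons=["insufficient information"],   # assumed reason label, for illustration
    source_location="Discussion, p. 24",
)
```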
For an FRN, the gaps identified could be used by the team in developing the list of gaps to be presented to and considered by stakeholders. Depending on the number of gaps identified, the team may choose to prioritize or categorize the gaps prior to presentation to stakeholders.
Whether the framework is being completed during a systematic review or applied retrospectively, the instructions (Appendix F) should be reviewed by all participating team members before use of the framework. The instructions provide the current guidance for use of the framework. To leverage the work of assessing strength of evidence, the relevant guidance on the grading system should also be reviewed. As with training for strength of evidence assessments, pilot testing should be completed, with meetings of the full team to calibrate judgments. As noted in the instructions, the research gap framework may be used in different formats (Word, Excel, Access, and DistillerSR) depending on the process being employed by the review team.
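Since the worksheet may be maintained in a spreadsheet, one way to keep records in a spreadsheet-friendly form is to write them to a CSV file, as in the minimal sketch below; the column headings and example values are assumptions for illustration, not prescribed by the framework.

```python
import csv

# Hypothetical worksheet rows; column headings are illustrative only.
rows = [
    {
        "gap_id": "KQ1-01",
        "description": "No included trials reported long-term functional outcomes.",
        "reasons": "insufficient information",
        "notes": "Discussion, p. 24",
    },
]

# Write the rows to a CSV file that can be opened in Excel or imported elsewhere.
with open("research_gap_worksheet.csv", "w", newline="") as handle:
    writer = csv.DictWriter(handle, fieldnames=["gap_id", "description", "reasons", "notes"])
    writer.writeheader()
    writer.writerows(rows)
```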
Conclusions
In our prior project, we found that very few systematic reviewers used an explicit method to identify research gaps. We completed further evaluation and development of a framework to identify research gaps from systematic reviews. While our focus in this project was on developing the framework for use by EPCs, the framework is not EPC-specific and may be applied by others conducting systematic reviews and/or identifying research gaps from systematic reviews. Future research is needed, especially to evaluate the potential benefit and feasibility of identifying research gaps using the framework. Our framework may be applied during the conduct of a systematic review, or to existing systematic reviews, to facilitate an explicit process for characterizing where the current evidence falls short and why or how it falls short.