Topic:

Issue:

Category:

Research

Title:

Interobserver and Intraobserver Agreement are Unsatisfactory When Determining Abstract Study Design and Level of Evidence

Author:

Patel NM, Schmitz MR, Bastrom TP, et al

Journal:

J Pediatr Orthop

Date:

July 1, 2022

Reference:

42(6):e696-e700. doi:10.1097/BPO.0000000000002136

Level Of Evidence:

II

# of Patients:

13

Study Type:

Location:

-

Summary:

Methods:

Thirteen reviewers from POSNA’s Evidence-Based Practice Committee were asked to determine the level of evidence and study design for 36 abstracts accepted to the 2021 POSNA Annual Meeting, first without any resources or assistance and again four weeks later with the JBJS Level of Evidence chart. Fleiss’ kappa statistic (κ) was used to calculate interobserver and intraobserver reliability, and chi-squared (χ²) analysis was used to compare mismatches between the two review rounds.

Exclusions:

Results:

Interobserver reliability for level of evidence was fair (κ=0.28) without the JBJS chart and moderate (κ=0.43) with it. The most frequent disagreement was between level III and level IV evidence. Interobserver reliability for study design remained fair both without (κ=0.27) and with (κ=0.37) the chart.

Conclusions:

Even experienced reviewers achieved only fair interobserver agreement when rating level of evidence and study design. Use of the JBJS/Oxford chart modestly improved agreement for level of evidence but not for study design.

Relevance:

Limitations:

Perspective:

bottom of page