This episode was a bit unplanned (*ahem* not that we plan every episode), and arose after a brief text exchange where Carrie talked about an experience she had earlier in the day. We decided it was something that might be worth a broader discussion — and thus this episode was born! Carrie and Tracy chat about the problems with teaching evidence synthesis work.
Carrie wrote a blog post for Covidence about using systematic reviews as class assignments — spoiler: we aren’t fans of that, or of cherry-picking. (In the data/evidence sense, as we do like cherries in general. We also like blueberries, and blackberries, and…) The PRISMA flow diagram is not something you can just fill out whenever, either.
Grant and Booth’s 2009 article on the typology of reviews is a great resource for learning about the various types of reviews, from narrative to systematized to systematic, and everything in between. There’s also a more recent article from 2019 by Sutton and colleagues that expands on these typologies and is a good reference for all the reviews possible.
Searches take a long time to do and are not something that can be thrown together quickly. (A quick recommendation to read Amanda Ross-White’s 2021 article in JMLA, “Search is a verb: systematic review searching as invisible labor”, if you haven’t already.)
Tracy mentioned Luddites and perpetuated misinterpretations in the process. This TikTok from Scientific American offers a correction to the Luddite slander and provides some timely context (AI in particular comes to mind). Carrie mentioned the Right Review tool, which is pretty nice! Also, shout out to the Bond University folks who developed the SR-Accelerator tools and promote a well-planned 2weekSR. We also talk a bit about AI in our evidence synthesis work (probably more to come in a future episode about that).
Oh, the cookies we mentioned? Those are from Levain Bakery.