For those of us librarians who predominantly search the literature, the unique process each of us takes is rarely discussed. Here is how Jenn Monnin, guest contributor and friend of the pod, builds her searches.
When I build a search, I break the topic down into its main components, then run those components as a keyword search in PubMed. If one of the search components is a less common phrase, I run it as individual keywords (example: emergency AND action AND plan). Then I go to the Advanced Search page to see how Automatic Term Mapping (ATM) interpreted the search. From there, I clean up the search by removing excess terms and deciding if and where to experiment with truncation. I also examine the Medical Subject Headings (MeSH) record for each term ATM mapped to MeSH to determine whether the term should be exploded, and whether any previous indexing or similar terms should be included in the search.

Next, I create and compare two search strategies: one with the [tiab] tag on all keywords and one with the [tw] tag on all keywords. I compare the number of results for each, and scan the first few pages of results for relevance and potentially missing keywords. Any newly found terms get added to the search. I repeat this process for each component of the search, then combine all components into one search strategy.
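To make the [tiab] vs. [tw] comparison concrete, here is a hypothetical sketch of one component as it might appear in the PubMed search history; the topic and terms are ours, for illustration only, not from Jenn's actual searches:

#1  "emergency action plan"[tiab] OR "emergency response plan"[tiab]
#2  "emergency action plan"[tw] OR "emergency response plan"[tw]
#3  "Emergency Medical Services"[Mesh]         (exploded)
#4  "Emergency Medical Services"[Mesh:NoExp]   (not exploded)

Comparing #1 against #2 (and #3 against #4) shows how much the broader field tag, or explosion, changes the result set. Each finished component then gets ANDed with the others, e.g. #5 = (#1 OR #3) AND (the next component).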
I test this search against the PMIDs (PubMed IDs) of 10+ on-target articles from the faculty member, if they provided any. I pull up the article record for any missing PMIDs and examine each record for relevant search terms missing from my strategy. Once these terms are added and the search is rerun, I again check for missing articles. If articles are still missing, I test the on-target articles against each component search, looking for the problem section. Once found, I troubleshoot that section specifically.
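For illustration, a hedged sketch of how that PMID validation might look in the PubMed search history (the PMIDs below are placeholders, not real articles):

#5  (the combined search strategy)
#6  12345678[pmid] OR 23456789[pmid] OR 34567890[pmid]
#7  #6 NOT #5

If #7 returns zero results, the search captures every known on-target article; anything it does return is an article to pull up and mine for missing terms.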
Once the PubMed search appears to be in its final form, I send the results to the faculty member. If the search is for a systematic review, I touch base with the faculty member to confirm that they also think it is ready for peer review. Then I use the Polyglot Search Translator to do the initial search translations. I fix any errors in the search for each database and validate each against the on-target articles I know are indexed in that database. The searches are sent in for peer review, after which all comments and concerns are addressed; they are re-submitted for peer review if necessary. Finally, all searches are run, deduplicated, and uploaded into the systematic review (SR) software for the team to begin screening.
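As a rough illustration of what Polyglot (or hand-translation) has to handle, here is the same fragment in PubMed and Embase.com syntax; the Emtree heading shown is our assumption, so always verify the mapped term yourself:

PubMed:  "emergency action plan"[tiab] OR "Emergency Medical Services"[Mesh]
Embase:  'emergency action plan':ti,ab OR 'emergency health service'/exp

Field tags, quoting style, and controlled vocabulary all differ by database, which is why each translated search still needs validation against known on-target articles.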
Everyone's process is a little different here, as searching is an art and not a science. What does your process look like? We'd love to hear the steps you take in the comments.
PS: Carrie recently did a #FiveMinuteFriday video about this very topic on her YouTube Channel! Check out what she does by watching this video. (We highly recommend subscribing while you're there!)
I know Polyglot is supposed to be a time saver... That said, I often hand-translate. When I've used Polyglot, I've ended up doing enough cleaning and further tweaking that I didn't see the time savings. Have others noticed that? Do you use it for some searches or databases and not others?