The textbook on grounded theory suggests being careful not to get too anchored in existing theories when doing a literature review. My impression is that RSE in general, and the question of RRI in research software in particular, is so little explored that a systematic review would not throw up any work close enough to my topic to be a concern. On the other hand, reviewing the grey literature would surface a wealth of worldviews and opinions that could help me to identify my own biases.
Hemingway and Brereton 2009 describe SLRs from the evidence-based medicine perspective. Systematic reviews are needed in that discipline because the wealth of literature can be hard for practitioners to navigate, so an unbiased, consistent summary is beneficial. Traditional reviews are not based on a peer-reviewed protocol, so replicating their findings can be difficult.
That last sentence points to a different goal for me. The purpose of my LR is to demonstrate that I have critically engaged with the literature and identified the gap appropriate to my research, so I do not necessarily need to follow somebody else's protocol. I need to make a quality argument that is grounded in the literature (both peer-reviewed and grey). Perhaps a systematic searching practice is necessary but not a systematic reviewing protocol? I still need to make sure I approach the literature with an open mind and do not bias my conclusions through my selections, and I need to be transparent about my selection techniques (even though I had already collected 191 articles before observing that need!).
Needs for systematic reviews:
establish the clinical/cost effectiveness of an intervention
define a new research agenda (that is my case)
needed for grant funding in primary healthcare research
postgraduate theses (also my case)
NICE technology appraisals
Their protocol is appropriate to a meta-synthesis or meta-analysis:
research question identified, along with proposed search terms and the types of literature needed
literature searched (published and unpublished, white and grey)
articles assessed for eligibility and quality and full text retrieved for those that meet the criteria
results combined into a meta-synthesis or meta-analysis.
findings contextualised.
They note user involvement -> relevant to RRI!
Petersen et al., Systematic Mapping Studies in Software Engineering, 2008. Relevant to SE; they contrast systematic mapping with systematic literature review.
Coming from an EBSE perspective, they describe SLR as an effort-intensive way to synthesise quantitative results, though they also point to a reference for qualitative synthesis. Systematic mapping (unsurprisingly) maps research reports by categorising them. Does this fit into the GT model of grounding the theory in the data? Yes, in that the literature is a source of data; no, in that the review would be done before engaging with participants. Interesting to think more on. The diagrammatic presentation of SM is very similar to SLR: define the question, review the scope, search the literature, assess/screen, but then do keyword categorisation based on abstracts and produce a map from that.
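To make that keyword-categorisation step concrete for myself, here is a minimal Python sketch, assuming the articles have already been exported as records with a title and abstract; the categories and keywords are hypothetical placeholders of my own, not ones taken from Petersen et al.

```python
# Minimal sketch of keyword categorisation for a systematic map.
# Assumes articles are dicts with "title" and "abstract" keys; the
# category keywords below are hypothetical placeholders.
from collections import Counter

CATEGORY_KEYWORDS = {
    "RSE practice": ["research software engineer", "software sustainability"],
    "RRI": ["responsible research", "responsible innovation"],
    "SE methods": ["agile", "testing", "continuous integration"],
}

def categorise(article):
    """Assign every category whose keywords appear in the title or abstract."""
    text = (article["title"] + " " + article["abstract"]).lower()
    return [
        category
        for category, keywords in CATEGORY_KEYWORDS.items()
        if any(keyword in text for keyword in keywords)
    ] or ["uncategorised"]

def build_map(articles):
    """The 'map' here is just a frequency table of categories across the corpus."""
    counts = Counter()
    for article in articles:
        counts.update(categorise(article))
    return counts

if __name__ == "__main__":
    sample = [
        {"title": "Hiring research software engineers", "abstract": "..."},
        {"title": "Responsible innovation in practice", "abstract": "..."},
    ]
    for category, n in build_map(sample).most_common():
        print(f"{category}: {n}")
```

The output of something like this would be the raw material for the map: how many papers fall into each category, which is what the mapping-study examples plot.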
Examples of RQs from Bailey et al 2007: Which journals discuss this topic? What are the most investigated subjects in this topic over time? What research methods are applied in what contexts?
From Mujtaba et al 2008: What areas in the topic are addressed and how many articles address them? What types of papers are published in the area, and what is their contribution?
The second one sounds most like what I'm trying to do, so I should follow up their map as a "worked example". The filtering/assessment step is fairly broad: does the abstract make the paper look relevant? A rough sketch of that kind of check follows.
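A minimal sketch of that abstract-level screening pass, assuming the articles collected so far live in a CSV with title and abstract columns; the file names and inclusion terms are placeholders I've made up. The point is that every include/exclude decision and its reason get recorded, which keeps the selection open to scrutiny.

```python
# Sketch of abstract-level screening with recorded decisions.
# "articles.csv", its columns, and INCLUSION_TERMS are assumed placeholders.
import csv

INCLUSION_TERMS = ["research software", "responsible innovation"]

def screen(article):
    """Return (include?, reason) based only on the abstract, as in the broad SM filter."""
    abstract = article["abstract"].lower()
    for term in INCLUSION_TERMS:
        if term in abstract:
            return True, f"abstract mentions '{term}'"
    return False, "no inclusion term found in abstract"

with open("articles.csv", newline="") as src, \
        open("screening_decisions.csv", "w", newline="") as dst:
    writer = csv.DictWriter(dst, fieldnames=["title", "included", "reason"])
    writer.writeheader()
    for article in csv.DictReader(src):
        included, reason = screen(article)
        writer.writerow({"title": article["title"],
                         "included": included,
                         "reason": reason})
```

Keeping the decisions file alongside the bibliography would let me show exactly how the 191 articles (and whatever comes later) were filtered.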
SM seems like a good way to find all of the literature I need, but I will then still need to do a deep dive on each discovered paper to produce the literature review for my thesis, rather than building a map based only on abstracts.