
Cross Case Thematic Analysis Essay


Abstract
There is a growing recognition of the value of synthesising qualitative research in the evidence base in order to facilitate effective and appropriate health care. In response to this, methods for undertaking these syntheses are currently being developed. Thematic analysis is a method that is often used to analyse data in primary qualitative research. This paper reports on the use of this type of analysis in systematic reviews to bring together and integrate the findings of multiple qualitative studies.


We describe thematic synthesis, outline several steps for its conduct and illustrate the process and outcome of this approach using a completed review of health promotion research. Thematic synthesis has three stages: the coding of text 'line-by-line'; the development of 'descriptive themes'; and the generation of 'analytical themes'. While the development of descriptive themes remains 'close' to the primary studies, the analytical themes represent a stage of interpretation whereby the reviewers 'go beyond' the primary studies and generate new interpretive constructs, explanations or hypotheses. The use of computer software can facilitate this method of synthesis; detailed guidance is given on how this can be achieved.


We used thematic synthesis to combine the studies of children's views and identified key themes to explore in the intervention studies. Most interventions were based in school and often combined learning about health benefits with 'hands-on' experience. The studies of children's views suggested that fruit and vegetables should be treated in different ways, and that messages should not focus on health warnings. Interventions that were in line with these suggestions tended to be more effective. Thematic synthesis enabled us to stay 'close' to the results of the primary studies, synthesising them in a transparent way, and facilitating the explicit production of new concepts and hypotheses.


We compare thematic synthesis to other methods for the synthesis of qualitative research, discussing issues of context and rigour. Thematic synthesis is presented as a tried and tested method that preserves an explicit and transparent link between conclusions and the text of primary studies; as such it preserves principles that have traditionally been important to systematic reviewing.

Background
The systematic review is an important technology for the evidence-informed policy and practice movement, which aims to bring research closer to decision-making [1,2]. This type of review uses rigorous and explicit methods to bring together the results of primary research in order to provide reliable answers to particular questions [3-6]. The picture that is presented aims to be distorted neither by biases in the review process nor by biases in the primary research which the review contains [7-10]. Systematic review methods are well-developed for certain types of research, such as randomised controlled trials (RCTs). Methods for reviewing qualitative research in a systematic way are still emerging, and there is much ongoing development and debate [11-14].

In this paper we present one approach to the synthesis of findings of qualitative research, which we have called 'thematic synthesis'. We have developed and applied these methods within several systematic reviews that address questions about people's perspectives and experiences [15-18]. The context for this methodological development is a programme of work in health promotion and public health (HP & PH), mostly funded by the English Department of Health, at the EPPI-Centre, in the Social Science Research Unit at the Institute of Education, University of London in the UK. Early systematic reviews at the EPPI-Centre addressed the question 'what works?' and contained research testing the effects of interventions. However, policy makers and other review users also posed questions about intervention need, appropriateness and acceptability, and about factors influencing intervention implementation. To address these questions, our reviews began to include a wider range of research, including research often described as 'qualitative'. We began to focus, in particular, on research that aimed to understand the health issue in question from the experiences and points of view of the groups of people targeted by HP&PH interventions. (We use the term 'qualitative' research cautiously because it encompasses a multitude of research methods as well as an assumed range of epistemological positions. In practice it is often difficult to classify research as either 'qualitative' or 'quantitative', as much research contains aspects of both [19-22]. Because the term is in common use, however, we employ it in this paper.)

When we started the work for our first series of reviews which included qualitative research in 1999 [23-26], there was very little published material that described methods for synthesising this type of research. We therefore experimented with a variety of techniques borrowed from standard systematic review methods and methods for analysing primary qualitative research [15]. In later reviews, we were able to refine these methods and began to apply thematic analysis in a more explicit way. The methods for thematic synthesis described in this paper have so far been used explicitly in three systematic reviews [16-18].

The review used as an example in this paper

To illustrate the steps involved in a thematic synthesis we draw on a review of the barriers to, and facilitators of, healthy eating amongst children aged four to 10 years old [17]. The review was commissioned by the Department of Health, England to inform policy about how to encourage children to eat healthily in the light of recent surveys highlighting that British children are eating less than half the recommended five portions of fruit and vegetables per day. While we focus on the aspects of the review that relate to qualitative studies, the review was broader than this and combined answering traditional questions of effectiveness, through reviewing controlled trials, with questions relating to children's views of healthy eating, which were answered using qualitative studies. The qualitative studies were synthesised using 'thematic synthesis' – the subject of this paper. We compared the effectiveness of interventions which appeared to be in line with recommendations from the thematic synthesis with those that did not. This enabled us to see whether the understandings we had gained from the children's views helped us to explain differences in the effectiveness of different interventions: the thematic synthesis had enabled us to generate hypotheses which could be tested against the findings of the quantitative studies – hypotheses that we could not have generated without the thematic synthesis. The methods of this part of the review are published in Thomas et al. [27] and are discussed further in Harden and Thomas [21].

Qualitative research and systematic reviews

The act of seeking to synthesise qualitative research means stepping into more complex and contested territory than is the case when only RCTs are included in a review. First, methods are much less developed in this area, with fewer completed reviews available from which to learn, and second, the whole enterprise of synthesising qualitative research is itself hotly debated. Qualitative research, it is often proposed, is not generalisable and is specific to a particular context, time and group of participants. Thus, in bringing such research together, reviewers are open to the charge that they de-contextualise findings and wrongly assume that these are commensurable [11,13]. These are serious concerns which it is not the purpose of this paper to contest. We note, however, that a strong case has been made for qualitative research to be valued for the potential it has to inform policy and practice [11,28-30]. In our experience, users of reviews are interested in the answers that only qualitative research can provide, but are not able to handle the deluge of data that would result if they tried to locate, read and interpret all the relevant research themselves. Thus, if we acknowledge the unique importance of qualitative research, we need also to recognise that methods are required to bring its findings together for a wide audience – at the same time as preserving and respecting its essential context and complexity.

The earliest published work that we know of that deals with methods for synthesising qualitative research was written in 1988 by Noblit and Hare [31]. This book describes the way that ethnographic research might be synthesised, but the method has been shown to be applicable to qualitative research beyond ethnography [32,11]. As well as meta-ethnography, other methods have been developed more recently, including 'meta-study' [33], 'critical interpretive synthesis' [34] and 'metasynthesis' [13].

Many of the newer methods being developed have much in common with meta-ethnography, as originally described by Noblit and Hare, and often state explicitly that they are drawing on this work. In essence, this method involves identifying key concepts from studies and translating them into one another. The term 'translating' in this context refers to the process of taking concepts from one study and recognising the same concepts in another study, though they may not be expressed using identical words. Explanations or theories associated with these concepts are also extracted and a 'line of argument' may be developed, pulling corroborating concepts together and, crucially, going beyond the content of the original studies (though 'refutational' concepts might not be amenable to this process). Some have claimed that this notion of 'going beyond' the primary studies is a critical component of synthesis, and is what distinguishes it from the types of summaries of findings that typify traditional literature reviews [e.g. [32], p209]. In the words of Margarete Sandelowski, "metasyntheses are integrations that are more than the sum of parts, in that they offer novel interpretations of findings. These interpretations will not be found in any one research report but, rather, are inferences derived from taking all of the reports in a sample as a whole" [[14], p1358].

Thematic analysis has been identified as one of a range of potential methods for research synthesis alongside meta-ethnography and 'metasynthesis', though precisely what the method involves is unclear, and there are few examples of it being used for synthesising research [35]. We have adopted the term 'thematic synthesis', as we translated methods for the analysis of primary research – often termed 'thematic' – for use in systematic reviews [36-38]. As Boyatzis [[36], p4] has observed, thematic analysis is "not another qualitative method but a process that can be used with most, if not all, qualitative methods...". Our approach concurs with this conceptualisation of thematic analysis, since the method we employed draws on other established methods but uses techniques commonly described as 'thematic analysis' in order to formalise the identification and development of themes.

We now move to a description of the methods we used in our example systematic review. While this paper has the traditional structure for reporting the results of a research project, the detailed methods (e.g. the precise terms we used for searching) and results are available online. This paper identifies the particular issues that relate especially to reviewing qualitative research systematically, and then describes the activity of thematic synthesis in detail.


Searching for studies
When searching for studies for inclusion in a 'traditional' statistical meta-analysis, the aim is to locate all relevant studies. Failing to do this can undermine the statistical models that underpin the analysis and bias the results. However, Doyle [[39], p326] states that, "like meta-analysis, meta-ethnography utilizes multiple empirical studies but, unlike meta-analysis, the sample is purposive rather than exhaustive because the purpose is interpretive explanation and not prediction". This suggests that it may not be necessary to locate every available study: the results of a conceptual synthesis will not change if, say, ten rather than five studies contain the same concept, but will depend on the range of concepts found in the studies, their context, and whether they are in agreement or not. Thus, principles such as aiming for 'conceptual saturation' might be more appropriate when planning a search strategy for qualitative research, although it is not yet clear how such principles can be applied in practice. Similarly, other principles from primary qualitative research methods may also be 'borrowed', such as deliberately seeking studies which might act as negative cases, aiming for maximum variability and, in essence, designing the resulting set of studies to be heterogeneous in some ways, instead of achieving the homogeneity that is often the aim in statistical meta-analyses.
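Although no agreed stopping rule exists, the idea of 'conceptual saturation' can be sketched as a simple bookkeeping exercise. The sketch below is ours, not a procedure used in the review, and the studies and concepts in it are invented for illustration:

```python
# Hypothetical sketch of tracking 'conceptual saturation' while screening.
# Studies and concepts are invented; the review itself used no such procedure.

def new_concepts_per_study(studies):
    """Yield (study_id, concepts not seen in any earlier study)."""
    seen = set()
    for study_id, concepts in studies:
        fresh = set(concepts) - seen
        seen |= fresh
        yield study_id, fresh

screened = [
    ("study_1", {"taste", "health knowledge"}),
    ("study_2", {"taste", "pocket money"}),
    ("study_3", {"taste", "health knowledge"}),  # contributes nothing new
    ("study_4", {"peer influence"}),
]

for study, fresh in new_concepts_per_study(screened):
    print(study, sorted(fresh))

# A long run of studies adding no new concepts would suggest the range of
# concepts is saturated, though (as noted above) no agreed stopping rule exists.
```

In this toy run, the third study adds no new concepts while the fourth does, illustrating why the range of concepts, rather than the raw count of studies, is what matters to a conceptual synthesis.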

However one searches, qualitative research is difficult to find [40-42]. In our review, it was not possible to rely on simple electronic searches of databases. We needed to search extensively in 'grey' literature, ask authors of relevant papers if they knew of further studies, and look especially for book chapters; we also spent considerable effort screening titles and abstracts and hand-searching key journals. In this sense, while we were not driven by the statistical imperative of locating every relevant study, when it came down to searching we found very little difference between the methods needed to find qualitative studies and those we use when searching for studies for inclusion in a meta-analysis.

Quality assessment

Assessing the quality of qualitative research has attracted much debate, and there is little consensus regarding how quality should be assessed, who should assess it, and, indeed, whether quality can or should be assessed in relation to 'qualitative' research at all [43,22,45]. We take the view that the quality of qualitative research should be assessed in order to avoid drawing unreliable conclusions. However, since there is little empirical evidence on which to base decisions about excluding studies on quality grounds, in this review we used 'sensitivity analyses' (described below) to assess the possible impact of study quality on the review's findings.

In our example review we assessed our studies according to 12 criteria, which were derived from existing sets of criteria proposed for assessing the quality of qualitative research [46-49], principles of good practice for conducting social research with children [50], and whether studies employed appropriate methods for addressing our review questions. The 12 criteria covered three main quality issues. Five related to the quality of the reporting of a study's aims, context, rationale, methods and findings (e.g. was there an adequate description of the sample used and the methods for how the sample was selected and recruited?). A further four criteria related to the sufficiency of the strategies employed to establish the reliability and validity of data collection tools and methods of analysis, and hence the validity of the findings. The final three criteria related to the assessment of the appropriateness of the study methods for ensuring that findings about the barriers to, and facilitators of, healthy eating were rooted in children's own perspectives (e.g. were data collection methods appropriate for helping children to express their views?).
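As a rough illustration of how such a criteria-based assessment might be recorded, the following sketch groups twelve placeholder criteria into the three quality areas described above; the criterion labels are our paraphrases, not the wording used in the review:

```python
# Illustrative record-keeping for the 12-criteria assessment. The three
# groupings follow the text above, but the criterion labels are our
# paraphrases, not the wording used in the review.
CRITERIA = {
    "reporting": [
        "aims reported", "context reported", "rationale reported",
        "methods reported", "findings reported",
    ],
    "reliability_validity": [
        "reliable data collection", "valid data collection",
        "reliable analysis", "valid analysis",
    ],
    "child_centredness": [
        "methods let children express views",
        "findings rooted in children's perspectives",
        "methods suited to the review question",
    ],
}

def assess(ratings):
    """Count how many criteria a study meets within each quality area."""
    return {area: sum(ratings.get(c, False) for c in items)
            for area, items in CRITERIA.items()}

# A study meeting all five reporting criteria and nothing else:
example = {c: True for c in CRITERIA["reporting"]}
print(assess(example))
# → {'reporting': 5, 'reliability_validity': 0, 'child_centredness': 0}
```

Summarising per area rather than as a single score keeps the three distinct quality issues visible, which matters later when the contribution of weaker studies is examined in the sensitivity analysis.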

Extracting data from studies

One issue which is difficult to deal with when synthesising 'qualitative' studies is 'what counts as data' or 'findings'? This problem is easily addressed when a statistical meta-analysis is being conducted: the numeric results of RCTs – for example, the mean difference in outcome between the intervention and control – are taken from published reports and are entered into the software package being used to calculate the pooled effect size [3,51].

Deciding what to abstract from the published report of a 'qualitative' study is much more difficult. Campbell et al. [11] extracted what they called the 'key concepts' from the qualitative studies they found about patients' experiences of diabetes and diabetes care. However, finding the key concepts in 'qualitative' research is not always straightforward either. As Sandelowski and Barroso [52] discovered, identifying the findings in qualitative research can be complicated by varied reporting styles or the misrepresentation of data as findings (as for example when data are used to 'let participants speak for themselves'). Sandelowski and Barroso [53] have argued that the findings of qualitative (and, indeed, all empirical) research are distinct from the data upon which they are based, the methods used to derive them, externally sourced data, and researchers' conclusions and implications.

In our example review, while it was relatively easy to identify 'data' in the studies – usually in the form of quotations from the children themselves – it was often difficult to identify key concepts or succinct summaries of findings, especially for studies that had undertaken relatively simple analyses and had not gone much further than describing and summarising what the children had said. To resolve this problem we took study findings to be all of the text labelled as 'results' or 'findings' in study reports – though we also found 'findings' in the abstracts which were not always reported in the same way in the text. Study reports ranged in size from a few pages to full final project reports. We entered all the results of the studies verbatim into QSR's NVivo software for qualitative data analysis. Where we had the documents in electronic form this process was straightforward even for large amounts of text. When electronic versions were not available, the results sections were either re-typed or scanned in using a flat-bed or pen scanner. (We have since adapted our own reviewing system, 'EPPI-Reviewer' [54], to handle this type of synthesis and the screenshots below show this software.)

Detailed methods for thematic synthesis

The synthesis took the form of three stages which overlapped to some degree: the free line-by-line coding of the findings of primary studies; the organisation of these 'free codes' into related areas to construct 'descriptive' themes; and the development of 'analytical' themes.

Stages one and two: coding text and developing descriptive themes

In our children and healthy eating review, we originally planned to extract and synthesise study findings according to our review questions regarding the barriers to, and facilitators of, healthy eating amongst children. It soon became apparent, however, that few study findings addressed these questions directly, and we were in danger of ending up with an empty synthesis. We were also concerned about imposing the a priori framework implied by our review questions onto study findings without allowing for the possibility that a different or modified framework might be a better fit. We therefore temporarily put our review questions to one side and started from the study findings themselves, conducting a thematic analysis.

There were eight relevant qualitative studies examining children's views of healthy eating. We entered the verbatim findings of these studies into our database. Three reviewers then independently coded each line of text according to its meaning and content. Figure 1 illustrates this line-by-line coding using our specialist reviewing software, EPPI-Reviewer, which includes a component designed to support thematic synthesis. The text taken from the report of the primary study is on the left, and codes were created inductively to capture the meaning and content of each sentence. Codes could be structured, either in a tree form (as shown in the figure) or as 'free' codes – without a hierarchical structure.

Figure 1

Line-by-line coding in EPPI-Reviewer.

The use of line-by-line coding enabled us to undertake what has been described as one of the key tasks in the synthesis of qualitative research: the translation of concepts from one study to another [32,55]. However, this process should not be regarded as one of simple translation. As we coded each new study we added to our 'bank' of codes and developed new ones where necessary. As well as translating concepts between studies, we had already begun the process of synthesis (for another account of this process, see Doyle [[39], p331]). Every sentence had at least one code applied, and most were categorised using several codes (e.g. 'children prefer fruit to vegetables' or 'why eat healthily?'). Before completing this stage of the synthesis, we also examined all the text to which a given code had been applied, to check consistency of interpretation and to see whether additional levels of coding were needed. (In grounded theory this is termed 'axial' coding; see Fisher [55] for further discussion of the application of axial coding in research synthesis.) This process created a total of 36 initial codes. For example, some of the text we coded as "bad food = nice, good food = awful" from one study [56] was:

'All the things that are bad for you are nice and all the things that are good for you are awful.' (Boys, year 6) [[56], p74]

'All adverts for healthy stuff go on about healthy things. The adverts for unhealthy things tell you how nice they taste.' [[56], p75]

Some children reported throwing away foods they knew had been put in because they were 'good for you' and only ate the crisps and chocolate. [[56], p75]
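To make the mechanics of this stage concrete, here is a minimal sketch (ours, not part of the review's toolkit) of line-by-line coding with a shared code bank; the study labels are hypothetical, the first code is the one quoted above, and 'influence of advertising' is an invented placeholder:

```python
# Minimal sketch of line-by-line coding with a shared 'code bank'.
# Study labels are hypothetical; the first code is from the example above,
# the second ('influence of advertising') is invented.
from collections import defaultdict

code_bank = defaultdict(list)  # code -> list of (study, sentence) pairs

def code_sentence(study, sentence, codes):
    """Apply one or more codes to a sentence (every sentence gets at least one)."""
    assert codes, "each sentence should receive at least one code"
    for code in codes:
        code_bank[code].append((study, sentence))

code_sentence("study_A", "All the things that are bad for you are nice ...",
              ["bad food = nice, good food = awful"])
code_sentence("study_B", "The adverts for unhealthy things tell you how nice they taste.",
              ["bad food = nice, good food = awful", "influence of advertising"])

# 'Translation': the same concept recognised across studies.
studies_per_code = {code: {study for study, _ in refs}
                    for code, refs in code_bank.items()}
print(studies_per_code["bad food = nice, good food = awful"])

# Consistency check (akin to 'axial' coding): re-read all text under one code.
for study, sentence in code_bank["bad food = nice, good food = awful"]:
    print(study, "->", sentence)
```

The two queries at the end mirror the two activities described in the text: recognising a shared concept across studies, and re-examining all text under a code to check consistency of interpretation.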

Reviewers looked for similarities and differences between the codes in order to start grouping them into a hierarchical tree structure. New codes were created to capture the meaning of groups of initial codes. This process resulted in a tree structure with several layers, organising a total of 12 descriptive themes (Figure 2). For example, the first layer divided the 12 themes according to whether they concerned children's understandings of healthy eating or influences on children's food choice. The above example, about children's preferences for food, was placed in both areas, since the findings related both to children's reactions to the foods they were given and to how they behaved when given a choice over what to eat. A draft summary of the findings across the studies, organised by the 12 descriptive themes, was then written by one of the review authors. Two other review authors commented on this draft and a final version was agreed.

Figure 2

Relationships between descriptive themes.
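The grouping of initial codes into a layered tree of descriptive themes might be represented as follows. The two top-level areas come from the text above; the lower-level theme and code names (beyond the codes already quoted) are illustrative placeholders:

```python
# Possible representation of stage two: a layered tree of descriptive themes.
# The two top-level areas come from the text; lower-level theme and code
# names are illustrative placeholders only.
theme_tree = {
    "understandings of healthy eating": {
        "perceptions of food": ["bad food = nice, good food = awful"],
        "why eat healthily?": ["future health", "fitness"],
    },
    "influences on food choice": {
        "food preferences": ["bad food = nice, good food = awful",
                             "children prefer fruit to vegetables"],
    },
}

def codes_under(node):
    """Collect every initial code below a theme; a code may appear under
    more than one branch, as in the preference example above."""
    if isinstance(node, list):
        return list(node)
    collected = []
    for child in node.values():
        collected.extend(codes_under(child))
    return collected

print(sorted(set(codes_under(theme_tree))))
```

Allowing the same code to sit under more than one branch reflects the point made above: the preference findings belonged both with children's reactions to food and with their behaviour when given a choice.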

Stage three: generating analytical themes

Up until this point, we had produced a synthesis which kept very close to the original findings of the included studies. The findings of each study had been combined into a whole via a listing of themes which described children's perspectives on healthy eating. However, we did not yet have a synthesis product that addressed directly the concerns of our review – regarding how to promote healthy eating, in particular fruit and vegetable intake, amongst children. Neither had we 'gone beyond' the findings of the primary studies and generated additional concepts, understandings or hypotheses. As noted earlier, the idea or step of 'going beyond' the content of the original studies has been identified by some as the defining characteristic of synthesis [32,14].

This stage of a qualitative synthesis is the most difficult to describe and is, potentially, the most controversial, since it is dependent on the judgement and insights of the reviewers. The equivalent stage in meta-ethnography is the development of 'third order interpretations' which go beyond the content of original studies [32,11]. In our example, the step of 'going beyond' the content of the original studies was achieved by using the descriptive themes that emerged from our inductive analysis of study findings to answer the review questions we had temporarily put to one side. Reviewers inferred barriers and facilitators from the views children were expressing about healthy eating or food in general, captured by the descriptive themes, and then considered the implications of children's views for intervention development. Each reviewer first did this independently and then as a group. Through this discussion more abstract or analytical themes began to emerge. The barriers and facilitators and implications for intervention development were examined again in light of these themes and changes made as necessary. This cyclical process was repeated until the new themes were sufficiently abstract to describe and/or explain all of our initial descriptive themes, our inferred barriers and facilitators and implications for intervention development.

For example, five of the 12 descriptive themes concerned influences on children's choice of foods (food preferences, perceptions of health benefits, knowledge-behaviour gap, roles and responsibilities, non-influencing factors). From these, reviewers inferred several barriers and implications for intervention development. Children readily identified taste as the major concern for them when selecting food, with health either a secondary factor or, in some cases, a reason for rejecting food. Children also felt that buying healthy food was not a legitimate use of their pocket money, which they would rather spend on sweets that could be enjoyed with friends. These perspectives indicated to us that branding fruit and vegetables as 'tasty', rather than 'healthy', might be more effective in increasing consumption. As one child noted astutely, 'All adverts for healthy stuff go on about healthy things. The adverts for unhealthy things tell you how nice they taste.' [[56], p75]. We captured this line of argument in the analytical theme entitled 'Children do not see it as their role to be interested in health'. Altogether, this process resulted in the generation of six analytical themes, which were associated with ten recommendations for interventions.

Results
Six main issues emerged from the studies of children's views: (1) children do not see it as their role to be interested in health; (2) children do not see messages about future health as personally relevant or credible; (3) fruit, vegetables and confectionery have very different meanings for children; (4) children actively seek ways to exercise their own choices with regard to food; (5) children value eating as a social occasion; and (6) children see the contradiction between what is promoted in theory and what adults provide in practice. The review found that most interventions were based in school (though frequently with parental involvement) and often combined learning about the health benefits of fruit and vegetables with 'hands-on' experience in the form of food preparation and taste-testing. Interventions targeted at people with particular risk factors worked better than others, and multi-component interventions that combined the promotion of physical activity with healthy eating did not work as well as those that only concentrated on healthy eating. The studies of children's views suggested that fruit and vegetables should be treated in different ways in interventions, and that messages should not focus on health warnings. Interventions that were in line with these suggestions tended to be more effective than those which were not.


Context and rigour in thematic synthesis

The process of translation, through the development of descriptive and analytical themes, can be carried out in a rigorous way that facilitates transparency of reporting. Since we aim to produce a synthesis that both generates 'abstract and formal theories' that are nevertheless 'empirically faithful to the cases from which they were developed' [[53], p1371], we see the explicit recording of the development of themes as being central to the method. The use of software as described can facilitate this by allowing reviewers to examine the contribution made to their findings by individual studies, groups of studies, or sub-populations within studies.

Some may argue against the synthesis of qualitative research on the grounds that the findings of individual studies are de-contextualised and that concepts identified in one setting are not applicable to others [32]. However, the act of synthesis could be viewed as similar to the role of a research user reading a piece of qualitative research and deciding how useful it is to their own situation. In the case of synthesis, reviewers translate themes and concepts from one situation to another, and can check that each transfer is valid and whether there are any reasons why understandings gained in one context might not be transferred to another. We attempted to preserve context by providing structured summaries of each study, detailing aims, methods, methodological quality, and setting and sample. This meant that readers of our review could judge for themselves whether or not the contexts of the studies it contained were similar to their own. In the synthesis we also checked whether the emerging findings really were transferable across different study contexts. For example, we tried throughout the synthesis to distinguish between participants (e.g. boys and girls) where the primary research had made an appropriate distinction. We then looked to see whether any of our synthesis findings could be attributed to a particular group of children or setting. In the event, we did not find any themes that belonged to a specific group, but another outcome of this process was the realisation that the contextual information given in the reports of studies was very restricted indeed. It was therefore difficult to make the best use of context in our synthesis.
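One way to operationalise this subgroup check is to tag each theme with the groups and settings of its contributing studies and then flag any theme confined to a single group or setting. This is a hypothetical sketch with invented tags, not the procedure used in the review (which, as noted, found no group-specific themes):

```python
# Hypothetical sketch of the subgroup check: tag each theme with the
# (group, setting) pairs of contributing studies and flag any theme that
# is confined to a single group or setting. All tags are invented.
theme_sources = {
    "role of taste": [("boys", "UK"), ("girls", "UK"), ("boys", "USA")],
    "pocket money": [("girls", "UK"), ("boys", "UK")],
}

def confined_themes(sources, axis):
    """axis=0 compares contributing groups, axis=1 compares settings."""
    return {theme for theme, tags in sources.items()
            if len({tag[axis] for tag in tags}) == 1}

print(confined_themes(theme_sources, axis=0))  # confined to one group
print(confined_themes(theme_sources, axis=1))  # confined to one setting
```

In this invented example the second theme draws only on UK studies, so a reviewer would hesitate to transfer it to other settings; the sparseness of contextual reporting noted above is exactly what limits such checks in practice.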

In checking that we were not translating concepts into situations where they did not belong, we were following a principle that others have followed when using synthesis methods to build grounded formal theory: that of grounding a text in the context in which it was constructed. As Margaret Kearney has noted "the conditions under which data were collected, analysis was done, findings were found, and products were written for each contributing report should be taken into consideration in developing a more generalized and abstract model" [[14], p1353]. Britten et al. [32] suggest that it may be important to make a deliberate attempt to include studies conducted across diverse settings to achieve the higher level of abstraction that is aimed for in a meta-ethnography.

Study quality and sensitivity analyses

We assessed the 'quality' of our studies with regard to the degree to which they represented the views of their participants. In doing this, we were locating the concept of 'quality' within the context of the purpose of our review – children's views – and not necessarily the context of the primary studies themselves. Our 'hierarchy of evidence', therefore, did not prioritise the research design of studies but emphasised the ability of the studies to answer our review question. A traditional systematic review of controlled trials would contain a quality assessment stage, the purpose of which is to exclude studies that do not provide a reliable answer to the review question. However, given that there were no accepted – or empirically tested – methods for excluding qualitative studies from syntheses on the basis of their quality [57,12,58], we included all studies regardless of their quality.

Nevertheless, our studies did differ according to the quality criteria they were assessed against and it was important that we considered this in some way. In systematic reviews of trials, 'sensitivity analyses' – analyses which test the effect on the synthesis of including and excluding findings from studies of differing quality – are often carried out. Dixon-Woods et al. [12] suggest that assessing the feasibility and worth of conducting sensitivity analyses within syntheses of qualitative research should be an important focus of synthesis methods work. After our thematic synthesis was complete, we examined the relative contributions of studies to our final analytic themes and recommendations for interventions. We found that the poorer quality studies contributed comparatively little to the synthesis and did not contain many unique themes; the better studies, on the other hand, appeared to have more developed analyses and contributed most to the synthesis.
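A sensitivity analysis of this kind can be approximated computationally: recompute the set of themes using only the higher-quality studies and see which themes are lost. The data below are invented for illustration:

```python
# Invented data illustrating the sensitivity check: would any analytical
# themes disappear if lower-quality studies were excluded?
themes_by_study = {
    "study_A": {"role of taste", "pocket money", "eating as a social occasion"},
    "study_B": {"role of taste", "pocket money"},
    "study_C": {"role of taste"},  # lower quality; duplicates others' themes
}
quality = {"study_A": "high", "study_B": "high", "study_C": "low"}

all_themes = set().union(*themes_by_study.values())
high_only = set().union(*(themes for study, themes in themes_by_study.items()
                          if quality[study] == "high"))

lost = all_themes - high_only  # themes unique to lower-quality studies
print(sorted(lost))
# → [] : here, as in our review, the poorer study contributes nothing unique.
```

An empty difference set corresponds to the finding reported above: excluding the poorer studies would not have removed any themes from the synthesis.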


This paper has discussed the rationale for reviewing and synthesising qualitative research in a systematic way and has outlined one specific approach for doing this: thematic synthesis. While it is not the only method which might be used – and we have discussed some of the other options available – we present it here as a tested technique that has worked in the systematic reviews in which it has been employed.

We have observed that one of the key tasks in the synthesis of qualitative research is the translation of concepts between studies. Although such translation is undertaken in most of the few existing syntheses of qualitative research, there are few examples that specify in detail how this translation is actually carried out. The example above shows how we achieved the translation of concepts across studies through the use of line-by-line coding, the organisation of these codes into descriptive themes, and the generation of analytical themes through the application of a higher level theoretical framework. This paper therefore also demonstrates how the methods and process of a thematic synthesis can be written up in a transparent way.

This paper goes some way to addressing concerns regarding the use of thematic analysis in research synthesis raised by Dixon-Woods and colleagues, who argue that the approach can lack transparency due to a failure to distinguish between 'data-driven' and 'theory-driven' approaches. Moreover, they suggest that "if thematic analysis is limited to summarising themes reported in primary studies, it offers little by way of theoretical structure within which to develop higher order thematic categories..." [[35], p47]. Part of the problem, they observe, is that the precise methods of thematic synthesis are unclear. Our approach contains a clear separation between the 'data-driven' descriptive themes and the 'theory-driven' analytical themes, and demonstrates how the review questions provided a theoretical structure within which it became possible to develop higher order thematic categories.

The theme of 'going beyond' the content of the primary studies was discussed earlier. Citing Strike and Posner [59], Campbell et al. [[11], p672] also suggest that synthesis "involves some degree of conceptual innovation, or employment of concepts not found in the characterisation of the parts and a means of creating the whole". This was certainly true of the example given in this paper. We used a series of questions, derived from the main topic of our review, to focus an examination of our descriptive themes, and we did not find our recommendations for interventions contained in the findings of the primary studies: these were new propositions generated by the reviewers in the light of the synthesis. The method also demonstrates that it is possible to synthesise without conceptual innovation: the initial synthesis, involving the translation of concepts between studies, did not itself involve such innovation, but was necessary in order for conceptual innovation to begin. One could argue that the conceptual innovation, in this case, was only necessary because the primary studies did not address our review question directly. In situations in which the primary studies are concerned directly with the review question, it may not be necessary to go beyond the contents of the original studies in order to produce a satisfactory synthesis (see, for example, Marston and King [60]). Conceptually, our analytical themes are similar to the ultimate product of meta-ethnographies – third order interpretations [11] – since both are explicit mechanisms for going beyond the content of the primary studies and presenting this in a transparent way. The main difference between them lies in their purposes. Third order interpretations bring together the implications of translating studies into one another in their own terms, whereas analytical themes are the result of interrogating a descriptive synthesis by placing it within an external theoretical framework (our review question and sub-questions).
It may be, therefore, that analytical themes are more appropriate when a specific review question is being addressed (as often occurs when informing policy and practice), and third order interpretations should be used when a body of literature is being explored in and of itself, with broader, or emergent, review questions.

This paper is a contribution to the current developmental work taking place in understanding how best to bring together the findings of qualitative research to inform policy and practice. It is by no means the only method on offer but, by drawing on methods and principles from qualitative primary research, it benefits from the years of methodological development that underpin the research it seeks to synthesise.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

Both authors contributed equally to the paper and read and approved the final manuscript.


Acknowledgements

The authors would like to thank Elaine Barnett-Page for her assistance in producing the draft paper, and David Gough, Ann Oakley and Sandy Oliver for their helpful comments. The review used as an example in this paper was funded by the Department of Health (England). The methodological development was supported by the Department of Health (England) and the ESRC through the Methods for Research Synthesis Node of the National Centre for Research Methods. In addition, Angela Harden held a senior research fellowship funded by the Department of Health (England) from December 2003 to November 2007. The views expressed in this paper are those of the authors and are not necessarily those of the funding bodies.


References

  • Chalmers I. Trying to do more good than harm in policy and practice: the role of rigorous, transparent and up-to-date evaluations. Ann Am Acad Pol Soc Sci. 2003;589:22–40. doi: 10.1177/0002716203254762.
  • Oakley A. Social science and evidence-based everything: the case of education. Educ Rev. 2002;54:277–286. doi: 10.1080/0013191022000016329.
  • Cooper H, Hedges L. The Handbook of Research Synthesis. New York: Russell Sage Foundation; 1994.
  • EPPI-Centre. EPPI-Centre Methods for Conducting Systematic Reviews. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London; 2006.
  • Higgins J, Green S, eds. Cochrane Handbook for Systematic Reviews of Interventions, Version 4.2.6. 2006. Updated September 2006. Accessed 24th January 2007.
  • Petticrew M, Roberts H. Systematic Reviews in the Social Sciences: A practical guide. Oxford: Blackwell Publishing; 2006.
  • Chalmers I, Hedges L, Cooper H. A brief history of research synthesis. Eval Health Prof. 2002;25:12–37. doi: 10.1177/0163278702025001003.
  • Juni P, Altman D, Egger M. Assessing the quality of controlled clinical trials. BMJ. 2001;323:42–46. doi: 10.1136/bmj.323.7303.42.
  • Mulrow C. Systematic reviews: rationale for systematic reviews. BMJ. 1994;309:597–599.
  • White H. Scientific communication and literature retrieval. In: Cooper H, Hedges L, editors. The Handbook of Research Synthesis. New York: Russell Sage Foundation; 1994.
  • Campbell R, Pound P, Pope C, Britten N, Pill R, Morgan M, Donovan J. Evaluating meta-ethnography: a synthesis of qualitative research on lay experiences of diabetes and diabetes care. Soc Sci Med. 2003;56:671–684. doi: 10.1016/S0277-9536(02)00064-3.
  • Dixon-Woods M, Bonas S, Booth A, Jones DR, Miller T, Sutton AJ, Shaw RL, Smith JA, Young B. How can systematic reviews incorporate qualitative research? A critical perspective. Qual Res. 2006;6:27–44. doi: 10.1177/1468794106058867.
  • Sandelowski M, Barroso J. Handbook for Synthesising Qualitative Research. New York: Springer; 2007.
  • Thorne S, Jensen L, Kearney MH, Noblit G, Sandelowski M. Qualitative meta-synthesis: reflections on methodological orientation and ideological agenda. Qual Health Res. 2004;14:1342–1365. doi: 10.1177/1049732304269888.
  • Harden A, Garcia J, Oliver S, Rees R, Shepherd J, Brunton G, Oakley A. Applying systematic review methods to studies of people's views: an example from public health. J Epidemiol Community Health. 2004;58:794–800. doi: 10.1136/jech.2003.014829.
  • Harden A, Brunton G, Fletcher A, Oakley A. Young People, Pregnancy and Social Exclusion: A systematic synthesis of research evidence to identify effective, appropriate and promising approaches for prevention and support. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London; 2006.
  • Thomas J, Sutcliffe K, Harden A, Oakley A, Oliver S, Rees R, Brunton G, Kavanagh J. Children and Healthy Eating: A systematic review of barriers and facilitators. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London; 2003. Accessed 4th July 2008.
  • Thomas J, Kavanagh J, Tucker H, Burchett H, Tripney J, Oakley A. Accidental Injury, Risk-Taking Behaviour and the Social Circumstances in which Young People Live: A systematic review. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London; 2007.
  • Bryman A. Quantity and Quality in Social Research. London: Unwin; 1998.
  • Hammersley M. What's Wrong with Ethnography? London: Routledge; 1992.
  • Harden A, Thomas J. Methodological issues in combining diverse study types in systematic reviews. Int J Soc Res Meth. 2005;8:257–271. doi: 10.1080/13645570500155078.
  • Oakley A. Experiments in Knowing: Gender and methods in the social sciences. Cambridge: Polity Press; 2000.
  • Harden A, Oakley A, Oliver S. Peer-delivered health promotion for young people: a systematic review of different study designs. Health Educ J. 2001;60:339–353. doi: 10.1177/001789690106000406.
  • Harden A, Rees R, Shepherd J, Brunton G, Oliver S, Oakley A. Young People and Mental Health: A systematic review of barriers and facilitators. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London; 2001.
  • Rees R, Harden A, Shepherd J, Brunton G, Oliver S, Oakley A. Young People and Physical Activity: A systematic review of barriers and facilitators. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London; 2001.
  • Shepherd J, Harden A, Rees R, Brunton G, Oliver S, Oakley A. Young People and Healthy Eating: A systematic review of barriers and facilitators. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London; 2001.
  • Thomas J, Harden A, Oakley A, Oliver S, Sutcliffe K, Rees R, Brunton G, Kavanagh J. Integrating qualitative research with trials in systematic reviews: an example from public health. BMJ. 2004;328:1010–1012. doi: 10.1136/bmj.328.7446.1010.
  • Davies P. What is evidence-based education? Br J Educ Stud. 1999;47:108–121. doi: 10.1111/1467-8527.00106.
  • Newman M, Thompson C, Roberts AP. Helping practitioners understand the contribution of qualitative research to evidence-based practice. Evid Based Nurs. 2006;9:4–7. doi: 10.1136/ebn.9.1.4.
  • Popay J. Moving Beyond Effectiveness in Evidence Synthesis. London: National Institute for Health and Clinical Excellence; 2006.
  • Noblit GW, Hare RD. Meta-Ethnography: Synthesizing qualitative studies. Newbury Park: Sage; 1988.
  • Britten N, Campbell R, Pope C, Donovan J, Morgan M, Pill R. Using meta-ethnography to synthesise qualitative research: a worked example. J Health Serv Res Policy. 2002;7:209–215. doi: 10.1258/135581902320432732.
  • Paterson B, Thorne S, Canam C, Jillings C. Meta-Study of Qualitative Health Research. Thousand Oaks, California: Sage; 2001.
  • Dixon-Woods M, Cavers D, Agarwal S, Annandale E, Arthur A, Harvey J, Katbamna S, Olsen R, Smith L, Riley R, Sutton AJ. Conducting a critical interpretative synthesis of the literature on access to healthcare by vulnerable groups. BMC Med Res Methodol. 2006;6:35. doi: 10.1186/1471-2288-6-35.
  • Dixon-Woods M, Agarwal S, Jones D, Young B, Sutton A. Synthesising qualitative and quantitative evidence: a review of possible methods. J Health Serv Res Policy. 2005;10:45–53. doi: 10.1258/1355819052801804.
  • Boyatzis RE. Transforming Qualitative Information. Cleveland: Sage; 1998.
  • Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3:77–101. doi: 10.1191/1478088706qp063oa.
  • Silverman D, ed. Qualitative Research: Theory, method and practice. London: Sage; 1997.
  • Doyle LH. Synthesis through meta-ethnography: paradoxes, enhancements, and possibilities. Qual Res. 2003;3:321–344. doi: 10.1177/1468794103033003.
  • Barroso J, Gollop C, Sandelowski M, Meynell J, Pearce PF, Collins LJ. The challenges of searching for and retrieving qualitative studies. Western J Nurs Res. 2003;25:153–178. doi: 10.1177/0193945902250034.
  • Walters LA, Wilczynski NL, Haynes RB, Hedges Team. Developing optimal search strategies for retrieving clinically relevant qualitative studies in EMBASE. Qual Health Res. 2006;16:162–168. doi: 10.1177/1049732305284027.
  • Wong SSL, Wilczynski NL, Haynes RB. Developing optimal search strategies for detecting clinically relevant qualitative studies in Medline. Medinfo. 2004;11:311–314.
  • Murphy E, Dingwall R, Greatbatch D, Parker S, Watson P. Qualitative research methods in health technology assessment: a review of the literature. Health Technol Assess. 1998;2.
  • Seale C. Quality in qualitative research. Qual Inq. 1999;5:465–478.
  • Spencer L, Ritchie J, Lewis J, Dillon L. Quality in Qualitative Evaluation: A framework for assessing research evidence. London: Cabinet Office; 2003.
  • Boulton M, Fitzpatrick R, Swinburn C. Qualitative research in healthcare II: a structured review and evaluation of studies. J Eval Clin Pract. 1996;2:171–179. doi: 10.1111/j.1365-2753.1996.tb00041.x.
  • Cobb A, Hagemaster J. Ten criteria for evaluating qualitative research proposals. J Nurs Educ. 1987;26:138–143.
  • Mays N, Pope C. Rigour and qualitative research. BMJ. 1995;311:109–112.
  • Medical Sociology Group. Criteria for the evaluation of qualitative research papers. Med Sociol News. 1996;22:68–71.
  • Alderson P. Listening to Children. London: Barnardo's; 1995.
  • Egger M, Davey-Smith G, Altman D. Systematic Reviews in Health Care: Meta-analysis in context. London: BMJ Publishing; 2001.
  • Sandelowski M, Barroso J. Finding the findings in qualitative studies. J Nurs Scholarsh. 2002;34:213–219. doi: 10.1111/j.1547-5069.2002.00213.x.
  • Sandelowski M. Using qualitative research. Qual Health Res. 2004;14:1366–1386. doi: 10.1177/1049732304269672.
  • Thomas J, Brunton J. EPPI-Reviewer 3.0: Analysis and management of data for research synthesis. EPPI-Centre software. London: EPPI-Centre, Social Science Research Unit, Institute of Education; 2006.
  • Fisher M, Qureshi H, Hardyman W, Homewood J. Using Qualitative Research in Systematic Reviews: Older people's views of hospital discharge. London: Social Care Institute for Excellence; 2006.
  • Dixey R, Sahota P, Atwal S, Turner A. Children talking about healthy eating: data from focus groups with 300 9–11-year-olds. Nutr Bull. 2001;26:71–79. doi: 10.1046/j.1467-3010.2001.00078.x.
  • Daly A, Willis K, Small R, Green J, Welch N, Kealy M, Hughes E. Hierarchy of evidence for assessing qualitative health research. J Clin Epidemiol. 2007;60:43–49. doi: 10.1016/j.jclinepi.2006.03.014.
  • Popay J. Moving beyond floccinaucinihilipilification: enhancing the utility of systematic reviews. J Clin Epidemiol. 2005;58:1079–1080. doi: 10.1016/j.jclinepi.2005.08.004.
  • Strike K, Posner G. Types of synthesis and their criteria. In: Ward S, Reed L, editors. Knowledge Structure and Use: Implications for synthesis and interpretation. Philadelphia: Temple University Press; 1983.
  • Marston C, King E. Factors that shape young people's sexual behaviour: a systematic review. Lancet. 2006;368:1581–1586. doi: 10.1016/S0140-6736(06)69662-1.

Volume 9, No. 1, Art. 34 – January 2008

Cultivating the Under-Mined: Cross-Case Analysis as Knowledge Mobilization

Samia Khan & Robert VanWynsberghe

Abstract: Despite a plethora of case studies in the social sciences, it is the authors' opinion that case studies remain relatively under-mined sources of expertise. Cross-case analysis is a research method that can mobilize knowledge from individual case studies. The authors propose that mobilization of case knowledge occurs when researchers accumulate case knowledge, compare and contrast cases, and in doing so, produce new knowledge. In this article, the authors present theories of how people can learn from sets of cases. Second, existing techniques for cross-case analysis are discussed. Third, considerations that enable researchers to engage in cross-case analysis are suggested. Finally, the authors introduce a novel online database: the Foresee (4C) database. The purpose of the database is to mobilize case knowledge by helping researchers perform cross-case analysis and by creating an online research community that facilitates dialogue and the mobilization of case knowledge. The design of the 4C database is informed by theories of how people learn from case studies and cross-case analysis techniques. We present evidence from case study research that use of the 4C database helps to mobilize previously dormant case study knowledge to foster greater expertise.

Key words: case study, cross-case analysis, computer-assisted analysis, knowledge mobilization, researcher, database

Table of Contents

1. Cross-Case Analysis: Introducing the Foresee Database

2. Literature Review

2.1 Learning from and with cases

3. Review of Several Cross-Case Analysis Approaches and Techniques

3.1 Variable-oriented approaches to cross-case analysis

3.2 Case-oriented approaches to cross-case analysis

4. Several Issues for the Case Study Researcher Engaged in Cross-Case Analysis

5. The Foresee Database Project

5.1 Design principles of the 4C database

5.2 Affordances of the 4C database

6. How 4C is Different from Computer-Assisted Qualitative Data Analysis Tools and Online Repositories

7. Conclusion

1. Cross-Case Analysis: Introducing the Foresee Database

Cross-case analysis is a research method that facilitates the comparison of commonalities and differences in the events, activities, and processes that are the units of analysis in case studies.1) Despite a plethora of case studies in the social science literature and archived on web sites, few are adequately mined again by researchers or are known to inform practitioners or policy at a broader level. The expertise embedded within the vast number of case studies in the fields of education and sociology remains relatively dormant. In this paper, we propose cross-case analysis as a mechanism for mining existing case studies so that knowledge from cases can be put into service for broader purposes. To mobilize case knowledge across subject domains and across communities, we introduce the creation of a novel database. The database represents a workspace in which to perform cross-case analysis, and one where expertise can flow in systematic and unexpected ways through the representation, transfer and mobilization of case studies. [1]

Engaging in cross-case analysis extends the investigator's expertise beyond the single case. It provokes the researcher's imagination, prompts new questions, reveals new dimensions, produces alternatives, generates models, and constructs ideals and utopias (STRETTON, 1969). Cross-case analysis enables case study researchers to delineate the combination of factors that may have contributed to the outcomes of the case, seek or construct an explanation as to why one case is different or the same as others, make sense of puzzling or unique findings, or further articulate the concepts, hypotheses, or theories discovered or constructed from the original case. Cross-case analysis enhances researchers' capacities to understand how relationships may exist among discrete cases, accumulate knowledge from the original case, refine and develop concepts (RAGIN, 1997), and build or test theory (ECKSTEIN, 2002). Furthermore, cross-case analysis allows the researcher to compare cases from one or more settings, communities, or groups. This provides opportunities to learn from different cases and gather critical evidence to modify policy. [2]

2. Literature Review

2.1 Learning from and with cases

Assuming that the researcher's learning process parallels the ways in which individuals develop expertise, the authors will, in this section, examine four learning theories that support the notion that cross-case analysis is a method for mobilizing case study knowledge: AUSUBEL, NOVAK, and HANESIAN's (1978) cognitive theory of meaningful learning, KOLODNER's (1993) case-based reasoning, FLYVBJERG's (2001) notions of developing expertise from cases, and DONMOYER's (1990) theory of vicarious learning from case knowledge. These learning theories support the notion that researchers develop expertise from cases, and they conceptualize the processes through which this expertise is cultivated. [3]

KOLODNER and AUSUBEL et al.'s theories primarily emphasize human learning as a cognitive and experiential undertaking and do so while pointing to cognitive processes that are similar to those required for engagement in cross-case analysis. FLYVBJERG and DONMOYER stress the importance of learning from one case to another, arguably emphasizing a form of case-based reasoning, that is, the process of reasoning about the similarities and differences across diverse cases, as key to the development of expertise. Cumulatively, these theories appear to hypothesize that cognition involves cases of experiences and that learning from cases is accomplished by cross-case analysis. The authors extend these hypotheses on learning and suggest that case study researchers can develop expertise through learning from and comparing cases. When the case study researcher makes this comparison public, case knowledge becomes mobilized. [4]

AUSUBEL et al.'s cognitive theory of meaningful learning. AUSUBEL et al.'s cognitive theory of learning (1978) emphasizes that people learn meaningfully by developing cross-connections between related concepts, which allows them to engage in inferential and analogical reasoning. These cross-connections can take the form of either cognitive assimilation or accommodation of concepts. Assimilation of concepts increases knowledge while preserving the cognitive structure, whereas accommodation modifies existing knowledge to account for the new experience. AUSUBEL et al.'s conception of cross-connections can be applied to cross-case analysis: relating one case to another, building cross-connections between cases, and preserving the essence of the original case knowledge while changing the character of the current case can accumulate and produce new knowledge. [5]

Case-based reasoning. KOLODNER (1993) extends AUSUBEL et al.'s theory of cross-connections to memory. KOLODNER's case-based reasoning (CBR) explains learning as a cognitive process in which the individual interprets a new situation in terms of its relevance to a previous case. KOLODNER further theorizes that the lessons learned from the combination of previous and new cases are encoded and indexed in memory as abstract generalizations. This process of memory storage and retrieval implies that a person will be able to evaluate possible solutions through an indexing process that discriminates among cases. At memory retrieval time, when the person is engaged in a new situation, a memory probe searches through the index for cases that are similar to the new one. KOLODNER describes this probing as a creative process and suggests that the more astute the person is at conceptualizing a situation, the more likely he or she is to find relevant knowledge about previously learned, memorable cases (KOLODNER et al., 2003; SCHANK & BERMAN, 2002). This ability to enlighten oneself develops over time through case-based reasoning. It appears that analyses of a variety of cases are necessary to learn well. [6]
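KOLODNER's indexing-and-retrieval account can be caricatured in a few lines of code: stored cases are indexed by features, and a new situation probes memory for the cases whose features overlap most. This is only a toy analogy for the cognitive process she describes, not an implementation of any CBR system; the cases and features below are invented.

```python
# Toy analogy for case-based retrieval: cases indexed by feature sets,
# with a new situation retrieving the best-overlapping stored cases.
# Case names and features are invented for illustration.

memory = {
    "playground_dispute": {"children", "conflict", "shared_resource"},
    "team_deadline":      {"adults", "conflict", "time_pressure"},
    "classroom_lesson":   {"children", "instruction"},
}

def retrieve(situation_features, case_memory):
    """Rank stored cases by feature overlap with the new situation."""
    return sorted(
        case_memory,
        key=lambda name: len(case_memory[name] & situation_features),
        reverse=True,
    )

probe = {"children", "conflict"}
print(retrieve(probe, memory)[0])  # the case sharing the most features
```

The richer the probe (the more astutely the situation is conceptualized, in KOLODNER's terms), the more reliably the most relevant prior case surfaces.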

FLYVBJERG's notion of expertise. Drawing heavily upon DREYFUS and DREYFUS' (1988) work on skill acquisition in experts, FLYVBJERG (2001) extends the notion of case-based cognition to experts' ways of reasoning. Experts think quickly, intuitively, holistically, interpretively, and visually. As DREYFUS and DREYFUS explain, "bodily involvement, speed, and an intimate knowledge of concrete cases in the form of good examples are a prerequisite for true expertise" (1988, p.15). According to FLYVBJERG (2001), expertise or virtuosity is intimate knowledge of concrete cases, gained through direct, holistic, and intuitive reflection upon thousands of cases. Case studies are the domain of expertise, which is neither guesswork nor a conscious analytical division of situations into parts and rules but rather the recognition, interpretation and discrimination of cases and new situations. [7]

DONMOYER's theory of learning from cases. DONMOYER's (1990) conception of generalization reveals how an expert might simultaneously access numerous cases to make a comparison among these cases. DONMOYER suggests that new understanding takes root when an individual begins to generalize across cases that were derived or constructed from different contexts. According to DONMOYER, generalization across cases is not a formal act of generating working hypotheses that are to be tested in new cases. Instead, he views learning from cases as a meaning-making endeavor in which cross-case analysis is essential. DONMOYER suggests that learning from case knowledge can be better characterized as assimilating, accommodating, and integrating case knowledge from previously learned cases. His own example of becoming a better teacher over the years exemplifies this kind of learning. DONMOYER suggests that his development as a teacher was not an effort to consciously test hypotheses in the different schools he taught at but rather, an attempt to learn from individual cases of teaching that he and others experienced over the years. [8]

In sum, learning through cross-case analysis empowers the learner to access the experience of others and thus, to extend their personal experience. These new connections made across cases produce new knowledge and augment existing knowledge and experience. While learning theorists invoke different cognitive structures and processes to explain cross-case analysis, there are the following commonalities:

  • cases represent rich holistic examples of experiences;

  • cases are comparable in relation to patterns of similarities and differences;

  • memorable cases are accessed through memory;

  • comparisons among cases can construct and yield meaningful linkages, and

  • cognitive cross-case analyses are a useful way to produce analogies, make inferences, and develop conditional generalizations for the individual. [9]

Similarly, for researchers who develop expertise through cross-case analysis:

  • cases represent rich examples of cases they have learned or know about;

  • the cases are deemed comparable in relation to patterns of similarities and differences;

  • the cases are accessible;

  • meaningful connections between cases can be made explicit by the researcher, and

  • the researcher can produce and share new knowledge through cross-case analysis. [10]

3. Review of Several Cross-Case Analysis Approaches and Techniques

There are several well-known cross-case analysis approaches and techniques available to the case study researcher. RAGIN (1997), for example, distinguishes between variable-oriented and case-oriented research as two approaches to cross-case comparison. In variable-oriented research, variables take center stage: the outcome observed in the cases varies across observations, and causes appear to compete with one another. The cases are selected in advance with an eye toward randomness or the degree to which they represent the general population. The goal is to explain why the cases vary. Variable-oriented approaches to cross-case analysis are a challenge to conduct because fair comparisons are difficult to achieve and the multitude of factors associated with social phenomena are often too numerous to disentangle. In case-oriented research, commonalities across multiple instances of a phenomenon may contribute to conditional generalizations (MILES & HUBERMAN, 1994). The researcher can thus demonstrate that the outcomes in the cases selected are in fact enough alike to be treated as instances of the same thing. The central question of interest to the case-oriented researcher is in what ways the cases are alike; special emphasis is therefore given to the case itself rather than to variables across cases. Examples that illustrate the complexity of this approach are case studies that focus on the role of violence in schoolyard bullying and in national warfare. Both case studies are about violence, but the scale and scope of the violence in the respective contexts are likely incommensurable and difficult to compare or contrast. Still, one is immediately attracted to the prospect of crossover and mutual illumination.
Thus, in a variable-oriented approach, factors known to be involved in violence, such as resources and perceptions of vulnerabilities, could be used to evaluate both cases independently before comparing factors between a case of schoolyard bullying and a case of war-mongering states to explain and predict violent behavior. On the other hand, in a case-oriented approach, one could conceivably compare two cases of "swarming" in schools with two cases of "swarming"-like behavior in war-mongering nation states to search for or construct similar processes that appear to lead to violent behaviors. [11]

In this section, several variable-oriented and case-oriented approaches that are applicable to cross-case analysis are discussed, drawing upon the more extensive reviews of these approaches by GEORGE and BENNETT (2005) and MILES and HUBERMAN (1994). For variable-oriented cross-case analyses, several well-known research techniques include MILL's methods, the case survey, and the before-after research design. For case-oriented cross-case analyses, several well-known techniques include the most different design, typologies, multicase methods, and process-tracing. [12]

3.1 Variable-oriented approaches to cross-case analysis

MILL's methods. MILL's (1843) famous comparative system of logic involves a method of agreement and a method of difference as two potential analytic techniques for comparing cases. The method of agreement identifies a similarity in the independent variable associated with a common outcome in two or more cases. The method of difference identifies independent variables associated with different outcomes. MILL's methods require eliminating candidate causes for the outcome. In the method of difference, for example, a condition that is present in the case where the outcome occurred but absent in the case where it did not could be considered a possible causal factor in the variance between outcomes. The factor(s) that survive this systematic process of elimination are inferentially connected to the outcomes. MILL himself noted some serious obstacles to his comparative system of logic, especially when applied to studies in social science. Social phenomena are often rooted in a complex web of causes, which are difficult if not impossible to isolate as deterministic. That leaves the researcher open to the danger of false positives. GEORGE and BENNETT (2005), who conducted an extensive review of comparative techniques, suggest that MILL's methods can work if the causal relationship involves only one factor that is either necessary or sufficient for a specified outcome, if all causally relevant variables are identified prior to the analysis, and if cases that represent the full range of possible causal paths are available for study. GEORGE and BENNETT contend that there are few theories in the social sphere that are strong enough to support general claims of necessity or sufficiency for single variables (2005, p.157). [13]
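The method of difference can be expressed as a simple elimination over tables of boolean conditions. The sketch below is purely illustrative (the condition names are invented): it retains only those conditions that are present where the outcome occurred and absent where it did not.

```python
# Illustrative elimination in the spirit of MILL's method of difference:
# keep candidate conditions present in the positive-outcome case and
# absent in the negative-outcome case. Condition names are invented.

case_with_outcome = {"resource_scarcity": True, "weak_oversight": True, "rivalry": True}
case_without_outcome = {"resource_scarcity": True, "weak_oversight": False, "rivalry": False}

def method_of_difference(pos, neg):
    """Candidate causes: conditions present where the outcome occurred
    and absent where it did not."""
    return {c for c, present in pos.items() if present and not neg.get(c, False)}

candidates = method_of_difference(case_with_outcome, case_without_outcome)
print(sorted(candidates))  # conditions surviving elimination
```

Note that two conditions survive the elimination in this invented example, which illustrates GEORGE and BENNETT's point: the method only isolates a unique cause when a single necessary-or-sufficient factor differs between the cases.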

Case survey method. The case survey method (YIN, 1994, 2003) involves gathering evidence from a large set of cases (e.g., 250) so that statistical analyses can be performed on the variables pertinent to all the cases. Case surveys are challenging to carry out because researchers seldom study so many cases, and they rarely find perfectly comparable ones. Furthermore, increasing the number of cases often means making assumptions of homogeneity that are simply unjustifiable. An example of the case survey method would be a study of the cultural antecedents of procrastination in which large numbers of individuals from all over the world are analyzed as separate case studies. [14]

Before-after design. Another method for cross-case analysis is the before-after design, which offers some level of control by dividing one case into two sub-cases. Some event or critical juncture in a natural setting creates the conditions for a before-and-after investigation. One of the assumptions on which the before-after design is based is that only one variable changes, dividing the longitudinal case neatly in two. Determining that change is difficult unless all factors involved in the case are carefully analyzed over the same period of time. An example of this type of cross-case analysis is a study of online communication in a science course in which patterns of communication are analyzed before and after a major course assignment. [15]

3.2 Case-oriented approaches to cross-case analysis

Most different design. Some social scientists have abandoned the quest for controlled comparison in favor of PRZEWORSKI and TEUNE's (1982) most different design. A most different research design deliberately seeks out cases that differ as much as possible in order to find similar processes or outcomes in diverse sets of cases. This case-oriented approach emphasizes diversity in the selection of cases (GEORGE & BENNETT, 2005, p.165). The power of the most different design lies in its ability to extend the lessons learned in single cases to inform other cases and to uncover similar processes in unexpected contexts. A cross-case comparison of school principals and CEOs of large auto companies would be one example of a most different design. While schools and auto companies do not, on the surface, appear to be meaningfully comparable, it may be fruitful to compare the work habits and organization techniques of CEOs who produce cars with those of school principals who view schools as organizations with students as products. [16]

Typologies. Cross-case comparison can support the creation of clusters or families of phenomena: sets of cases are categorized into groups that share certain patterns or configurations, and sometimes these clusters can be ordered or sorted along several dimensions. For example, DENZIN (1989) suggests deconstructing prior conceptions of a particular phenomenon, then collecting multiple cases and bracketing them for essential elements and components across cases. The essential elements are then rebuilt into an ordered whole (e.g., the construction of the alcoholic self) and put back into the social context. In another typologizing effort, the pathway to the outcome is inspected and compared among a set of cases; as in process-tracing (below), the same outcome is theorized according to different pathways. For example, science education reforms that better integrate technology would be considered a sub-class of the general category of educational reforms. Typologies share a specified combination of factors, but these are not necessarily causal, mutually exclusive, or exhaustive. GEORGE and BENNETT (2005) argue that a typological regularity can be sought through previously unexamined causal paths or a building block approach. Typologizing supports the construction of theories by identifying the sub-classes of a major phenomenon. [17]

Multicase methods. This method was recently introduced by STAKE (2006) and focuses on the quintain, a common focus (an organization, campaign, or problem) for a set of case studies. The quintain might be, for example, mega-events like the Olympic Games, or a school district that wishes to incorporate technology at all of its sites. The quintain comprises case studies that have both common and unique issues. The common issues address important and complex problems about which disagreement exists; the impacts of mega-events on host regions, for instance, might be elicited from case studies done at different Olympic sites. Common research questions (e.g., what is the economic impact of an enhanced international image on the host region?) tie together all of the case studies. A cross-case analysis of these cases facilitates a greater understanding of the quintain (again, mega-events). According to STAKE, after cross-case analysis, researchers can make assertions about the quintain. These assertions are then applied to the individual case studies to determine the extent to which the case studies reflect the quintain. The degree of congruity or disparity speaks to the uniformity of the quintain and the power of cross-case analysis (STAKE, 2006). [18]

Process-tracing. In this method, the progression of events that may have led to an outcome in a single case is traced (GEORGE & BENNETT, 2005). Process-tracing forces the researcher to consider alternative paths through which the outcome could have occurred, and it offers the possibility of mapping out one or more potential causal paths that are consistent with the outcome. Cross-case analysis allows the researcher to develop a typological theory by charting the repertoire of causal paths that reveal given outcomes as well as the conditions under which they occur. In process-tracing, all the intervening steps within a case must be predicted by a hypothesis or else the hypothesis is amended. Process-tracing generally takes the form of a detailed narrative in which the unfolding of a story is theoretically oriented. [19]

In addition to variable- and case-oriented approaches, some analytic techniques are worth mentioning, such as stacking, building truth tables, and constructing narrative models. MILES and HUBERMAN (1994) suggest that these techniques are a mixture of variable- and case-oriented approaches; they are mentioned here because any of the approaches discussed above can utilize them. The authors refer to these three techniques as data display and analysis techniques because they help to visualize sets of cases, and they bring case relationships to the surface in ways that invite and facilitate comparison. In the technique of stacking comparable cases, a series of cases is displayed in a meta-matrix by fields of interest (MILES & HUBERMAN, 1994). Each case is condensed in a form that permits a systematic visualization and comparison of all the cases at once. [20]
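The stacking technique can be illustrated with a small sketch: each case is condensed to one row of a meta-matrix, with one column per field of interest, so that all cases can be scanned at once. The cases, field names, and truncation width below are hypothetical.

```python
def meta_matrix(cases, fields, width=30):
    """Stack condensed cases in a meta-matrix: one row per case,
    one column per field, with long entries truncated so the whole
    set of cases can be visualized and compared at once."""
    def condense(text):
        return text if len(text) <= width else text[:width - 3] + "..."
    header = ["case"] + fields
    rows = [[name] + [condense(str(data.get(f, ""))) for f in fields]
            for name, data in cases.items()]
    return [header] + rows

# Hypothetical condensed cases, keyed by case name.
cases = {
    "Case A": {"focus": "literacy", "purpose": "explore home-school practices"},
    "Case B": {"focus": "smoking prevention", "purpose": "evaluate a media campaign"},
}
matrix = meta_matrix(cases, ["focus", "purpose"])
```

The full, uncondensed case text would sit elsewhere; the matrix only supports the first-pass visual comparison described above.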

The "qualitative comparative analysis" or QCA technique, developed by RAGIN (1993), allows certain aspects of a case to be analyzed without obscuring the case as a whole. QCA is based on Boolean analysis, in which relationships among cases are expressed with the logical operators AND, OR, and NOT. This approach to synthesizing cases arranges the cases in a "truth table" by variable in order to study common causes or outcomes; the Boolean operators are then used to locate relationships within the truth table. [21]
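A truth table of the kind RAGIN describes can be sketched as follows: cases are grouped by their configuration of binary conditions, and each row records the outcomes observed for that configuration. This is only the tabulation step, with hypothetical condition names; full QCA would go on to minimize the resulting Boolean expressions.

```python
from collections import defaultdict

def truth_table(cases, conditions):
    """Arrange cases in a truth table: one row per unique
    configuration of condition values, listing the outcomes
    observed for that configuration."""
    rows = defaultdict(list)
    for case in cases:
        configuration = tuple(case[c] for c in conditions)
        rows[configuration].append(case["outcome"])
    return dict(rows)

# Hypothetical binary-coded cases.
cases = [
    {"urban": 1, "funded": 1, "outcome": 1},
    {"urban": 1, "funded": 0, "outcome": 0},
    {"urban": 0, "funded": 1, "outcome": 1},
    {"urban": 1, "funded": 1, "outcome": 1},
]
table = truth_table(cases, ["urban", "funded"])
# table == {(1, 1): [1, 1], (1, 0): [0], (0, 1): [1]}
```

Here, for instance, every case with "funded" present shares the positive outcome, which is the kind of regularity a truth table is meant to surface.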

The third technique discussed here was developed by GOLDSTONE (1997), who suggests that narratives are the key to cross-case analysis because they can preserve the essence of the case during comparison. It could also be argued that constructing narrative models facilitates comparison by encapsulating each case as a storyline. [22]

In summary, there are multiple research techniques for conducting cross-case analyses. Variable-oriented approaches to cross-case comparison tend to pay greater attention to the variables across cases than to the cases themselves: variables are compared across cases in order to delineate pathways that may have led to particular outcomes, and these pathways are often represented as probabilistic relationships among variables. The complexity and context of individual cases are not at the center of variable-oriented approaches. Case-oriented approaches, on the other hand, such as creating typologies, are more particularistic. They can show how a story unfolded in different cases, help researchers make sense of the original case, or suggest new typologies, classes, or families of a social phenomenon. Visualization techniques, such as stacking cases, can be utilized by either approach to invite and show comparison. Advantages of cross-case analysis that emerge from these techniques are:

  • the case content is made available to the researcher in an easily accessible form;

  • cases are clustered and represented in a visual display to facilitate comparison by the researcher and by others;

  • cases are compared in a method that either centers on the case or on the variables, depending on the goal of the researcher, and

  • findings of the case and the cross-case comparison are shared with others. [23]

4. Several Issues for the Case Study Researcher Engaged in Cross-Case Analysis

While there are a number of scholars who suggest that cross-case analysis can enhance a researcher's contribution to theory and method (cf. ECKSTEIN, 2002; RUESCHEMEYER, 2003), there are others who are less optimistic about comparing cases. Counter-arguments stem from an epistemological conviction that case knowledge emerges from a dense descriptive study of the particularities of a case. Comparison, the counter-argument goes, obscures case knowledge, including knowledge not germane to the comparison (PEATTIE, 2001). Indeed, there are long-standing tensions between deeply contextualized, particularistic case knowledge and multiple case study research (FOREMAN, 1948; ALLPORT, 1962; MOLENAAR, 2004). To begin to reduce the tensions between idiographic and nomothetic research traditions, case study researchers must recall their original goals for the cross-case analysis. As mentioned previously, goals for engaging in a cross-case analysis can include, for example, further illustration, concept and hypothesis development, prediction, and empathic portrayals. [24]

Researchers' goals notwithstanding, the cross-case analyst will also be confronted with questions about the generalizability of the conclusions emerging from the analysis and the ability of the researcher to justify any comparison beyond the set of cases studied. As suggested by KHAN (2007), positivist notions of generalizability have been largely abandoned or modified in social science and case study scholarship (SCHOFIELD, 1990; DONMOYER, 1990; GUBA & LINCOLN, 1981). Generalizations have been recognized as contextual, having half-lives (CRONBACH, 1975) that require updating (even in experimental research). It is far easier, and more epistemologically sound, simply to give up on the idea of generalization; if generalizations are accepted, they should be regarded as indeterminate, relative, and time- and context-bound (LINCOLN & GUBA, 2000, p.32). [25]

Instead of positivist notions of generalizability, new concepts have emerged to extend and amplify the impact of a single case beyond the case itself (YIN, 2003; BECKER, 1990; SMALING, 2003). For example, GOETZ and LECOMPTE (1984) recognized that the findings from case studies cannot be generalized in a probabilistic sense, but that findings from case studies may still be relevant to other contexts. "Comparability" is a concept they proposed to address the issue of generalizability from a single case or cross-case analysis. Comparability is the degree to which the parts of a study are sufficiently well described and defined that other researchers can use the results of the study as a basis for comparison. "Translatability" is a similar concept but refers to a clear description of one's theoretical stance and research techniques. [26]

While it is not the purpose of this paper to elaborate on idiographic and nomothetic debates or delineate all classes of generalization for the cross-case analyst, we recommend that interested case study researchers explore idiographic generalization (ALLPORT, 1962), analogical generalization (SMALING, 2003), analytic generalization (YIN, 2003), and naturalistic generalization (STAKE, 2005) as alternative forms of generalization that can be invoked to rationalize cross-case analyses. In addition to developing a stance on generalizability, there will be at least three accompanying, practical concerns for case study researchers to attend to before embarking upon their cross-case analysis:

  • preserving the essence of the cases,

  • reducing or stripping the case of context, and

  • selecting appropriate cases to compare. [27]

Preserving the uniqueness of cases. SILVERSTEIN (1988) states that cross-case analysis must preserve the uniqueness of each case even while analyzing that case against others. The concern is that the complexity of meaning in each case might get lost when the content is simplified in order to make comparison possible (TESCH, 1990). While comparing multiple case studies holds great potential to inform theory, RUESCHEMEYER cautions that the researcher must "increase the number of cross-case comparisons without losing the advantage of close familiarity with the complexity of cases" (2003, p.323). The authors' stance, and the stance of others (STAKE, 2006), is that it is possible to learn from both the uniqueness and the commonality of a case. By providing ample contextualized details of the cases and the findings of the cross-case analysis, a researcher can conceivably preserve the uniqueness of a case and convey the value of their engagement with a cross-case analysis. [28]

Contextual stripping. In cross-case analysis, the contextualized origins of each case are in danger of being lost as cases are compared, especially if a variable-oriented approach is adopted. However, according to AYRES, KAVANAUGH and KNAFL (2003), losing some contextual detail may be consistent with the goal of cross-case comparison, which is to identify themes across cases. TESCH (1990) described cross-case comparison as essentially a "decontextualization and recontextualization" of cases. The process is as follows: case study data are separated into units of meaning (decontextualized, because they are separated from the individual cases) and then recontextualized as they are integrated and clustered into themes. The themes, which constitute a reduced data set, can help to explore relationships; the origin of each unit of meaning is less important than its membership in a group of like units. AYRES et al. (2003) referred to this approach as "moving between across- and within-case comparisons" (2003, p.875). Such a cross-case synthesis, according to these authors, achieves its authenticity through immersion within the individual cases. [29]

In a similar approach to cross-case comparison, KNAFL (as cited in AYRES et al., 2003) reduced the contextual stripping in a cross-case analysis of family management styles during illness. KNAFL first identified general themes that shaped the experience of families dealing with illness (searching for commonalities across accounts). Secondly, she delineated variation within the themes (across individual family members), and thirdly, she created a "thematic profile" for each family member and family unit (within-case analysis). Finally, she offered a differentiation of family management styles (across-case analysis of families). Themes such as being a burden ended up playing a role in illness management style. Sub-themes emerged when the accounts of individual family members were compared with that of the family as a unit. Within-case comparisons were represented as narrative case summaries, and cross-case comparisons were displayed as a grid, using a database manager to identify clusters of families with similar configurations. In both AYRES' and KNAFL's approaches to cross-case analysis, attempts to preserve the uniqueness and authenticity of the case were successful. [30]

Selection of cases. Generally, in variable-oriented approaches the number of cases to compare should be high, whereas in case-oriented approaches it is generally low (but not less than two). In both instances, the researcher is advised to search for comparable cases until satisfied that the search is no longer yielding new insights, or until theoretical saturation has been achieved. Variable-oriented researchers support comparison of cases that are fairly similar, in order to achieve a level of control that can foster predictability and idiographic or nomothetic generalizations. Case-oriented research can support the comparison of cases that are ostensibly very different. Earlier, the example of a cross-case comparison of school principals and CEOs was introduced. At face value, such a comparison might be challenging, since the contexts and roles are so different. However, a principal focused on cultivating citizenship and academic achievement may have something in common with a CEO who runs a car manufacturing plant and is focused on the production and performance of vehicles: both are attempting to motivate individuals to produce a set of outcomes in a certain time span. It is possible to imagine both case studies featuring interviews with a sampling of students and workers discussing their perceptions of accomplishment and alienation with regard to their duties and responsibilities. The selection of cases, and their corresponding units of analysis, is an important methodological consideration in case study comparisons and should be related to the overall goals of the case study researcher. [31]

5. The Foresee Database Project

With the above techniques and considerations regarding cross-case analysis in mind, an online database, known as Foresee or 4C (Cross-Case Comparisons and Contrasts), was developed. The Foresee database utilizes Web 2.0 capacities to bring together case study researchers to perform cross-case analysis and, thus, to mobilize new knowledge. The long-term objectives of the database are, first, to promote cross-case analysis as a research method that facilitates the comparison of commonalities and differences in cases and, second, to establish an online research community that facilitates dialogue and the mobilization of case study knowledge. In this section, the design principles and features of the 4C database are described. [32]

5.1 Design principles of the 4C database

The aforementioned assumptions regarding how researchers develop expertise informed the creation of four design principles to guide the development of the database. The four design principles are:

  • analyzing cases from different contexts can build common ground between case study researchers from multiple disciplines and diverse backgrounds;

  • cross-case analysis involves a set of cases that are indexed, accessible, and can be probed visually and conceptually by the researcher;

  • cross-case analysis can be facilitated by constructing meaningful linkages and relationships, and

  • in a cross-case analysis, researchers should attempt to preserve the richness and uniqueness of the case. [33]

These four principles were incorporated in the design of the database. The first and second principles were incorporated by applying the technique of stacking comparable cases: each case is condensed and stacked above or below the other cases in a "meta-matrix" view, where cases are visualized in a table according to set fields. The matrix view offers a first-pass visual comparison of cases. The meta-matrix also supports hyperlinking to uncondensed versions of each case, preserving the case in its original form. [34]

The third design principle incorporates RAGIN's qualitative comparative analysis. This method offers an attractive strategy for using the Boolean operators AND, OR, and NOT to include some case studies and exclude others. For example, case studies that address both "Education" and "Chemistry" can be selected from the database. The third principle also dictates the use of "tags". Tags are personal, adaptable, and descriptive terms that can be applied to a body of information as metadata (CAMERON, 2004; HAMMOND et al., 2005; MATHES, 2004; SACCO, 2004). A case study researcher could, for example, create a tag such as "media" and apply it to his or her case study data on public anti-smoking advertisements. Another researcher could tag the same case "social marketing", or could determine that their own case study data contain similar parameters and tag that information as "media". In this way, any researcher can locate all the cases, and all the researchers, that employ a given tag. Thus, tagging can facilitate cross-case comparisons of media campaigns aimed at reducing smoking, or of media-based health promotion campaigns. [35]
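The combination of tags and Boolean operators described above can be sketched as a simple selection over a tagged case index. The index, tag names, and function below are illustrative assumptions, not the actual 4C interface.

```python
# Hypothetical index mapping case identifiers to their tag sets.
index = {
    "case-1": {"media", "smoking", "education"},
    "case-2": {"media", "social marketing"},
    "case-3": {"chemistry", "education"},
}

def select(index, all_of=(), any_of=(), none_of=()):
    """Boolean selection over tagged cases: AND via all_of,
    OR via any_of, NOT via none_of."""
    hits = []
    for case, tags in index.items():
        if (all(t in tags for t in all_of)
                and (not any_of or any(t in tags for t in any_of))
                and not any(t in tags for t in none_of)):
            hits.append(case)
    return sorted(hits)

select(index, all_of=["education", "chemistry"])  # ['case-3']
```

Because tags are user-driven, two researchers' indexes for the same case may differ, which is precisely what makes comparing tag vocabularies informative.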

The fourth design principle draws on PRZEWORSKI and TEUNE's (1982) most different research design, which argues for comparing diverse sets of cases because these can generate unforeseen discoveries. To promote such discovery, the authors opened the database to the possibility of researchers building personal libraries of cases; researchers are also required to submit cases. The possibility of building a collective as well as a personal library builds capacity by offering cases from many fields of endeavor. There are pragmatic and theoretical reasons for being able to do both, which will be discussed in the next section on the affordances of the 4C database. [36]

5.2 Affordances of the 4C database

Case study researchers can access the 4C database upon registration. The database is currently housed on a university server. Figure 1 shows the splash page each case study researcher encounters after logging in and becoming a member of the system.

Figure 1: 4C splash page [37]

Firstly, the 4C database records seven aspects of case studies that are submitted: title, focus of study, purpose, research tools, what was learned, related studies, and tags. These seven aspects, or categories, are based on the outcomes of a user study with case study researchers in 2004, and establish common ground among the 4C collective. The case categories are also congruent with most primary journal publishing requirements. Using our example, the "media" tag fits under the tag case category where it can be accessed and analyzed by other researchers much like a keyword. [38]
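A case record structured around these seven aspects might look as follows; the attribute names are illustrative assumptions, not the actual 4C schema.

```python
from dataclasses import dataclass, field

@dataclass
class CaseRecord:
    """One submitted case, under the seven aspects recorded by 4C:
    title, focus of study, purpose, research tools, what was learned,
    related studies, and tags."""
    title: str
    focus_of_study: str
    purpose: str
    research_tools: str
    what_was_learned: str
    related_studies: list = field(default_factory=list)
    tags: set = field(default_factory=set)

record = CaseRecord(
    title="Public anti-smoking advertisements",
    focus_of_study="health communication",
    purpose="examine message framing in media campaigns",
    research_tools="interviews, document analysis",
    what_was_learned="framing shaped audience response",
    tags={"media", "smoking"},
)
```

The tags field behaves much like a keyword list: other researchers can search and compare on it, as described above.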

4C members can view the collection of submitted and archived case studies in a "list" or a "meta-matrix" view. Clustering the cases in a central visual display affords what MILES and HUBERMAN (1994) call the "first deep dive" into cross-case analysis; that is, researchers can scroll through the meta-matrix, look across rows or down columns, and perform a squint analysis. This gives 4C members the opportunity to scan potential cases for comparison. [39]

Secondly, 4C members can search the database and select candidate cases for comparison using the search functions, conducting their search by title, author, content, or researcher recommendation. Thirdly, once cases have been selected for comparison, 4C members have access to two methods for cross-case analysis that build relationships among cases: a set of comparison tools allows members to use Boolean terms and to code multiple cases with tags. [40]

Finally, the 4C database helps to enable the publication of cross-case research by offering a multi-way dialogue forum among prospective researchers as well as public annotation of case studies. A set of screenshots is included to illustrate these affordances. [41]

Collective Capacity. As Figure 2 depicts, 4C members contribute cases to a collective case study archive. The cases are indexed chronologically as well as by tags. Researchers can gain access to submitted cases and works in progress from a wide range of disciplines, and the public archiving of researchers' cases, tags and researchers' notes facilitates greater learning from cases. The contact information of the researchers who have submitted their case studies to the 4C database is available for other researchers, which enables researchers to further discuss cases and explore research connections.

Figure 2: Case archive in list view [42]

Personalization of database. Researchers can build their own personal library of cases suitable for work on contained research projects (see Figure 3). Researchers can also attach personal notes to each submitted case that are not viewable by the community, and they can create their own tags for cases, whether the same as or different from the submitter's tags.

Figure 3: A 4C member's personal library of cases [43]

Visual display. The 4C database offers a visual display that presents the studies as a "meta-matrix", where each study's text is structured and indexed into separate fields or case categories. Figure 4 shows how a visual comparison is supported within a meta-matrix view of 4C case studies.

Figure 4: A meta-matrix view, see the PDF file for an improved version [44]

Advanced search and select tools. 4C's option to compare and contrast the same case categories between different studies with the use of Boolean search terms allows a researcher to find patterns across the database. Figure 5 outlines all the selection choices, and Figure 6 shows how Boolean search terms can be applied to compare cases.

Figure 5: Selection choices

Figure 6: Boolean search [45]

Conceptual and Conjunctive Relationships. 4C's use of tags helps researchers make comparisons. Tags provide not only a way of locating and comparing cases; user-driven naming of relationships via tags also increases the flexibility of typical databases. The researcher can use terms to link various cases and search case studies by these terms (GRUDIN, 1994; GUERRERO & FULLER, 2001; PAHLEVI & KITAGAWA, 2003; SCHACHTER, n.d.; STAR, 1998; YEE, SWEARINGEN, LI & HEARST, 2003). The database enables the researcher to view, navigate, and subscribe to case content by researchers' tag(s). It also makes it possible to see all the tags that have been applied to a case study by all researchers, so one researcher can compare their tags to another researcher's to learn what kinds of terms are applied to certain information. Thus, using the 4C database, researchers can create a personal view of all indexed (i.e., tagged) content, attach personalized tags to any indexed item, and view, navigate, and subscribe to indexed content by researcher, tag(s), or any combination of these. [46]

As Figure 7 illustrates, the researcher can also see all the tags that other researchers have applied to a case study; tags for a case study are automatically available to the researcher, who can read through tag lists to find other relevant case studies. Figure 7 also shows "related tags", that is, all the tags that co-occur with the tag "literacy". Tagging has the potential to develop meaningful links across cases: the cases are indexed when they are stored in memory (or entered into the database), but accessing this knowledge also reflects the conditions under which the data are retrieved. A previous experience or case may be reframed in a way that resembles the current one and retrieved as it was re-conceptualized, or "tagged".

Figure 7: The "Literacy" tag "and" other tags [47]

Tutorials and support. The 4C database offers tutorials on how to use the database to conduct cross-case analysis. Although researchers can perform cross-case analysis in multiple ways from every page on the database, the database scaffolds the process of cross-case analysis for researchers by:

  • suggesting a trajectory involving: clustering cases, performing a squint analysis, selecting comparable cases, comparing cases, and publishing;

  • providing icons and a site map to ease navigation through the database;

  • offering a frequently asked questions page, which provides definitions of terms, and

  • including contacts to site administrators. [48]

6. How 4C is Different from Computer-Assisted Qualitative Data Analysis Tools and Online Repositories

The 4C database is different from computer-assisted qualitative data analysis software (CAQDAS) and from database repositories such as libraries. Researchers often utilize CAQDAS to code their data, construct categories, and create themes. The result of analyzing data with CAQDAS is coded material that typically resides on an individual researcher's computer; cross-case analysis, if conducted, is often done by hand after the data have been coded. With the 4C database, however, the researcher can combine numerous case studies on any topic of interest (e.g., science education, urban sustainability), and 4C members can utilize the database's distributed functionality to perform cross-case analysis from every page. Furthermore, the 4C database enables researchers to find other researchers with similar interests; it can establish dialogue among researchers in a community and create an online environment that facilitates the discovery and sharing of case knowledge. In its support of collaboration among case study researchers, the 4C database is different from CAQDAS. [49]

The 4C database also compares favorably with scholarly online research library databases (e.g., ERIC, EDUDATA, CiteSeer, Medline, CINAHL, Web of Science, Canadian Education Fulltext, Pro-Quest Digital Dissertations, and The National Library of Canada) and e-libraries where people can post their research (e.g., SSRN). Existing library databases lack user-driven search terms as well as effective ways of facilitating the comparison of case studies. Library searches are generally limited to metadata such as keyword, institution, author, or subject words, and do not adequately support the locating of meaningful case study research or cross-case analyses. Moreover, the traditional indexing methods used to retrieve and analyze research studies do not include research in progress, do not permit uploading and editing of data by the author, and do not involve researchers in building a community of users based on identifying and recommending research. The 4C database, on the other hand, supports works in progress and allows further updates of case studies, and it affords researchers the opportunity to add their perspectives or comments to case studies and to share these perspectives with other researchers. The user-driven naming of relationships via tags increases the flexibility and expansion of the 4C database by enabling the researcher to link cases with meaningful terms and to search case studies by these terms. Finally, unlike library repositories, 4C allows different individuals to present and recommend selected case studies of interest on a common problem and facilitates collaboration between these individuals. [50]

The authors know of no other currently available, single, online tool that supports collaboration amongst case study researchers or allows them to create communities of interest, contribute case study data, discover and analyze existing case studies, perform cross-case analyses, recommend case studies to one another, and foster dialogue about case studies. [51]

7. Conclusion

In this paper, the authors suggest that the fundamental power of cross-case analysis emerges from understanding how expertise can be built and shared. Turning to theories of how people learn, we detected a form of cognitive cross-case analysis as a plausible hypothetical process involved in building expertise. We proposed that case study researchers have mobilized their knowledge of the original case when their cross-case analysis is made public. To support the mobilization of case study knowledge, we introduced the Foresee (4C) Database. The design of the Foresee Database was based upon: 1) the above hypotheses on the development of expertise; 2) known techniques in cross-case analysis, such as stacking and qualitative comparative analysis; and 3) emerging Web 2.0 capacities, such as tagging and multi-way interactivity, to construct meaningful relationships. [52]

In terms of user perceptions, findings from case study researchers have been encouraging. The authors asked researchers to comment anonymously on the 4C database after using it. Three typical comments were:

"This [cross-case] comparison makes it possible for me to develop expertise regarding home-school literacy practices, it helps refine my concepts, and it helps me think about theory in terms of validity across similar events but in different contexts [i.e., Contexts from different studies in the database]. It let me see patterns between concepts and among data. It afforded me the opportunity to take a closer look at my study, and in particular, to look more closely at my data."

"The comparison of those studies [within the database] definitely brought new insight for me. The [comparison] showed me how all of them were carrying the same idea that there must be some kind of meaning to tobacco use prevention or control program in order for it to be successful."

"The potential value of the cross-case analysis that I looked at involved seeing the notion of processes and practices in a new light. This comparison [with other cases in the database] has allowed me to see that my own study is much more built upon literary practices than I had realized." [53]

More research and application await the authors. Having taken steps to locate a theoretical framework and develop a set of design principles, we invite others to join this dialogue on cross-case analysis and knowledge mobilization. [54]


References

Allport, Gordon (1962). The general and the unique in psychological science. Journal of Personality, 30(3), 405-423.

Ausubel, David; Novak, Joseph & Hanesian, Helen (1978). Educational psychology: A cognitive view. New York, NY: Holt, Rinehart, and Winston.

Ayres, Lioness; Kavanaugh, Karen & Knafl, Kathleen A. (2003). Within-case and across-case approaches to qualitative data analysis. Qualitative Health Research, 13(6), 871-883.

Becker, Howard S. (1990). Generalizing from case studies. In Elliot W. Eisner and Alan Peshkin (Eds.), Qualitative inquiry in education: The continuing debate (pp.233-242). New York, NY: Teachers College Press.

Cameron, Richard (2004). CiteULike About page. Retrieved May 27, 2007, from:

Cronbach, Lee J. (1975). Beyond the two disciplines of scientific psychology. American Psychologist, 30, 116-127.

Denzin, Norman K. (1989). Interpretive interactionism. In Gareth Morgan (Ed.), Beyond methods: Strategies for social research (pp.129-146). Beverly Hills, CA: Sage Publications.

Donmoyer, Robert (1990). Generalizability and the single case study. In Elliot W. Eisner, & Alan Peshkin (Eds.), Qualitative inquiry in education: The continuing debate (pp.175-200). New York, NY: Teachers College Press.

Dreyfus, Hubert L. & Dreyfus, Stuart E. (1988). Mind over machine: The power of human intuition and expertise in the era of the computer. New York, NY: Free Press.

Eckstein, Harry (2002). Case study and theory in political science. In Roger Gomm, Martyn Hammersley, & Peter Foster (Eds.), Case study method: Key issues, key texts (pp.119-163). London: Sage Publications.

Flyvbjerg, Bent (2001). Making social science matter: Why social inquiry fails and how it can succeed again. Cambridge: Cambridge University Press.

Foreman, Paul G. (1948). The theory of case studies. Social Forces, 26(4), 408-419.

George, Alexander L. & Bennett, Andrew (2005). Case studies and theory development in the social sciences. Cambridge, MA: MIT Press.

Goetz, Judith P. & LeCompte, Margaret D. (1984). Ethnography and qualitative design in education research. Orlando, FL: Academic Press.

Goldstone, Jack A. (1997). Methodological issues in comparative macrosociology. Great Britain: JAI Press.

Grudin, Jonathan (1994). Groupware and social dynamics: Eight challenges for developers. Communications of the ACM, 37(1), 92-105.

Guba, Egon G. & Lincoln, Yvonna S. (1981). Effective evaluation. San Francisco: Jossey-Bass.

Guerrero, Luis A. & Fuller, David A. (2001). A pattern system for the development of collaborative applications. Information and Software Technology, 43(7), 457-467.

Hammond, Tony; Hannay, Timo; Lund, Ben & Scott, Joanna (2005). Social bookmarking tools (I): A general review. D-Lib Magazine, 11(4). Retrieved May 27, 2007, from:

Khan, Samia (2007). The case in case-based design of educational software: A methodological interrogation. Educational Technology Research & Development, 1-25.

Kolodner, Janet L. (1993). Case-based reasoning. San Mateo, CA: Morgan Kaufmann.

Kolodner, Janet L.; Camp, Paul J.; Crismond, David; Fasse, Barbara; Gray, Jackie; Holbrook, Jennifer; Puntambekar, Sadhana & Ryan, Mike (2003). Problem-based learning meets case-based reasoning in the middle-school science classroom: Putting learning by design™ into practice. Journal of the Learning Sciences, 12(4), 495-547.

Lincoln, Yvonna S. & Guba, Egon G. (2000). The only generalization is: There is no generalization. In Roger Gomm, Martyn Hammersley, & Peter Foster (Eds.), Case study method (pp.27-44). London: Sage Publications.

Mathes, Adam (2004). Folksonomies: Cooperative classification and communication through shared metadata. Retrieved September 01, 2006, from:

Miles, Matthew B. & Huberman, A. Michael (1994). Qualitative data analysis. Thousand Oaks, CA: Sage Publications.

Mill, John Stuart (1843). A system of logic. London: John W. Parker.

Molenaar, Peter (2004). A manifesto on psychology as idiographic science: Bringing the person back into scientific psychology, this time forever. Measurement, 2(4), 201-218.

Pahlevi, Said Mirza & Kitagawa, Hiroyuki (2003). TAX-PQ: Dynamic taxonomy probing and query modification for topic-focused Web search. Proceedings of the Eighth International DASFAA Conference on Database Systems for Advanced Applications, 91-100.

Peattie, Lisa (2001). Theorizing planning. Some comments on Flyvbjerg's rationality and power. International Planning Studies, 6(3), 257-262.

Przeworski, Adam & Teune, Henry (1982). The logic of comparative social inquiry. Malabar, FL: Robert E. Krieger Publishing Co.

Ragin, Charles (1993). Introduction to qualitative comparative analysis. In Thomas Janoski & Alexander Hicks (Eds.), The comparative political economy of the welfare state (pp.299-319). New York: Cambridge University Press.

Ragin, Charles (1997). Turning the tables: How case-oriented research challenges variable-oriented research. Comparative Social Research, 16, 27-42.

Rueschemeyer, Dietrich (2003). Can one or a few cases yield theoretical gains? In James Mahoney & Dietrich Rueschemeyer (Eds.), Comparative historical analysis in the social sciences (pp.305-336). Cambridge, MA: Cambridge University Press.

Sacco, Giovanni Maria (2004). Uniform access to multimedia information bases through dynamic taxonomies. Proceedings of the Sixth IEEE International Symposium on Multimedia Software Engineering, 320-328.

Schachter, Joshua. Retrieved May 27, 2007, from:

Schank, Roger C. & Berman, Tamara (2002). The pervasive role of stories in knowledge & action. In Melanie Green, Jeffrey Strange, & Timothy Brock (Eds.), Narrative impact: Social and cognitive foundations (pp.287-314). Mahwah, NJ: Erlbaum & Associates.

Schofield, Janet W. (1990). Increasing the generalizability of qualitative research. In Elliot W. Eisner & Alan Peshkin (Eds.), Qualitative inquiry in education: The continuing debate (pp.201-232). New York: Teachers College Press.

Silverstein, A. (1988). An Aristotelian resolution of the idiographic versus nomothetic tension. American Psychologist, 43(6), 425-430.

Smaling, Adri (2003). Inductive, analogical, and communicative generalization. International Journal of Qualitative Methods, 2(1), 1-31.

Stake, Robert (2005). Qualitative case studies. In Norman K. Denzin & Yvonna S. Lincoln (Eds.), The Sage handbook of qualitative research (3rd ed., pp.433-466). Thousand Oaks, CA: Sage Publications.

Stake, Robert (2006). Multiple case study analysis. New York, NY: Guilford Press.

Star, Susan Leigh (1998). Grounded classification: Grounded theory and faceted classification. Library Trends, 47(2), 218.

Stretton, Hugh (1969). The political sciences: General principles of selection in social science and history. London: Routledge & Kegan Paul.

Tesch, Renata (1990). Qualitative research: Analysis types and software tools. New York, NY: Falmer.

VanWynsberghe, Robert & Khan, Samia (2007). Redefining case study. International Journal of Qualitative Methods, 6(2), 1-10.

Yee, Ka-Ping; Swearingen, Kirsten; Li, Kevin & Hearst, Marti (2003). Faceted metadata for image search and browsing. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 401-408.

Yin, Robert (1994). Case study research: Design and methods (2nd ed.). Thousand Oaks, CA: Sage Publications.

Yin, Robert (2003). Case study research: Design and methods (3rd ed.). Thousand Oaks, CA: Sage Publications.


Samia KHAN is a professor in the Curriculum Studies Department, Faculty of Education, at the University of British Columbia, Canada. Substantive research interests include case study research, knowledge mobilization, and pedagogical and technological innovations that are designed to enhance science education learning, especially among women.


Dr. Samia Khan

Department of Curriculum Studies
Faculty of Education
University of British Columbia
2125 Main Mall
Neville Scarfe Building
Vancouver, BC V6T 1Z4

Phone: 604-822-5296
Fax: 604-822-4714



Robert VANWYNSBERGHE is a professor in the School of Human Kinetics at the University of British Columbia, Canada. Substantive research interests include case study research and community mobilization for the purposes of achieving sustainability and health promotion goals.


Robert VanWynsberghe, PhD

Human Kinetics and Educational Studies
Rm. 156g Aud. Annex A
1924 West Mall
University of British Columbia
Vancouver, BC V6T 1Z2

Phone: 604-822-3580
Fax: 604-822-5884



Khan, Samia & VanWynsberghe, Robert (2008). Cultivating the Under-Mined: Cross-Case Analysis as Knowledge Mobilization [54 paragraphs]. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 9(1), Art. 34,

Revised 7/2008