
1 The Science of Words and the Science of Numbers: Research Methods as Foundations for Evidence-based Practice in Health

Pranee Liamputtong

Chapter objectives

In this chapter you will learn:
• about evidence and evidence-based practice
• about different research designs in health
• the nature of qualitative and quantitative approaches
• the usefulness of mixed methods
• about rigour, reliability and validity in research
• about sampling issues

Key terms

Bias
Convenience sampling
Constructivism
Data saturation
Effectiveness/Efficacy
Epistemology
Ethnography
Evidence
Evidence-based practice
Knowledge
Knowledge acquisition
Metasynthesis
Mixed methods
Non-probability sampling
Ontology
Positivism
Pragmatism
Probability sampling method
Purposive sampling
Qualitative research
Quantitative research
Reliability
Research participant
Rigour
Systematic review
Validity


Introduction

Knowledge: An accepted body of facts or ideas acquired through the use of the senses or reason, or through research methods.

Knowledge acquisition: The most efficient way of ‘knowing something’ is through research findings, which have been gathered through the use of research methods.

Knowledge is essential to human survival. Over the course of history, there have been many ways of knowing, from divine revelation to tradition and the authority of elders. By the beginning of the seventeenth century, people began to rely on a different way of knowing—the research method. (Grinnell et al. 2011a, p. 16)

According to Grinnell and colleagues (2011a, p. 7), knowledge is ‘an accepted body of facts or ideas which is acquired through the use of the senses or reason’. In the old days, we used to believe that the earth was flat and our belief came about through those who were in ‘authority’, who told us so, or because people in our society had always believed that the world was flat. Now we know that the earth is spherical because scientists have travelled into space to observe it from this perspective. However, Grinnell and colleagues argue that the most efficient way of ‘knowing something’ (knowledge acquisition) is through research findings, which have been gathered through the use of research methods. What has knowledge got to do with evidence and evidence-based practice? I contend that it is through our knowledge that evidence can be generated. This evidence can then be used for our practice. Without knowledge, there will not be evidence that we can use. But how can we find knowledge? For scientists and health practitioners, the answer is through research and research methods. According to Grinnell and colleagues (2011b, p. 20), ‘the research method of knowing’ comprises two ‘complementary research approaches’: the qualitative approach and the quantitative approach. Qualitative research relies on ‘qualitative and descriptive methods of data collection’. Data are presented in the form of words, and sometimes as diagrams, or drawings, but  not as numbers. The quantitative approach, on the other hand, ‘relies on quantification in collecting and analysing data and uses descriptive and inferential statistical analyses’. Data obtained in a quantitative study are presented in the form of numbers, not in the form of words, as is the case for the qualitative approach. These two approaches will be discussed later in this chapter.

Evidence and evidence-based practice

It is our belief that you must know the basics of research methodology to even begin to use the concept of evidence-based practice effectively. (Grinnell & Unrau 2008, p. v)

Evidence: Evidence in the context of EBP is what results from a systematic review and appraisal of all available literature relevant to a carefully designed question and protocol.

This quotation expresses the main reason why this book has been written. Thus it is intended to provide the foundations for evidence-based practice (EBP) in health. As I have suggested, evidence can be derived from knowledge and knowledge can be obtained through research. Evidence, according to Manuel and colleagues (2011, p. 146), is ‘information’ that can be used to support and guide practices, programs and policies in health and social care in order to enhance the health and well-being of individuals, families and communities. For example, you might be interested in depression among young people and the most effective way to assess their risk for suicide and to prevent it. Types of evidence that you may be interested in may include:
• perceptions and experiences of depression and suicide among young people
• factors that are related to the onset of depression in young people

• risk factors and protective factors that are relevant to depression and suicide among young people
• evidence-based methods that can be used to carry out an appropriate assessment of suicide risk
• strategies or interventions that can be used in practice
• prevention programs and policies that can have a positive impact on these health and social problems.

As you can see, there are several types of evidence that you can use to find answers to the questions about the health issue in which you are interested. Now it has to be asked which type is the ‘best’ evidence that you can use and how do you obtain this evidence? This depends on the questions you ask. It has been debated among researchers and practitioners that there is no universal way to judge which evidence is the best (Altheide & Johnson 2011). Researchers and practitioners come from different disciplines and surely will have different perspectives on the types of evidence they see as useful or not useful for their research purposes and professional practices (Altheide & Johnson 2011; Manuel et al. 2011). What is seen as the best evidence for some researchers and practitioners may not be seen as such by others. It is at this point that I wish to bring up the issue of evidence-based practice.

Fundamentally, ‘evidence-based practice’ in the area of health care refers to the process that includes finding empirical evidence regarding the effectiveness and/or efficacy of various treatment options and then determining the relevance of those options to specific client(s). This information is then considered critically, when developing the final treatment plan for the client or clients. (Mullen et al. 2011, p. 163; see also Chapters 15, 16, 17)

Evidence-based practice: A process that requires the practitioner to find empirical evidence about the effectiveness or efficacy of different treatment options and then determine the relevance of that evidence to a particular client’s situation.

One approach for evaluating evidence within the model of EBP is through a hierarchical ranking system (Manuel et al. 2011, p. 154; see Chapter 17). Within this system, evidence is evaluated according to the research design that was used to generate it. For instance, when evaluating a health care intervention, a well-designed experiment, specifically a randomised controlled trial (RCT), or better, the systematic review of a number of RCTs, is perceived as the ‘gold standard’ (Hawker et al. 2002; Evans 2003; Aoun & Kristjanson 2005; Packer 2011; see Chapters 12, 14, 15, 17, 18). However, the hierarchical ranking system may ignore some of the limitations of RCTs, and neglect observational studies (Aoun & Kristjanson 2005; Manuel et al. 2011; Packer 2011; see also Chapter 12). For instance, confidence in the RCT is based on knowing that the research was correctly undertaken (see Chapter 15), but more often than not, published research using RCTs presents conflicting findings (see Chapter 12). Some researchers argue that a hierarchical approach is based solely on seeing whether the intervention works as intended, or on the measurement of the efficacy of intervention ‘with little attention to the appropriateness and feasibility of the interventions in the real practice world’ (Manuel et al. 2011, p. 154). More importantly, as Packer (2011, p. 37, original emphasis) argues, ‘the gold standard also prevents researchers from studying, let alone questioning, the forms of life in which people find themselves and in which things are found. People are not in fact independently existing entities. We exist together, in shared forms of life.’



Effectiveness/efficacy: A measure used to determine whether the treatment or intervention has an intended or expected outcome. In medicine, however, it refers to the ability of a treatment or intervention to reproduce a desired outcome under ideal circumstances.


Ethnography: A research method that focuses on the scientific study of the lived culture of groups of people, used to discover and describe individual social and cultural groups.

More importantly, within this hierarchical system, qualitative evidence is often placed at the bottom of the hierarchy and grouped with expert opinion (Grypdonck 2006; Savage 2006; Manuel et al. 2011). In this model, the contribution to evidence-based practice of findings from qualitative research is undervalued, and at worst discounted (Gibson & Martin 2003; Aoun & Kristjanson 2005; Grypdonck 2006; Denzin 2009, 2011; Altheide & Johnson 2011). Qualitative research, despite its increasing contributions to the evidence base of health and social care, is still underrepresented in some health care areas that place a high value on evidence from the hierarchical system (Johnson & Waterfield 2004). This is in part, as Gibson and Martin (2003, p. 353) suggest, due to ‘mistaken attempts to evaluate qualitative studies according to the evidence-based hierarchy, where the status of qualitative research is not acknowledged’. Many qualitative researchers argue that this is flawed, as qualitative studies also employ rigorous methods of data collection and analysis (Johnson & Waterfield 2004; Annells 2005; Hammersley 2005; Denzin 2009, 2011). Savage (2006, p. 383), for example, argues that ethnography, one of the qualitative research methods, ‘is particularly valuable because of the attention that it gives to context and its synthesis of findings from different methods’. More importantly, ethnography provides ‘a holistic way of exploring the relationship between the different kinds of evidence that underpin clinical practice’ (see also Chapter 6; Altheide & Johnson 2011).


It is argued that the hierarchical model of evidence is only one way of organising different types of evidence. It is important for health researchers and practitioners to know this, so that they can evaluate the quality of evidence that can be found with respect to a specific health issue. And no doubt it can be very useful for some health practices, for example in therapeutic science (see Chapters 15, 17 for example). However, Manuel and colleagues (2011, p. 154) believe that ‘the decision on what evidence to use should be placed in context with your research study’. Researchers and practitioners need to consider the relevance and feasibility of evidence and whether the evidence accords with the values and preferences of the clients. And this is what I  advocate in this chapter: that we need to consider different types of evidence and that this evidence can be derived from the findings of different types of research. This book will give readers an understanding of the different methods that researchers and practitioners can use or draw on in producing evidence: qualitative methods (see Part II), quantitative methods (see Part III), mixed methods (see Chapters 20, 21) and collaborative approaches (see Chapter 22). It is worth noting that EBP has emerged from the long-standing commitment among health practitioners to social research and science. But there has been a significant change in how research and practice are related. According to Mullen and colleagues (2011, p. 173), in the past, research and practice were seen as separate activities and/or as the roles of two different professions. Research was undertaken by researchers to add to the knowledge base, which was eventually drawn upon by practitioners to provide evidence on which to base their practice. Now, these differences are blurred and research and practice are often combined. In EBP, many of the practice questions largely resemble the essential parts of research questions: ‘We search for evidence—especially research evidence—to answer our practice questions using established research criteria when the evidence comes from research studies, and we collect data on the processes and outcomes of our interventions’ (p. 173). In evidence-based practice, practitioners need to be clear about what is known and not known about any health problem or health practice that will be ‘best’ for their clients (Mullen et al. 2011). But all too often we know little about particular health problems of some population groups, or about treatment options that are not empirically based. Although there is research evidence
that practitioners may find in existing literature, Mullen and colleagues (2011, p. 176) argue that there are still many health issues that remain unknown to us. Currently, EBP does not apply to many of the health issues of certain population groups, for example certain ethnic minorities and indigenous groups, recent immigrants and refugees, gays and lesbians, rural communities, and people with uncommon or particularly challenging health problems. In her analysis of the impact of evidence-based medicine (EBM) on vulnerable or disadvantaged groups, Rogers (2004, p. 141) points out that evidence-based medicine ‘turns our attention away from social and cultural factors that influence health and focuses on a narrow biomedical and individualistic model of health. Those with the greatest burden of ill health are left disenfranchised, as there is little research that is relevant to them, there is poor access to treatments, and attention is diverted away from activities that might have a much greater impact on their health.’ It is clear that there is a need for more research with different groups of people as part of the EBP process. Also much of the EBP focus, in terms of both research and application, has been centred on only a subset of health issues. Research is needed in other fields, in both health issues and practices. More importantly, depending on the research or practice question, practitioners may need evidence other than that which relates to the efficacy of interventions, to inform their practice (Aoun & Kristjanson 2005; Manuel et al. 2011). Evidence that we use in evidence-based practice cannot and should not be based solely on the findings of RCTs. Rather, it should be derived from many sources (Hawker et al. 2002; Shaw 2011). Some health topics or issues are not appropriate for an RCT (see e.g. Aoun & Kristjanson 2005). Fahy (2008, p. 2), for example, contends that most maternity care practices will never be found by RCTs. However, evidence for practice in midwifery is needed so that midwives will be able to help women ‘to make the best decisions for themselves by taking the best available evidence into account’. She also suggests that ‘a more expansive definition of evidence and evidence-based practice’ is needed. Additionally, there are many ethical concerns regarding RCTs (see Chapter 2). For instance, you may be interested in knowing about the meaning and interpretation of body weight because there have been higher rates of diabetes or anorexia nervosa in your city, or you may need to know about the understanding of homelessness among poor families and how they deal with it, because you have noticed that there are increasing numbers of homeless young people in poorer areas of your city. The ‘best’ evidence for these issues will not be generated by RCTs but by qualitative research (see Chapters 8, 16, 18, 22). These scenarios illustrate situations where you need to look for other types of evidence. Therefore, if there is no available evidence that you can find from systematic reviews or from other sources such as the relevant literature, evidence can be obtained by gaining knowledge through your own research. As Shaw (2011, p. 20) contends, ‘“valid scientific knowledge” can take many forms’. In this book, I argue that evidence can be generated by both qualitative and quantitative research (see also Beck 2009). 

Systematic review: A comprehensive identification and synthesis of the available literature on a specified topic. In a systematic review, literature is treated like data.

No doubt, most health care providers will trust the so-called ‘hard’ evidence obtained through quantitative approaches such as surveys with closed-ended questions, clinical measurements and RCTs (see chapters in Part III). As I have pointed out, the quantitative approach is seen as being empirical science and as being more systematic than qualitative research, so the findings of this approach are regarded as more reliable. But I argue that evidence derived from the qualitative approach can help you to understand the issue and to use the findings in your practice. Qualitative research provides evidence that you may not be able to obtain from quantitative research or from a systematic review of quantitative research. Seeley and colleagues (2008), for example, point out that the quantitative part of their research, which involved more than 2000 participants, failed to provide a good understanding of some of their
findings regarding the impact of HIV and AIDS on families. It was only through the life histories of 24 families that they were able to explain these findings in a more meaningful way. Their study clearly points to the importance of qualitative evidence in health care and practice. Indeed, many researchers have argued that ‘qualitative research findings have much to offer evidencebased practice’ (Hawker et al. 2002, p. 1285; see also Grypdonck 2006; Jack 2006; Daly et al. 2007; Meadows-Oliver 2009; Chapters 8, 18). As Sandelowski (2004, p. 1382) puts it, ‘Qualitative research is the best thing to be happening to evidence-based practice’. Within the emergence of evidence-based practice in health care, Grypdonck (2006, p. 1379) contends that qualitative research contributes greatly to the appropriateness of care. She argues that health practitioners need to have a good understanding of what it means to be ill, to live with an illness, to be subject to physical limitations, to see one’s intellectual capacities gradually diminish, or to be healed again, to rise from [near] death after a bone marrow transplant, leaving one’s sick life behind, to meet people who take care of you in a way that makes you feel really understood and really cared for.

Practitioners cannot obtain knowledge from existing literature in order to address these crucial issues of health and illness. Such knowledge can only be gained through the integration of research into their daily work (see Chapter 8, for example). Surely, by gaining a better understanding of the lived experience of patients and clients, health practitioners will be able to provide more sensitive and appropriate care.

Metasynthesis: A generic term that represents the collection of approaches of qualitative research on previous qualitative studies in a field of interest.

I argue here that qualitative enquiry is an essential means for eliciting evidence from diverse individuals, population groups and contexts. In clinical encounter, Knight and Mattick (2006, p. 1084) say this clearly: ‘The inclusion of qualitative research within EBM brings closer the link between individual patients’ perspectives and “scientific” perspectives’. However, there is still a sense of distrust of qualitative research. This is mainly due to a perception that qualitative enquiry is unable to produce useful and valid findings (Hammersley 2008; Torrance 2008, 2011), a perception that stems largely from insufficient understanding of the philosophical framework for qualitative work, which has its focus on meaning and experience, the social construction of reality, and the relationship between the researched and the researcher. Recently, however, we have also witnessed an attempt to synthesise qualitative findings in a form of metasynthesis because this ‘would hold stronger credibility within the evidence-based practice than would individual studies alone’ (Thorne 2009, p. 571). Metasynthesis, according to Zuzelo (2012, p. 500), ‘offers a mechanism to help establish qualitative research as a viable source of evidence for EBP’ (see also Chapter 18). Padgett (2012, p. 193) contends that with the acceptance of metasynthesis of qualitative research in EBP, ‘the pursuit of “what works” in evidence-based practice can be enhanced by examining “what is at work” when individuals and communities experience interventions and report these experiences in their own words’.

Stop and think

• Considering what has been discussed above, what is your opinion regarding evidence and evidence-based health care?
• Should all evidence-based practice be based on an RCT or quantitative research approach only? Why?

• What type of evidence would you need in your own profession? With colleagues who have a different professional background from you, discuss what evidence would be more appropriate for your work and your prospective clients.

Research designs: which one?

Researchers need to consider carefully whether qualitative or quantitative research is best suited to addressing the research problem. Once you have thought this through, you will be able to select a research design that will be appropriate for the questions you ask. For example, if you wish to understand why some young women smoke and you want to learn from them about their perceptions of smoking, gender issues and societal pressure, their needs and concerns about smoking and their body, or if you want to really understand why many working-class men will not stop smoking, can you find your answers by conducting a randomised controlled trial, or a case-control study? Will these methods allow you to find useful answers? On the other hand, if you wish to find out how many young women smoke, or what the prevalence of diabetes is among young children in your local area, can these questions be addressed by the use of a qualitative approach? Before you can answer these questions, you will need to understand what each approach can offer you and what it cannot (its limitations). Hence a good understanding of research methods is essential.

Often we hear students and novice researchers make comments like, ‘I want to do a qualitative research study because I am not very good with numbers’, or ‘I want to use quantitative research because I am not interested in qualitative research’, or, worse, ‘I do not want to use qualitative research because I don’t like it—too many flowery words and not objective enough’. I would suggest that this is not a good way of going about selecting your approach. You need to find out which approach is the best way for you to find answers to your research questions (or to find evidence for practice), and this can be either a qualitative or a quantitative approach. If you cannot find complete answers (or evidence) using either of these approaches alone, you may need to go further and use a mixed methods design.

The choice of research design, according to Bryman (2012, p. 41), must be ‘dovetailed with the specific research question being investigated’. If researchers are interested in how individuals within a specific social group perceive health and illness, a qualitative approach, which allows us to examine how individuals interpret their social world, will be the most appropriate research strategy to use. Also, if researchers are interested in a topic that we know little about, a more exploratory position is preferable. This is when a qualitative approach will serve our needs better, because such approaches are typically associated with the generation of new findings rather than the testing of existing theory (see chapters in Part II). On the other hand, if researchers are interested in finding the causes of a health problem, or its prevalence, for example the rate of diabetes in Australia, a quantitative approach will provide more appropriate answers (see Chapters 13, 14).

Another salient matter relevant to the choice of research designs is the nature of the topic and the characteristics of the individuals or groups being researched. For instance, if the researchers need to talk with hard-to-reach individuals or groups such as those engaged in illicit activities, such as violence or drug dealing, or those living with stigmatised illnesses such as mental health problems and HIV/AIDS, or with Indigenous people, it is unlikely that a quantitative approach would allow them to gain the necessary rapport or the confidence of the participants.

Research in practice

A practical consideration in the choice of research approach

Emma is a podiatrist and owns her practice. She has treated many competitive athletes who come to her with foot injuries, particularly around the time of major events like the Commonwealth and Olympic Games. She does not know the real prevalence of the injuries in her city, so she cannot say exactly what the rate of injuries would be; she only knows that she has treated many athletes. She would like to know about the rate of injury as she needs to prepare her practice in terms of how many podiatrists she should employ and the purchase of essential equipment. Emma also notices that some athletes do not follow her advice about how to avoid foot injuries, or adhere to her treatments, despite the fact that she has followed the recommendations from a systematic review. This has really puzzled her. She wants to know the reasons for this non-compliance. This is the beginning of her research endeavour.

From reading literature on sports injuries, Emma realises that there are different ways in which she can find her answers. If she wants to ascertain the prevalence of foot injuries among competitive athletes, she would need to use a quantitative approach, which would allow her to determine the number of such injuries in her city. But if she wants to understand why the athletes do not follow her advice or adhere to her treatment plans, she must talk to them and allow them to tell their stories, because this will give her in-depth understanding of their concerns and may help her to develop treatment plans that cater better for their needs.

Considering the example above, if you were Emma, how would you go about doing your research in order to find the evidence you need? How would you design your research if you were a public health practitioner, a nurse or a social worker? Discuss your choice of research design.

Ontology and epistemology

Ontology refers to the question of whether or not there is a single objective reality.

Epistemology is concerned with the nature of knowledge and how knowledge is obtained.

In any research undertaking, it is crucial that researchers examine the ontological and epistemological positions that underlie the way in which research is undertaken. Ontology refers to the question of whether or not there is a single objective reality (Denzin & Lincoln 2005; Lincoln et al. 2011). Here ‘reality’ refers to the existence of what is real in the natural or social worlds. If we adopt the ontological standpoint of objective reality, we must take a position of objective detachment and ensure that the research process is free from any bias. Researchers who adopt this position would argue that reality can be accurately captured (Grbich 2013). These researchers will adopt a quantitative approach for their research. Other researchers would reject the position of objective reality. They would argue that it is impossible to carry out research in a detached way, that if we wish to understand the realities and experiences of other people, we must acknowledge our own subjectivities, which include our own beliefs, values and emotions, in the process of carrying out research. These researchers will make use of a qualitative approach for their research. Epistemology is concerned with the nature of knowledge and how knowledge is obtained. It begs ‘the question of what is (or should be) regarded as acceptable knowledge in a discipline’. A central concern is ‘the question of whether the social world can and should be studied according
to the same principle, procedures, and ethos as the natural sciences’ (Bryman 2012, p. 27). There are five major epistemological paradigms that can be used to explain the nature of knowledge (Guba & Lincoln 1994, 2008; Lincoln et al. 2011). These paradigms give different understandings of what reality is in the natural and social worlds, and how we come to know that reality. In this chapter I shall focus on the two paradigms on which qualitative and quantitative approaches are respectively based: constructivism and positivism. A more detailed discussion of research paradigms can be found in Denzin and Lincoln (2005), Willis (2007), Dickson-Swift et al. (2008a) and Lincoln et al. (2011). See also Chapters 6, 8, 10, 20, 21. Constructivism suggests that ‘reality’ is socially constructed. It is also referred to as interpretivism (Bryman 2012). Constructivist researchers believe that there are multiple truths which are individually constructed (Guba & Lincoln 1994, 2008; Lincoln et al. 2011; Grbich 2013). Reality is seen as being shaped by social factors such as class, gender, race, ethnicity, culture and age (Grbich 2013). To constructivist researchers, reality is not firmly rooted in nature, but is a product of our own making. One of the central beliefs of researchers working within this paradigm is that research is a very subjective process, due to the active involvement of the researcher in the construction and conduct of the research (Grbich 2013). Constructivist researchers also argue that ‘reality is defined by the research participants’ interpretations of their own realities’ (Williams et al. 2011, p. 53). Research situated within this paradigm, as Grbich (2013, p. 8) points out, focuses on ‘exploration of the way people interpret and make sense of their experiences in the worlds in which they live, and how the contexts of events and situations and the placement of these within wider social environments have impacted on constructed understanding’.

Constructivism: An epistemology that suggests that ‘reality’ is socially constructed. Constructivist researchers believe there are multiple truths, individually constructed, and that reality is a product of our own making.

Positivism views reality as being independent of our experiences of it, and being accessible through careful thinking, and observing and recording of our experiences.

According to Bryman (2012, p. 28), constructivist researchers hold ‘a view that the subject matter of the social sciences—people and their institutions—is fundamentally different from that of the natural sciences’. When the social world is studied, it ‘requires a different logic of research procedure, one that reflects the distinctiveness of humans as against the natural order’ (p. 28). Within this constructivist paradigm, researchers are required to ‘grasp the subjective meaning of social action’ (p. 30). This necessitates the use of research methods that would allow people to articulate the meanings of their social realities, and this requires the use of a qualitative approach. In contrast, positivism is underpinned by the ontological belief that there is an objective reality that can be accessed (Guba & Lincoln 1994; Willis 2007; Lincoln et al. 2011; Grbich 2013). This is often referred to as ‘naïve realism’ (Dickson-Swift et al. 2008a). Positivism is also known as naturalism, logical empiricism, and behaviouralism. It views reality as being independent of our experiences of it, and being accessible through careful thinking, and observing and recording of our experiences (Moses & Knutsen 2007; Bryman 2012). The aim of positivistic enquiry is to explain, predict or control that reality. One of the central ideas of research approaches based on a positivist paradigm is the generation and testing of hypotheses through scientific means (Bryman 2012). According to Grinnell and colleagues (2011c, p. 33), positivism ‘strives toward measurability, objectivity, the reducing of uncertainty, duplication, and the use of standardized procedures’. Knowledge generated through this paradigm is based on ‘objective measurements’ of the real world, and not on the ‘opinions, beliefs, or past experiences’ of individuals. Positivism argues that research must be as ‘objective’ as possible; the things that are being studied must not be affected by the researcher. Positivist researchers attempt to undertake research in such a way that their studies can be duplicated by others. Further, ‘a true-to-the-bone positivist researcher’ will only
use well-accepted standardised procedures. Research is only regarded as credible when others accept its findings, and before they accept them they must be satisfied that the study is ‘conducted according to accepted scientific standardised procedures’ (p. 65). Differences in ontology and epistemology lead to different data collection methods (Williams et al. 2011). Objective reality can be explored through the data collection method of standardised observation, which is the practice commonly used in research that uses a quantitative approach. However, it is not possible to establish subjective reality through standardised measurement and observation. The only way to find out about the subjective reality of our research participants is to ask them about it, and the answer will come back in words, not in numbers. This is the hallmark of the qualitative approach. Williams and colleagues (2011, p. 53) state that ‘in a nutshell, qualitative research methods produce qualitative data in the form of text. Quantitative research methods produce quantitative data in the form of numbers.’ In summary, constructivism influences qualitative research, whereas positivism dominates quantitative research (Willis 2007). If researchers wish to examine the subjective nature of phenomena, and the multiple realities of those involved in the research, a constructivist paradigm is essential, and of course, this necessitates the use of a qualitative approach. If researchers want to investigate the objective nature of phenomena, a positivistic paradigm is crucial, and hence a quantitative approach is indicated (Williams et al. 2011). It is important to point out here that traditional research methods and designs are heavily influenced by scientific positivism, since it is seen as ‘the crowning achievement of Western civilization’ (Denzin & Lincoln 2008, p. 8). But many constructivist researchers reject the use of positivist assumptions and methods. Positivist methods, for some researchers, are just one way of ‘telling stories about societies or social worlds’. These methods may not be better or worse than any other methods, but they ‘tell different kinds of stories’. Other constructivist researchers, however, believe that the criteria used in positivist science are ‘irrelevant to their work’. They argue that ‘such criteria reproduce only a certain kind of science, a science that silences too many voices’ (Denzin & Lincoln 2005, p. 12). Pragmatism argues that reality exists not only as natural and physical realities, but also as psychological and social realities, which include subjective experience and thought, language and culture.

In this chapter I wish to introduce a third paradigm: pragmatism. This paradigm has become increasingly popular among health researchers from a variety of disciplines. Pragmatists argue that reality exists not only as natural and physical realities, but also as psychological and social realities, which include subjective experience and thought, language and culture. Knowledge, according to pragmatists, is both constructed and based on the reality of the world in which we live and which we experience. As such, pragmatists advocate that researchers should employ a combination of methods that work best for answering their research questions (see Biesta 2010). Moses and Knutsen (2007) contend that this paradigm offers a ‘fully fledged metaphysical position’, which combines the most attractive characteristics of constructivism and positivism. See also Chapter 20. The major push for the methodological pluralism that underlies pragmatism is the belief that knowledge can be generated from diverse theories and sources, and in many ways through different research methods. Hence we must embrace methodological diversity in our research. Methodological pluralism encourages objectives-driven research instead of methods-driven research. As I have indicated above, the reason for this is that certain methods, regardless of their ontological and epistemological positions, may be more suitable for some questions than for others. In order to understand complex social phenomena, methodological pluralism is crucial.


Qualitative and quantitative approaches: a comparison

Qualitative research: Research strategies that emphasise words rather than numbers in data collection and analysis. The focus of qualitative research is on the generation of theories.

Qualitative research is recognised as ‘the word science’ (Denzin 2008, p. 321). It relies heavily on words or stories that people tell us, as researchers (Liamputtong 2013). Qualitative research is research that has its focus on the social world instead of the world of nature. Fundamentally, researching social life differs from researching natural phenomena. In the social world, we deal with the subjective experiences of human beings, and our ‘understanding of reality can change over time and in different social contexts’ (Dew 2007, p. 434). This sets qualitative enquiry apart from researching the natural world, which can be treated as ‘objects or things’. The term qualitative, according to Denzin and Lincoln (2011, p. 8), has its emphasis on ‘the qualities of entities and on processes and meanings that are not experimentally examined or measured in terms of quantity, amount, intensity, or frequency’. Quantitative research, on the other hand, is known as the science of numbers, and it is also referred to as positivist science. For quantitative researchers, the need to be objective and structured is crucial, as quantitative research attempts to measure things and avoid any bias that could influence the findings (Bryman 2012).

Qualitative research is more flexible and fluid in its approach than quantitative research. This has led some researchers to see it as less worthwhile because it is not governed by clear rules. Quantitative researchers have argued that the interpretive nature of qualitative research makes it ‘soft’ science, lacking in reliability and validity, and of little value in contributing to scientific knowledge (Guba & Lincoln 1994; Hammersley 2008; Denzin 2008; Torrance 2008; Denzin & Lincoln 2011). But the interpretive and flexible approach is necessary because the focus of qualitative research is on meaning and interpretation (Liamputtong 2007, 2013). Essentially, qualitative research aims to ‘capture lived experiences of the social world and the meanings people give these experiences from their own perspective’ (Corti & Thompson 2004, p. 327). Because of its flexibility and fluidity, qualitative research is more suited to understanding the meanings, interpretations and subjective experiences of individuals (Liamputtong 2007, 2013; Carpenter & Suto 2008; Denzin & Lincoln 2008; Lincoln et al. 2011). Qualitative enquiry allows the researchers to hear the voices of those who are marginalised in society. The in-depth nature of qualitative methods allows the participants to express their feelings and experiences in their own words. While quantitative research has always been the dominant research approach in the health sciences, in the past decade or so qualitative research has been gradually accepted as a crucial component for increasing our understanding of health. In many areas of health, researchers have argued about the value of interpretive data. In public health in particular, the ‘new public health’ recognises the need to ‘describe’ and ‘understand’ people (Padgett 2012). For example, Baum (2008, p. 180) argues for the need for qualitative methods, since they ‘offer considerable strength in understanding and interpreting complexities’ of human behaviour and health issues. Qualitative research is crucial ‘for coping with complexity and naturalistic settings’. This is reflected in the chapters in Part II of this book. Bryman (2012, pp. 407–9) provides some contrasts between qualitative and quantitative research approaches, which are presented in Table 1.1.



Quantitative research: Research strategies that emphasise numbers in data collection and analysis. The focus of quantitative research is on the testing of theories.

Bias: A concept used in randomised controlled trials and other positivist research designs. Researchers may unknowingly influence or bias the outcome of a study. Such bias can distort the results or conclusions away from the truth, the result being a poor-quality trial that underestimates, or more likely overestimates, the benefits of an intervention.


Table 1.1 Comparison of qualitative and quantitative approaches

Qualitative approach | Quantitative approach
Words | Numbers
Participants’ points of view | Researcher’s point of view
Meaning | Behaviour
Contextual understanding | Generalisation
Rich, deep data | Hard, reliable data
Unstructured | Structured
Process | Static
Micro | Macro
Natural settings | Artificial settings
Theory emergent | Theory testing
Researcher close | Researcher distant

Research in practice

Resistance to participating in falls prevention exercise

Zoe is a physiotherapist who works in a community health care centre which looks after old people from the local area. She has noticed that there are many old people who recently experienced falls, particularly those from ethnic communities. To prevent falls, these people come to exercise programs conducted by physiotherapists at the centre, but despite many instructions about doing exercise to prevent falls, they seem not to adhere to the recommended exercises. There is no available evidence that Zoe could draw on to improve the situation and she decides to conduct some research to find the answers that might give her a greater understanding about these old people. She carries out her research using in-depth interviewing, one of the most common methods used in qualitative research.

Her study reveals that although preventing further falls is considered important, many old people do not believe that falls are preventable or are unsure about it. Most older people can suggest strategies to prevent falls, including being careful and taking it slowly. However, few describe evidence-based approaches such as exercise or medication reviews as strategies to prevent falls. Most old people think that physiotherapy and exercise are beneficial in improving physical function, mobility, strength and balance. Zoe finds that family, client–clinician relationship and personal experience affect their decision-making and exercise participation.

From her study, Zoe recommends a clear explanation of the role of exercise in preventing falls; she says that when engaging this group of older people it is important that clinicians understand the personal motivating and de-motivating factors for such exercise.

Adapted from Lam 2012


Mixed methods

In some ways, the differences between quantitative and qualitative methods involve trade-offs between breadth and depth … Qualitative methods typically produce a wealth of detailed data about a much smaller number of people and cases. (Patton 2002, p. 227)

Mixed methods: A research design that combines research methods from qualitative and quantitative research approaches within a single research study.

How then do we combine the depth and the breadth? Mixed methods research offers a way of doing this. In some situations we find that neither a qualitative nor a quantitative approach alone can provide enough information for us to use, so a combination of the two is required (Feilzer 2010; Creswell & Plano Clark 2011; Flyvbjerg 2011; Bryman 2012). This is referred to as a mixed methods research design. According to Flyvbjerg (2011, p. 313), ‘research is problem-driven and not methodology-driven, meaning that those methods are employed that for a given problematic best help answer the research questions at hand’. We may find that the combination of both research approaches will provide the best evidence that we need. According to Bryman (2012, p. 37), although qualitative and quantitative approaches have different ontologies, epistemologies and research strategies, ‘the distinction is not a hard-andfast one: studies that have the broad characteristics of one research strategy may also have a characteristic of the other’. Thus within one research project the two can be combined (Teddlie & Tashakkori 2011; Edmonds & Kennedy 2012; Spicer 2012). Bryman (2012, p. 628) suggests that this strategy ‘would seem to allow the various strengths to be capitalized upon and the weaknesses offset somewhat’. The term ‘mixed methods research’ should not be confused with the combination of methods from within one research approach (Bryman 2012). For example, the combined use of in-depth interviews and focus groups is not mixed methods research, because both these methods belong to the qualitative approach. Similarly, the use of both a questionnaire (with closed-ended questions) and a randomised control trial is not mixed methods research because both methods come from the quantitative approach. Only research that employs both qualitative and quantitative methods such as using focus groups and a questionnaire is classed as mixed methods research. Some researchers, however, may use the term to refer to the combination of different methods from one approach (see Chapter 21, for example). There are different ways in which researchers can combine the methods. Hammersley (1996) proposes three approaches: triangulation, facilitation and complementarity. Triangulation refers to the use of qualitative research to confirm the findings from quantitative research or vice versa. In facilitation, one research approach is used in order to facilitate research using the other approach. When the two approaches are used so that different aspects of an investigation can be articulated, this is referred to as complementarity. You may like to read Bryman (2012) and Teddlie and Tashakkori (2011), who provide useful ways of combining qualitative and quantitative research in a mixed methods design. See also Chapters 20 and 21.

Stop and think

You have been asked to find some evidence for the provision of paediatric health care and the effectiveness of falls prevention to families from culturally and linguistically diverse backgrounds (CALD). You have to develop a research project which will allow you to find appropriate evidence for your workplace.

• Which research design might provide the best evidence for you?
• If you think the first research area (the provision of paediatric health care) should be carried out using a qualitative approach, can you use quantitative research to find evidence too? If you agree, how would you explain the option?
• Can these two types of evidence be found using a mixed methods approach? What would this approach offer you?

Research rigour: trustworthiness and reliability/validity

Rigour: Rigorous research is trustworthy and can be relied on by other researchers.

Reliability: The extent to which a measurement instrument is dependable, stable and consistent when repeated under identical conditions.

Validity: The degree to which a scale measures what it is supposed to measure.

Both qualitative and quantitative research approaches have criteria that can be used to evaluate the rigour (authenticity/credibility/strength) of the research. Within the qualitative approach, we use the term ‘trustworthiness’, which refers to the quality of qualitative enquiry (Liamputtong 2013; see also Chapters 4, 8). In health research and practice, trustworthiness means that ‘the findings must be authentic enough to allow practitioners to act upon them with confidence’ (Raines 2011, p. 497). In quantitative research, the concepts of reliability and validity are used. ‘Reliability’ refers to ‘the stability of findings’ and validity represents ‘the truthfulness of findings’ (Carpenter & Suto 2008, p. 148). Reliability is ‘the consistency and trustworthiness of research findings; it is often considered in relation to the issue of whether a finding is reproducible, at other times, and by other researchers’ (Kvale 2007, p. 22). Validity bears upon measurement and is ‘concerned with the integrity of the conclusions that are generated from a piece of research’ (Bryman 2012, p. 47). The most commonly used validity concepts are internal and external validity. Internal validity is related to ‘the issue of whether a method investigates what it purports to investigate’ (Kvale 2007, p. 22), while external validity relates to ‘whether the results of a study can be generalised beyond the specific research context’ (Bryman 2012, p. 47). The attainment of validity in quantitative research is based on strict observance of the rules and standards of the approach. Thus it follows that attempting to apply those rules to qualitative research becomes problematic. Angen (2000, p. 379) contends that when qualitative research is judged by the validity criteria used in the quantitative approach, it may be seen as ‘being too subjective, lacking in rigour, and/or being unscientific’. As a consequence, qualitative research may be denied legitimacy. The concepts of validity and reliability are seen as incompatible with the ontological and epistemological foundations of qualitative research (Carpenter & Suto 2008, p. 148). Since qualitative research is descriptive and unique to a specific historical, social and cultural context (Johnson & Waterfield 2004), it cannot be repeated in order to establish reliability. Qualitative researchers hold the view that reality is socially constructed by an individual, and while this socially constructed reality cannot be measured, it can be interpreted. For qualitative research, understanding cannot be separated from context. Hence qualitative data cannot be ‘tested for validity’ using the same rules and standards, which are based on ‘assumptions of objective reality and positivist neutrality’ (Johnson & Waterfield 2004, pp. 122–3; see also Angen 2000). Qualitative researchers have, however, developed some criteria that can be used to judge the trustworthiness of their research. Here I refer to the work of Lincoln and Guba (1985, 1989), who propose four criteria that many qualitative researchers have adopted; these can be used ‘as a
translation of the more traditional terms associated with quantitative research’ (Carpenter & Suto 2008, p. 149). Hence credibility equates to internal validity, and transferability to external validity, dependability to reliability, and confirmability to objectivity (see also Padgett 2008; Raines 2008; Creswell 2012; Bryman 2012; Liamputtong 2013).

Credibility relates to the question ‘Can these findings be regarded as truthful?’ (Raines 2011, p. 455), or ‘How believable are the findings?’ (Bryman 2012, p. 49). It scrutinises the matter of ‘fit’ between what the participants say and the representation of these viewpoints by the researchers (Padgett 2008). Credibility asks whether ‘the explanation fits the description and whether the description is credible’ (Tobin & Begley 2004, p. 391).

Transferability (or applicability) relates to the question ‘To what degree can the study findings be generalised or applied to other individuals or groups, contexts, or settings?’ or ‘Do the findings apply to other contexts?’ (Bryman 2012, p. 49). It attempts to establish the ‘generalisability of inquiry’ (Tobin & Begley 2004, p. 392). Transferability pertains to ‘the degree to which qualitative findings inform and facilitate insights within contexts other than that in which the research was conducted’ (Carpenter & Suto 2008, pp. 149–50; see also Padgett 2008).

Dependability raises questions about whether the research findings ‘fit’ the data that have been collected (Carpenter & Suto 2008), or ‘are the findings likely to apply at other times’ (Bryman 2012, p. 49). Dependability ‘addresses the consistency or congruency of the results’ (Raines 2011, p. 456). It is gained through an auditing process, which requires the researchers to ensure that ‘the process of research is logical, traceable and clearly documented’ (Tobin & Begley 2004, p. 392).

Confirmability asks if the researcher has ‘allowed his or her values to intrude to a high degree?’ (Bryman 2012, p. 49). It attempts to show that the findings and the interpretations of the findings do not derive from the imagination of the researchers, but are clearly linked to the data. Confirmability, according to Lincoln and Guba (1985, p. 290), is ‘the degree to which findings are determined by the respondents and conditions of the inquiry and not by the biases, motivations, interests or perspectives of the inquirer’.

Table 1.2 compares rigour criteria employed in qualitative research with those used in quantitative research.

Table 1.2 Rigour criteria employed in qualitative and quantitative research

Qualitative research | Quantitative research
Credibility | Internal validity
Transferability | External validity
Dependability | Reliability
Confirmability | Objectivity

Source: Carpenter & Suto 2008, p. 149

Sampling issues

Here I discuss two salient issues in relation to sampling: sampling methods and sample size.


Sampling methods

Probability sampling method: The probability of a participant being selected is known in advance. The intent is to generalise the findings for the sample to the population from which it was taken.

Research participant: A person who agrees to take part in the study on equal terms.

Non-probability sampling: The probability of a potential research participant being selected is not known in advance. The findings cannot be generalised to a larger group of people.

Purposive sampling looks for cases that will be able to provide rich or in-depth information about the issue being examined, not a representative sample as in quantitative research.

Convenience sampling allows researchers to find individuals who are conveniently available and willing to participate in a study.

Issues in sampling methods centre around whether the sample is based on a probability or a non-probability method. Probability sampling methods ‘are those in which the probability of an element being selected is known in advance’ (Schutt 2011, p. 226; Seale 2012b). In research involving people, an element means a research participant. Within these methods, elements are randomly selected and hence there should be no systematic bias, as ‘nothing but chance determines which elements are included in the sample’ (Schutt 2011, p. 226). Because of this characteristic, probability sampling methods are important in quantitative research where, in most cases, the intent is to generalise the findings from the sample to the population from which the sample was taken (see also Chapters 11–15, 25, 26). The four most common methods for drawing random samples are simple random sampling, systematic random sampling, stratified random sampling and cluster random sampling (Schutt 2011; Seale 2012b; see Chapter 15).

For non-probability sampling methods, on the other hand, the likelihood of a potential research participant being selected is not known in advance (Schutt 2011; Seale 2012b). Additionally, the random selection procedures commonly employed in probability sampling are not used in non-probability sampling methods. The latter do not provide ‘representative samples’ of the populations from which they are drawn, so the findings cannot be generalised to a larger group of people. However, these methods are useful for research questions that do not need to involve large populations, and particularly for qualitative research projects (Johnson & Waterfield 2004; Schutt 2011; Seale 2012b).

Qualitative researchers therefore usually rely on non-probability sampling methods. Since qualitative research is concerned with in-depth understanding of the issue or issues under examination, it relies heavily on individuals who are able to provide information-rich accounts of their experiences, and it usually involves a small number of individuals. Morse (2007, p. 530, original emphasis) contends that ‘qualitative researchers sample for meaning, rather than frequency. We are not interested in how much, or how many, but in what’. Qualitative research aims to examine a ‘process’ or the ‘meanings’ that people give to their own social situations. It does not require a generalisation of the findings as in positivist science (Hesse-Biber & Leavy 2011).

Qualitative research also relies heavily on purposive sampling strategies (Teddlie & Yu 2007; Hesse-Biber & Leavy 2011; Bryman 2012; Liamputtong 2013). Purposive sampling is a deliberate selection of specific individuals, events or settings because of the crucial information they can provide, which cannot be obtained as adequately through other channels (Carpenter & Suto 2008). For example, in research concerned with how cancer patients cope with pain, purposive sampling will require the researcher to find participants who have pain, instead of randomly selecting any cancer patients from an oncologist’s patient list (Padgett 2008). The powers of purposive sampling techniques, Patton (2002, p. 230, original emphasis) suggests, ‘lie in selecting information-rich cases for study in depth’. Information-rich cases are individuals, events or settings from which researchers can learn extensively about the issues they wish to examine. See also Chapters 5, 7.

Another sampling method commonly adopted in qualitative research is convenience sampling. This method allows researchers to find individuals who are conveniently available and willing to participate in a study. Convenience sampling is crucial when it is difficult to find individuals who meet some specified criteria such as age, gender, ethnicity or social class. This may happen more often in research that requires the conduct of fieldwork, such as ethnography, where researchers need to find key informants who are able to provide in-depth information on the research issues and site. Often researchers make decisions on the basis of ‘who is available, who has some specialized knowledge of the setting, and who is willing to serve in that role’ (Hesse-Biber & Leavy 2011, p. 46; see also Bryman 2012; Liamputtong 2013).
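For readers who think more concretely in code, the short sketch below illustrates the difference between several of these sampling strategies. It is not taken from the chapter: the sampling frame, strata, seed and sample size are all invented for the illustration, and only Python’s standard random module is assumed.

```python
import random

# Hypothetical sampling frame of 1,000 potential participants (illustrative only).
frame = [f"participant_{i:04d}" for i in range(1000)]
sample_size = 50
random.seed(1)  # fixed seed so the illustration is reproducible

# Simple random sampling: every element has the same, known chance of selection.
simple_random = random.sample(frame, sample_size)

# Systematic random sampling: a random starting point, then every k-th element.
k = len(frame) // sample_size
start = random.randrange(k)
systematic = frame[start::k][:sample_size]

# Stratified random sampling: sample within predefined strata (here, two
# invented groups) so each stratum is represented proportionally.
strata = {
    "younger": frame[:600],   # pretend 60% of the frame is 'younger'
    "older": frame[600:],     # and 40% is 'older'
}
stratified = []
for name, members in strata.items():
    n_stratum = round(sample_size * len(members) / len(frame))
    stratified.extend(random.sample(members, n_stratum))

# Convenience (non-probability) sampling: simply take whoever is easiest to
# reach, e.g. the first volunteers; selection probabilities are unknown.
convenience = frame[:sample_size]

print(len(simple_random), len(systematic), len(stratified), len(convenience))
```

The point of the first three designs is that the probability of selecting any element can be stated in advance, which is what licenses generalisation from the sample to the sampling frame; the convenience sample offers no such guarantee, which is why its findings cannot be generalised in the same way.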

Sample size

The question of sample size is considered differently in qualitative and quantitative approaches. A crucial point in qualitative research is to select the research participants meaningfully and strategically, instead of attempting to make statistical comparisons or to ‘create a representative sample’ (Carpenter & Suto 2008, p. 80; see also Patton 2002). Hence the important question to ask is whether the sample provides data that will allow the research questions or aims to be thoroughly addressed (Mason 2002; see also Chapter 7). The focus of decisions about sample size in qualitative research is on ‘flexibility and depth’. A fundamental concern of qualitative research is quality, not quantity. Qualitative researchers do not intend to maximise the breadth of their research (Padgett 2008; Liamputtong 2013).

In qualitative research, no set formula is rigidly used to determine the sample size, as is the case for quantitative research (Morse 1998; Patton 2002). The sampling process is flexible and, at the commencement of the research, the number of participants to be recruited is not definitely known. However, data saturation, a concept associated with grounded theory, is used by qualitative researchers as a way of justifying the number of research participants, and this is established during the data collection process. Saturation is considered to occur when little or no new data is being generated (Padgett 2008; Liamputtong 2013). The sample is adequate when ‘the emerging themes have been efficiently and effectively saturated with optimal quality data’ (Carpenter & Suto 2008, p. 152), and when ‘sufficient data to account for all aspects of the phenomenon have been obtained’ (Morse et al. 2002, p. 12). See also Chapter 7.

Data saturation: Occurs when little or no new data is being generated and new data fits into the categories already developed.

In quantitative research, sample sizes tend to be larger than those of qualitative research. Researchers have more confidence about generalising their results if they have larger samples. Often, during the planning stage of their research, quantitative researchers attempt to determine how large a sample they must have in order to achieve their purposes. As Schutt (2011, p. 239) points out, quantitative researchers must ‘consider the degree of confidence desired, the homogeneity of the population, the complexity of the analysis they plan, and the expected strength of the relationships they will measure’. Generally, quantitative researchers can use the following criteria when considering their sample size (Schutt 2008):

• The larger the sample size, the less the sampling error.
• Samples of more diverse populations need to be larger than samples of more homogeneous populations.
• If only a few variables are to be examined, a smaller sample will suffice, but if a more complex analysis involving sample subgroups is required, then a larger sample will be needed.
• If the researchers wish to test hypotheses, and expect very strong effects, they will need a smaller sample size to find these effects, but if they expect smaller effects, a larger sample is required.

Sample size can be estimated by using existing tables (Peat 2001), or calculated using relevant formulae (Friedman et al. 1998; Seale 2012b). Ideally, more precise estimation of the necessary sample size should be carried out by the use of the statistical power analysis method (Seale 2012b). This analysis allows ‘a good advance estimate of the strength of the hypothesized relationship in the population’ (Schutt 2011, p. 239). However, it is a complicated analysis and it may require researchers to work with a statistician to determine the size of their research sample. See also Chapters 15, 20.
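As a concrete, hedged illustration of how such criteria translate into numbers, the sketch below applies the widely used formula n = z²·p(1−p)/e² for estimating a single proportion, and then a conventional power calculation for comparing two group means. The confidence level, margins of error and effect size are invented for the example, and the statsmodels package is an assumption (it is not mentioned in the chapter); in practice researchers would use the tables and formulae cited above or consult a statistician.

```python
import math

def sample_size_for_proportion(p=0.5, margin_of_error=0.05, z=1.96):
    """Classic formula n = z^2 * p * (1 - p) / e^2 for estimating a single
    proportion; p = 0.5 gives the most conservative (largest) sample size."""
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

# Halving the acceptable sampling error roughly quadruples the required sample,
# illustrating 'the larger the sample size, the less the sampling error'.
print(sample_size_for_proportion(margin_of_error=0.05))   # 385
print(sample_size_for_proportion(margin_of_error=0.025))  # 1537

# A statistical power analysis for comparing two group means (illustrative
# values; assumes the statsmodels package is installed).
try:
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(
        effect_size=0.5,   # hypothesised standardised difference (medium)
        alpha=0.05,        # significance level
        power=0.80,        # desired probability of detecting the effect
    )
    print(round(n_per_group))  # roughly 64 participants per group
except ImportError:
    pass  # power analysis skipped if statsmodels is not available
```

Consistent with the criteria listed above, halving the tolerated sampling error roughly quadruples the required sample, and a smaller hypothesised effect size would push the per-group numbers up further.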

Research in practice

Obtaining evidence from women living with HIV/AIDS in Thailand

I would like to give readers a reflective practice example from my own research that I conducted collaboratively with colleagues from two universities in Thailand (see Liamputtong et al. 2009, 2012). Thai women are now experiencing a high prevalence of HIV and AIDS. In this study, we examined the women’s perspectives on community attitudes towards women currently living with HIV/AIDS. We also looked at strategies employed by women in order to deal with any stigma and discrimination they might feel or experience in their communities. Last, we examined the reasons that women had for participating in drug/vaccine trials.

A qualitative method was adopted in this study because it enabled us to examine the interpretations and meanings of HIV/AIDS within the women’s perspectives. The strength of using such a method is that it has a holistic focus, which allows for flexibility and also allows the participants to raise issues and topics that may not have been included by the researcher.

A purposive sampling technique was adopted for this research; that is, only Thai women who had HIV/AIDS and who were participating, or had participated, in HIV clinical trials, and female drug users who had been participating in vaccine trials, were approached to participate. The sensitivity of this research guided our discussion of how we would approach the women and invite them to take part. We directly contacted potential participants ourselves only after they were introduced by our network or our gatekeepers. For the same reason, we relied on snowball sampling; that is, our participants suggested others who were interested in participating. We also enlisted the assistance of leaders of two HIV and AIDS support groups to access the women in this study.

In-depth interviews and some participant observations were conducted with 26 Thai women. We interviewed the women in places that they selected. Most often, the interviews were done in a café or in a shopping mall. Since the women wished to preserve their confidentiality and their identities as women living with HIV, they did not wish us to interview them in their own homes. Interviews were conducted in the Thai language to maintain as much as possible of the subtlety and any hidden meanings of the participants’ statements. Before the study began, ethical approval was obtained from the Faculty of Health Sciences Human Ethics Committee, La Trobe University, Australia, and from the Ethics Committee at Chulalongkorn University, Thailand. Each participant’s consent to participate in the study was sought and the participants were asked to sign a consent form. Each interview took between one and two hours. Each participant was paid 200 Thai baht as compensation for their time spent in taking part in the study.

With permission from the participants, interviews were tape-recorded. The tapes were then transcribed in Thai, for data analysis. The in-depth data were analysed using a thematic analysis. All transcripts were coded, and emerging themes were subsequently identified and presented in the results section of the report on the research.

As you can see, there are many issues we need to consider in carrying out a piece of research, not only which approach and which method to use, but who will be our research participants, how we will find them and how many we need for our project. Also, ethical issues requiring consideration need to be identified, and we need to consider how we will make sense of the data we have collected, and how we will present this data and its analysis. All these matters are covered in this book.


Summary

Neither quantitative nor qualitative methodology is in any ultimate sense superior to the other. The two approaches exist along a continuum on which neither pole is more ‘scientific’ or more suited to … knowledge development. (Williams et al. 2011, p. 65)

In this chapter I have introduced the concept of evidence and evidence-based practice in health. I have argued that in many situations and for many health issues, researchers and practitioners need to find the ‘best’ evidence, and this may require us to carry out a research study to find our answers. I have provided readers with firm foundations for carrying out research in health. I have suggested that researchers should not favour one method over another based on their own preferences. Rather, we need to consider carefully the research questions to which we wish to find answers. Qualitative and quantitative research approaches, as Williams and colleagues (2011, p. 65) contend, ‘each have their special uses’. Rather than asking which approach is best, it would be more appropriate for us to ask ‘under what conditions each approach is better than the other in order to answer a particular research question’ (Williams et al. 2011, p. 65). This is what I have advocated in this chapter. In summary, I argue that knowledge is essential in the era of evidence-based practice in health care. Without knowledge, evidence cannot be generated. Without ‘appropriate’ evidence, our practice may not be applicable or suitable to those whom health care providers and practitioners need to serve.

Tutorial exercises

1 You have been asked by your superior to find the ‘best’ evidence that can be used to develop culturally sensitive maternal and child health services for Indigenous Australians. How would you find this ‘best’ evidence? Discuss various types of evidence that you could obtain.

2 There has been a good deal of discussion in your local area about young people, who are seen to be likely to engage in risky health-related behaviour, such as smoking heavily, driving very fast, and not paying attention to their diet. You want to understand why young people tend to take such health risks. Which research approach (qualitative or quantitative) is likely to give you greater in-depth understanding of their life, the meaning they attach to risk-taking behaviour and their lived experiences of risk? Discuss.

3 You want to ascertain the prevalence of risk-taking behaviour among young people in your city. What approach will provide you with an estimate of this prevalence and how will you go about doing this? Discuss.


4 As you need to design a research study that will provide the best answers that you can find, what important issues do you need to consider? Please write up a short account of your proposed research, taking into account salient issues that have been discussed in this chapter.

Further reading

Aoun, S.M. & Kristjanson, L.J. (2005). Evidence in palliative care research: How should it be gathered? Medical Journal of Australia, 183(5), 264–6.

Denzin, N.K. (2009). The elephant in the living room: Or extending the conversation about the politics of evidence. Qualitative Research, 9(2), 139–60.

Flemming, K. (2010). The use of morphine to treat cancer-related pain: A synthesis of quantitative and qualitative research. Journal of Pain and Symptom Management, 39, 139–54.

Gibson, B.E. & Martin, D.K. (2003). Qualitative research and evidence-based physiotherapy practice. Physiotherapy, 89, 350–58.

Grypdonck, M.H.F. (2006). Qualitative health research in the era of evidence-based practice. Qualitative Health Research, 16(10), 1371–85.

Hammell, K.W. & Carpenter, C. (2004). Qualitative research in evidence-based rehabilitation. Edinburgh: Churchill Livingstone.

Hawker, S., Payne, S., Kerr, C., Hardey, M. & Powell, J. (2002). Appraising the evidence: Reviewing disparate data systematically. Qualitative Health Research, 12(9), 1284–99.

Johnson, R. & Waterfield, J. (2004). Making words count: The value of qualitative research. Physiotherapy Research International, 9(3), 121–31.

Liamputtong, P. (2013). Qualitative research methods, 4th edn. Melbourne: Oxford University Press.

Liamputtong, P., Haritavorn, N. & Kiatying-Angsulee, N. (2012). Living positively: The experiences of Thai women in central Thailand. Qualitative Health Research, 22(4), 441–51.

Malpass, A., Shaw, A., Sharp, D., Walter, F., Feder, G., Ridd, M., et al. (2009). ‘Medication career’ or ‘moral career’? The two sides of managing antidepressants: A meta-ethnography of patients’ experience of antidepressants. Social Science & Medicine, 68, 154–68.

Mullen, E.J., Bellamy, J.L. & Bledsoe, S.E. (2011). Evidence-based practice. In R.M. Grinnell & Y.A. Unrau (eds), Social work research and evaluation: Foundations of evidence-based practice, 9th edn. New York: Oxford University Press, 160–77.

Spicer, N. (2012). Combining qualitative and quantitative methods. In C. Seale (ed.), Researching society and culture, 3rd edn. London: Sage, 479–93.

Tashakkori, A. & Teddlie, C. (2010). The Sage handbook of mixed methods in social and behavioural research. Thousand Oaks, CA: Sage.

Websites

www.womenandhealthcarereform.ca/en/work_evidence.html
This website provides useful discussions on evidence and women’s health care. It argues that ‘because women are not all the same, changes to the health care system may variously affect the health, well-being and work of particular groups of women. This means that when evidence is
