About the Author(s)


Jakobus M. Louw
Department of Family Medicine, School of Medicine, Faculty of Health Sciences, University of Pretoria, Pretoria, South Africa

Tessa S. Marcus
Department of Family Medicine, School of Medicine, Faculty of Health Sciences, University of Pretoria, Pretoria, South Africa

Johannes F.M. Hugo
Department of Family Medicine, School of Medicine, Faculty of Health Sciences, University of Pretoria, Pretoria, South Africa

Citation


Louw JM, Marcus TS, Hugo JFM. How to measure person-centred practice – An analysis of reviews of the literature. Afr J Prm Health Care Fam Med. 2020;12(1), a2170. https://doi.org/10.4102/phcfm.v12i1.2170

Research Project Registration:

Project Number: REC 4/21/09/16

Review Article

How to measure person-centred practice – An analysis of reviews of the literature

Jakobus M. Louw, Tessa S. Marcus, Johannes F.M. Hugo

Received: 11 June 2019; Accepted: 14 Nov. 2019; Published: 04 Mar. 2020

Copyright: © 2020. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background: Facilitation and collaboration differentiate person-centred practice (PcP) from biomedical practice. In PcP, a person-centred consultation requires clinicians to juggle three processes: facilitation, clinical reasoning and collaboration. How best to measure PcP in these processes remains a challenge.

Aim: To assess the measurement of facilitation and collaboration in selected reviews of PcP instruments.

Methods: Ovid Medline and Google Scholar were searched for review articles evaluating measurement instruments of patient-centredness or person-centredness in the medical consultation.

Results: Six of the nine review articles were selected for analysis. These articles considered the psychometric properties of the reviewed instruments and the rigour with which those properties had been evaluated. For the most part, they did not find instruments with good evidence of reliability and validity. Evaluations in South Africa showed poor psychometric properties. Tools were often not transferable to other socio-cultural-linguistic contexts, both with and without adaptation.

Conclusion: The multiplicity of measurement tools is a product of many dimensions of person-centredness, which can be approached from many perspectives and in many service scenarios inside and outside the medical consultation. Extensive research into the myriad instruments found no single valid and reliable measurement tool that can be recommended for general use. The best hope for developing one is to focus on a specific scenario, conduct a systematic literature review, combine the best items from existing tools, involve multiple disciplines and test the tool in real-life situations.

Keywords: review; psychometric properties; measurement instruments; person-centeredness; patient-centeredness.

Introduction

The applicability, implementation and measurement of person-centred practice (PcP) need to be carefully considered as part of the drive towards universal health coverage, because PcP brings with it a number of benefits (Table 1), particularly improved patient health outcomes,1,2,3,4 as well as reduced healthcare provider workload and healthcare service delivery costs.5,6 To ensure that these benefits are realised through training, PcP needs to be measured accurately, and such measurement must be based on a well-understood conceptual framework.

TABLE 1: Benefits of person-centred practice.

Person-centredness and patient-centredness are used interchangeably here11,12 because of the absence of a universally agreed definition and the conceptual similarities described previously.13

The metaphor of ‘the clinician as juggler’, used to teach consultation skills at the University of Pretoria,14 relates well to other frameworks of PcP (Figure 1). It describes three processes that the clinician has to manage concurrently – facilitation (listening), clinical reasoning (thinking) and collaboration (shared decision-making). The image of a clinician juggling three balls helps us understand the simultaneity of, and interplay between, the three processes.14,15

FIGURE 1: Patient-centred care: Interactive components and key dimensions as related to the three processes of consultation.

The clinician must be constantly aware of where each process is, its trajectory and how next to interact with it. The position and trajectory of each process also inform the clinician as to what to do with the others.14,15 In this way, he or she brings together clinical expertise and experience with patients’ ideas (Figure 2).17

FIGURE 2: Facilitation, clinical reasoning and collaboration in the consultation.

As illustrated in Figure 1, concepts such as ‘patient-as-person’,16 ‘exploring the patient’s illness experience’ and ‘understanding the whole person’7 manifest themselves in the process of facilitation. Facilitation (caring) is a prerequisite for collaboration. Measuring collaboration may, therefore, indirectly also measure facilitation.

The process of collaboration in the consultation is related to the concepts of ‘sharing power and responsibility’, ‘therapeutic alliance’,16 ‘finding common ground’ and, to some extent, ‘enhancing the patient-doctor relationship’7 (Figure 1). Collaboration can be measured by the degree to which the clinician explains the risks and side effects of management options, explores the patient’s questions and expectations, and plans with the patient so that he or she understands and is willing and able to follow it. Because competency in clinical reasoning is the foundation of collaboration with a patient, collaboration can serve as an indirect measure of clinical reasoning. Collaboration is thus an outcome of PcP.18

The discovery of a patient’s perspective and shared control of the consultation are, in fact, the two features that distinguish a person-centred consultation from a traditional biomedical consultation.19 Research suggests that it is patients’ perceptions of PcP that correlate best with the improved health outcomes associated with PcP.3,5,9,20 This is because an adequate biopsychosocial understanding enables the clinician and the patient to consider relevant and feasible management options within the patient’s specific context and preferences. This saves valuable time in the consultation, ensures patient-relevant solutions and contributes to better health and treatment outcomes.

Measuring person-centredness is difficult,21,22 as evidenced by the sheer volume of measurement tools developed, published and evaluated in various contexts. Many of these measure subcomponents of person-centred care, while several attempt to measure the concept as a whole. Some are specifically designed to evaluate a single visit to a healthcare practitioner, while others try to measure person-centredness over a period of time.22

While numerous reviews of instruments have been performed, the aim of this article was to assess the measurement of facilitation and collaboration in selected reviews of PcP instruments, as these are elemental components in all frameworks of person-centred consultations.13

Methods

Literature searches covering the period 01 January 2000 to 02 May 2019 were conducted in Ovid Medline and Google Scholar. Search terms included patient-centredness, patient-centred, person-centredness and person-centred, combined with measurement tools or instruments, evaluate or evaluation, and assessment. The search yielded 13 548 articles in Ovid Medline, 83 of which were English-language review articles with structured abstracts applicable to adults. References in and citations of relevant articles were screened to identify additional review articles. The first author screened review articles by their titles. The inclusion criterion was comparison of instruments that measure person- or patient-centredness in the medical consultation. Articles were excluded if they were not in English, were not review articles, did not compare measurement instruments, lacked a structured abstract, did not refer to adult patients, or focused exclusively on a specific disease (e.g. epilepsy) or discipline, such as gerontology, oncology or palliative care.

Eligible review articles were then thematically analysed by the first author, specifically considering the measurement of facilitation and collaboration in the medical consultation, as well as the psychometric properties of the instruments reviewed. Measurement items in the preferred tools identified in the review articles were classified by the first author as relating to collaboration, facilitation or clinical reasoning. For the first tool so analysed, two experienced family physicians (the third author and another) reviewed this classification of measurement items. Differences were discussed until consensus was reached.

Ethical considerations

This review is part of a PhD thesis entitled ‘Learning of person-centred practice amongst clinical associate students at the University of Pretoria’. It was approved by the Research Ethics Committee of the Faculty of Health Sciences, University of Pretoria, reference number 128/2013.

Results

Nine review articles published in the period 2010–2018 were identified (Figure 3). One of these was a rapid review, listing and classifying 160 tools to measure person-centred care without evaluating their quality.23 In the remaining eight articles, 129 measurement tools were reviewed. Two of the tools appeared in three reviews and 11 in two reviews, while the remaining 116 were only included once in a review. The analyses by Edvardsson et al.24 and Wilberforce et al.25 were subsequently also excluded as they reviewed tools that measure the person-centredness of the care environment of people with dementia and older people, but not of medical consultations.

FIGURE 3: Search and selection of articles.

This analysis is based on the remaining six review articles20,22,26,27,28,29 in which measurement instruments of PcP in the medical consultation were included. The number of tools reviewed per article varied from 12 to 40. The six reviews are summarised in Table 2 and discussed below.

TABLE 2: Summary of six review articles.

Three22,26,27 of the six reviews used the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist30 to evaluate the methodological quality of each study reviewed, while one20 used a modified version of the Standards for Reporting of Diagnostic Accuracy (STARD) scale and another28 used the National Institutes of Health (NIH) Quality Assessment Tool.

The standard of assessment in evaluating studies of measurement instruments is clearly higher in the later reviews than in the earlier ones. Not only do authors compare the psychometric properties of the various instruments, but they also consider the methodological rigour of the studies that measured those properties. Gärtner et al.27 used an adapted scale from the Cochrane Back Group to synthesise both aspects into one rating (Table 3).38,39 This made it possible to rate each measurement property (e.g. internal consistency, reliability, measurement error, content validity and structural validity) of each measurement instrument.

TABLE 3: Quality synthesis.

Gärtner et al.27 ascribe the lack of good evidence on the measurement qualities of instruments both to a failure to study their measurement properties and to the poor methodological quality of validation studies. They argue that this does not mean that existing instruments are necessarily of poor quality, only that their quality is often unknown.27 Many measurement instruments fail to define clearly the concept being measured, which affects the comparability of results.27,40

Most tools have been developed in first-world countries, and few have been tested in Africa. The Physician–Patient Communication Behaviours scale was developed for local use in Kenya by adapting 19 statements from a matched-pair instrument. Patients at anti-retroviral treatment clinics responded to these on a Likert scale, and 13 statements were found to be reliable and useful in that setting. The Measure of Processes of Care (MPOC), developed in Canada, was tested in seven countries including South Africa. It measures family-centred care provided to children with chronic conditions over the past year by asking parents or caregivers to respond to questions on a Likert scale. After adaptation for resource-poor settings in South Africa (MPOC–22 [SA]),41 it was found to be neither reliable nor valid. Of the 22 items tested, the eight that reached an acceptable degree of reliability and validity formed the basis for MPOC–8 (SA), which needs to be studied further. The validity and reliability of the Patient–Practitioner Orientation Scale were found to be poor when it was evaluated with South African medical students.40

Both Zill et al.26 and Brouwers et al.22 reviewed the Questionnaire on Quality of Physician–Patient Interaction (QQPPI).32 They concurred that the internal consistency and construct validity methodology was good, while that for reliability was poor. However, there was some divergence in their assessment of the methodology for measuring content validity. Zill et al.26 rated it as poor and Brouwers et al.22 as fair.

The Patient Feedback Questionnaire on Communication Skills (PFC)34 received three positive ratings, with excellent methodological scores for validity.18 Its reliability has not been tested. Moreover, the study evaluating the PFC28 was itself rated as ‘poor’ (3/14), with a high risk of bias, on the NIH Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies.

Gärtner et al.27 found that only seven of 40 measurement instruments had moderate to strong evidence of positive performance on at least one aspect of each of validity and reliability. Of these, only the Facilitation of Patient Involvement in Care (FPI) is in English and only three (non-English) instruments had no negative scores on other measurement properties.

The Doctor–Patient Communication (DPC) scale of Sustersic et al.29 for acute conditions has 13 items with good internal consistency. It adapts items from 22 measurement tools that the authors identified in a systematic review, elaborated through a theoretical model informed by multiple disciplines.

Many of the tools use similar items to measure PcP. Broadly, they can be grouped into those that relate to facilitation, clinical reasoning and collaboration.

As Table 2 shows, the internal consistency of the better-performing tools is greater when they focus mostly on either facilitation or collaboration. The four tools with more than 75% of their items measuring either facilitation or collaboration reported Cronbach’s alpha values above 0.9, whereas three of the six tools with a more even balance of facilitation and collaboration items had Cronbach’s alpha values below 0.75. This finding may indicate that facilitation and collaboration are not directly correlated: an increase in one may not be accompanied by an increase in the other, or some clinicians may simply practise one construct more than the other. Measurement tools that try to measure both may therefore suffer from poor internal consistency.
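The Cronbach’s alpha values reported in Table 2 express internal consistency by comparing the sum of the individual item variances with the variance of respondents’ total scores: alpha approaches 1 when items covary strongly. As a minimal illustration (the Likert responses below are invented for demonstration and are not drawn from any reviewed instrument), alpha can be computed as follows:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of items, each a list of scores
    from the same respondents in the same order."""
    k = len(items)
    sum_item_vars = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    return (k / (k - 1)) * (1 - sum_item_vars / pvariance(totals))

# Hypothetical responses (1-5 Likert) from five respondents to three items
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
print(round(cronbach_alpha(items), 3))  # high alpha: the items move together
```

Because the three invented items rise and fall together across respondents, the total-score variance dwarfs the summed item variances and alpha is high; a tool mixing two weakly correlated constructs, such as facilitation and collaboration, would show the opposite pattern.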

Implications and recommendations

In the six reviews of instruments to measure PcP as a whole or its components, only one recommends a single measurement tool (the Doctor Interpersonal Skills Questionnaire [DISQ]) as having better evidence of validity and reliability than the others.28

On the basis of her rapid review of instruments available to measure PcP, de Silva23 concludes that there is no agreement on a single best measure that covers all aspects of person-centred care. Instead, she recommends combining and testing various measurement methods and tools locally to determine their local usefulness.

Reviews call for more studies with adequate methodological rigour to evaluate the psychometric properties of measurement instruments. Three22,26,27 that used the COSMIN checklist recommend its use, while one22 found it to be in need of further development and testing.

Rather than developing new instruments, the reviews recommend that researchers focus on refining existing measurement instruments to improve their validity, reliability, generalisability, responsiveness, comprehensibility and feasibility. In this, attention needs to be paid to aspects of interpretability in different contexts22,25 by different practitioners.28 Given the association between better health outcomes and patients’ perceptions of patient-centredness,3,5,9,20 instrument development also requires inputs from patients and their families.24,25 Also, even with excellent translation methods, measurement instruments need to be adapted for and tested in new socio-cultural environments before they are used.40,41

In general, instruments should measure the quality of both facilitation and collaboration in the medical consultation, even where combining the two may reduce internal consistency. Furthermore, there is a need to study the reliability and validity of subscales in the instruments, not only of the overall instrument.

In choosing among the 12 tools (Table 2), PcP researchers need to take account of what they seek to measure (facilitation, collaboration or both), who will rate the PcP, and the context, language and population. More than 75% of items in the DISQ, Patient-Centred Behaviour Coding Instrument (PBCI) and Consultation and Relational Empathy (CARE) measure relate to facilitation, while more than 75% of items in the nine-item Shared Decision-Making questionnaire (SDM-Q-9) and FPI relate to collaboration. Only the Patient-Centred Observation Form (PCOF), the Set the stage, Elicit information, Give information, Understand the patient’s perspective, and End the encounter (SEGUE) framework and the PBCI are designed to be completed by observers; the rest are completed by the patient. Most tools are only available and validated in English. Some have been translated into other languages but often lost reliability in the process.

Further research into the measurement properties of existing instruments to measure PcP should be guided by the COSMIN checklist. Reviewers of such research should preferably report both the measurement properties and the strength of the evidence for them in a single, well-defined scale.

Should new instruments be needed for specific scenarios or socio-cultural-linguistic contexts, the concept to be measured should first be clearly defined before well-performing items from existing instruments are selected, with input from patients, families and experts from various disciplines. For developing a valid and reliable measurement tool, the methodology of Sustersic et al.29 can be considered. They focussed on a specific scenario, conducted a thorough systematic literature review of existing applicable tools, combined the best items from those tools, involved multiple disciplines to select and adapt items, and tested their new tool in real-life situations.

Limitations

Because our initial search strategy was limited to two databases, it is possible that some applicable reviews were not identified for this article. However, screening references in and citations of review articles did identify several appropriate reviews.

The first author classified the various items of the measurement tools as pertaining to clinical reasoning, facilitation or collaboration. Only for one tool (SEGUE) was this classification verified by two other experts.

A limitation identified in the tools reviewed was that the voice of patients themselves is usually not included in the development of PcP measurement tools. It seems logical that the best person to measure the person-centredness of any healthcare service would be the patient, the one for whom the service exists, because the patient experiences the person-centredness (or not) of the service, and because patients’ perceptions of person-centredness have the strongest association with improved patient outcomes.3,5,9,20 However, account also has to be taken of the fact that patients often rate the service (or, in fact, the providers) highly, partly because they are dependent on the service and may feel vulnerable (fear retribution), and partly because of social desirability, wanting to be nice and to avoid making uncomfortable but true assessments. This limitation notwithstanding, the fact that patients are rarely involved in the development of measurement instruments is a serious omission.25

Conclusion

The multiplicity of measurement tools is a product of many dimensions of person-centredness that can be measured from many perspectives (patients, family, clinicians and observers) and in many service scenarios inside and outside the medical consultation. In addition, tools are often not transferable to other socio-cultural-linguistic contexts, both with and without adaptation.

In spite of extensive research, there is no single valid and reliable measurement tool that can be recommended for general use. Instruments focussed on patients’ perceptions of PcP may be more useful in outcomes research,3,5,9,20 whereas instruments completed by peers or facilitators of learning may be more useful in teaching.42

Many tools are developed – often by the same authors – but few are studied extensively in terms of their psychometric properties and usefulness for research on and teaching of person-centredness. Often, a tool is developed, evaluated and then abandoned. This leaves us without measurement tools for which we have good evidence – repeated in several studies – of all their properties. Some are valid, others are reliable, while others are neither. Many are untested.

Using the COSMIN checklist can increase the quality of research even though researchers may sometimes differ in their application of the standard.

Acknowledgements

The authors thank Susan Scheepers (University of Pretoria Medical Library) for sourcing articles, Nicoleen Smit (Department of Family Medicine Research Technician) for manuscript preparation and Nina Honiball for graphic design of Figure 2.

Competing interests

The authors have declared that no competing interest exists.

Authors’ contributions

J.M. Louw searched the databases, reviewed the literature and wrote the article. T.S. Marcus and J.F.M. Hugo contributed to concept development and reviewed and edited the article.

Funding information

This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.

Data availability statement

Data sharing is not applicable to this article as no new data were created or analysed in this study.

Disclaimer

The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of any affiliated agency of the authors.

References

  1. Mead N, Bower P. Patient-centred consultations and outcomes in primary care: A review of the literature. Patient Educ Couns. 2002;48(1):51–61. https://doi.org/10.1016/S0738-3991(02)00099-X
  2. McMillan SS, Kendall E, Sav A, et al. Patient-centered approaches to health care: A systematic review of randomized controlled trials. Med Care Res Rev. 2013;70(6):567–596. https://doi.org/10.1177/1077558713496318
  3. Little P, Everitt H, Williamson I, et al. Observational study of effect of patient centredness and positive approach on outcomes of general practice consultations. BMJ. 2001;323:908–911. https://doi.org/10.1136/bmj.323.7318.908
  4. Olsson L-E, Jakobsson Ung E, Swedberg K, Ekman I. Efficacy of person-centred care as an intervention in controlled trials – A systematic review. J Clin Nurs. 2013;22(3–4):456–465. https://doi.org/10.1111/jocn.12039
  5. Stewart M, Brown JB, Donner A, et al. The impact of patient-centered care on outcomes. J Fam Pract. 2000;49(9):796–804.
  6. De Silva D. Evidence: Helping people help themselves [homepage on the Internet]. c2011 [cited 2015 Mar 17]. Available from: http://www.health.org.uk/publications/evidence-helping-people-help-themselves/
  7. Stewart M. Reflections on the doctor-patient relationship: From evidence and experience. Br J Gen Pract. 2005;55(519):793–801.
  8. Morgan S, Yoder LH. A concept analysis of person-centered care. J Holist Nurs. 2012;30(1):6–15. https://doi.org/10.1177/0898010111412189
  9. Doyle C, Lennox L, Bell D. A systematic review of evidence on the links between patient experience and clinical safety and effectiveness. BMJ Open. 2013;3(1):1–18. https://doi.org/10.1136/bmjopen-2012-001570
  10. Arbuthnott A, Sharpe D. The effect of physician-patient collaboration on patient adherence in non-psychiatric medicine. Patient Educ Couns. 2009;77(1):60–67. https://doi.org/10.1016/j.pec.2009.03.022
  11. Van Weel-Baumgarten EM, Brouwers MH. Best evidence teaching of person-centred basic communication skills: A reflection. Int J Pers Cent Med. 2011;1(1):35–38. https://doi.org/10.5750/ijpcm.v1i1.19
  12. Entwistle VA, Watt IS. Treating patients as persons: A capabilities approach to support delivery of person-centered care. Am J Bioeth. 2013;13(8):29–39. https://doi.org/10.1080/15265161.2013.802060
  13. Louw JM, Marcus TS, Hugo JF. Patient- or person-centred practice in medicine? – A review of concepts. Afr J Prim Health Care Fam Med. 2017;9(1):7. https://doi.org/10.4102/phcfm.v9i1.1455
  14. Hugo J, Couper I. The consultation: A juggler’s art. Educ Prim Care. 2005;16(5):597–604.
  15. Hugo J, Couper ID. Teaching consultation skills using juggling as a metaphor. S Afr Fam Pract. 2006;48(5):5–7. https://doi.org/10.1080/20786204.2006.10873385
  16. Mead N, Bower P. Patient-centredness: A conceptual framework and review of the empirical literature. Soc Sci Med. 2000;51(7):1087–1110. https://doi.org/10.1016/S0277-9536(00)00098-8
  17. Weston WW. Informed and shared decision-making: The crux of patient-centred care. JAMC. 2001;165(4):438–439.
  18. Barry MJ, Edgman-Levitan S. Shared decision making – The pinnacle of patient-centered care. N Engl J Med. 2012;366(9):780–781. https://doi.org/10.1056/NEJMp1109283
  19. Illingworth R. What does ‘patient-centred’ mean in relation to the consultation? Clin Teach. 2010;7(2):116–120. https://doi.org/10.1111/j.1743-498X.2010.00367.x
  20. Hudon C, Fortin M, Haggerty JL, Lambert M, Poitras M-E. Measuring patients’ perceptions of patient-centered care: A systematic review of tools for family medicine. Ann Fam Med. 2011;9(2):155–164. https://doi.org/10.1370/afm.1226
  21. Zandbelt LC, Smets EMA, Oort FJ, De Haes HCJM. Coding patient-centred behaviour in the medical encounter. Soc Sci Med. 2005;61(3):661–671. https://doi.org/10.1016/j.socscimed.2004.12.006
  22. Brouwers M, Rasenberg E, van Weel C, Laan R, van Weel-Baumgarten E. Assessing patient-centred communication in teaching: A systematic review of instruments. Med Educ. 2017;51(11):1103–1117. https://doi.org/10.1111/medu.13375
  23. De Silva D. Helping measure person centred care: A review of evidence about commonly used approaches and tools used to help measure person-centred care [homepage on the Internet]. c2014 [cited 2015 Mar 13]. Available from: http://www.health.org.uk/publications/helping-measure-person-centred-care/#
  24. Edvardsson D, Innes A. Measuring person-centered care: A critical comparative review of published tools. Gerontologist. 2010;50(6):834–846. https://doi.org/10.1093/geront/gnq047
  25. Wilberforce M, Challis D, Davies L, Kelly MP, Roberts C, Loynes N. Person-centredness in the care of older adults: A systematic review of questionnaire-based scales and their measurement properties. BMC Geriatr. 2016;16(1):63. https://doi.org/10.1186/s12877-016-0229-y
  26. Zill JM, Christalle E, Müller E, Härter M, Dirmaier J, Scholl I. Measurement of physician-patient communication – A systematic review. PLoS One. 2014;9(12):e112637. https://doi.org/10.1371/journal.pone.0112637
  27. Gärtner FR, Bomhof-Roordink H, Smith IP, Scholl I, Stiggelbout AM, Pieterse AH. The quality of instruments to assess the process of shared decision making: A systematic review. PLoS One. 2018;13(2):e0191747. https://doi.org/10.1371/journal.pone.0191747
  28. Al-Jabr H, Twigg MJ, Scott S, Desborough JA. Patient feedback questionnaires to enhance consultation skills of healthcare professionals: A systematic review. Patient Educ Couns. 2018;101(9):1538–1548. https://doi.org/10.1016/j.pec.2018.03.016
  29. Sustersic M, Gauchet A, Kernou A, et al. A scale assessing doctor-patient communication in a context of acute conditions based on a systematic review. PLoS One. 2018;13(2):1–16. https://doi.org/10.1371/journal.pone.0192306
  30. Mokkink LB, Terwee CB, Patrick DL, et al. The COSMIN checklist for assessing the methodological quality of studies on measurement properties of health status measurement instruments: An international Delphi study. Qual Life Res. 2010;19(4):539–549. https://doi.org/10.1007/s11136-010-9606-8
  31. Louw JM, Hugo JF. Learning person-centred consultation skills in clinical medicine: A single blind randomised controlled trial. S Afr Fam Pract. In press 2020.
  32. Bieber C, Müller KG, Nicolai J, Hartmann M, Eich W. How does your doctor talk with you? Preliminary validation of a brief patient self-report questionnaire on the quality of physician-patient interaction. J Clin Psychol Med Settings. 2010;17(2):125–136. https://doi.org/10.1007/s10880-010-9189-0
  33. Chesser A, Reyes J, Woods NK, Williams K, Kraft R. Reliability in patient-centered observations of family physicians. Fam Med. 2013;45(6):428–432.
  34. Reinders ME, Blankenstein AH, Knol DL, de Vet HCW, van Marwijk HWJ. Validity aspects of the patient feedback questionnaire on consultation skills (PFC), a promising learning instrument in medical education. Patient Educ Couns. 2009;76(2):202–206. https://doi.org/10.1016/j.pec.2009.02.003
  35. Greco M, Cavanagh M, Brownlea A, McGovern J. The doctor’s interpersonal skills questionnaire (DISQ): A validated instrument for use in GP training. Educ Gen Pract. 1999;10:256–264.
  36. Martin LR, DiMatteo MR, Lepper HS. Facilitation of patient involvement in care: Development and validation of a scale. Behav Med 2001;27(3):111–120. https://doi.org/10.1080/08964280109595777
  37. Scholl I, Kriston L, Dirmaier J, Härter M. Comparing the nine-item shared decision-making questionnaire to the OPTION scale – An attempt to establish convergent validity. Health Expect. 2012;18(1):137–150. https://doi.org/10.1111/hex.12022
  38. Van Tulder M, Furlan A, Bombardier C, Bouter L, Editorial Board of the Cochrane Collaboration Back Review Group. Updated method guidelines for systematic reviews in the cochrane collaboration back review group. Spine (Phila Pa 1976). 2003;28(12):1290–1299. https://doi.org/10.1097/01.BRS.0000065484.95996.AF
  39. Schellingerhout JM, Verhagen AP, Heymans MW, Koes BW, de Vet HC, Terwee CB. Measurement properties of disease-specific questionnaires in patients with neck pain: A systematic review. Qual Life Res. 2012;21(4):659–670. https://doi.org/10.1007/s11136-011-9965-9
  40. Archer E, Bezuidenhout J, Kidd M, Van Heerden BB. Making use of an existing questionnaire to measure patient-centred attitudes in undergraduate medical students: A case study. Afr J Health Prof Educ. 2014;6(2):150. https://doi.org/10.7196/ajhpe.351
  41. Saloojee GM, Rosenbaum PR, Westaway MS, Stewart AV. Development of a measure of family-centred care for resource-poor South African settings: The experience of using a modified version of the MPOC-20. Child Care Health Dev. 2009;35(1):23–32. https://doi.org/10.1111/j.1365-2214.2008.00914.x
  42. Louw JM, Marcus TS, Hugo JF. A capability approach analysis of student perspectives on a medical consultation QI process. Afr J Health Prof Educ. In press 2020.
