
Improving communication when seeking informed consent: a randomised controlled study of a computer-based method for providing information to prospective clinical trial participants

Asuntha S Karunaratne, Stanley G Korenman, Samantha L Thomas, Paul S Myles and Paul A Komesaroff
Med J Aust 2010; 192 (7): 388-392.
Published online: 5 April 2010

Abstract

Objective: To assess the efficacy, with respect to participant understanding of information, of a computer-based approach to communication about complex, technical issues that commonly arise when seeking informed consent for clinical research trials.

Design, setting and participants: An open, randomised controlled study of 60 patients with diabetes mellitus, aged 27–70 years, recruited between August 2006 and October 2007 from the Department of Diabetes and Endocrinology at the Alfred Hospital and Baker IDI Heart and Diabetes Institute, Melbourne.

Intervention: Participants were asked to read information about a mock study via a computer-based presentation (n = 30) or a conventional paper-based information statement (n = 30). The computer-based presentation contained visual aids, including diagrams, video, hyperlinks and quiz pages.

Main outcome measures: Understanding of information as assessed by quantitative and qualitative means.

Results: Assessment scores used to measure level of understanding were significantly higher in the group that completed the computer-based task than the group that completed the paper-based task (82% v 73%; P = 0.005). More participants in the group that completed the computer-based task expressed interest in taking part in the mock study (23 v 17 participants; P = 0.01). Most participants from both groups preferred the idea of a computer-based presentation to the paper-based statement (21 in the computer-based task group, 18 in the paper-based task group).

Conclusions: A computer-based method of providing information may help overcome existing deficiencies in communication about clinical research, and may reduce costs and improve efficiency in recruiting participants for clinical trials.

It is well recognised that there are significant deficiencies in the current process for obtaining informed consent for participation in clinical research.1-7 Various attempts have been made to enhance participant understanding, with limited success.1,8-12 Some strategies have resulted in improvements in information transmission to and retention by not only study participants but also patients in general.9,10,13-19 However, complex methods of information provision, such as multimedia methods, may cause confusion and thereby reduce understanding.20,21

Studies of the provision of information in clinical research have usually adopted the perspectives of researchers and regulatory bodies, rather than those of participants.22 There have been a few exceptions;23-26 for example, a comparison of an information statement developed by participants with an information statement developed by researchers showed that the former was associated with greater participant understanding.26 We therefore sought to assess the efficacy of a computer-based method of communicating information to prospective clinical trial participants, with the aim of improving participant understanding.

Methods
Study design

We used an open, randomised controlled study to compare two reading tasks — an interactive, user-friendly, computer-based presentation (the intervention) and a conventional paper-based statement (the control) — describing a mock study entitled Right Heart Catheterisation to Monitor Heart Attack Complications in Diabetic Patients.

The mock study included three procedures: right heart catheterisation, a euglycaemic clamp test to measure insulin sensitivity, and blood sampling for genetic testing. This design purposefully involved presentation of a range of complex issues, including technical information and potential risks of procedures.

The computer-based task was provided via a computer at the Alfred Hospital, and both the computer-based and paper-based tasks were provided in the same room to standardise the study environment.

Ethics approval was obtained from the Alfred Human Research Ethics Committee.

Inclusion and exclusion criteria

Patients with diabetes aged 18–70 years with self-defined English literacy and competence in computer use were eligible to participate. Participation was restricted to those who could travel to the hospital to take part in the study.

Recruitment

Participants were recruited between August 2006 and October 2007 from the Department of Diabetes and Endocrinology at the Alfred Hospital and the Baker IDI Heart and Diabetes Institute in Melbourne. The study was explained over the phone, and an information statement was mailed to all who were interested in participating. The mock nature of the study was explained over the phone, and also in person on the day of participation (before obtaining written consent and providing the reading task).

To allow for interim analysis, one of us (A S K) randomly assigned participants in blocks of 10 to the intervention and control arms of the study. Thus, of every 10 participants, five were assigned to the intervention group and five to the control group. An overview of the study protocol is shown in Box 1.
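The allocation scheme described above (equal numbers per arm within every block of 10) can be sketched as follows; the function and arm labels are our own illustration under those stated constraints, not the study's actual procedure:

```python
import random

def block_randomise(n_participants, block_size=10, arms=("computer", "paper")):
    """Balanced block randomisation: within each block of `block_size`,
    participants are split equally between the arms, in random order
    (here, 5 per arm in every block of 10, as in the study)."""
    per_arm = block_size // len(arms)
    assignments = []
    while len(assignments) < n_participants:
        block = list(arms) * per_arm   # e.g. 5 "computer" + 5 "paper"
        random.shuffle(block)          # random order within the block
        assignments.extend(block)
    return assignments[:n_participants]
```

Blocking in this way guarantees the interim analyses see balanced groups: after any complete block, exactly half of the participants so far are in each arm.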

Paper-based statement

The paper-based statement was based on a typical information statement approved by the Alfred Human Research Ethics Committee. It was five pages (2044 words) in length, was attached to separate forms for consent and revocation of consent, and contained sections entitled: Introduction; Study sponsor; Purpose and background; Procedures; Possible benefits; Possible risks; Other treatments; Privacy, confidentiality and disclosure of information; New information arising during project; Results; Further information; Other issues or complaints; Participation is voluntary; Reimbursement for your costs; Ethical guidelines; Injury; Compensation; and Termination of the study. The reading level of the text was grade 10 — a reading age of 14–16 years.

Computer-based presentation

The computer-based presentation was an interactive program that was displayed on a 17-inch computer monitor. Its structure and substantive content were identical to those of the paper-based statement. However, the computer-based presentation also included interactive, explanatory features: text boxes linked to keywords to provide further explanation, hyperlinks to diagrammatic and pictorial presentations of procedures, and a video clip of a live right heart catheterisation procedure. The information was presented in concise sections, separated at intervals by quizzes (Box 2). Participants could move forward and backward through each page or skip to a specific page by clicking on a side panel. The text size was larger than in the paper-based statement, and the text was presented on a coloured background.

A questionnaire was included to establish whether inclusion and exclusion criteria were appropriately satisfied. Each quiz that occurred at the end of a section was presented in a simple multiple-choice or true/false format, which participants answered by clicking on the options provided. This enabled participants to self-assess their understanding of the information in the presentation. If they answered correctly, they were transferred to the next section; if they answered incorrectly, they were encouraged to re-read the previous sections.
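The navigation rule for the quizzes (advance on a correct answer, otherwise revisit the section) can be simulated in a few lines; the function and data shapes below are hypothetical, not the actual software used in the study:

```python
def quiz_flow(sections, responses):
    """Simulate the quiz navigation rule: a section is re-presented until
    its quiz is answered correctly. `responses` yields True (correct) or
    False (incorrect) in the order the quizzes are attempted. Returns the
    sequence of sections shown to the participant."""
    shown = []
    responses = iter(responses)
    for section in sections:
        while True:
            shown.append(section)
            if next(responses):   # correct answer: advance
                break
            # incorrect answer: the section is presented again
    return shown
```

For example, a participant who fails the first quiz once sees the first section twice before moving on: `quiz_flow(["A", "B"], [False, True, True])` yields `["A", "A", "B"]`.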

Sample size

On the basis of an earlier study,8 the sample size was initially estimated as 100 for a power of 0.8, with an expected difference in means of 5% and a putative population standard deviation of 8.8. However, when the variance was verified on the basis of the first 10 participants (SD, 7.6), the sample size was re-estimated as 60 to attain the same difference in means and power.
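Under the usual normal-approximation formula for comparing two means (our reconstruction; the paper does not state which formula was used), the initial figures above (difference in means 5, SD 8.8, power 0.8, with an assumed two-sided alpha of 0.05) do reproduce an estimate of about 100 participants in total:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, power=0.80, alpha=0.05):
    """Approximate per-group sample size for detecting a difference in
    means `delta` between two groups with common standard deviation `sd`:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * (sd / delta)^2."""
    z = NormalDist().inv_cdf  # standard normal quantile function
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sd / delta) ** 2)
```

With delta = 5 and SD = 8.8 this gives 49 per group (98 in total, consistent with the initial estimate of 100); the exact basis of the re-estimate to 60 with SD = 7.6 is not reported in the paper.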

Measures and end points

Participants’ levels of understanding were assessed using quantitative and qualitative measures. The key quantitative measure of understanding, and primary end point, was the percentage of correct answers to questions in a paper-based questionnaire (in multiple-choice, true/false and yes/no format) that was administered to both groups at completion of the reading task. This consisted of 43 questions: eight demographic questions (personal characteristics such as age and sex), 26 assessment questions, one question about hypothetical participation and eight distracter questions.

Assessment questions covered the purpose of the mock study, the sponsor, study procedures, benefits, risks, privacy and confidentiality, contact details, voluntariness, injury and reimbursement; answers to all these questions could be found verbatim in the computer-based presentation and the paper-based statement. The difference in understanding between the two groups was determined by comparing overall assessment scores as well as performance on individual assessment questions.

The time taken to complete each reading task was also measured. Unlike other studies that have compared a standard form with other procedures,12,27,28 we imposed no time restrictions, in order to accommodate different reading abilities.

Participants’ levels of understanding and appraisal of methods were measured qualitatively via a semi-structured one-to-one interview, consisting of 11 common questions and three questions that were specific to the particular group. The interviews, which allowed individuals to comment on their experience, were audiotaped and transcribed in de-identified form.

Analysis

Scores from the questionnaire were analysed by an independent samples t test. Qualitative data were analysed thematically by grouping quotes into categories (themes) of experiences and preferences.
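For reference, the independent samples t test used to compare the questionnaire scores has the following standard pooled-variance form; the sketch below uses illustrative data, not the study's:

```python
from math import sqrt
from statistics import mean, stdev

def independent_t(a, b):
    """Pooled-variance two-sample t statistic for comparing the mean
    scores of two independent groups (e.g. the two reading-task arms)."""
    na, nb = len(a), len(b)
    # pooled standard deviation across both groups
    pooled = sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                  / (na + nb - 2))
    return (mean(a) - mean(b)) / (pooled * sqrt(1 / na + 1 / nb))
```

The resulting statistic is referred to a t distribution with na + nb − 2 degrees of freedom to obtain the two-tailed P value reported in the Results.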

Results
Characteristics of participants

Data were gathered from 60 participants (30 assigned to the computer-based task, and 30 assigned to the paper-based task), whose characteristics are summarised in Box 3. Most were male (42/60), most used computers on a daily basis (50/60), and 40% were working full-time (24/60). The mean age was 52.0 years (range, 27–70 years). In the group that completed the paper-based task, 21 of 30 had completed tertiary education, compared with 15 of 30 in the group that completed the computer-based task. All participants were fluent in spoken English, and all but one were fluent in written English.

The average time taken to read the information for those who completed the computer-based task was 6 minutes longer than that for the group that completed the paper-based task (mean [range], 19 [9–33] minutes v 13 [6–32] minutes; P < 0.001).

Quantitative assessment of participant understanding

The percentage of correct answers used to assess understanding was based on 23 of the 26 assessment questions (three assessment questions that were answered correctly by > 90% of participants were excluded from the analysis). The average percentage of correct answers for the group that completed the computer-based task was significantly higher than that of the group that completed the paper-based task (82% v 73%; P = 0.005 [two-tailed t test]).

Frequencies of percentages of correct answers for both groups are shown in Box 4. These scores were clearly different in the two groups — scores of participants who completed the computer-based task were skewed towards the higher percentages, and scores of participants who completed the paper-based task were lower and more spread out. The group that completed the computer-based task had a highest individual score of 96% (two participants with 22 correct answers) and a lowest of 65% (two participants with 15 correct answers), compared with 91% (five participants with 21 correct answers) and 17% (one participant with four correct answers), respectively, in the group that completed the paper-based task. The participant who was not fluent in written English achieved the highest assessment score in the group that completed the paper-based task. Multivariate analysis showed no correlation between percentages of correct answers and age or sex.

The group that completed the computer-based task answered several questions significantly better than the group that completed the paper-based task, including questions about procedures (28% v 19% correct answers; P = 0.005), the site of catheter insertion (30% v 25%; P = 0.02), privacy (27% v 16%; P = 0.006), method of disclosing study results (29% v 21%; P = 0.01), contact persons (27% v 18%; P = 0.02) and compensation in the event of possible injury (30% v 24%; P = 0.04). Two questions, regarding possible side effects to the heart and lungs, were answered equally well by both groups. The group that completed the paper-based task performed slightly better in four questions, which were about study sponsor, a minor procedure, benefits of participation and withdrawal from the study.

Participants’ interest in taking part in the mock study

Significantly more participants in the group that completed the computer-based task expressed interest in taking part in the mock study, if it were real (23 v 17 participants; P = 0.01).

Qualitative assessment of participant understanding and appraisal of methods

The interviews undertaken after participants completed the reading tasks were 3–18 minutes in duration. A selection of representative quotes from the interviews is presented in Box 5. The computer-based task received positive feedback, especially about its presentation and special features. Participants stated that these characteristics made them feel better informed and better able to make a decision about being involved in the study. Participants in the group that completed the paper-based task stated that they found the information difficult to understand, and made negative comments about the length and presentation of the document. After the nature of the other form of information delivery tested in this study was explained to participants verbally, most participants from both groups stated that they believed they would find a computer-based presentation easier to understand (21 in the computer-based task group, 18 in the paper-based task group).

Discussion

This study has shown that a computer-based approach to communicating information about clinical research to prospective trial participants can improve the consent process, compared with a conventional approach using a paper-based statement. Four key findings related to this improvement.

First, we found a major difference between the groups in the understanding of the more complex details of the study. Other studies examining the efficacy of multimedia consent processes in enhancing understanding of clinical trials have shown limited success.29-31

Second, participants who completed the computer-based task felt more comfortable in making a decision about being involved in the study. Building trust between researchers and participants is a cornerstone of any study, and feeling informed about a study may help improve these relationships and allay participant anxiety about taking part in a study.23 The quizzes within the computer-based task allowed participants to self-assess their understanding and affirm, for themselves, their eligibility to participate in it. This could not only save researchers valuable time in explaining study procedures11 but also facilitate an appropriate emphasis on issues of special concern to individual participants23 without rushing the consent process. This feature may benefit mass screening programs, where large numbers of individuals can self-assess their understanding and self-select as potential participants, in addition to being contacted by researchers to take part in a study.

Third, the overall lower assessment scores in the group that completed the paper-based task raise concerns about participants’ levels of understanding when this conventional system is used. Further, the wide range of these scores in this group suggests variability in understanding among participants enrolling in research studies with paper-based information statements.9 Participants in the group that completed the computer-based task received and understood uniform and complete information presented in an attractive manner. This is likely to be of significant advantage in multicentre trials, where a computer-based approach could be employed to uniformly and reliably communicate with participants at many locations.

Fourth, participants in the group that completed the computer-based task were more likely to indicate a willingness to participate in the mock study (if it were real). This could indicate a benefit in recruiting (and perhaps even retaining) study participants.8

The computer-based approach was associated with three possible drawbacks. First, a computer-based approach is intrinsically more expensive and time consuming to set up and administer than a paper-based approach, and it may not always be clear that the advantages will justify the additional costs. However, ethics committees could provide an online template — as with paper-based statements — from which study teams could create multimedia statements in a timely and affordable way. Templates could help ensure that the quality of multimedia statements remains high and that essential information is included. It may therefore be most appropriate to start implementing the multimedia option in large-scale, multicentre studies: these are more likely to have sufficient resources and, as the recruitment process can be more complex in such studies, a multimedia method may help simplify this process.

Second, not all individuals are computer literate. For some participants, a computer-based approach may be too complicated to understand, and some may not have access to a computer. However, most participants in both arms of our study were computer literate. Those in the group that completed the computer-based task did spend significantly more time reading the information, but they were more engaged in the process and did not mind spending the additional time.

Third, there is a theoretical risk that verbal communication between researchers and participants may decrease if researchers become reliant on computers to provide information. Researchers must remain aware that computers cannot replace the trust and understanding that comes from taking the time to talk to study participants.

Our study had some limitations. It was restricted to English-speaking, computer-literate patients with diabetes. Although the computer-based method was successful in our study, further research is necessary to assess its efficacy in other settings and participant groups. Also, we measured participants’ levels of understanding immediately after they completed the reading tasks; this may have demonstrated improvement in information recall rather than understanding,9 but it is more likely that both are improved. In addition, further research is needed to assess whether the findings apply equally to men and women.

In conclusion, we have shown that a strategy for communication which uses the interactive capacity of computers is likely to provide an effective means for overcoming key deficiencies in the conventional, paper-based system of communication about clinical research projects. As access to and familiarity with computer-based approaches to communication increase, it is likely that such methods will become part of a new standard of practice in the clinical research consent process.

1 Protocol used to compare a computer-based presentation with a paper-based statement of information relating to a mock study


* Participants were randomly assigned in blocks of 10 to complete the computer-based task or the paper-based task. No participants were lost to follow-up.

2 Outline of computer-based presentation*


* Participants could work their way through the computer-based presentation sequentially by moving forwards or backwards up to the section on termination of the study, and could also skip to a specific page by clicking on a side panel.

3 Characteristics of study participants

Characteristic                            Computer-based task (n = 30)   Paper-based task (n = 30)

Sex
  Male                                    18                             24
  Female                                  12                             6
Mean age, years (range)                   52.6 (27–67)                   51.0 (27–70)
Education
  Secondary                               15                             9
  Tertiary                                15                             21
Occupational status
  Unemployed                              3                              3
  Working full-time                       11                             13
  Working part-time                       8                              8
  Retired                                 8                              6
Fluent in written English                 30                             29
Fluent in spoken English                  30                             30
Frequency of computer use
  Daily                                   23                             27
  Less than weekly, more than monthly     3                              1
  Monthly                                 0                              1
  Less than monthly, more than yearly     3                              1
  Never                                   1                              0
Comfortable using a computer
  Very comfortable                        22                             21
  Somewhat comfortable                    5                              8
  Neither comfortable nor uncomfortable   3                              0
  Somewhat uncomfortable                  0                              0
  Very uncomfortable                      0                              1

4 Assessment scores of participants*


* Assessment scores represent percentages of correct answers to 23 assessment questions.

5 Quotes from participants regarding the methods of information presentation

General comments about the computer-based presentation

“I don’t think you could get it any easier ...”

“It’s the most clearly written piece of medical information I’ve ever seen. I could actually understand it ...”

Positive aspects of the computer-based presentation

Video

“... at least you can see beforehand what you have to go through ... sometimes when you go to have a procedure you’re not aware of what’s going on [and] you can be very fearful of it.”

Hyperlinks

“Easy to access other missing information if you needed to, if you weren’t sure about what you were reading on the screen.”

Quizzes

“I got one lot wrong and I realised I really hadn’t read that section properly. So it forced me to go back ...”

Shortcomings of the paper-based statement

Too technical

“I wouldn’t say it was easy reading ... I didn’t find it too difficult because I have got a background in anatomy and physiology ... But I think if you didn’t ... that part would have been fairly difficult to follow.”

Too long

“Yeah, well, all documents are lengthy. It reminds me of filling out those bank application forms ... pages and pages and pages.”

Presentation could be improved

“There is some scope of it missing, in terms of presentation.”

Preference for computer-based presentation

“Personally I think a visually based one would be easier for me to understand than a text based one.”

“I tend not to concentrate too much on forms. I sort of glaze over ... I use the computer all day so it’s just a much more natural thing for me.”

Received 28 April 2009, accepted 19 October 2009

  • Asuntha S Karunaratne1
  • Stanley G Korenman2
  • Samantha L Thomas1
  • Paul S Myles3,1
  • Paul A Komesaroff1,3

  • 1 Monash University, Melbourne, VIC.
  • 2 David Geffen School of Medicine, University of California, Los Angeles, Calif, United States.
  • 3 Alfred Hospital, Melbourne, VIC.


Competing interests:

None identified.

  • 1. Davis TC, Holcombe RF, Berkel HJ, et al. Informed consent for clinical trials: a comparative study of standard versus simplified forms. J Natl Cancer Inst 1998; 90: 668-674.
  • 2. Stead M, Eadie D, Gordon D, Angus K. “Hello, hello - it’s English I speak!”: a qualitative exploration of patients’ understanding of the science of clinical trials. J Med Ethics 2005; 31: 664-669.
  • 3. Dixon-Woods M, Williams SJ, Jackson CJ, et al. Soc Sci Med 2006; 62: 2742.
  • 4. Molyneux CS, Peshu N, Marsh K. Trust and informed consent: insights from community members on the Kenyan coast. Soc Sci Med 2005; 61: 1463-1473.
  • 5. Cassileth BR, Zupkis RV, Sutton-Smith K, March V. Informed consent — why are its goals imperfectly realised? N Engl J Med 1980; 302: 896-900.
  • 6. Wendler D, Rackoff JE. Informed consent and respecting the autonomy: what’s a signature got to do with it? IRB 2001; 23: 1-4.
  • 7. Cheng JD, Hitt J, Koczwara B, et al. Impact of quality of life on patient expectations regarding phase I clinical trials. J Clin Oncol 2000; 18: 421-428.
  • 8. Llewellyn-Thomas HA, Thiel EC, Sem FW, Woermke DE. Presenting clinical trial information: a comparison of methods. Patient Educ Couns 1995; 25: 97-107.
  • 9. Dunn LB, Lindamer LA, Palmer BW, et al. Improving understanding of research consent in middle-aged and elderly patients with psychotic disorders. Am J Geriatr Psychiatry 2002; 10: 142-150.
  • 10. Raich PC, Plomer KD, Coyne CA. Literacy, comprehension, and informed consent in clinical research. Cancer Invest 2001; 19: 437-445.
  • 11. Stalonas PM, Keane TM, Foy DW. Alcohol education for inpatient alcoholics: a comparison of live, videotape and written presentation modalities. Addict Behav 1979; 4: 223-229.
  • 12. Fureman I, Meyers K, McLellan T, et al. AIDS Educ Prev 1997; 9: 330-341.
  • 13. Gagliano ME. A literature review on the efficacy of video in patient education. J Med Educ 1988; 63: 785-792.
  • 14. Benson PR, Roth LH, Appelbaum PS, et al. Information disclosure, subject understanding, and informed consent in psychiatric research. Law Hum Behav 1988; 12: 455-475.
  • 15. Wirshing DA, Sergi MJ, Mintz JA. Videotape intervention to enhance the informed consent process for medical and psychiatric treatment research. Am J Psychiatry 2005; 162: 186.
  • 16. Coyne CA, Xu R, Raich P, et al. Randomized, controlled trial of an easy-to-read informed consent statement for clinical trial participation: a study of the Eastern Cooperative Oncology Group. J Clin Oncol 2003; 21: 836-842.
  • 17. Gray SH. The effect of sequence control on computer assisted learning. J Comput Based Instr 1987; 14: 54-56.
  • 18. Kinzie MB, Sullivan HJ, Berdel RL. Learner control and achievement in science computer assisted instruction. J Educ Psychol 1988; 80: 299-303.
  • 19. Watters J. Information retrieval and the virtual document. J Am Soc Inf Sci 1999; 50: 1028-1029.
  • 20. Lynch PJ, Horton S. Imprudent linking weaves a tangled Web. Computer 1997; 30: 115-117.
  • 21. Lawless KA, Kulikowich JM. Understanding hypertext navigation through cluster analysis. J Educ Comput Res 1996; 14: 385-399.
  • 22. Ancker J. Assessing patient comprehension of informed consent forms. Control Clin Trials 2004; 25: 72-74.
  • 23. Agre P, McKee K, Gargon N, Kurtz RC. Patient satisfaction with an informed consent process. Cancer Pract 1997; 5: 162-167.
  • 24. Akkad A, Jackson C, Kenyon S, et al. Informed consent for elective and emergency surgery: questionnaire study. BJOG 2004; 111: 1133-1138.
  • 25. Cox K. Informed consent and decision-making: patients’ experiences of the process of recruitment to phases I and II anti-cancer drug trials. Patient Educ Couns 2002; 46: 31-38.
  • 26. Guarino P, Elbourne D, Carpenter J, Peduzzi P. Consumer involvement in consent document development: a multicenter cluster randomized trial to assess study participants’ understanding. Clin Trials 2006; 3: 19-30.
  • 27. Calisir F, Gurel Z. Influence of text structure and prior knowledge of the learner on reading comprehension, browsing and perceived control. Comput Human Behav 2003; 19: 135-145.
  • 28. Kerrigan DD, Thevasagayam RS, Woods TO, et al. Who’s afraid of informed consent? BMJ 1993; 306: 298-300.
  • 29. Flory J, Emanuel E. Interventions to improve research participants’ understanding in informed consent for research: a systematic review. JAMA 2004; 292: 1593-1601.
  • 30. Jeste DV, Palmer BW, Golshan S, et al. Multimedia consent for research in people with schizophrenia and normal subjects: a randomized controlled trial. Schizophr Bull 2008; 35: 719-729.
  • 31. Limacher MC. What prevents women from participating in research studies? J Watch Womens Health 2007; June 28: 3.
