Design, participants and setting: Secondary analysis of data from a random sample of 1336 Australian GPs who participated in Bettering the Evaluation and Care of Health, a national continuous cross-sectional survey of general practice activity, between November 2003 and March 2005. The prescribing behaviour of participants who used the advertising software was compared with that of participants who did not, for seven pharmaceutical products advertised continually throughout the study period.
Results: GP age, practice location, accreditation status, patient bulk-billing status and hours worked were significantly associated (P < 0.05) with use of advertising software. We found no significant differences, either before or after adjustment for these confounders, in the prescribing rate of Lipitor (adjusted odds ratio [AOR], 0.90; P = 0.26); Micardis (AOR, 0.98; P = 0.91); Mobic (AOR, 1.02; P = 0.89); Norvasc (AOR, 1.02; P = 0.91); Natrilix (AOR, 0.80; P = 0.32); or Zanidip (AOR, 0.88; P = 0.47). GPs using advertising software prescribed Nexium significantly less often than those not using advertising software (AOR, 0.78; P = 0.02). When all advertised products were combined and compared with products that were not advertised, no difference in the overall prescribing behaviour was demonstrated (AOR, 0.96; P = 0.42).
Over recent decades, a number of factors have been shown to influence the prescribing behaviour of general practitioners. These include guideline reminders and educational interventions,1-3 scientific journal articles,4 detailing visits from pharmaceutical company representatives (which may include promotional materials and product samples),5-7 attitudes of peers and “opinion leaders” or authority figures,8,9 prescribing behaviour of specialists or hospital physicians,10,11 patient expectation,12-14 advertising in medical journals and periodicals,11,15-17 and industry sponsorship of education and gifts ranging from meals to conference travel expenses to research funding.5,18,19 While a great deal of literature describes the effects of advertising and other methods of promotion,20-22 doctors generally feel that they are immune to the effects of these influences.4,5,8,21
Interested stakeholders are keen to know which method of promotion “works best”, either to exploit it or to curtail it where possible, depending on their perspective. Such stakeholders include: general practice educational bodies advocating best practice, and those promoting Quality Use of Medicines (QUM); government groups interested in judicious prescribing both for QUM and for reasons of economy; and the pharmaceutical industry looking to maximise profits.
In the early 1990s, the first (and currently, only) clinical software system with embedded advertising (referred to hereafter as “the advertising software”) was released to medical practitioners in Australia. The vendors used an advertising revenue strategy to offset the cost of the product, and sent a full working copy to all GPs.23 During the period of this study (1 November 2003 to 28 March 2005), the types of advertisements embedded in the software included full-screen images and “strip messages”, with or without animation. The “pop-up” full-screen advertisements appeared when any document was printed (this function has since been removed). The strip messages cycled through the program’s screens during the course of each work session, at the opening of each patient record, when new data were added to a record, and when prescriptions or pathology test orders were prepared. The strip advertisements were also displayed when the software’s clinical support tools were accessed. The software developers provided quarterly updates, and advertisements could change with each new version. The advertisements cycled for a month within each version, allowing for three different sets of advertisements to be shown within the quarter. An advertisement could be repeated in all three sets, and in multiple cycles.
At the commencement of this study, the price of primary full-screen advertisements was $7380 for 1 month ($19 557 for 3 months) and for the minor strip advertisements was $4768 for 1 month ($12 675 for 3 months).24 While most advertisements were for pharmaceutical products, advertising “space” had also been purchased by medical indemnity insurers, private health insurers, pathology services, Divisions of General Practice, employment networks, the Australian Government Department of Health and Ageing (DoHA), and other non-profit organisations such as the National Heart Foundation Australia, the National Prescribing Service, and Médecins sans Frontières.
A 2005 review of the advertising software25 reported that 95% of pharmaceutical advertisements appeared to be non-compliant with the Medicines Australia Code of Conduct26 through one or more of the following: missing information; illegible generic names; unsubstantiated claims; missing Pharmaceutical Benefits Scheme (PBS) listing information; or breach of the Therapeutic Goods Act 1989 (Cwlth) provisions on direct-to-consumer advertising of pharmaceutical products.25
Our data were drawn from the national Bettering the Evaluation and Care of Health (BEACH) program. The BEACH methods have been published in detail elsewhere,27 but relevant features are summarised here. BEACH is a paper-based, continuous cross-sectional survey of general practice activity. Each year about 1000 GPs from a national rolling random sample (drawn by the DoHA) participate in BEACH. The sample used in our analysis was representative of the GP population in Australia.27 GPs provide demographic and encounter information for 100 consecutive, consenting, unidentified patients. They also provide demographic information about themselves and their practices on a GP-profile questionnaire. The foci for this study were questions related to the GPs’ individual computer use for clinical purposes. Each GP was asked: “To what extent are computers used by you at work?”, with numbered response options of: “not at all”; “test ordering”; “prescribing”; “medical records”; “Internet”; and “email”. They were also asked what prescribing and medical record software they used. For this analysis, the elements investigated were the prescriptions recorded by the 1336 GPs participating in BEACH between 1 November 2003 and 28 March 2005, and the software programs these GPs used for clinical purposes. Medications prescribed were coded at the product level according to an in-house system known as CAPS, and were classified at the generic level according to the Anatomical Therapeutic Chemical (ATC) classification.28
GPs were assigned to one of two groups: those exposed to advertising (users of the advertising software for prescribing and/or test ordering and/or medical records, with or without email and/or Internet); and those not exposed to advertising (GPs who used other software, did not use a computer for clinical purposes or did not use a computer at all).
Although the date of encounter is one of the elements collected in BEACH, and we had a release date for each software update, we could not be certain that updates had been installed immediately when received and so were unable to reliably align dates of encounter with advertisements supposedly being shown on those dates. However, there were seven pharmaceutical products that had been advertised continuously throughout each month and in each version of the advertising software for the duration of the study period: Lipitor (atorvastatin [Pfizer Australia, Sydney, NSW]); Micardis (telmisartan [Boehringer Ingelheim, Sydney, NSW]); Mobic (meloxicam [Boehringer Ingelheim, Sydney, NSW]); Nexium (esomeprazole [AstraZeneca, Sydney, NSW]); Norvasc (amlodipine besylate [Pfizer Australia, Sydney, NSW]); Natrilix (indapamide hemihydrate [Servier Laboratories (Aust), Melbourne, Vic]); and Zanidip (lercanidipine hydrochloride [Solvay Pharmaceuticals, Sydney, NSW]). Nexium had been on the market for 13 months, and all other brands for a minimum of 18 months before the study’s commencement.
The sample of GPs was a simple random sample, so we used conventional simple random sample methods for GP-based comparisons. The sample of encounters was a cluster-based sample, so we adjusted the 95% confidence intervals and P values reported for the single-stage clustered study design using procedures in SAS, version 8.2 (SAS Institute, Cary, NC, USA). A priori power estimations for two-sample comparisons of proportions were performed with Stata, version 8.0 (StataCorp, College Station, Tex, USA).
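The a priori power estimation for a two-sample comparison of proportions can be sketched with the usual normal approximation. The study used Stata's built-in routines; the function below is a generic illustration, and the group sizes and proportions passed to it are assumptions for demonstration, not the study's actual inputs.

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function via erf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def two_proportion_power(p1: float, p2: float, n1: int, n2: int) -> float:
    """Approximate power of a two-sided z-test comparing two proportions
    at alpha = 0.05, using the standard normal approximation."""
    z_alpha = 1.96  # Phi^-1(0.975)
    # standard error under H0 (pooled) and under H1 (unpooled)
    p_bar = (p1 * n1 + p2 * n2) / (n1 + n2)
    se_null = math.sqrt(p_bar * (1 - p_bar) * (1 / n1 + 1 / n2))
    se_alt = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = (abs(p1 - p2) - z_alpha * se_null) / se_alt
    return norm_cdf(z)

# Illustrative only: ~1600 prescriptions per group and a 5-percentage-point
# difference around a 50% baseline give power close to the 0.81 floor
# reported for most of the samples in this study.
print(round(two_proportion_power(0.50, 0.55, 1600, 1600), 2))  # → 0.81
```

Larger samples or larger true differences raise the power; this is the sense in which the smallest drug-class sample (low-ceiling diuretics) risked a type II error.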
We made univariate comparisons of the characteristics (listed in Box 1) of the GPs in each group, eliminated those highly correlated with others, and used simple logistic regression to identify those associated (P < 0.10) with use of advertising software for clinical purposes. We used stepwise procedures32 in logistic regression analysis to identify characteristics independently related to advertising software use for clinical purposes (P < 0.05). The prescribing outcomes were categorised as advertised brands or non-advertised products. Logistic regression was used to analyse the categorical outcomes, after adjusting for the potential confounding variables. Results are expressed in terms of odds ratios with unexposed GPs as the reference group (odds ratio [OR], 1). Prescriptions for each of the seven advertised products as a proportion of prescriptions for all products in the same ATC class were compared between the GP groups (eg, the proportion of HMG CoA [3-hydroxy-3-methylglutaryl coenzyme-A] reductase inhibitor prescriptions that were for Lipitor). In a few cases, a product under investigation and another product from the same ATC class or group had been prescribed for the same problem at the encounter. These cases were removed from the analysis as they were no longer mutually exclusive. Where two strengths of the same product were prescribed (eg, Lipitor 20 mg and Lipitor 40 mg), these were counted as a single prescription for the product. The final step was to determine if combining the data from the seven comparisons would detect a different overall effect. We grouped the seven brands together and compared the total number of prescribing decisions for advertised medications as a proportion of all prescribing decisions in the combined ATC classes.
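The odds-ratio comparison described above can be illustrated on a 2×2 table (advertised brand versus other products in the same ATC class, by exposure group). The counts below are invented for illustration, and the confidence interval uses the standard Woolf log-based method for an unadjusted OR, not the study's cluster-adjusted logistic model.

```python
import math

def odds_ratio(a: int, b: int, c: int, d: int):
    """Unadjusted odds ratio and 95% CI (Woolf log method) from a 2x2 table:
        a = exposed GPs' prescriptions for the advertised brand
        b = exposed GPs' prescriptions for other brands in the ATC class
        c, d = the same counts for unexposed GPs (the reference group, OR = 1)."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lo, hi

# Hypothetical counts: 400 advertised-brand vs 600 other prescriptions among
# exposed GPs; 250 vs 350 among unexposed GPs.
or_, lo, hi = odds_ratio(400, 600, 250, 350)
```

With these invented counts the OR is close to 1 and its interval spans 1, mirroring the null pattern seen for six of the seven products in this study.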
Of the 1336 GPs who participated during the study period, 79 did not provide responses about their use of computers, and 35 did not report which software they had; these were excluded from the analyses. Of the 1222 remaining GPs, 773 (63.3%) reported using the advertising software and provided information about 77 300 encounters involving 63 335 prescribed medications. The 449 (36.7%) who did not use the advertising software provided information about 44 900 encounters involving 37 895 prescribed medications. The GP and practice characteristics were tested for association with use of advertising software, and those included in the final model are shown in Box 1. Five GP characteristics — GP age, patient bulk-billing status, practice location, practice accreditation status and weekly hours worked in direct patient care — were found to be independently associated (P < 0.05) with GP use of the advertising software. GPs who were using the advertising software were significantly more likely to be aged less than 45 years (P < 0.001), to live in areas other than major cities (P = 0.02), and to work in accredited practices (P < 0.001), and significantly less likely to bulk bill all their patients (P < 0.001) and to work 31–40 hours per week in direct patient care (P = 0.03) (Box 2).
From the prescribing data, we removed 29 prescriptions in which both an advertised and a non-advertised product had been prescribed for the same problem. These are enumerated for each ATC class in the footnote to Box 3.
We found no significant differences between the two GP groups, either before or after adjustment, in the prescribing rate of Lipitor (adjusted odds ratio [AOR], 0.90; P = 0.26); Micardis (AOR, 0.98; P = 0.91); Mobic (AOR, 1.02; P = 0.89); Norvasc (AOR, 1.02; P = 0.91); Natrilix (AOR, 0.80; P = 0.32); or Zanidip (AOR, 0.88; P = 0.47). For Nexium, a significant difference emerged after adjustment between the two groups (AOR, 0.78; P = 0.02) — the GPs who were exposed to the advertising software prescribed this product less often than those not exposed. When the seven products were combined, there was no difference in the overall prescribing behaviour between the two groups either before or after adjustment (AOR, 0.96; P = 0.42) (Box 3).
We found that exposure to advertisements embedded in clinical software had one significant and selective effect on the prescribing behaviour of the GPs in this study. However, this effect was subsumed in the overall result when all seven products were grouped.
As with all observational studies, the influence of confounding factors requires consideration. We do not know, for instance, what advertising each GP was actually exposed to at the time of prescribing. We could not determine what exposure GPs had to advertising through other media, but assumed that GPs in both groups had an equal chance of exposure to advertisements through such avenues as scientific journals, periodicals, and visits from pharmaceutical representatives. We did not investigate the appropriateness of the chosen medication for the condition for which it was prescribed — our purpose was to detect any influence of the advertising once the decision to prescribe had been made. We also had no way of examining the effect, if any, on patients exposed to the advertisements, and acknowledge that patient request is a recognised influence on how GPs prescribe.12-14 It would have been interesting to compare brand choice for those medications being prescribed for the patient for the first time, rather than all medications, as a new choice must be made at that point. However, new prescriptions form a very small proportion of all prescriptions, particularly in the area of chronic disease management, so this would have resulted in too small a sample size for meaningful comparison.
For all but one sample (low-ceiling diuretics), the sample size was sufficient to detect a difference of 5% with power of 0.81 or more. Because the differences between the groups were so small (ranging from 0.2% to 4.9%), some of the samples may have had insufficient power to conclude null effects with certainty (ie, type II errors might have been incurred). However, the sample for proton-pump inhibitors/H2-receptor antagonists (3125 cases) had a calculated power of 0.85, giving greater reliability to the Nexium result. We had hypothesised that the promotion would produce greater prescribing of the advertised product, and we think it clinically significant that the result was the opposite of that hypothesised. If we have reported a difference where in fact none exists (ie, a type I error), this would further support a finding of no difference, and the conclusion that an influence of advertising through clinical software was not demonstrated.
Although this is the first study to examine the effect of advertising in clinical software, other studies have had similar results when examining the relationship between prescribing and advertising in journals. One found no relationship between the extent of advertising for a drug and the amount of prescribing by GPs.11 Another reported no correlation between market size or market share and changes in expenditure on detailing visits from pharmaceutical company representatives or on journal advertising.33 It concluded that the most likely cause of its negative results was that so much is already spent on promotion that, under the law of diminishing returns, additional advertising makes little difference to prescribing.33 As most promotional funds are spent on detailing visits by representatives, with comparatively little on media advertising,34 the extra amount spent on advertising in software may simply be too small to make a difference.
Incidental exposure of patients to advertisements is one aspect of the ethical debate about advertisement-embedded software, but exposure of GPs is the dominant one, and it echoes the issues involved in pharmaceutical advertising in medical or scientific journals.11,15-17 The assumption that this method of advertising influences prescribing behaviour is supported by the amount of advertising commissioned by pharmaceutical companies. One report used the example of the New England Journal of Medicine and the Journal of the American Medical Association, which produce multiple versions of the same issue with identical text but different pharmaceutical advertisements, depending on the geographical region and physician specialty intended for each version. Primary care physicians receive the editions with the most advertisements, and libraries receive those with the fewest.16 Such collaboration in promoting pharmaceutical products sits uneasily with best-practice ideals and creates a potential conflict of interest for the organisations publishing the journals and for their policies. Nonetheless, this advertising offsets the production cost of the journals and is a significant source of funding for some physician organisations that, in some cases, might not exist without it.16
To some extent, the same dilemma is assumed for users of advertising software: removing the advertisements would mean removing the subsidy made available through advertising revenue, and the software would then become more expensive for its purchasers. Yet despite the substantial revenue contributed by advertisements, and although some comparable software is available at a much higher price, the current price of the advertising software is in line with at least two advertisement-free clinical software packages presently available in Australia.
There are other considerations in the ethical debate over advertisement-embedded software. It could be argued that provision 3.10.11 of edition 14 (now 3.9.2 of edition 15) of Medicines Australia’s Code of Conduct26,35 is breached when advertisements are clearly aimed at a condition, or at a clinical function with which the condition is associated (eg, the only two advertisements in the cardiovascular monitor tool were for Micardis and Norvasc; the only two in the product information tool for musculoskeletal drugs were for Celebrex and Mobic). Edition 15 (provision 3.9.1) of the Code now precludes a company from placing advertisements within clinical tools.35 The pharmaceutical industry is held responsible for any breaches, but with effective industry standards and accreditation for clinical software, these regulations might be better followed and breaches better controlled.
In our study, the advertisements for Nexium had a negative effect on the GPs exposed to them. Some GPs providing feedback in a previous study stated that the advertisements were “annoying”,25 and our result may be associated with an “annoyance” factor — the strip advertisement for Nexium appeared in the pathology ordering tool continually throughout the study period, as well as in the routine display through the software’s general cycle of advertisements. While warnings and reminders can be switched off in the software, the advertisements are very difficult for the average user to eliminate. In any case, the software has achieved market dominance, so neither moral indignation nor the annoyance factor appears to have the same influence as the perceived cost saving. Computerisation is an expensive process, requiring continuing updates of hardware, software and other associated equipment. It has become almost essential, and the costs are borne by the practice. Given that the advertising software no longer has a cost advantage, practices may begin to reconsider their choice of software. However, the advertising software has “first-to-market” advantage, and “vendor lock-in” arising from a lack of standards to facilitate data transfer between systems may deter many from considering change.
While we could measure differences in prescribing behaviour for the products nominated, we could not test the effect of the advertisements placed by not-for-profit organisations. Given the cost of these advertisements, and that this mode of advertising may not effect an increase in prescriptions for the advertised product, it may not be the best use of advertising expenditure. The pharmaceutical industry may be able to absorb the cost of this questionably efficient method of promotion (one that also exposes it to criticism and potential fines for breaches of the Medicines Australia Code of Conduct) on the basis of possible marginal increases in sales (within the confidence intervals shown in this study), but organisations funded by the public purse may be less able to justify such expenditure.
Our study suggests that advertisements in clinical software may have no impact on prescribing, or may even reduce prescribing, but it does not exclude the possibility that such advertisements increase prescribing marginally but sufficiently to provide a competitive return on investment. In light of our results, we invite both the pharmaceutical industry and government organisations to publish their own evaluation data that may contradict our findings.
1 General practitioner and practice variables tested for their association with use of advertising software for clinical purposes
* Variables that showed some association (P < 0.10) with use of advertising software for clinical purposes, and were therefore included in the logistic regression analysis. † Variables that were found to be highly correlated with other variables and were therefore eliminated from the logistic regression analysis.
2 Variables independently associated with general practitioner use of advertising software for clinical purposes and included in the final model
* By Australian standard geographical classification.30
3 Distribution of prescriptions: advertised medication brands versus other brands within the same Anatomical Therapeutic Chemical (ATC) drug groups28
- 1. Zwar N, Wolk J, Gordon J, et al. Influencing antibiotic prescribing in general practice: a trial of prescriber feedback and management guidelines. Fam Pract 1999; 16: 495-500.
- 2. Mandryk JA, Mackson JM, Horn FE, et al. Measuring change in prescription drug utilization in Australia. Pharmacoepidemiol Drug Saf 2006; 15: 477-484.
- 3. Shiffman RN, Liaw Y, Brandt CA, et al. Computer-based guideline implementation systems: a systematic review of functionality and effectiveness. J Am Med Inform Assoc 1999; 6: 104-114.
- 4. Avorn J, Chen M, Hartley R. Scientific versus commercial sources of influence on the prescribing behavior of physicians. Am J Med 1982; 73: 4-8.
- 5. Wazana A. Physicians and the pharmaceutical industry: is a gift ever just a gift? JAMA 2000; 283: 373-380.
- 6. Prosser H, Walley T. Understanding why GPs see pharmaceutical representatives: a qualitative interview study. Br J Gen Pract 2003; 53: 305-311.
- 7. McGettigan P, Golden J, Fryer J, et al. Prescribers prefer people: the sources of information used by doctors for prescribing suggest that the medium is more important than the message. Br J Clin Pharmacol 2001; 51: 184-189.
- 8. Breen KJ. The medical profession and the pharmaceutical industry: when will we open our eyes? Med J Aust 2004; 180: 409-410.
- 9. Black H. Dealing in drugs. Lancet 2004; 364: 1655-1656.
- 10. Robertson J, Treloar CJ, Sprogis A, et al. The influence of specialists on prescribing by GPs. A qualitative study. Aust Fam Physician 2003; 32: 573-576.
- 11. Jones M, Greenfield S, Bradley C. A survey of the advertising of nine new drugs in the general practice literature. J Clin Pharm Ther 1999; 24: 451-460.
- 12. Little P, Dorward M, Warner G, et al. Importance of patient pressure and perceived pressure and perceived medical need for investigations, referral, and prescribing in primary care: nested observational study. BMJ 2004; 328: 444.
- 13. Prosser H, Almond S, Walley T. Influences on GPs’ decision to prescribe new drugs — the importance of who says what. Fam Pract 2003; 20: 61-68.
- 14. Cockburn J, Pit S. Prescribing behaviour in clinical practice: patients’ expectations and doctors’ perceptions of patients’ expectations — a questionnaire study. BMJ 1997; 315: 520-523.
- 15. Wilkes MS, Doblin BH, Shapiro MF. Pharmaceutical advertisements in leading medical journals: experts’ assessments. Ann Intern Med 1992; 116: 912-919.
- 16. Glassman PA, Hunter-Hayes J, Nakamura T. Pharmaceutical advertising revenue and physician organizations: how much is too much? West J Med 1999; 171: 234-238.
- 17. Smith R. Medical journals and pharmaceutical companies: uneasy bedfellows. BMJ 2003; 326: 1202-1205.
- 18. Rogers WA, Mansfield PR, Braunack-Mayer AJ, et al. The ethics of pharmaceutical industry relationships with medical students. Med J Aust 2004; 180: 411-414.
- 19. Lexchin J, Bero LA, Djulbegovic B, et al. Pharmaceutical industry sponsorship and research outcome and quality: systematic review. BMJ 2003; 326: 1167-1170.
- 20. Katz D, Mansfield P, Goodman R, et al. Psychological aspects of gifts from drug companies. JAMA 2003; 290: 2404-2405.
- 21. Dana J, Loewenstein G. A social science perspective on gifts to physicians from industry. JAMA 2003; 290: 252-255.
- 22. Roughead EE, Harvey KJ, Gilbert AL. Commercial detailing techniques used by pharmaceutical representatives to influence prescribing. Aust N Z J Med 1998; 28: 306-310.
- 23. Harvey KJ. The Pharmaceutical Benefits Scheme 2003-2004. Aust New Zealand Health Policy 2005; 2: 2.
- 24. Health Communication Network. Medical Director — product details. http://www.hcn.com.au/products/md/md_details.asp (accessed Sep 2007).
- 25. Harvey KJ, Vitry AI, Roughead E, et al. Pharmaceutical advertisements in prescribing software: an analysis. Med J Aust 2005; 183: 75-79.
- 26. Medicines Australia Inc. Medicines Australia code of conduct. 14th ed. Canberra: Medicines Australia Inc, 2003.
- 27. Britt H, Miller G, Knox S, et al. General practice activity in Australia 2004–05. Canberra: Australian Institute of Health and Welfare, 2005. http://www.aihw.gov.au/publications/index.cfm/title/10189 (accessed Sep 2007).
- 28. World Health Organization Collaborating Centre for Drug Statistics Methodology. Anatomical therapeutic chemical (ATC) classification index with Defined Daily Doses (DDDs). Oslo: WHO, Jan 2005.
- 29. Australian Government Department of Health and Ageing. Rural, remote and metropolitan areas (RRMA) classification. http://www.health.gov.au/internet/wcms/publishing.nsf/content/work-bmp-where-rrma (accessed Nov 2007).
- 30. Australian Bureau of Statistics. Australian standard geographical classification (ASGC). Canberra: ABS, 2004.
- 31. Australian Bureau of Statistics. Census of population and housing: Socio-economic indexes for areas (SEIFA), Australia. Canberra: ABS, 2001.
- 32. Armitage P, Berry G, Matthews JNS. Statistical methods in medical research. 4th ed. London: Blackwell Science, 2002.
- 33. Mackowiak JI, Gagnon JP. Effects of promotion on pharmaceutical demand. Soc Sci Med 1985; 20: 1191-1197.
- 34. Rosenthal MB, Berndt ER, Donohue JM, et al. Promotion of prescription drugs to consumers. N Engl J Med 2002; 346: 498-505.
- 35. Medicines Australia Inc. Medicines Australia code of conduct. 15th ed. Canberra: Medicines Australia Inc, 2006. http://www.medicinesaustralia.com.au/pages/images/Code%20of%20Conduct%20Final%20Version%20Edition%2015%20(with%20links).pdf (accessed Sep 2007).