
Estimating prevalence of common chronic morbidities in Australia

Stephanie A Knox, Christopher M Harrison, Helena C Britt and Joan V Henderson
Med J Aust 2008; 189 (2): 66-70. doi: 10.5694/j.1326-5377.2008.tb01918.x
Published online: 21 July 2008

Abstract

Objectives: To estimate prevalence of selected diagnosed chronic diseases among patients attending general practice, in the general practice patient population, and in the Australian population, and to compare population estimates with those of the National Health Survey (NHS).

Design, setting and participants: In late 2005, 305 general practitioners each provided data for about 30 consecutive patients (total, 9156) as part of the BEACH (Bettering the Evaluation And Care of Health) program, a continuous national study of general practice activity. GPs used their knowledge of the patient, patient self-report, and medical records as sources.

Main outcome measures: Crude prevalence of each listed condition currently under management among surveyed patients, and adjusted prevalence for the general practice patient population, and the national population.

Results: 39.6% of respondents had none of the listed conditions diagnosed; 30.0% had a cardiovascular problem (uncomplicated hypertension, 17.6%; ischaemic heart disease, 9.5%); 24.8% had a psychological problem (depression, 14.2%; anxiety, 10.7%); 22.8% had arthritis, mostly osteoarthritis (20.0%); 10.7% had asthma; and 8.3% had diabetes, mostly type 2 (7.2%). Adjustment to the population attending general practice resulted in lower estimates for cardiovascular disease, arthritis and diabetes but had little effect on prevalence of asthma and psychological problems. After adjusting for non-attenders, about one in five people in the population had a cardiovascular problem, a similar proportion had a psychological problem, 14.8% had arthritis, and asthma, hyperlipidaemia and gastro-oesophageal reflux disease each affected about 10%. Estimates were similar to NHS results for any arthritis, asthma, and malignant neoplasms; higher for any cardiovascular problem; far higher for specific cardiovascular diseases, cerebrovascular disease and hyperlipidaemia; and almost twice the NHS estimate for psychological problems (particularly depression and anxiety). Estimates for type 1 diabetes aligned with NHS results, but were far higher for “all diabetes” and type 2 diabetes.

Conclusions: This study offers an alternative, perhaps more accurate, approach to measurement of disease prevalence than the NHS approach, which relies on respondent self-report alone. It provides valid prevalence estimates with the help of GPs at a fraction of the cost of the NHS. This study could be repeated annually to augment other data sources and better define existing health needs in the population.

Reliable estimates of population disease prevalence provide a marker of the health of the community, and assist planning of health services and health promotion. Many countries rely on prevalence estimates derived from patient self-report in national health surveys.1-3 In Australia, the National Health Survey (NHS), currently conducted by the Australian Bureau of Statistics every 3 years, provides these estimates using samples of about 25 000 people.1

A growing body of literature raises concerns about the reliability and validity of self-reported data,4-11 particularly the reliability of respondent recall,7 and poor respondent understanding and labelling of conditions.6,9,10,12 Our previous study suggests that “diagnoses recalled later by patients . . . are at best only a rough approximation of the diagnoses recorded by the doctor”.13

Reliability of respondent recall may vary depending on the disease in question.11,14 When compared with medical records, patient recall has been shown to be good for diabetes,10-12,14 but certain cardiovascular diseases are under-reported,10-12 and rheumatoid arthritis is over-reported.8,11,12

Efforts to estimate disease prevalence from medical records (often used as the “gold standard” when measuring the accuracy of patient self-report4,5,8,10,15-17) suggest that problems remain, including incomplete records,11,16 inaccurate records,12,18,19 the need to obtain patient consent,17 and lack of patient disclosure to their doctors.11 Further, this method works better in a capitation or list system, where each patient is registered with a general practitioner.

Cost of data collection also needs to be considered. Suggested advantages of patient self-report include lower financial investment20 and organisational requirement21 than clinical assessment. For example, it has been argued that telephone interviews, even with a sensitivity of 59% for hypercholesterolaemia, are an inexpensive and time-efficient way to collect prevalence data.15 Others suggest that a combination of self-report and medical record search may be a better way to estimate prevalence.8,13,22 Using a qualified medical practitioner to record morbidity in conjunction with patient self-report might also provide a more accurate classification of patients’ health problems than self-report alone.22

This study aimed to estimate prevalence of selected diagnosed chronic diseases among patients attending general practice, in the general practice patient population and in the Australian population, and also to compare population estimates with the results of the NHS.

Methods

We used the patient’s GP as an expert interviewer to conduct the survey, utilising his or her knowledge of the patient, the patient’s response to the questions, and (where available) the patient’s medical record.

In substudies of BEACH, the GP records information additional to BEACH encounter data, in discussion with the patient. The full substudy methodology is reported elsewhere.23,24 In this substudy, 305 GPs each recorded information for about 30 patients during the periods 12 July – 19 August and 25 October – 28 November 2005.

Survey questions

Questions were brief, to reduce the response burden on GPs and patients. The GP was asked, “Does this patient have any of the following conditions which require ongoing management?” A series of conditions was listed with tick-box options24 (Box 1).

The conditions listed included those determined by the Australian Government as National Health Priority Areas (NHPAs),25 such as cardiovascular disease, with more specific conditions (eg, ischaemic heart disease [IHD]) selected on the basis of chronicity26 and management frequency in Australian general practice.23 Chronic obstructive airways disease (COAD) was added because of its frequent confusion with asthma, particularly in older people.27 While hyperlipidaemia is classified as a disease of the endocrine and metabolic system in the International Classification of Primary Care26 (the classification used for morbidity managed in the BEACH program), and therefore could not be listed under cardiovascular problems, it is a recognised risk factor and was therefore included. Gastro-oesophageal reflux disease (GORD) was added because it is one of the 10 most frequently managed problems in general practice.23 Injuries, although an NHPA, are generally acute in nature and were therefore omitted.

Data analysis
Crude prevalence estimates

Crude prevalence estimates were calculated as the number of persons with the morbidity as a proportion of the total sample. These estimates can be interpreted as the prevalence among patients found in GP waiting rooms at any time.

The sample of patients was a two-stage cluster sample,28 with a sample of GPs from a randomised list as the primary sampling unit, and a quota of 30 patients from each GP. When estimating from two-stage cluster samples, the variance needs to be adjusted to account for the correlation between observations within clusters (intracluster correlation); this was achieved using procedures in SAS version 9.1.3 (SAS Institute, Cary, NC, USA) that calculate the intracluster correlation and adjust the confidence intervals accordingly.
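For readers wanting to reproduce this kind of adjustment, the sketch below (written in Python rather than the SAS procedures used in the study, with simulated data and hypothetical column names such as gp_id) estimates a crude prevalence and a 95% CI whose variance is calculated between GP clusters, one standard way of allowing for intracluster correlation in a two-stage cluster sample. It is an illustration only, not the authors’ code.

```python
# Illustrative only: cluster-adjusted 95% CI for a crude prevalence,
# treating GPs as the primary sampling units (about 30 patients each).
import numpy as np
import pandas as pd

def cluster_adjusted_prevalence(df, condition_col, cluster_col="gp_id"):
    """Crude prevalence with a 95% CI whose variance is estimated between
    clusters, which allows for correlation of patients within each GP."""
    n = len(df)
    p = df[condition_col].mean()                       # crude prevalence = cases / n
    g = df.groupby(cluster_col)[condition_col].agg(["sum", "count"])
    k = len(g)                                         # number of GP clusters
    resid = g["sum"] - p * g["count"]                  # cluster case counts vs expected
    var_p = k / (k - 1) * np.sum(resid ** 2) / n ** 2  # between-cluster variance
    se = np.sqrt(var_p)
    return p, (p - 1.96 * se, p + 1.96 * se)

# Simulated example: 305 GPs x 30 patients, with a GP-level effect that
# induces intracluster correlation (all numbers hypothetical).
rng = np.random.default_rng(0)
gp_effect = rng.normal(0, 0.5, 305)
rows = [{"gp_id": g, "asthma": rng.random() < 1 / (1 + np.exp(2.1 - gp_effect[g]))}
        for g in range(305) for _ in range(30)]
p, ci = cluster_adjusted_prevalence(pd.DataFrame(rows), "asthma")
print(f"prevalence {p:.3f}, 95% CI {ci[0]:.3f} to {ci[1]:.3f}")
```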

Results

A total of 9156 patients were surveyed, and every GP provided responses for at least some patients. GPs indicated that 3237 (35%) of the 9156 patients surveyed had “none of the above” conditions. For 429 of the 9156 (5%), GPs failed to tick any condition, including “none of the above”; these patients were similar to those with none of the listed conditions (ie, they were younger than average and had non-chronic problems managed at the encounter) and were included in the denominator. Forty-two of them had a listed condition managed at the encounter and were counted as having that condition; the other 387 were counted as having “none of the above”.

The final study sample was generally older than the population who attended a GP at least once in the year 2005–2006 (Box 2).

Prevalence
Crude prevalence estimates

Prevalence of selected conditions is shown in Box 3. Of the 9156 patients sampled, 30.0% had a diagnosed cardiovascular problem, the most common being uncomplicated hypertension (17.6%), followed by IHD (9.5%). About 25% had a current psychological problem (14.2% depression and 10.7% anxiety). About one in five had arthritis (22.8%), mostly osteoarthritis (20.0%). Nearly 11% had asthma, and 8.3% had diabetes, mostly type 2 (7.2%).

Estimates for general practice patient population

Crude rates were adjusted to provide prevalence estimates for the general practice patient population. These were generally lower than the crude sample rates (Box 3). In particular, cardiovascular disease, arthritis and diabetes, which are related to older age, were significantly less prevalent after adjustment. The estimated prevalence of asthma and of psychological problems was largely unaffected by adjustment.

Comparison with the NHS

Our prevalence estimates for the national population were compared with those from the NHS.1 As confidence intervals were unavailable for the NHS estimates, we assumed that if the NHS estimate did not fall within the 95% CI for our national rate, the two results were different. Our national prevalence estimates were similar to NHS estimates for presence of any arthritis, asthma, and malignant neoplasms. However, they were higher for any cardiovascular problem, and far higher for specific cardiovascular problems such as hypertension, IHD, congestive heart failure, cerebrovascular disease and hyperlipidaemia. Our national estimate was almost twice the NHS estimate for psychological problems (19.4% compared with 10.7%), and for depression and anxiety specifically. Our estimate for type 1 diabetes (insulin-dependent) was very close to the NHS estimate, but our estimates for “all diabetes” and type 2 diabetes were both higher than the NHS estimates.

Our estimate for chronic back pain (currently under management) was less than half the NHS back pain estimate of 15.2%. Arthritis prevalence estimates were not significantly different when arthritis was considered in total, but our osteoarthritis estimate was far higher than the NHS estimate, and our rheumatoid arthritis estimate was about one-quarter of the NHS figure. No comparative results were available from the NHS for GORD, COAD, insomnia and “other psychological problems”.

Discussion

Our results suggest that three in 10 patients presenting to a GP have a cardiovascular problem, one in four have a diagnosed psychological problem, and a similar proportion have arthritis. Our crude prevalence estimates provide a measure of the underlying health needs of patients attending general practice, distinct from the demand for health care measured by general practice morbidity management rates. However, not surprisingly, the most prevalent problems among the surveyed patients broadly reflected the most common chronic problems managed in general practice.23

The population prevalence estimates for GORD, COAD, insomnia, asthma severity levels and chronic back pain under ongoing management provide new knowledge, as the NHS does not measure these morbidities.

Our population prevalence estimate for “any psychological problem” was almost double that of the NHS, but was similar to the prevalence found in the 1997 National Survey of Mental Health and Wellbeing of Adults.29 That survey estimated, using structured interviews and diagnostic tools, that 17.7% of the population had experienced a psychological problem in the previous 12 months, far closer to our estimate about 10 years later. Our estimated prevalence of diabetes was significantly higher than the NHS estimate, but was similar to that found in two other major studies — the Australian Diabetes, Obesity and Lifestyle Study and the South Australian Monitoring and Surveillance System. These found diabetes in 7.4% (aged 25 years and over) and 6.7% (aged 15 years and over) of the adult population, respectively.30,31

Many differences between our estimates and the NHS findings could be explained by inclusion criteria. For example, “back pain” in the NHS included undifferentiated (ie, symptomatic) pain, while we included only diagnosed chronic back pain. Another source of difference is in patient recall and the use of lay terms in describing conditions in the NHS.6,7,9,10,12 For example, confusion in the lay use of the terms “arthritis” and “rheumatism” may explain the differences between our estimates and those of the NHS, especially as the overall estimates for “any arthritis” are similar.8,11,12

The prevalence of GORD was almost identical to the prevalence of asthma, yet GORD has not been given equal attention, and is not an NHPA. Although GORD has a low mortality rate, it has a significant impact on quality of life.32 Perhaps it is time to consider its addition to the NHPAs. Our study did not include obesity among the conditions listed. Obesity has since been added to the NHPAs and will be included in future surveys.

This analysis provided prevalence estimates among the population currently attending general practice by adjusting crude rates according to the age–sex distribution of those who attended a primary care practitioner at least once in the 12-month period. This was effectively an adjustment for frequency of GP or primary care visits by age and sex. These estimates therefore depend on how well the population of primary care patients has been enumerated by the Medicare administrative data.
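The re-weighting described above amounts to direct age–sex standardisation of the sample rates to the GP-attending population. A minimal sketch of that idea is given below; the column names (age_group, sex, share) and the toy numbers are assumptions for illustration, not the study’s actual strata or Medicare-derived weights.

```python
# Minimal sketch of direct age-sex standardisation (illustrative data only).
import pandas as pd

def standardise_to_attenders(sample: pd.DataFrame,
                             attender_shares: pd.DataFrame,
                             condition_col: str) -> float:
    """Average the stratum-specific sample rates, weighting each age-sex
    stratum by its share of the population that attended general practice
    at least once in the year."""
    rates = sample.groupby(["age_group", "sex"])[condition_col].mean()
    w = attender_shares.set_index(["age_group", "sex"])["share"]
    w = w / w.sum()
    return float((rates * w).dropna().sum())

# Toy usage with two age groups only (numbers are hypothetical).
sample = pd.DataFrame({
    "age_group": ["<65", "<65", "65+", "65+"],
    "sex": ["F", "M", "F", "M"],
    "arthritis": [0, 0, 1, 1],
})
attender_shares = pd.DataFrame({
    "age_group": ["<65", "<65", "65+", "65+"],
    "sex": ["F", "M", "F", "M"],
    "share": [0.42, 0.38, 0.11, 0.09],
})
print(standardise_to_attenders(sample, attender_shares, "arthritis"))  # 0.2
```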

The adjustment for visit frequency was averaged across conditions, and our method was more likely to sample frequent attenders of all ages. If patients with a particular condition attend more frequently than average for their age and sex, this may have led to overestimation for that condition. For example, our previous research found that patients with depression self-report visiting more frequently than average.33 Recent research suggests that sampling general practice patients’ visits for chronic disorders such as diabetes,34,35 hyperlipidaemia and hypertension, which have structured GP visiting patterns, provides reliable estimates, while chronic disorders with less regular management could be underestimated.36

In extrapolating to the general practice patient population, we included all patients attending any primary care medical practitioner (including non-vocationally registered GPs), on the assumption that patients attending these GPs do not differ from those attending vocationally registered GPs. Our estimates for the national population also assume that all patients diagnosed with one of the listed conditions visited a GP or primary care practitioner at least once in a 12-month period. The remaining 12% of the population (those who did not attend) were assumed not to have been diagnosed with any of the listed conditions. This assumption may not hold for conditions such as asthma, where the condition may be well controlled and not require regular GP attendance.9
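In effect, then, the national figures in Box 3 are the attender estimates scaled down by the attending fraction of the population (roughly 88%, given the “remaining 12%” above). A rough illustrative check against Box 3:

```latex
% Rough check only; not the exact published derivation.
P_{\text{national}} \approx P_{\text{GP attenders}} \times 0.88,
\qquad \text{e.g. asthma: } 10.6\% \times 0.88 \approx 9.3\%
```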

As in most studies, these estimates are for recognised conditions only, as no systematic screening was performed to uncover previously unrecognised conditions. It was left to the GPs’ discretion to select the clinical criteria for inclusion. As some diseases are vastly underdiagnosed, more of the sampled patients may have had one or more of these diseases than were identified (eg, for every 12 people with diagnosed diabetes, there are probably three with undiagnosed diabetes37).

Despite these limitations, our study is likely to provide more reliable prevalence estimates than the NHS, which has been the benchmark to date. Further, it provides these estimates at a fraction of the cost of the NHS, as the cost was marginal to that of the total BEACH national program. Our method has the benefit of the input of a medical practitioner, which probably leads to greater accuracy than self-report alone. There is no reason that this study could not be repeated annually as part of the BEACH program and therefore provide valuable estimates of trends in morbidity prevalence to augment other data sources and better define existing health needs in the Australian population.

Box 3. Prevalence of selected conditions in the survey sample, the population attending general practice and the Australian population (with 95% CIs)

Diagnosed morbidity | Crude rate* (n = 9156) | Adjusted to general practice patient population† | Adjusted to national estimate‡ | NHS1
None of the listed conditions | 39.6% (37.6%–41.6%) | 46.9% (44.9%–48.9%) | 53.2% | na
Any cardiovascular | 30.0% (28.1%–31.7%) | 22.4% (21.0%–23.9%) | 19.7% (18.4%–21.0%) | 18.0%
Combined hypertension | 23.3% (21.8%–24.9%) | 17.6% (16.4%–18.8%) | 15.5% (14.4%–16.6%) | 10.7%
Uncomplicated hypertension | 17.6% (16.3%–18.9%) | 13.4% (12.4%–14.5%) | 11.8% (10.9%–12.7%) | na
Complicated hypertension | 5.7% (5.0%–6.4%) | 4.2% (3.6%–4.8%) | 3.7% (3.2%–4.2%) | na
Ischaemic heart disease | 9.5% (8.5%–10.5%) | 6.4% (5.7%–7.1%) | 5.7% (5.0%–6.3%) | 1.9%§
Cerebrovascular disease | 3.7% (3.0%–4.5%) | 2.4% (1.9%–2.9%) | 2.1% (1.7%–2.6%) | 0.5%
Congestive heart failure | 3.2% (2.7%–3.7%) | 2.0% (1.7%–2.3%) | 1.8% (1.5%–2.1%) | 1.4%¶
Peripheral vascular disease | 2.0% (1.5%–2.5%) | 1.3% (1.0%–1.6%) | 1.2% (0.9%–1.5%) | 1.0%
Any psychological | 24.8% (23.2%–26.3%) | 22.1% (20.5%–23.6%) | 19.4% (18.1%–20.8%) | 10.7%
Depression | 14.2% (13.0%–15.4%) | 12.9% (11.7%–14.1%) | 11.3% (10.3%–12.4%) | 5.3%
Anxiety | 10.7% (9.6%–11.8%) | 9.5% (8.5%–10.6%) | 8.4% (7.4%–9.3%) | 4.9%
Insomnia | 5.5% (4.6%–6.4%) | 4.8% (3.9%–5.7%) | 4.2% (3.4%–5.0%) | na
Other psychological problem | 4.1% (3.5%–4.7%) | 4.0% (3.4%–4.5%) | 3.5% (3.0%–4.0%) | na
Any arthritis | 22.8% (21.1%–24.5%) | 16.8% (15.5%–18.2%) | 14.8% (13.6%–16.0%) | 15.3%
Osteoarthritis | 20.0% (18.3%–21.6%) | 14.3% (13.1%–15.6%) | 12.6% (11.5%–13.7%) | 7.9%
Rheumatoid arthritis | 1.0% (0.8%–1.2%) | 0.7% (0.6%–0.9%) | 0.7% (0.5%–0.8%) | 2.5%
Asthma + COAD | 14.4% (13.3%–15.5%) | 12.8% (11.7%–13.8%) | 11.2% (10.3%–12.2%) | na
Asthma | 10.7% (9.8%–11.6%) | 10.6% (9.6%–11.5%) | 9.3% (8.5%–10.2%) | 10.2%
Asthma, mild | 6.3% (5.6%–7.0%) | 6.5% (5.8%–7.2%) | 5.7% (5.1%–6.3%) | na
Asthma, moderate | 3.7% (3.2%–4.2%) | 3.5% (3.0%–4.0%) | 3.1% (2.6%–3.5%) | na
Asthma, severe | 0.7% (0.5%–0.9%) | 0.6% (0.4%–0.8%) | 0.5% (0.4%–0.7%) | na
COAD | 3.6% (3.1%–4.2%) | 2.6% (2.2%–3.0%) | 2.3% (1.9%–2.6%) | na
Hyperlipidaemia | 15.9% (14.7%–17.2%) | 12.7% (11.6%–13.7%) | 11.2% (10.2%–12.1%) | 6.8%
GORD | 13.1% (11.9%–14.4%) | 10.4% (9.3%–11.5%) | 9.2% (8.2%–10.1%) | na
Chronic back pain | 10.1% (9.0%–11.1%) | 8.4% (7.4%–9.3%) | 7.4% (6.5%–8.2%) | 15.2%
Diabetes (all) | 8.3% (7.5%–9.0%) | 6.6% (6.0%–7.3%) | 5.8% (5.3%–6.4%) | 3.6%
Diabetes, type 1 | 0.6% (0.4%–0.8%) | 0.6% (0.4%–0.8%) | 0.5% (0.3%–0.7%) | 0.4%
Diabetes, type 2 | 7.2% (6.5%–7.9%) | 5.7% (5.1%–6.3%) | 5.0% (4.5%–5.5%) | 3.0%
Malignant neoplasms | 3.1% (2.6%–3.6%) | 2.3% (1.9%–2.7%) | 2.0% (1.7%–2.3%) | 1.7%

NHS = National Health Survey, 2004–2005. na = not available. COAD = chronic obstructive airways disease. GORD = gastro-oesophageal reflux disease. * Equates to estimated prevalence among patients in the general practice waiting room. † Estimated prevalence among patients who visited a GP at least once in a year. ‡ Estimated prevalence among the Australian population. § Angina + other ischaemic heart disease. ¶ Oedema + heart failure.

  • Stephanie A Knox
  • Christopher M Harrison
  • Helena C Britt
  • Joan V Henderson

  • Family Medicine Research Centre, School of Public Health, University of Sydney, Sydney, NSW.


Correspondence: helenab@med.usyd.edu.au

Acknowledgements: 

We thank the GPs who participated in the substudy, and the Australian Government Department of Health and Ageing for supplying Medicare claims data used for adjustments. During the data collection period of this substudy, the BEACH program was funded by the National Prescribing Service, AstraZeneca, Roche Products, Janssen–Cilag, Merck Sharp and Dohme, Pfizer Australia, the Office of the Australian Safety and Compensation Council (Australian Government Department of Employment and Workplace Relations) and the Australian Government Department of Veterans’ Affairs.

Competing interests:

The funding organisations had no role in the study design, data collection, analysis and interpretation, or the writing and publication of this report.

  • 1. Australian Bureau of Statistics. National Health Survey: summary of results, 2004–05. Canberra: ABS, 2006. (ABS Cat. No. 4364.0.)
  • 2. Adams PF, Dey AN, Vickerie JL. Summary health statistics for the US population: National Health Interview Survey, 2005. Vital Health Stat 2007; 10 (233): 1-104.
  • 3. Rickards L, Fox K, Roberts C, et al. Living in Britain: results from the 2002 General Household Survey. London: HMSO, 2004.
  • 4. Moore JR Jr. Accuracy of a health interview survey in measuring chronic illness prevalence. Health Serv Res 1975; 10: 162-167.
  • 5. Harlow SD, Linet MS. Agreement between questionnaire data and medical records. The evidence for accuracy of recall. Am J Epidemiol 1989; 129: 233-248.
  • 6. The Italian Longitudinal Study on Aging Working Group. Prevalence of chronic diseases in older Italians: comparing self-reported and clinical diagnoses. Int J Epidemiol 1997; 26: 995-1002.
  • 7. West SL, Savitz DA, Koch G, et al. Recall accuracy for prescription medications: self-report compared with database information. Am J Epidemiol 1995; 142: 1103-1112.
  • 8. Linet MS, Harlow SD, McLaughlin JK, McCaffrey LD. A comparison of interview data and medical records for previous medical conditions and surgery. J Clin Epidemiol 1989; 42: 1207-1213.
  • 9. Mohangoo AD, van der Linden MW, Schellevis FG, Raat H. Prevalence estimates of asthma or COPD from a health interview survey and from general practitioner registration: what’s the difference? Eur J Public Health 2006; 16: 101-105.
  • 10. Merkin SS, Cavanaugh K, Longenecker JC, et al. Agreement of self-reported comorbid conditions with medical and physician reports varied by disease among end-stage renal disease patients. J Clin Epidemiol 2007; 60: 634-642.
  • 11. Kehoe R, Wu SY, Leske MC, Chylack LT Jr. Comparing self-reported and physician-reported medical history. Am J Epidemiol 1994; 139: 813-818.
  • 12. Kriegsman DM, Penninx BW, van Eijk JT, et al. Self-reports and general practitioner information on the presence of chronic diseases in community dwelling elderly. A study on the accuracy of patients’ self-reports and on determinants of inaccuracy. J Clin Epidemiol 1996; 49: 1407-1417.
  • 13. Britt H, Harris M, Driver B, et al. Reasons for encounter and diagnosed health problems: convergence between doctors and patients. Fam Pract 1992; 9: 191-194.
  • 14. Midthjell K, Holmen J, Bjorndal A, Lund-Larsen G. Is questionnaire information valid in the study of a chronic disease such as diabetes? The Nord-Trondelag diabetes study. J Epidemiol Community Health 1992; 46: 537-542.
  • 15. Martin LM, Leff M, Calonge N, et al. Validation of self-reported chronic conditions and health services in a managed care population. Am J Prev Med 2000; 18: 215-218.
  • 16. Zhu K, McKnight B, Stergachis A, et al. Comparison of self-report data and medical records data: results from a case-control study on prostate cancer. Int J Epidemiol 1999; 28: 409-417.
  • 17. Karlson EW, Lee IM, Cook NR, et al. Comparison of self-reported diagnosis of connective tissue disease with medical records in female health professionals: the Women’s Health Cohort Study. Am J Epidemiol 1999; 150: 652-660.
  • 18. Luck J, Peabody JW, Dresselhaus TR, et al. How well does chart abstraction measure quality? A prospective comparison of standardized patients with the medical record. Am J Med 2000; 108: 642-649.
  • 19. Peabody JW, Luck J, Glassman P, et al. Comparison of vignettes, standardized patients, and chart abstraction: a prospective validation study of 3 methods for measuring quality. JAMA 2000; 283: 1715-1722.
  • 20. Katz JN, Chang LC, Sangha O, et al. Can comorbidity be measured by questionnaire rather than medical record review? Med Care 1996; 34: 73-84.
  • 21. Sherbourne CD, Meredith LS. Quality of self-report data: a comparison of older and younger chronically ill patients. J Gerontol 1992; 47 Suppl: S204-S211.
  • 22. Driver B, Britt H, O’Toole B, et al. How representative are patients in general practice morbidity surveys? Fam Pract 1991; 8: 261-268.
  • 23. Britt H, Miller GC, Knox S, et al. General practice activity in Australia 2003–04. Canberra: Australian Institute of Health and Welfare, 2004.
  • 24. Britt H, Miller GC, Henderson J, Bayram C. Patient-based substudies from BEACH: abstracts and research tools 1999–2006. Canberra: Australian Institute of Health and Welfare, 2007.
  • 25. Australian Institute of Health and Welfare, and Commonwealth Department of Health and Family Services. First report on National Health Priority Areas 1996. Canberra: AIHW and DHFS, 1997. (AIHW Cat. No. PHE 1.)
  • 26. Classification Committee of the World Organization of Family Doctors (WICC). ICPC-2: International Classification of Primary Care. 2nd ed. Oxford: Oxford University Press, 1998.
  • 27. Australian Centre for Asthma Monitoring. Asthma in Australia 2005. Canberra: Australian Institute of Health and Welfare, 2005.
  • 28. Carlin JB, Hocking J. Design of cross-sectional surveys using cluster sampling: an overview with Australian case studies. Aust N Z J Public Health 1999; 23: 546-551.
  • 29. Australian Bureau of Statistics. Mental health and wellbeing: profile of adults, Australia, 1997. Canberra: ABS, 1998. (ABS Cat. No. 4326.0.)
  • 30. Dunstan DW, Zimmet PZ, Welborn TA, et al. The rising prevalence of diabetes and impaired glucose tolerance: the Australian Diabetes, Obesity and Lifestyle Study. Diabetes Care 2002; 25: 829-834.
  • 31. Chittleborough CR, Grant JF, Phillips PJ, Taylor AW. The increasing prevalence of diabetes in South Australia: the relationship with population ageing and obesity. Public Health 2007; 121: 92-99.
  • 32. Irvine EJ. Quality of life assessment in gastro-oesophageal reflux disease. Gut 2004; 53 Suppl 4: iv35-iv39.
  • 33. Knox SA, Britt H. The contribution of demographic and morbidity factors to self-reported visit frequency of patients: a cross-sectional study of general practice patients in Australia. BMC Fam Pract 2004; 5: 17.
  • 34. Commonwealth Department of Health and Aged Care, and Australian Institute of Health and Welfare. National Health Priority Areas report: diabetes mellitus 1998. Canberra: DHAC and AIHW, 1999. (AIHW Cat. No. PHE 10.)
  • 35. Bonney MA, Harris M, Burns J, Davies GP. Diabetes information management systems: general practitioner and population reach. Aust Fam Physician 2000; 29: 1100-1103.
  • 36. Hoogenveen R, Westert G, Dijkgraaf M, et al. Disease prevalence estimations based on contact registrations in general practice. Stat Med 2002; 21: 2271-2285.
  • 37. Australian Institute of Health and Welfare. Australia’s health 2006: the tenth biennial health report of the Australian Institute of Health and Welfare. Canberra: AIHW, 2006.
