Predictors of publication: characteristics of submitted manuscripts associated with acceptance at major biomedical journals

Kirby P Lee, Elizabeth A Boyd, Jayna M Holroyd-Leduc, Peter Bacchetti and Lisa A Bero
Med J Aust 2006; 184 (12): 621-626.
Published online: 19 June 2006


Objective: To identify characteristics of submitted manuscripts that are associated with acceptance for publication by major biomedical journals.

Design, setting and participants: A prospective cohort study of manuscripts reporting original research submitted to three major biomedical journals (BMJ and the Lancet [UK] and Annals of Internal Medicine [USA]) between January and April 2003 and between November 2003 and February 2004. Case reports on single patients were excluded.

Main outcome measures: Publication outcome, methodological quality, predictors of publication.

Results: Of 1107 manuscripts enrolled in the study, 68 (6%) were accepted, 777 (70%) were rejected outright, and 262 (24%) were rejected after peer review. Higher methodological quality scores were associated with an increased chance of acceptance (odds ratio [OR], 1.39 per 0.1 point increase in quality score; 95% CI, 1.16–1.67; P < 0.001), after controlling for study design and journal. In a multivariate logistic regression model, manuscripts were more likely to be published if they reported a randomised controlled trial (RCT) (OR, 2.40; 95% CI, 1.21–4.80); used descriptive or qualitative analytical methods (OR, 2.85; 95% CI, 1.51–5.37); disclosed any funding source (OR, 1.90; 95% CI, 1.01–3.60); or had a corresponding author living in the same country as that of the publishing journal (OR, 1.99; 95% CI, 1.14–3.46). There was a non-significant trend towards manuscripts with larger sample size (≥ 73) being published (OR, 2.01; 95% CI, 0.94–4.32). After adjustment for other study characteristics, having statistically significant results did not improve the chance of a study being published (OR, 0.83; 95% CI, 0.34–1.96).

Conclusions: Submitted manuscripts are more likely to be published if they have high methodological quality, RCT study design, descriptive or qualitative analytical methods and disclosure of any funding source, and if the corresponding author lives in the same country as that of the publishing journal. Larger sample size may also increase the chance of acceptance for publication.

Editors publish articles based on various factors, including originality of the research, clinical importance and usefulness of the findings, methodological quality, and readership interest of the journal.1-3 Selecting which manuscripts to publish from a large number of submissions is a difficult and complex process, and critics have argued that the editorial review process is arbitrary, slow and biased, and fails to prevent the publication of flawed studies.4

For example, it is not clear whether editors tend to publish studies with statistically significant results (positive) in preference to those with statistically non-significant or null results (negative), thereby contributing to the problem of publication bias.5-7 Publication bias raises the concern that statistically significant study results may dominate the research record and skew the results of systematic reviews and meta-analyses in favour of new treatments with positive results.8 The source of publication bias is unclear. Previous studies have concluded that authors often do not submit studies with statistically non-significant findings for publication because of a perceived lack of interest, methodological limitations,5-7,9 or the assumption that editors and reviewers are less likely to publish them.5,9,10

To our knowledge, only one other study has systematically evaluated manuscript characteristics that are associated with publication.11 Olson and colleagues assessed a prospective cohort of 745 manuscripts submitted to JAMA reporting controlled clinical trials. They found that higher quality studies and those enrolling participants in the United States were more likely to be published, but there was no difference in publication rates between studies with positive and negative results.

In our study, we identified characteristics of manuscripts submitted to three major biomedical journals across a wide range of study designs. We systematically evaluated manuscript characteristics that had been shown in other studies to be predictive of publication. We hypothesised that editors were more likely to publish studies with statistically significant results, higher methodological quality, and other characteristics found to be individually associated with publication, including study design, sample size, funding source and geographic region.2,5-7,10,11

Selection of journals

We studied publication outcomes at three major peer-reviewed biomedical journals: BMJ and the Lancet (in the United Kingdom) and Annals of Internal Medicine (in the US). We selected these three journals because they are among the top 10 journals in impact factor (range, 7.0–21.7) and immediacy index (the average number of times current articles in a specific journal are cited in the year they are published) (range, 3.0–5.8);12 have wide circulation and subscription rates throughout the world;13 represent the concerns of a general medical audience; and have highly competitive acceptance rates for original research (range, 5%–10%). In addition, each of the journals is published weekly or biweekly, ensuring a high volume of submitted and published articles. Each journal publishes clinical studies based on a variety of study designs, including randomised controlled trials (RCTs), observational studies, qualitative research and systematic reviews.

Selection of submitted manuscripts

From January 2003 to April 2003, we consecutively enrolled manuscripts reporting original research submitted to each journal. For two of the journals, enrolment was re-opened from November 2003 to February 2004 to obtain the originally planned number of accepted manuscripts (about 70). We included RCTs; non-randomised trials; systematic reviews; meta-analyses; prospective and retrospective cohort studies; and case–control, case series, cross-sectional, qualitative and ethnographic studies. We excluded case reports on a single patient. Submitted manuscripts meeting these criteria were enrolled and randomly assigned a study number.

Data abstraction of submitted manuscript characteristics

Data on manuscript characteristics were abstracted independently by two of us (K P L, E A B), who were unaware of the manuscript’s publication status. We developed a standardised data collection form to record manuscript characteristics, many of which have been examined individually by others.2,5-7,10,11 Our main predictor of publication was statistical significance of results. For studies conducting statistical analyses, primary results were classified as statistically significant (P < 0.05 or 95% CI for difference excluding 0 or 95% CI for ratio excluding 1) or not statistically significant. We also examined:

  • study design (eg, RCT, non-RCT, cohort, case–control, systematic review);

  • analytical methods used, classified as statistical/quantitative (eg, t tests, χ2 tests, analysis of variance, regressions, survival analysis, Kaplan–Meier curves, econometric analyses) or descriptive/qualitative (eg, proportions, frequencies, qualitative data or ethnography);

  • whether a hypothesis was clearly stated;

  • sample size (all study designs were included except systematic reviews that reported the number of studies but not the total number of subjects);

  • whether a description of the study subjects was provided;

  • whether the manuscript disclosed any funding source (eg, industry, private non-profit, government, no funding, or multiple sources);

  • authorship characteristics: apparent sex of the first and last authors; whether the corresponding author’s country was classified as high- or low-income based on World Bank classifications;14 and whether the corresponding author was from the same country as that of the publishing journal.
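The statistical-significance rule stated above (P < 0.05, or a 95% CI for a difference excluding 0, or a 95% CI for a ratio excluding 1) can be sketched as a small classifier. This is illustrative Python only; the function name and interface are our own, not the authors'.

```python
def is_significant(p=None, ci=None, ratio=False):
    """Classify a primary result as statistically significant using the
    study's rule: P < 0.05, or a 95% CI for a difference excluding 0,
    or a 95% CI for a ratio excluding 1."""
    if p is not None:
        return p < 0.05
    low, high = ci
    null = 1.0 if ratio else 0.0          # null value: 1 for ratios, 0 for differences
    return not (low <= null <= high)      # significant if the CI excludes the null
```

For example, an odds ratio with 95% CI 1.21–4.80 would be classified as significant, while a difference with 95% CI spanning 0 would not.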

Our primary outcome was acceptance for publication. We classified rejection as either outright (with no external peer review) or after peer review.

Analysis of submitted manuscript characteristics

Proportions of manuscripts accepted for publication were first analysed using univariate logistic regression and estimating odds ratios (ORs) to identify associations between independent variables and publication. P values were not adjusted for multiple comparisons and P < 0.05 was considered statistically significant.
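As an illustration of how each univariate odds ratio in Box 2 follows from a 2×2 table, the funding-source comparison (50 of 663 manuscripts with any disclosure accepted v 18 of 444 with no disclosure) can be worked through in a few lines of Python. This is a sketch for exposition; the authors' analyses used SAS.

```python
import math

# 2x2 table from Box 2: funding disclosure v publication outcome
a, b = 50, 663 - 50    # any disclosure: accepted, rejected
c, d = 18, 444 - 18    # no disclosure: accepted, rejected

odds_ratio = (a * d) / (b * c)

# Wald 95% CI computed on the log-odds scale
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
# odds_ratio ~ 1.93 with CI ~ 1.11-3.36, matching Box 2
```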

To control for several variables simultaneously, we carried out multivariate logistic regression analysis and calculated ORs. For our primary analysis, we compared accepted manuscripts with all rejected manuscripts. Further sensitivity analyses compared accepted manuscripts with manuscripts that were rejected outright or rejected after peer review.

The number of manuscripts enrolled was targeted to produce about 70 acceptances, which we chose so that there would be at least 10 acceptances per predictor in a multivariate model with up to seven simultaneous predictors. Data were analysed using SAS software (version 9.1, SAS Institute Inc, Cary, NC, USA).

Assessment and analysis of methodological quality of manuscripts

Accepted manuscripts were matched by journal and study design to rejected-outright manuscripts, which were selected at random if there were more rejected than accepted manuscripts in a journal-design stratum. Manuscripts rejected after peer review were not included in this analysis because the number of manuscripts in this group was inadequate for matching.

Two reviewers (K P L, J M H-L) independently assessed the methodological quality of each manuscript using a validated instrument.15 Our quality assessment instrument includes 22 items designed to measure the minimisation of systematic bias for a wide range of study designs (including RCTs, non-RCTs and observational studies), regardless of study topic. This instrument compares favourably in terms of validity and reliability with other instruments assessing the quality of RCTs,16 and performs similarly to other well accepted instruments for scoring the quality of trials included in meta-analyses.17 For systematic reviews and meta-analyses, we used a slightly modified version of the Oxman instrument,18 which is a valid, reliable instrument for assessing these types of studies.19

The two reviewers were trained to use the instruments and given detailed written instructions. One reviewer (J M H-L) was blinded to manuscript publication status (accepted or rejected). Scores ranged on a continuous scale from 0 (lowest quality) to 2 (highest quality).15,20 The average of the two reviewers’ scores was used for the analyses. If the reviewers’ scores differed by more than 1 SD, the manuscript was discussed by both reviewers until consensus was achieved, and the consensus score was used in the analyses. About 1.3% of methodological quality scores required adjudication. Inter-rater reliability of overall scores measured by intraclass correlation was good (r = 0.78).
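A minimal sketch of the score-combining procedure described above: average the paired reviewer scores and flag large disagreements for consensus discussion. We assume "more than 1 SD" refers to the SD of the between-reviewer score differences, which the text does not specify.

```python
import statistics

def combine_scores(reviewer1, reviewer2):
    """Average paired quality scores; flag manuscripts whose two scores
    differ by more than 1 SD of the score differences (assumed
    interpretation) for consensus adjudication."""
    diffs = [a - b for a, b in zip(reviewer1, reviewer2)]
    sd = statistics.stdev(diffs)                         # sample SD of disagreements
    flagged = [i for i, d in enumerate(diffs) if abs(d) > sd]
    averages = [(a + b) / 2 for a, b in zip(reviewer1, reviewer2)]
    return averages, flagged
```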

Because quality scores have limitations in accurately assessing the reduction of bias in RCTs,17 we also evaluated the specific quality components of concealment of treatment allocation and double-blinding in a sub-sample of RCTs. When inadequate, these components are associated with exaggerated effect sizes in RCTs,21,22 although this cannot be generalised to all clinical areas.23 Individual components of study design for non-RCTs were not analysed because there is no empirical evidence to suggest which components are associated with exaggerated effect sizes.

In a nested case–control analysis, we assessed methodological quality as a predictor of publication. Matched conditional logistic regression was used to model the influence of methodological quality scores on odds of acceptance, stratified by journal and study design. ORs were scaled to correspond to a 0.1 point increase in quality score, as this is an interpretable degree of difference in quality. Additional models tested interactions between quality scores and study design, as the design of the study (RCT v non-RCT and observational studies) is known to influence the quality score.20 In a separate analysis of RCTs, the individual components of concealment of random allocation and double-blinding were dichotomised (adequate or inadequate), and ORs were calculated by matched conditional logistic regression.
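Because the conditional logistic model's coefficient lives on the log-odds scale, rescaling the OR to a 0.1 point increment is a simple exponent manipulation. For instance, the reported OR of 1.39 per 0.1 point implies roughly a 27-fold odds change over a full 1-point difference in quality score (a back-of-envelope check, not a calculation reported by the authors):

```python
import math

or_per_tenth = 1.39                   # reported OR per 0.1 point of quality score
beta = math.log(or_per_tenth) / 0.1   # implied log-odds coefficient per full point
or_per_point = math.exp(beta)         # equivalently 1.39 ** 10
```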

Ethics approval

Our study was approved by the Committee on Human Research at the University of California, San Francisco.


During the study period, 1107 manuscripts meeting eligibility criteria were submitted to the three journals. Sixty-eight (6%) were accepted for publication, 777 (70%) were rejected outright and 262 (24%) were rejected after peer review (Box 1).

Characteristics of submitted manuscripts

In a univariate analysis, there were significant associations between publication and study design (RCT v all other study designs), analytical methods (descriptive/qualitative v statistical/quantitative), funding source (any disclosure v no disclosure), and corresponding author’s country of residence (same country as publishing journal v other country) (Box 2).

Sample size data were divided into quartiles, and the upper three quartiles (≥ 73 subjects) were compared with the lowest quartile (< 73 subjects).

Most of the submitted manuscripts that conducted statistical analyses (718/827 [87%]) reported statistically significant results. The proportion of accepted manuscripts reporting statistically significant results (35/42 [83%]) was slightly lower, but not significantly different from the proportion among all submitted manuscripts (Box 2).

In multivariate logistic regression analyses comparing accepted with all rejected manuscripts we included study design, analytical methods, sample size, funding source, and country of the corresponding author (total n = 969) (Box 3). Factors significantly associated with publication were having an RCT or systematic review study design, use of descriptive/qualitative analytical methods, disclosure of any funding source, and having a corresponding author residing in the same country as the publishing journal. There was a non-significant trend towards manuscripts with larger sample sizes being published. After controlling for these five variables, manuscripts with statistically significant results were no more likely to be published than those with non-significant results.

In multivariate logistic regression analyses comparing accepted manuscripts with those rejected outright, we included the same five variables (total n = 734). We also compared accepted manuscripts with those rejected after peer review (total n = 294) (Box 3). Similar findings were observed in both models, although associations were not statistically significant when comparing accepted manuscripts with those rejected after peer review. In the latter analysis, the number of observations decreased because fewer manuscripts were rejected after peer review than rejected outright. In none of the sensitivity analyses did statistical significance of study results appear to increase the chance of publication.

Methodological quality of submitted manuscripts

Of the 68 accepted manuscripts, three reporting basic research were excluded because our quality instrument was not designed to evaluate these types of studies. Two of the remaining accepted manuscripts did not contribute to the matched analysis because they were in a journal-design stratum that had no rejected manuscripts. In three strata there was one fewer rejected than accepted manuscript. Thus, our final sample for analysis consisted of 123 manuscripts (63 cases [accepted manuscripts] and 60 controls [rejected manuscripts]) distributed over 21 journal-design strata that had at least one each of accepted and rejected manuscripts. We also performed separate analyses for RCTs (n = 26; 13 cases, 13 controls), systematic reviews (n = 12; 6 cases, 6 controls), and “all other” types of study design (n = 85; 44 cases, 41 controls).
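The sample accounting in this paragraph can be verified with trivial arithmetic (a sanity-check sketch of the counts reported above):

```python
accepted = 68
cases = accepted - 3 - 2   # exclude 3 basic-research MSs and 2 with no matchable stratum
controls = cases - 3       # three strata each had one fewer rejected than accepted MS
total = cases + controls   # manuscripts entering the matched analysis

# design-specific subsets (cases, controls) reported in the text
rct, sr, other = (13, 13), (6, 6), (44, 41)
```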

Manuscripts with higher methodological quality scores were significantly more likely to be accepted for publication (OR, 1.39 per 0.1 point increase in quality score; 95% CI, 1.16–1.67; P < 0.001) (Box 4). Checking for non-linearity by adding a quadratic term for quality score did not substantially improve the model (P = 0.24). The estimated effect of quality on odds of acceptance separately for the three major study design categories is shown in Box 4. All estimates were positive, with CIs overlapping considerably, suggesting that the influence of quality score on chance of acceptance appeared to be similar by study design. Formal tests for interactions by design or journal had large P values (P = 0.43).

Among the 26 RCTs, those with adequate concealment of treatment allocation (9 accepted v 4 rejected; OR, 8.6; 95% CI, 0.91–80.9; P = 0.060) and those with double-blinding (9 accepted v 5 rejected; OR, 3.4; 95% CI, 0.69–16.7; P = 0.13) appeared more likely to be published, although neither association was statistically significant in this small sample.


Manuscripts with higher methodological quality were more likely to be published, but those reporting statistically significant results were no more likely to be published than those without, suggesting that the source of publication bias is not at the editorial level. This confirms previous findings at a single, large, general biomedical journal with a high impact factor.11 In our study, the proportion of submitted manuscripts reporting statistically significant results far outnumbered those reporting statistically non-significant results, corroborating previous findings that suggest investigators may fail to submit negative studies.5-7,9 Furthermore, in none of the sensitivity analyses (accepted v rejected outright, accepted v rejected after peer review) did statistical significance of results appear to increase the chance of publication, suggesting that studies with statistically significant results are not more likely to be published, whether or not they undergo peer review.

Studies with an RCT or systematic review study design and (possibly) larger sample size were more likely to be published than smaller studies of other designs. Such studies may be less susceptible to methodological bias.21 This is also supported by our findings that manuscripts with higher methodological quality were more likely to be published. On the other hand, editors may have a tendency to publish systematic reviews and RCTs because they are cited more often than other study designs,24 thereby positively influencing their own journal’s impact factor.

We can suggest a couple of reasons why descriptive/qualitative analytical methods were associated with higher publication rates in our study. Firstly, early examinations of new treatments are often conducted using observational studies or case series, and results from these studies may be novel and stimulate new areas of research or reassessment of current clinical practice and standards of care. Secondly, these major biomedical journals may receive higher quality descriptive/qualitative submissions, making such studies more likely to be accepted.

Manuscripts disclosing any funding source were significantly more likely to be published than those with no disclosure. At each of the three journals surveyed, authors are required to disclose the funding source, describe the role of the funding source in the research process, and declare any conflicts of interest. Such disclosure helps editors and reviewers to assess potential bias associated with funding and research findings,25-27 and previous research shows that readers’ perceptions and reactions to research reports are influenced by statements of competing interests.28,29

There appears to be an editorial bias towards accepting manuscripts whose corresponding author lives in the same country as that of the publishing journal. Other studies have found a similar association, but did not control for differences in submissions to journals by nationality,30 or compared nationality of authors and reviewers only and did not adjust for other aspects of the submitted manuscripts.31 We did not observe an association between publication and income level of the corresponding author’s country or sex of the first or last author.

Our study is strengthened by its prospective design and large sample size. We included a variety of study designs, evaluated well defined, objective manuscript characteristics, abstracted data independently while blinded to publication status, and adjusted for confounding variables. However, our study has limitations. First, our findings were based on large general medical journals and may not be generalisable to specialty journals or journals with fewer editors, fewer submissions, or lower circulation.32 Second, the types of manuscripts submitted and accepted during the chosen time period may be unique, although, by prospectively enrolling consecutive manuscripts submitted to three large, high-impact general medical journals over an 8-month period, we believe our sample was representative. Finally, although we examined characteristics of submitted manuscripts associated with publication, we did not examine the editorial decision-making process. Many factors other than manuscript characteristics, such as novelty, clinical importance and usefulness, and readership interest of the journal, clearly influence the decision to publish.1-3

1 Publication outcomes for the cohort of submitted manuscripts at the three biomedical journals during the study period

2 Association between characteristics of submitted manuscripts (MSs) and publication: univariate analysis (accepted [n = 68] v all rejected [n = 1039] MSs)

For each category, values are: total number (%*); number published (%†); odds ratio (95% CI). Categories without an odds ratio are the reference groups.

Total manuscripts: 1107 (100); 68 (6)

Journal
  Journal 1: 345 (31); 20 (6); 0.96 (0.52–1.78)
  Journal 2: 381 (34); 25 (7); 1.09 (0.61–1.96)
  Journal 3: 381 (34); 23 (6); reference

Study design
  Randomised controlled trial: 131 (12); 13 (10); reference
  Systematic review: 44 (4); 6 (14); 1.43 (0.51–4.03)
  All other types of study design: 932 (84); 49 (5); 0.50 (0.27–0.96)

Analytical methods
  Statistical or quantitative: 827 (75); 42 (5); reference
  Descriptive or qualitative: 274 (25); 25 (9); 1.88 (1.12–3.14)
  Basic research (genetic analyses)‡: 6 (0.5); 1 (17); 3.74 (0.43–32.72)

Statistical significance of results§
  Statistically significant: 718 (87); 35 (5); 0.75 (0.32–1.72)
  Not statistically significant: 109 (13); 7 (6); reference

Hypothesis clearly stated§
  Yes: 211 (25); 13 (6); 1.33 (0.68–2.61)
  No: 616 (75); 29 (5); reference

Sample size¶
  Upper three quartiles (≥ 73): 739 (75); 51 (7); 1.98 (0.96–4.08)
  Lowest quartile (< 73): 249 (25); 9 (4); reference

Described human subjects**
  Yes: 890 (85); 50 (6); 0.60 (0.32–1.11)
  No: 155 (15); 14 (9); reference

Funding source††
  Any disclosure: 663 (60); 50 (8); 1.93 (1.11–3.36)
  No disclosure: 444 (40); 18 (4); reference

Corresponding author’s country of residence‡‡
  Low income§§: 52 (5); 2 (4); 0.60 (0.14–2.50)
  High income§§: 1032 (95); 65 (6); reference
  Same country as journal: 427 (40); 37 (9); 1.98 (1.21–3.26)
  Other country: 657 (61); 30 (5); reference

Sex of first author¶¶
  Male: 679 (69); 48 (7); 1.29 (0.73–2.29)
  Female: 306 (31); 17 (6); reference

Sex of last author***
  Male: 745 (79); 53 (7); 1.63 (0.79–3.35)
  Female: 200 (21); 9 (5); reference

* Percentage of grand total. † Percentage of row category that were published. ‡ Six manuscripts conducted basic research studies with genetic analyses as their primary outcome. § Applies to statistical or quantitative studies only. Statistically significant results were defined as P < 0.05 or 95% CI for difference excluding 0 or 95% CI for ratio excluding 1. ¶ Not all manuscripts reported a sample size. ** 62 manuscripts about non-human subjects (eg, animal subjects, journal articles, medical devices) were excluded. †† Collapsed categories: industry, private non-profit, government, no funding reported or multiple sources v no funding source reported. ‡‡ 23 manuscripts not reporting the corresponding author’s country were excluded. §§ Based on World Bank classifications.14 ¶¶ 122 manuscripts were excluded because first author’s sex could not be determined. *** 162 manuscripts were excluded because last author’s sex could not be determined.

3 Association between characteristics of submitted manuscripts and publication: multivariate analysis

Values are odds ratios (95% CI) for three models: (a) accepted (n = 59) v all rejected (n = 910) manuscripts*; (b) accepted (n = 59) v rejected outright (n = 675) manuscripts†; (c) accepted (n = 59) v rejected after peer review (n = 235) manuscripts‡.

Study design
  RCT v all other types of study design: (a) 2.40 (1.21–4.80); (b) 2.40 (1.18–5.00); (c) 2.13 (1.00–4.55)
  RCT v systematic review: (a) 0.99 (0.12–8.33); (b) 1.01 (0.11–9.09); (c) 0.92 (0.08–10.00)

Analytical methods
  Descriptive/qualitative v statistical/quantitative: (a) 2.85 (1.51–5.37); (b) 2.72 (1.42–5.21); (c) 3.10 (1.49–6.44)

Sample size
  ≥ 73 v < 73: (a) 2.01 (0.94–4.32); (b) 2.39 (1.11–5.15); (c) 1.28 (0.53–3.09)

Funding source
  Any disclosure v no disclosure: (a) 1.90 (1.01–3.60); (b) 2.17 (1.13–4.15); (c) 1.37 (0.68–2.74)

Country of corresponding author
  Same as country of journal v other country: (a) 1.99 (1.14–3.46); (b) 2.40 (1.37–4.20); (c) 1.31 (0.71–2.42)

Statistical significance of results§
  Statistically significant v not statistically significant: (a) 0.83 (0.34–1.96); (b) 0.85 (0.35–2.13); (c) 0.74 (0.29–1.92)

RCT = randomised controlled trial. * n = 763 for this model with descriptive/qualitative manuscripts excluded. † n = 570 for this model with descriptive/qualitative manuscripts excluded. ‡ n = 232 for this model with descriptive/qualitative manuscripts excluded. § Based on separate multivariate logistic regression models that substitute statistical significance for analytical methods as a predictor, because statistical significance only applies to statistical/quantitative designs.

4 Association between methodological quality score and publication: aggregate results* and results stratified by study design

Values are odds ratios (95% CI) per 0.1 point increase in quality score, with P value where reported.

Aggregate: 1.39 (1.16–1.67); P < 0.001

Study design
  Randomised controlled trial: 1.46 (1.00–2.13)
  Systematic review: 1.62 (0.88–2.99)
  All other: 1.25 (1.01–1.53)

* Based on conditional logistic regression models of acceptance, stratified by journal and study design.

Received 9 January 2006, accepted 10 April 2006

  • Kirby P Lee1
  • Elizabeth A Boyd1
  • Jayna M Holroyd-Leduc2
  • Peter Bacchetti1
  • Lisa A Bero1

  • 1 University of California, San Francisco, Calif, USA.
  • 2 University Health Network, Toronto, and University of Toronto, Toronto, ON, Canada.



We thank the many editors, peer reviewers, and authors for their participation in our study, and Kay Dickersin for advice on study design. Our research was supported by grant #5R01NS044500-02 from the Research Integrity Program, Office of Research Integrity/National Institutes of Health collaboration. The funding source had no role in the study design, data collection, analysis, interpretation or writing of our article.

Competing interests:

None identified.

  • 1. Iverson C, Flanagin A, Fontanarosa PB, et al. American Medical Association manual of style. A guide for authors and editors. 9th ed. Baltimore: Williams & Wilkins, 1998.
  • 2. Turcotte C, Drolet P, Girard M. Study design, originality and overall consistency influence acceptance or rejection of manuscripts submitted to the Journal. Can J Anaesth 2004; 51: 549-556.
  • 3. Stroup DF, Thacker SB, Olson CM, et al. Characteristics of meta-analyses related to acceptance for publication in a medical journal. J Clin Epidemiol 2001; 54: 655-660.
  • 4. Kassirer JP, Campion EW. Peer review. Crude and understudied, but indispensable. JAMA 1994; 272: 96-97.
  • 5. Dickersin K, Chan S, Chalmers TC, et al. Publication bias and clinical trials. Control Clin Trials 1987; 8: 343-353.
  • 6. Dickersin K, Min YI, Meinert CL. Factors influencing publication of research results. Follow-up of applications submitted to two institutional review boards. JAMA 1992; 267: 374-378.
  • 7. Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR. Publication bias in clinical research. Lancet 1991; 337: 867-872.
  • 8. Ioannidis JP. Effect of the statistical significance of results on the time to completion and publication of randomized efficacy trials. JAMA 1998; 279: 281-286.
  • 9. Weber EJ, Callaham ML, Wears RL, et al. Unpublished research from a medical specialty meeting: why investigators fail to publish. JAMA 1998; 280: 257-259.
  • 10. Chalmers I, Adams M, Dickersin K, et al. A cohort study of summary reports of controlled trials. JAMA 1990; 263: 1401-1405.
  • 11. Olson CM, Rennie D, Cook D, et al. Publication bias in editorial decision making. JAMA 2002; 287: 2825-2828.
  • 12. Institute for Scientific Information. Science citation index: journal citation reports. Philadelphia, Pa: ISI, 2004.
  • 13. Ulrich’s international periodicals directory. 42nd ed. New York: Bowker, 2004. Available at: (accessed Dec 2005).
  • 14. World Bank. Country data and statistics. 2005. Available at: (accessed Jun 2005).
  • 15. Cho MK, Bero LA. Instruments for assessing the quality of drug studies published in the medical literature. JAMA 1994; 272: 101-104.
  • 16. Moher D, Jadad AR, Nichol G, et al. Assessing the quality of randomized controlled trials: an annotated bibliography of scales and checklists. Control Clin Trials 1995; 16: 62-73.
  • 17. Juni P, Witschi A, Bloch R, Egger M. The hazards of scoring the quality of clinical trials for meta-analysis. JAMA 1999; 282: 1054-1060.
  • 18. Barnes DE, Bero LA. Why review articles on the health effects of passive smoking reach different conclusions. JAMA 1998; 279: 1566-1570.
  • 19. Oxman AD, Guyatt GH. Validation of an index of the quality of review articles. J Clin Epidemiol 1991; 44: 1271-1278.
  • 20. Lee KP, Schotland M, Bacchetti P, Bero LA. Association of journal quality indicators with methodological quality of clinical research articles. JAMA 2002; 287: 2805-2808.
  • 21. Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA 1995; 273: 408-412.
  • 22. Moher D, Pham B, Jones A, et al. Does quality of reports of randomised trials affect estimates of intervention efficacy reported in meta-analyses? Lancet 1998; 352: 609-613.
  • 23. Balk EM, Bonis PA, Moskowitz H, et al. Correlation of quality measures with estimates of treatment effect in meta-analyses of randomized controlled trials. JAMA 2002; 287: 2973-2982.
  • 24. Patsopoulos NA, Analatos AA, Ioannidis JP. Relative citation impact of various study designs in the health sciences. JAMA 2005; 293: 2362-2366.
  • 25. Als-Nielsen B, Chen W, Gluud C, Kjaergard LL. Association of funding and conclusions in randomized drug trials: a reflection of treatment effect or adverse events? JAMA 2003; 290: 921-928.
  • 26. Lexchin J, Bero LA, Djulbegovic B, Clark O. Pharmaceutical industry sponsorship and research outcome and quality: systematic review. BMJ 2003; 326: 1167-1170.
  • 27. Bekelman JE, Li Y, Gross CP. Scope and impact of financial conflicts of interest in biomedical research: a systematic review. JAMA 2003; 289: 454-465.
  • 28. Schroter S, Morris J, Chaudhry S, et al. Does the type of competing interest statement affect readers’ perceptions of the credibility of research? Randomised trial. BMJ 2004; 328: 742-743.
  • 29. Chaudhry S, Schroter S, Smith R, Morris J. Does declaration of competing interests affect readers’ perceptions? A randomised trial. BMJ 2002; 325: 1391-1392.
  • 30. Ernst E, Keinbacher T. Chauvinism. Nature 1991; 352: 560.
  • 31. Link AM. US and non-US submissions: an analysis of reviewer bias. JAMA 1998; 280: 246-247.
  • 32. Ioannidis JP, Cappelleri JC, Sacks HS, Lau J. The relationship between study design, results, and reporting of randomized clinical trials of HIV infection. Control Clin Trials 1997; 18: 431-444.

