Pharmacy Practice (Granada)

On-line version ISSN 1886-3655. Print version ISSN 1885-642X

Pharmacy Pract (Granada) vol.18 n.2 Redondela Apr./Jun. 2020  Epub Oct 05, 2020

https://dx.doi.org/10.18549/pharmpract.2020.2.1924 

Editorial

Is my paper relevant for an international audience?

Fernando Fernandez-Llimos (orcid: 0000-0002-8529-9595)1 

1PhD, MPharm, MBA. Editor-in-chief, Pharmacy Practice. Laboratory of Pharmacology, Department of Drug Sciences, College of Pharmacy, University of Porto. Porto (Portugal). fllimos@ff.up.pt

Abstract

This is the first question one should consider before submitting a paper to an international journal. The answer is simple: if researchers or practitioners from another country can learn something from your paper that can influence a practice or a line of research they are involved in, then your paper is relevant for an international audience. Many elements can influence this cross-border transferability. One could think that a big “n”, complex statistical calculations, or a sophisticated study design makes a paper more attractive to colleagues from other countries. These elements can help, but they are not sufficient. Conversely, one could think that a study performed in a small hospital in a given country will never interest foreign colleagues. That is not necessarily correct. Let's bust some myths.

Keywords: Internationality; Global Health; Writing; Publishing; Biomedical Research; Research Design; Peer Review, Research; Quality Control; Editorial Policies; Periodicals as Topic

This is the first question one should consider before submitting a paper to an international journal. The answer is simple: if researchers or practitioners from another country can learn something from your paper that can influence a practice or a line of research they are involved in, then your paper is relevant for an international audience.

Many elements can influence this cross-border transferability. One could think that a big “n”, complex statistical calculations, or a sophisticated study design makes a paper more attractive to colleagues from other countries. These elements can help, but they are not sufficient. Conversely, one could think that a study performed in a small hospital in a given country will never interest foreign colleagues. That is not necessarily correct. Let's bust some myths.

Myth #1: Experimental studies are much more interesting than observational studies. Well, it depends on the specific experimental and observational studies being compared. Obviously, randomized controlled trials (RCTs) are the gold standard for identifying causation in clinical research and the highest level of evidence among unfiltered studies. But it is important to acknowledge that an RCT achieves these attributes only when conducted under rigorous standards that ensure a low risk of bias. These are not empty words: we can measure (in fact, we should always measure) the potential risk of bias of an RCT when designing the protocol, and then the actual risk of bias before writing the article,1 because properly reporting an RCT is as important as properly conducting it. Some RCTs do not report all the data needed to include them in a meta-analysis: results at baseline and after follow-up in both groups with the corresponding dispersion measures, numbers of participants in each group, subgroups and additional analyses, etc.2 Another common weakness of RCTs, especially in pharmacy services, is a poor description of the intervention performed. Pharmacy interventions are complex interventions, but several tools exist to improve their description.3,4,5 On the other hand, a well-conducted observational study can add much value to a research question. Newman et al. compared case-control studies to the house red, because they are “more modest and a little riskier than the other selections, but much less expensive and sometimes surprisingly good”.6

Myth #2: Systematic reviews compile the evidence about a research question, so their relevance is guaranteed. This is true only if the systematic review addresses a relevant research question and is conducted to the highest standards of quality. Some supervisors design research plans for their Ph.D. students that start with a systematic review, on the premise that a systematic review is the best way for the student to grasp the state of the art on the thesis topic. Many of these newcomers have no experience or specific training in evidence synthesis, and their knowledge of the thesis topic is still limited. In this scenario, the probability of producing a poor systematic review, poorly conducted, with a poor answer to a poorly framed research question, is very high. This might be one of the reasons why only 3% of systematic reviews are clinically sound.7 A systematic review should be the final piece of research in a doctoral program, achievable only when the student has deep knowledge of the topic, has had time to be trained in evidence synthesis, and has the skills to write a paper following, again, the reporting recommendations (see the PRISMA extensions for variants of a systematic review: http://www.prisma-statement.org/Extensions/).9 Only an expert researcher has the skills to critically evaluate the studies included in a meta-analysis.8

Myth #3: Replicating a good study is always a reasonable way of confirming the findings in our own environment or setting. A study is good not only because it was correctly conducted, but because it tried to answer a relevant research question. The first study probably found the cause of a phenomenon, or simply identified an important association. Repeating it as a ‘me-too study’ might not be relevant at all. If environmental conditions are similar to those of the first study, and nothing predicts different results, replicating the study and obtaining similar results could be a waste of resources. And if nothing predicts different results but different results appear, the replication adds little value unless we can explain, or at least guess, why those different results came out. Extending the first study, perhaps with subgroup or sensitivity analyses, and interpreting the results from a different angle could warrant a new publication of interest to an international audience.10

Myth #4: Thoroughly describing how my setting performed over a period of time can help other researchers. This type of study is very common. Articles reporting the consumption of a therapeutic class over a period of time in a given hospital, the number and type of drug-drug interactions pharmacists identified, or the characteristics of the students of a given university are frequently submitted as potential journal articles. These analyses are more appropriate for an institution's annual report than for a research article. Two reasons limit the international relevance of these reports: 1) they depict the situation of a given setting at a given time, and nothing guarantees that the same institution at a different time, or another institution at the same time, would obtain similar results; and 2) these studies usually do not explain why the institution achieved those results. Were those DDDs consumed because some policy was implemented? Were those drug interactions identified after introducing a new system? The international reader will not be interested in a still image if nothing explains how the situation came about. Research should not only report what happened; it should also explain why it happened, or at least offer an educated guess.

Myth #5: Pilot studies should always be published first. A pilot study is probably the least interesting paper one could imagine. In fact, a pilot study serves only to let the researchers of a full-length study test whether their materials and methods are ready for the big study. No conclusion, other than “we are able to use these materials in our setting with the resources we have available”, should be drawn from a pilot study.11 The conclusions of a pilot study cannot be extrapolated to a different scenario. Unlike pilot studies, feasibility studies might be relevant for an international audience. Feasibility studies should be run after pilot studies, and they serve to obtain firsthand data that will shape the full-length study design. For instance, they serve to establish the dispersion measures needed for sample size calculations, to identify the smallest dose of the intervention required to obtain the effect, to estimate drop-out rates, etc.12 To obtain reliable information, feasibility studies should be executed in a realistic population, similar to the one to be used in the full-length study. Thus, a researcher from another country can use this information to replicate the research, which is not possible with pilot studies.
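To illustrate how a feasibility study feeds the full-length study design, here is a minimal sketch of the standard normal-approximation sample size calculation for a two-arm trial comparing means. The function name and the numbers (an outcome standard deviation of 10 units estimated from a hypothetical feasibility study, a 5-unit smallest effect worth detecting) are illustrative assumptions, not from this editorial.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(sd, delta, alpha=0.05, power=0.80):
    """Illustrative normal-approximation sample size per arm for a
    two-arm parallel trial comparing means with a two-sided test.
    `sd` is the outcome's standard deviation (e.g., the dispersion
    measure obtained in a feasibility study); `delta` is the smallest
    difference worth detecting."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = NormalDist().inv_cdf(power)           # value for the desired power
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Hypothetical feasibility data: SD = 10 units; detect a 5-unit
# difference with 80% power at alpha = 0.05.
print(n_per_group(sd=10, delta=5))  # → 63 participants per group
```

Without the feasibility estimate of `sd`, this calculation cannot be done at all, which is precisely why such studies can matter to researchers elsewhere.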

Myth #6: Assessing obvious things is also relevant. We should not devote too much research time and effort to publishing a no-brainer. A frequently submitted type of article in this category is the KAP study, where KAP stands for “knowledge, attitudes and practice”. KAP studies are questionnaire-based studies that aim to establish the relationship between the three domains: knowledge influences attitudes, and attitudes influence practice.13 Unfortunately, many KAP studies are limited to establishing a baseline, a still picture of the knowledge and opinions of a group of individuals (e.g., patients, professionals, students) about a topic, usually concluding that these individuals have limited knowledge of the topic of interest. These poor KAP studies suffer from the problems of the previous myth: they represent a small population with no interest outside that population. Researchers assess KAP when they expect knowledge to be limited, not among experts in the topic. A KAP study could be interesting for international audiences if the researchers can identify why performance (practice) is low by tracing negative attitudes to specific knowledge gaps. An alternative way to make a KAP study interesting is to increase the number and diversity of interviewees so that population subgroups with different KAP relationships can be identified, and tailored educational activities designed for them. And this links to the other type of obvious study with limited interest for international audiences: those evaluating the knowledge gained after a training activity. These studies usually consist of repeating a knowledge evaluation before and after the educational/training activity, and they almost always conclude that the activity increased participants' knowledge. Should we expect any educational activity not to increase knowledge among participants?
Since 1990, when Miller published his pyramid, we should be interested not in what people know, but in what people do.14 Again, we increase knowledge in order to influence attitudes and subsequently practice, whether we educate patients or professionals.

So, in summary, an international audience is interested in learning from others why things happen, and whether those things could also happen in their own environment or setting, perhaps because they want to imitate a success, or perhaps because they want to avoid others' failures. A study performed in a small hospital in a given country can be relevant for an international audience if it provides something more than a still picture of the scene. A general recommendation before submitting a paper: ask yourself what you would learn from a paper like the one you are about to submit had it been written by colleagues from another country.

REFERENCES

1. Higgins JP, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA; Cochrane Bias Methods Group; Cochrane Statistical Methods Group. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928. https://doi.org/10.1136/bmj.d5928

2. Garcia-Cardenas V, Armour C, Benrimoj SI, Martinez-Martinez F, Rotta I, Fernandez-Llimos F. Pharmacists' interventions on clinical asthma outcomes: a systematic review. Eur Respir J. 2016;47(4):1134-1143. https://doi.org/10.1183/13993003.01497-2015 [ Links ]

3. Rotta I, Salgado TM, Felix DC, Souza TT, Correr CJ, Fernandez-Llimos F. Ensuring consistent reporting of clinical pharmacy services to enhance reproducibility in practice: an improved version of DEPICT. J Eval Clin Pract. 2015;21(4):584-590. https://doi.org/10.1111/jep.12339 [ Links ]

4. de Barra M, Scott C, Johnston M, De Bruin M, Scott N, Matheson C, Bond C, Watson M. Do pharmacy intervention reports adequately describe their interventions? A template for intervention description and replication analysis of reports included in a systematic review. BMJ Open. 2019;9(12):e025511. https://doi.org/10.1136/bmjopen-2018-025511 [ Links ]

5. Clay PG, Burns AL, Isetts BJ, Hirsch JD, Kliethermes MA, Planas LG. PaCIR: A tool to enhance pharmacist patient care intervention reporting. J Am Pharm Assoc (2003). 2019;59(5):615-623. https://doi.org/10.1016/j.japh.2019.07.008 [ Links ]

6. Hulley SB, Cummings SR, Browner WS, Grady DG, Newman TB. Designing clinical research. Philadelphia: Wolters Kluwer; 2013.

7. Ioannidis JP. The Mass Production of Redundant, Misleading, and Conflicted Systematic Reviews and Meta-analyses. Milbank Q. 2016;94(3):485-514. https://doi.org/10.1111/1468-0009.12210 [ Links ]

8. Fan Y. Data reviews quick to produce but open to abuse. Nature. 2018;557(7703):31. https://doi.org/10.1038/d41586-018-05006-2 [ Links ]

9. Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097. https://doi.org/10.1371/journal.pmed.1000097

10. Dingledine R. Why is it so hard to do good science? eNeuro. 2018;5(5): ENEURO.0188-18.2018. https://doi.org/10.1523/ENEURO.0188-18.2018 [ Links ]

11. Leon AC, Davis LL, Kraemer HC. The role and interpretation of pilot studies in clinical research. J Psychiatr Res. 2011;45(5):626-629. https://doi.org/10.1016/j.jpsychires.2010.10.008 [ Links ]

12. Arain M, Campbell MJ, Cooper CL, Lancaster GA. What is a pilot or feasibility study? A review of current practice and editorial policy. BMC Med Res Methodol. 2010;10:67. https://doi.org/10.1186/1471-2288-10-67 [ Links ]

13. Kwol VS, Eluwole KK, Avci T, Lasisi TT. Another look into the Knowledge Attitude Practice (KAP) model for food control: An investigation of the mediating role of food handlers’ attitudes. Food Control. 2020;110(4):107025. https://doi.org/10.1016/j.foodcont.2019.107025 [ Links ]

14. Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9 Suppl):S63-S67. https://doi.org/10.1097/00001888-199009000-00045 [ Links ]

Published: April 30, 2020

This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY-NC-ND 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.