This is the first question one should consider before submitting a paper to an international journal. The answer is simple: if researchers or practitioners from another country can learn something from your paper that can influence the practice or research they are involved in, then your paper is relevant for an international audience.
Many elements can influence this cross-border transferability. One could think that having a big “n”, performing complex statistical calculations, or using complicated study designs makes a paper more attractive to colleagues from other countries. These elements can help, but they are not sufficient. On the other hand, one could think that a study performed in a small hospital in a given country will never be of interest to foreign colleagues. That is not necessarily correct. Let's bust some myths.
Myth #1: Experimental studies are much more interesting than observational studies. Well, it depends on the specific experimental and observational studies we are comparing. Obviously, randomized controlled trials (RCTs) are the gold standard for identifying causation in clinical research and the highest level of evidence among unfiltered studies. But it is important to acknowledge that an RCT achieves these attributes only when conducted under rigorous standards that ensure a low risk of bias. These are not empty words, because we can measure, and in fact should always measure, the potential risk of bias of an RCT when designing the protocol, and then the actual risk of bias before writing the article.1 Properly reporting an RCT is as important as properly conducting it. Some RCTs do not report all the data needed to include them in a meta-analysis: results at baseline and after follow-up in both groups with the corresponding dispersion measures, numbers of participants in each group, subgroups and additional analyses, etc.2 Another common issue with RCTs, especially in pharmacy services, is a poor description of the intervention performed. Pharmacy interventions are complex interventions, but we have several tools to improve the descriptions of these interventions.3,4,5 On the other hand, a well-conducted observational study can add much value to a research question. Newman et al. compared case-control studies to the house red, because they are “more modest and a little riskier than the other selections, but much less expensive and sometimes surprisingly good”.6
Myth #2: Systematic reviews compile the evidence about a research question, so their relevance is guaranteed. This is true only if the systematic review addresses a relevant research question and is conducted to the highest standards of quality. Some supervisors design research plans for their Ph.D. students that start with a systematic review, relying on the idea that a systematic review is the best way for the student to grasp the state of the art on the thesis topic. Many of these novices have no experience or specific training in evidence synthesis, and they have only limited knowledge of the thesis topic. In this scenario, the probability of producing a poorly conducted systematic review that gives a poor answer to a poorly formulated research question is very high. This might be one of the reasons why only 3% of systematic reviews are clinically sound.7 A systematic review should be the final piece of research in a doctoral program, achievable only when the student has deep knowledge of the topic, has had time to be trained in evidence synthesis, and has the skills to write a paper following, again, the reporting recommendations. Only an expert researcher has the skills to critically evaluate the studies included in a meta-analysis8 (see the PRISMA extensions for variants of a systematic review: http://www.prisma-statement.org/Extensions/).9
Myth #3: Replicating a good study is always a reasonable way of confirming the findings in our own environment or setting. A study is good not only because it was correctly conducted, but because it addresses a relevant research question. The first study probably found the cause of a phenomenon, or simply identified an important association. Repeating it as a ‘me-too study’ might not be relevant at all. If environmental conditions are similar to those of the first study and nothing predicts different results, replicating the study and obtaining similar results could be a waste of resources. And if nothing predicts different results but we obtain them anyway, the replication adds little value unless we can explain, or at least guess, why these different results appeared. Extending the first study's analyses, perhaps with subgroup or sensitivity analyses, and interpreting the results from a different perspective could warrant a new publication of interest to an international audience.10
Myth #4: Thoroughly describing how my setting performed over a period of time can help other researchers. This type of study is very common. Articles reporting the consumption of a therapeutic class over a period of time in a given hospital, the number and type of drug-drug interactions pharmacists identified, or the characteristics of the students of a given university are frequently submitted as potential journal articles. These analyses are more appropriate for an institution's annual report than for a research article. Two reasons limit the international relevance of these reports: 1) they depict the situation of a given setting at a given time, and nothing guarantees that the same institution at a different time, or another institution at the same time, would obtain similar results; and 2) these studies usually do not explain why the institution achieved those results. Were those DDDs consumed because some policy was implemented? Were those drug interactions identified after a new system was introduced? The international reader will not be interested in a still image if nothing explains how the situation came about. Research should not only report what happens; it should also explain why it happened, or at least offer an educated guess.
Myth #5: Pilot studies should always be published before the main study. A pilot study is probably the least interesting paper one could imagine. In fact, a pilot study serves only to let the researchers of a full-length study test whether the materials and methods are ready for the big study. No conclusions, other than “we are able to use these materials in our setting with the resources available”, should be drawn from a pilot study.11 The conclusions of a pilot study cannot be extrapolated to a different scenario. Unlike pilot studies, feasibility studies might be relevant for an international audience. Feasibility studies should be run after pilot studies, and they provide firsthand data that will shape the design of the full-length study. For instance, they serve to establish the dispersion measures needed for sample size calculations, identify the smallest dose of the intervention required to obtain the effect, estimate the drop-out rate, etc.12 To obtain reliable information, feasibility studies should be run in a realistic population, similar to the one to be used in the full-length study. Thus, a researcher from another country can use this information to replicate the research, which is not possible with pilot studies.
Myth #6: Assessing obvious things is also relevant. We should not devote too much research time and effort to publishing a no-brainer. A frequently submitted type of article that fits this category is the KAP study, where KAP stands for “knowledge, attitudes and practice.” KAP studies are predefined questionnaire-based studies that aim to establish the relationship between the three domains: knowledge influences attitudes, and attitudes influence practice.13 Unfortunately, many KAP studies are limited to establishing a baseline, a still picture of the knowledge and opinions of a group of individuals (e.g., patients, professionals, students) about a topic, usually concluding that these individuals have limited knowledge of the topic of interest. These poor KAP studies suffer from the problems of the previous myth: they represent a small population and hold no interest outside that population. Researchers are interested in assessing KAP when they expect knowledge to be limited, not among experts in the topic. A KAP study could be interesting for an international audience if the researchers can show that low performance (practice) stems from negative attitudes rooted in specific knowledge gaps. An alternative way to make a KAP study interesting is to increase the number and diversity of interviewees so that population subgroups with different KAP relationships can be identified, allowing tailored educational activities to be designed. This links with the other type of obvious study with limited interest for international audiences: those evaluating the knowledge gained after a training activity. These studies usually consist of repeating a knowledge evaluation before and after the educational or training activity, and they almost always conclude that the activity increased participants' knowledge. Should we expect any educational activity not to increase knowledge among participants?
Since 1990, when Miller published his pyramid, we should be interested not in what people know, but in what people do.14 Again, we increase knowledge to influence attitudes and, subsequently, practice, whether we educate patients or professionals.
In summary, an international audience is interested in learning from others why things happen, and whether those things could also happen in their own environment or setting, perhaps because they want to imitate the success, or perhaps because they want to avoid others' failures. A study performed in a small hospital of a given country can be relevant for an international audience if it provides something more than a still picture of the scene. A general recommendation before submitting a paper: ask yourself what you would learn from a paper like the one you are about to submit if it had been written by colleagues from another country.