The European Journal of Psychology Applied to Legal Context

Online version ISSN 1989-4007; print version ISSN 1889-1861

The European Journal of Psychology Applied to Legal Context, vol. 13, no. 2, Madrid, Jul./Dec. 2021. Epub Aug 09, 2021

https://dx.doi.org/10.5093/ejpalc2021a4 

Research Articles

Plausibility: a verbal cue to veracity worth examining?


Aldert Vrij (a), Haneen Deeb (a), Sharon Leal (a), Pär-Anders Granhag (b), Ronald P. Fisher (c)

(a) University of Portsmouth

(b) University of Gothenburg, Sweden

(c) Florida International University

ABSTRACT

Truth tellers sound more plausible than lie tellers. Plausibility ratings do not require much time or cognitive resources, but a disadvantage is that plausibility is measured subjectively on Likert scales. The aim of the current paper was to examine whether plausibility can be predicted by three other verbal veracity cues that can be measured objectively by counting their frequency of occurrence: details, complications, and verifiable sources. If these objective cues could predict plausibility, observers could be instructed to pay attention to them when judging plausibility, which would make plausibility ratings somewhat more objective. We therefore re-analysed five existing datasets; all of them included plausibility, details, and complications as dependent variables, and two of them also included verifiable sources. Plausibility was positively correlated with all three other tested cues but was mostly predicted by complications and verifiable sources, which explained on average almost 40% of the variance. Plausibility showed larger effect sizes in distinguishing truth tellers from lie tellers than the three other cues, perhaps because the plausibility cue consists of multiple components (complications and verifiable sources). Research has shown that the cues most strongly related to veracity typically consist of multiple components.

Keywords Deception; Plausibility; Details; Complications; Verifiable sources


Introduction


In 2003, Bella DePaulo and colleagues (DePaulo et al., 2003) published a meta-analysis examining nonverbal and verbal cues to deceit. It included 120 samples examining 158 cues. Fifty of these 158 cues were examined more than five times, including plausibility, which was examined nine times. Plausibility was significantly related to veracity, with truth tellers' stories sounding more plausible than lie tellers' stories. Although the effect size was small (d = 0.23), it was larger than the effect sizes of most other cues. In fact, plausibility emerged as the eighth strongest indicator in the list of 50 cues (DePaulo et al., 2003, Table 8).

Given that plausibility was more strongly related to veracity than most other verbal cues, one would expect researchers to have included plausibility in the set of verbal cues they examine when assessing veracity. This did not happen. To date, at least six frequently cited verbal veracity assessment protocols exist, but plausibility is not included in five of them: Assessment Criteria Indicative of Deception (ACID; Colwell et al., 2013; Colwell et al., 2007); Criteria-Based Content Analysis (CBCA; Amado et al., 2016; Köhnken, 2004; Köhnken & Steller, 1988); Cognitive Credibility Assessment (CCA; Vrij, Fisher, et al., 2017; Vrij et al., 2015); the Strategic Use of Evidence (SUE; Granhag & Hartwig, 2015; Hartwig et al., 2014); and the Verifiability Approach (VA; Nahari & Vrij, 2014; Vrij & Nahari, 2019). Plausibility is sometimes included in one tool, Reality Monitoring (RM; Masip et al., 2005; Sporer, 2004; Sporer et al., 2020), under the term 'realism'. In a recent study (Sporer et al., 2020, study 2), plausibility emerged as the strongest indicator of veracity out of the eight RM cues examined.

We do not know why other researchers do not include plausibility in their protocols, but for us the subjectivity of its coding is the main reason to exclude it. Subjectivity means that we cannot explain to practitioners how to use plausibility as a verbal veracity cue. However, ignoring plausibility could be considered a shortcoming, not only because research has shown that plausibility has potential as a veracity assessment cue but also because practitioners frequently ask us about this cue in our conversations with them about verbal cues to deception. Against this background, we decided to examine plausibility in more detail, resulting in the current project, which should be seen as a first step. In this project we explored to what extent plausibility can be predicted by verbal cues that are coded more objectively and that, according to research, discriminate truth tellers from lie tellers. If plausibility can be predicted by such cues, we would be one step closer to making the concept of statement plausibility more objective. That is, observers could be instructed to consider these objective cues when judging plausibility.

We checked our datasets and found five in which we examined plausibility and two other verbal cues we thought may be related to it: total details and complications (Deeb, Vrij, Leal, et al., 2020; Leal et al., 2019; Leal et al. 2015; Vrij, Leal, Deeb, et al. 2020; Vrij, Leal, Fisher, et al., 2020). In two of these datasets (Leal et al., 2019; Vrij, Leal, Deeb, et al. 2020) an additional cue was examined which we also thought to have potential in explaining plausibility: verifiable sources.

DePaulo et al. (2003) defined plausibility as "the degree to which the message seems plausible, likely, or believable" (p. 113). Another definition in the literature, one that avoids the word 'plausible' itself, is "how likely it is that the activities happened in the way described" (Leal et al., 2019, p. 278). To judge how likely it is that activities happened in the way they are described, it is useful to take contextual information into account. Contextual information can be present in at least two forms (Blair et al., 2010). First, statements can be compared with independent evidence such as CCTV footage (statements that contradict independent evidence are considered implausible). This is a compelling way to detect deception (Vrij & Fisher, 2016), and the SUE technique is based on this principle (Granhag & Hartwig, 2015). However, this use of plausibility is possible only when independent evidence is available, which is not always the case. Second, statements can be judged in terms of what is conventional or reasonable in a given situation (unconventional or unreasonable activities are considered implausible). This type of comparison can always be made, and it has shown good potential for lie detection. Blair et al. (2010) conducted a series of experiments in which some participants were given contextual information (e.g., being told that the questions were very difficult to answer prior to deciding whether someone with a high score had cheated in a test), whereas other participants were not. Observers who received contextual information performed better in detecting truths and lies (75%) than observers who did not receive such information (57%). However, comparing a statement against what is "conventional" or "reasonable" could become subjective if observers disagree on what is conventional or reasonable.

The objective verbal cues we considered were details, complications, and verifiable sources. Details refer to the meaningful units of information in a statement. Truth tellers typically report more details than lie tellers (Amado et al., 2016). Lie tellers lack the cognitive resources to fabricate enough details (Köhnken, 2004) or are unwilling to report many details out of fear that the details they provide will give investigators leads (Nahari et al., 2014). A complication is an occurrence that affects the storyteller and makes a situation more complex ("The air conditioning was not working properly in the hotel, which made the room far too hot."). Truth tellers typically report more complications than lie tellers (Amado et al., 2016). Making up complications requires cognitive resources, which lie tellers may not have to spare (Köhnken, 2004). In addition, adding complications makes the story more complex, which conflicts with lie tellers' inclination to keep their stories simple (Hartwig et al., 2007). The same fear of providing investigators with leads also results in lie tellers reporting fewer verifiable sources than truth tellers (Leal et al., 2019). Verifiable sources refer to sources mentioned in a statement that could be consulted to check the veracity of a statement, such as named witnesses, receipts, and CCTV footage.

All three cues may be related to statement plausibility. Regarding details, people typically underestimate forgetting (Harvey et al., 2019; Koriat et al., 2004; Kornell & Bjork, 2009), which means that they expect others to be able to provide many details when they are asked to provide a detailed account of an activity. Therefore, a detailed account of an activity will be considered plausible and an account that provides few details will be considered unconventional and therefore implausible. Complications frequently occur (Vrij, Mann, et al., 2020; Vrij & Vrij, 2020) and observers will remember similar experiences when other people report them. This will make their stories sound plausible. The absence of complications in an account will be seen as an abnormally smooth report of an activity. People are often able to back up their activities with evidence. That is, they have met a named person, there is footage of the activity (CCTV or photos), the activity is documented (use of phone, bank cards, receipts), etc. Activities will thus be considered more plausible when such verifiable sources are reported. Activities that, according to the interviewee's account, took place in a vacuum of evidence will be considered less plausible.

Method

We re-analysed five datasets (Deeb, Vrij, Leal, et al., 2020; Leal et al., 2019; Leal et al., 2015; Vrij, Leal, Deeb, et al. 2020; Vrij, Leal, Fisher, et al., 2020). In all five datasets, details, complications, and plausibility were examined, and in two datasets (Leal et al., 2019; Vrij, Leal, Deeb, et al., 2020) verifiable sources were also examined. In all five experiments plausibility was defined as "how likely it is that the activities happened in the way described" and was measured subjectively on a 7-point scale ranging from 1 (implausible) to 7 (plausible). Details, complications, and verifiable sources were counted objectively through their frequency of occurrence. We note that, strictly speaking, these measurements are still not objective: someone has to define the variables, and coders must then apply those definitions. However, this coding is more objective than the Likert scale coding used for plausibility. We used the variables as they appeared in the datasets, so no additional coding was carried out for this article. However, some dependent variables were merged for the purpose of this article. See Appendix for more information.
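The measurement scheme just described (objective frequency counts alongside one subjective 7-point rating per statement) can be pictured as a simple record. This is a hypothetical sketch for illustration only; the field names are ours and do not come from the coding instruments used in the five datasets.

```python
from dataclasses import dataclass

@dataclass
class StatementCoding:
    """One coded statement: objective frequency counts plus a subjective rating.

    Field names are illustrative; they mirror the dependent variables
    described in the Method section, not any actual coding instrument.
    """
    details: int              # count of meaningful units of information
    complications: int       # count of complicating occurrences
    verifiable_sources: int  # count of checkable sources (witnesses, receipts, CCTV)
    plausibility: int        # subjective rating, 1 (implausible) to 7 (plausible)

    def __post_init__(self):
        if not 1 <= self.plausibility <= 7:
            raise ValueError("plausibility must be on the 7-point scale (1-7)")
        for name in ("details", "complications", "verifiable_sources"):
            if getattr(self, name) < 0:
                raise ValueError(f"{name} is a frequency count and cannot be negative")

# A made-up coded statement.
coded = StatementCoding(details=42, complications=3, verifiable_sources=2, plausibility=6)
```

The validation in `__post_init__` captures the asymmetry discussed above: the three counts are bounded only below, whereas plausibility lives on a fixed subjective scale.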

In Deeb, Vrij, Leal, et al. (2020), truth tellers told the truth about a significant event they experienced in the past two years, whereas lie tellers pretended to have experienced a similar event. In Leal et al. (2019) and Vrij, Leal, Fisher, et al. (2020), truth tellers told the truth about a trip they made in the last 12 months, whereas lie tellers pretended to have made such a trip. In Leal et al. (2015), truth tellers discussed a truthful experience of a theft, loss, or damage, whereas lie tellers fabricated such experiences. In Vrij, Leal, Deeb, et al. (2020), truth tellers told the truth about a trip they were going to make (intended trip), whereas lie tellers made up such a story. The sample sizes were 243 in Deeb, Vrij, Leal, et al. (2020), 83 in Leal et al. (2015), 150 in Leal et al. (2019), 208 in Vrij, Leal, Deeb, et al. (2020), and 201 in Vrij, Leal, Fisher, et al. (2020). In all five experiments, manipulations other than veracity took place, and they were included as covariates in the current analyses (see Appendix). In addition, Vrij, Leal, Fisher, et al.'s (2020) experiment was carried out in three different countries, and 'country' was also included as a covariate in the analyses.

Results

Plausibility as a Diagnostic Verbal Cue to Veracity

Table 1, final column, shows that plausibility could be measured reliably in all five studies. For each of the five datasets, analyses of covariance were conducted with veracity as the independent variable, manipulations other than veracity as covariates, and details, complications, verifiable sources, and plausibility as dependent variables. Table 1 shows the results for the five experiments. In all five experiments, truth tellers' statements came across as significantly more plausible than lie tellers' statements. The effect sizes for plausibility were large in all experiments and ranged from d = 0.71 to d = 1.18. In all five experiments, truth tellers reported significantly more complications than lie tellers, and the effect sizes ranged from small (d = 0.35) to large (d = 0.88). In four of the five experiments, truth tellers reported significantly more details than lie tellers, and the effect sizes ranged from small (d = 0.33) to medium/large (d = 0.59). Also, truth tellers reported more verifiable sources than lie tellers but this effect was significant in only one study (p = .066 in the other study), and the effect sizes were small (d = 0.31) to large (d = 0.78). Taken together, the findings for plausibility and complications were the most consistent across the five experiments, and plausibility emerged as the most diagnostic cue to predict veracity (largest d-scores).
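The effect sizes above are Cohen's d values for two independent groups. As a minimal sketch, d can be computed from group means and standard deviations using the pooled standard deviation; the numbers below are invented for illustration, not taken from Table 1.

```python
import math

def cohens_d(mean_truth, sd_truth, n_truth, mean_lie, sd_lie, n_lie):
    """Cohen's d with the pooled standard deviation for two independent groups."""
    pooled_var = ((n_truth - 1) * sd_truth**2 + (n_lie - 1) * sd_lie**2) / (n_truth + n_lie - 2)
    return (mean_truth - mean_lie) / math.sqrt(pooled_var)

# Hypothetical plausibility ratings: truth tellers M = 5.1 (SD = 1.2),
# lie tellers M = 4.0 (SD = 1.3), 100 participants per group.
d = cohens_d(5.1, 1.2, 100, 4.0, 1.3, 100)
print(round(d, 2))  # 0.88, a "large" effect by conventional benchmarks
```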

Table 1 shows that in all five studies, moderate correlations emerged between plausibility and the remaining variables and all in the expected direction: increased plausibility was correlated with increased numbers of details, complications, and verifiable sources.

Table 1.  Inferential Statistics in the Five Experiments as a Function of Veracity 

Note. ¹Two-tailed correlational tests were carried out between plausibility and the other variables (details, complications, verifiable sources) as the hypotheses were exploratory. We report Pearson's correlation coefficient r and the corresponding confidence intervals for each verbal cue.

²In Leal et al. (2015), interrater reliability of plausibility was measured through five judges; the statistic represents Cronbach's alpha. In the other studies, interrater reliability for plausibility and all other variables was measured through two judges, and the statistic represents the intraclass correlation coefficient (ICC) using the two-way random effects model measuring consistency.
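As a sketch of the reliability statistic described in the note, the single-rater consistency ICC (ICC(C,1) in McGraw and Wong's notation, i.e., the two-way model that ignores systematic rater mean differences) can be computed from a subjects-by-raters matrix via the standard two-way ANOVA decomposition. This is our own minimal implementation, assuming complete data:

```python
import numpy as np

def icc_consistency(ratings):
    """Single-rater consistency ICC from a two-way model without interaction.

    `ratings` is an (n_subjects, k_raters) array. Mean squares come from the
    usual two-way ANOVA decomposition; ICC(C,1) ignores rater mean differences.
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between-subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between-raters
    ss_total = ((x - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

# Two raters who agree up to a constant offset are perfectly consistent,
# which is exactly what the consistency (rather than agreement) ICC captures.
rater1 = np.array([2, 4, 5, 6, 7, 3], dtype=float)
icc = icc_consistency(np.column_stack([rater1, rater1 + 1]))
print(round(icc, 3))  # 1.0
```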

Verbal Cues that Predict Plausibility

Table 2 shows the results from the linear regression analyses for all five experiments. A forced entry method was used with details, complications, and verifiable sources as predictors and plausibility as the outcome variable. When verifiable sources were not included as predictors in the analyses, complications and details explained 25% to 48% of the variance. Complications contributed more than details to the model in two experiments (Leal et al., 2019; Leal et al., 2015). In Vrij, Leal, Deeb, et al. (2020), details (β = .33, p < .001) contributed more than complications (β = .30, p < .001) to the model, but this difference was negligible. In the remaining two studies (Deeb, Vrij, Leal, et al., 2020; Vrij, Leal, Fisher, et al., 2020), only complications contributed to the model.

Table 2.  Output of the Regression Analyses that Tested the Contribution of Details, Complications, and Verifiable Sources in Explaining Plausibility 

When verifiable sources were included as a predictor in the regression, 41% to 60% of the variance was explained. Verifiable sources contributed to explaining the model's variance, either more than details or more than both details and complications. These results demonstrate that complications and verifiable sources predict plausibility better than details do.
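As a sketch of what such a forced-entry regression does, the standardized coefficients (β) and R² can be recovered by z-scoring all variables and solving by least squares. The synthetic data below use made-up generating weights chosen so that complications and verifiable sources dominate, mirroring the pattern reported here; this is not the authors' analysis (which also included covariates).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical coded cues (frequency counts) for n statements.
details = rng.poisson(40, n).astype(float)
complications = rng.poisson(3, n).astype(float)
verifiable = rng.poisson(2, n).astype(float)

# Made-up generating model: plausibility driven mainly by complications
# and verifiable sources, weakly by total details, plus noise.
plausibility = (0.02 * details + 0.5 * complications + 0.4 * verifiable
                + rng.normal(0, 1, n))

def standardize(v):
    return (v - v.mean()) / v.std()

Z = np.column_stack([standardize(details), standardize(complications),
                     standardize(verifiable)])
y = standardize(plausibility)

# No intercept is needed after z-scoring; the solution is the vector of
# standardized regression coefficients (betas).
beta, _, _, _ = np.linalg.lstsq(Z, y, rcond=None)
r_squared = 1 - ((y - Z @ beta) ** 2).sum() / (y ** 2).sum()
print(np.round(beta, 2), round(r_squared, 2))
```

With these weights, the fitted βs for complications and verifiable sources exceed the β for details, and R² lands in the range the regressions above report.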

Discussion

Plausibility was positively correlated with details, complications, and verifiable sources but was mostly predicted by complications and verifiable sources. These cues explained 37.29% of the variance (the average of the seven R² values reported in Table 2), which means that we succeeded to some extent in making the concept of plausibility more objective. However, it also means that the remaining 62.71% must be explained by other cues. We believe that contextual information about what is conventional in a given situation may account for at least some of the unexplained variance, as research has shown (Blair et al., 2010; Masip & Herrero, 2015). To take an example from one of our own datasets, a businessman travelling from Tokyo to Barcelona pretended to go to Barcelona for a weekend break. He gave a detailed account of which attractions he was going to visit in Barcelona and where he would stay; he provided complications about the planning phase of the trip, and he could present his hotel reservation as evidence. Despite this, his story did not seem plausible, because he said he would stay in Barcelona for less than 48 hours. A return trip from Tokyo to Barcelona for less than 48 hours just for sightseeing sounds implausible. In a similar vein is the story of the two Russian men suspected of poisoning a former Russian military officer and double agent for the UK intelligence services in Salisbury, England (Roth & Dodd, 2018). They said they travelled from Moscow to the UK for a 43-hour trip to visit Salisbury cathedral. That is an odd purpose for a trip from Moscow to the UK, even more so because they stayed in a London hotel. Why not stay in a Salisbury hotel if that was their final destination? Their story would not have seemed plausible even if they had given many details, complications, and verifiable sources in their interview.

Using total details as a predictor of plausibility may be another reason why a substantial amount of variance remained unexplained. Total details is a rough measure that gives all details equal weight; in reality, some details may be more important than others in explaining plausibility. This would resemble the Model Statement findings. A Model Statement is an example of a detailed account unrelated to the topic of investigation (Leal et al., 2015). It raises expectations among both truth tellers and lie tellers to provide more information (Ewens et al., 2016). As a result, total details does not discriminate truth tellers from lie tellers after exposure to a Model Statement (Vrij et al., 2018). However, it is the quality of details rather than their quantity that discriminates truth tellers from lie tellers after exposure to a Model Statement. For example, differences between truth tellers and lie tellers arise in reporting core versus peripheral details (Leal et al., 2018) and in reporting complications (Deeb, Vrij, & Leal, 2020; Vrij, Leal et al., 2017). This quantity-versus-quality argument may also influence plausibility ratings. The distinction between core and peripheral details may be relevant here too; perhaps statements that focus on core information are considered more plausible. In addition, the verbal deception literature contains a wealth of detail types (other than complications) that discriminate truth tellers from lie tellers (Amado et al., 2016). Researchers could start examining such details.

Statement plausibility was a diagnostic cue to veracity in all five experiments, and it showed larger effect sizes than the other three objectively assessed verbal cues: details, complications, and verifiable sources. A relatively strong performance from plausibility in discriminating between truth tellers and lie tellers was also found in Sporer et al. (2020) and in DePaulo et al.'s (2003) meta-analysis. That plausibility is predicted by multiple cues (complications, verifiable sources, and probably also contextual information) may explain why it showed the largest effect sizes. Assessing statements based on a combination of diagnostic cues (complications and verifiable sources in the current research) is more likely to enhance lie detection than assessments based on individual cues (DePaulo et al., 2003; DePaulo & Morris, 2004; Hartwig & Bond, 2014). Of course, we cannot rule out that even more cues, not examined in the current five experiments, contribute to plausibility ratings. It probably is a verbal cue that consists of more components than many other verbal cues.

The strong performance of plausibility in distinguishing truth tellers from lie tellers makes it an attractive verbal cue. In addition, given how difficult and time-consuming it is to count objective cues such as details, complications, and verifiable sources, rating statement plausibility on a Likert scale may save time as well as cognitive resources. This is crucial for investigative practitioners, who are frequently under pressure to resolve cases rapidly (Horgan, 2014). The question arises whether the subjective nature of plausibility is a price worth paying. We think that using plausibility for veracity assessment is premature at present and advocate against its use. However, we think that plausibility deserves more attention from researchers than it currently attracts.

In terms of research, first, we encourage deception researchers to include plausibility as a cue in their research, both to further test its diagnostic value and to examine which objective cues can explain it. The latter results could lead to a more objective way to measure statement plausibility. Second, in the five experiments discussed in this article, plausibility was always defined as "how likely it is that the activities happened in the way described". Research could examine whether providing observers with different definitions of plausibility leads to different results. For example, would the definition "how likely it is that the category of activity that is described in this statement generally happens in the way described" lead to different results? Based on the current findings, the definition "how likely it is that the overall statement includes complications and verifiable sources given the context" is worth examining.

Third, the five experiments in this paper used samples of college students or community members. It may be useful to examine plausibility among forensic suspects. Suspects and inmates typically do not provide detailed statements and prefer to keep their stories simple, but at the same time they strive to sound plausible (Alison et al., 2014; Strömwall & Willén, 2011). Suspects also have more insight into people's beliefs about deception and may use countermeasures to mimic truth tellers' responses, but they are not necessarily successful in all their attempts (Deeb et al., 2018; Granhag et al., 2004; Rosenfeld, 2018; Vrij, Leal, Fisher, et al., 2020). For example, asking lie tellers to provide verifiable details does not make them more forthcoming with verifiable information, as that information may incriminate them (Nahari et al., 2014). Future research could examine how successful real suspects are when instructed (or not) to provide plausible statements, which we have shown to be partially based on verifiable information.

Fourth, future research could examine true statements that attract low plausibility ratings and false statements that attract high plausibility ratings. Is there something beyond different types of detail that triggers those incorrect plausibility ratings? For example, are rare events seen as implausible regardless of their veracity? And are statements that are considered to be against someone's self-interest seen as implausible regardless of their veracity?

We think there is a large set of questions to be examined in relation to plausibility and that it is worthwhile to pursue them given that plausibility seems to be a relatively strong veracity indicator and practitioners frequently ask questions about it. We hope that this article will start research and discussions about the relationship between plausibility and veracity.

This study was funded by the Centre for Research and Evidence on Security Threats (ESRC Award: ES/N009614/1).

Cite this article as: Vrij, A., Deeb, H., Leal, S., Granhag, P. A., & Fisher, R. P. (2021). Plausibility: A verbal cue to veracity worth examining? The European Journal of Psychology Applied to Legal Context, 13(2), 47-53. https://doi.org/10.5093/ejpalc2021a4

References

Alison, L., Alison, E., Noone, G., Elntib, S., Waring, S., & Christiansen, P. (2014). Whatever you say, say nothing: Individual differences in counter interrogation tactics amongst a field sample of right wing, AQ inspired and paramilitary terrorists. Personality and Individual Differences, 68, 170-175. https://doi.org/10.1016/j.paid.2014.04.031

Amado, B. G., Arce, R., Fariña, F., & Vilariño, M. (2016). Criteria-Based Content Analysis (CBCA) reality criteria in adults: A meta-analytic review. International Journal of Clinical and Health Psychology, 16(2), 201-210. https://doi.org/10.1016/j.ijchp.2016.01.002

Blair, J. P., Levine, T., & Shaw, A. (2010). Content in context improves deception detection accuracy. Human Communication Research, 36(3), 423-442. https://doi.org/10.1111/j.1468-2958.2010.01382.x

Colwell, K., Hiscock-Anisman, C. K., & Fede, J. (2013). Assessment criteria indicative of deception: An example of the new paradigm of differential recall enhancement. In B. S. Cooper, D. Griesel, & M. Ternes (Eds.), Applied issues in investigative interviewing, eyewitness memory, and credibility assessment (pp. 259-292). Springer. https://doi.org/10.1007/978-1-4614-5547-9_11

Colwell, K., Hiscock-Anisman, C. K., Memon, A., Taylor, L., & Prewett, J. (2007). Assessment criteria indicative of deception (ACID): An integrated system of investigative interviewing and detecting deception. Journal of Investigative Psychology and Offender Profiling, 4(3), 167-180. https://doi.org/10.1002/jip.73

Deeb, H., Granhag, P. A., Vrij, A., Strömwall, L. A., Hope, L., & Mann, S. (2018). Visuospatial counter-interrogation strategies by liars familiar with the alibi setting. Applied Cognitive Psychology, 32(1), 105-116. https://doi.org/10.1002/acp.3383

Deeb, H., Vrij, A., & Leal, S. (2020). The effects of a Model Statement on information elicitation and deception detection in multiple interviews. Acta Psychologica. https://doi.org/10.1016/j.actpsy.2020.103080

Deeb, H., Vrij, A., Leal, S., & Burkhardt, J. (2020). The effects of sketching while narrating on information elicitation and deception detection in multiple interviews. Manuscript in preparation.

DePaulo, B. M., Lindsay, J. L., Malone, B. E., Muhlenbruck, L., Charlton, K., & Cooper, H. (2003). Cues to deception. Psychological Bulletin, 129(1), 74-118. https://doi.org/10.1037/0033-2909.129.1.74

DePaulo, B. M., & Morris, W. L. (2004). Discerning lies from truths: Behavioural cues to deception and the indirect pathway of intuition. In P. A. Granhag & L. A. Strömwall (Eds.), Deception detection in forensic contexts (pp. 15-40). Cambridge University Press.

Ewens, S., Vrij, A., Leal, S., Mann, S., Jo, E., Shaboltas, A., Ivanova, M., Granskaya, J., & Houston, K. (2016). Using the model statement to elicit information and cues to deceit from native speakers, non-native speakers and those talking through an interpreter. Applied Cognitive Psychology, 30(6), 854-862. https://doi.org/10.1002/acp.3270

Granhag, P. A., Andersson, L. O., Strömwall, L. A., & Hartwig, M. (2004). Imprisoned knowledge: Criminals' beliefs about deception. Legal and Criminological Psychology, 9(1), 103-119. https://doi.org/10.1348/135532504322776889

Granhag, P. A., & Hartwig, M. (2015). The strategic use of evidence (SUE) technique: A conceptual overview. In P. A. Granhag, A. Vrij, & B. Verschuere (Eds.), Deception detection: Current challenges and new approaches (pp. 231-251). Wiley.

Hartwig, M., & Bond, C. F., Jr. (2014). Lie detection from multiple cues: A meta-analysis. Applied Cognitive Psychology, 28(5), 661-676. https://doi.org/10.1002/acp.3052

Hartwig, M., Granhag, P. A., & Luke, T. (2014). Strategic use of evidence during investigative interviews: The state of the science. In D. C. Raskin, C. R. Honts, & J. C. Kircher (Eds.), Credibility assessment: Scientific research and applications (pp. 1-36). Academic Press.

Hartwig, M., Granhag, P. A., & Strömwall, L. A. (2007). Guilty and innocent suspects' strategies during police interrogations. Psychology, Crime & Law, 13(2), 213-227. https://doi.org/10.1080/10683160600750264

Harvey, A. C., Vrij, A., Leal, S., Hope, L., & Mann, S. (2019). Amplifying deceivers' flawed metacognition: Encouraging disclosures after delays with a Model Statement. Acta Psychologica, 200, 102935. https://doi.org/10.1016/j.actpsy.2019.102935

Horgan, J. (Ed.). (2014). The psychology of terrorism. Routledge.

Köhnken, G. (2004). Statement Validity Analysis and the 'detection of the truth'. In P. A. Granhag & L. A. Strömwall (Eds.), Deception detection in forensic contexts (pp. 41-63). Cambridge University Press. https://doi.org/10.1017/CBO9780511490071

Köhnken, G., & Steller, M. (1988). The evaluation of the credibility of child witness statements in German procedural system. In G. Davies & J. Drinkwater (Eds.), The child witness: Do the courts abuse children? (Issues in Criminological and Legal Psychology, no. 13) (pp. 37-45). British Psychological Society.

Koriat, A., Bjork, R. A., Sheffer, L., & Bar, S. K. (2004). Predicting one's own forgetting: The role of experience-based and theory-based processes. Journal of Experimental Psychology: General, 133(4), 643-656. https://doi.org/10.1037/0096-3445.133.4.643

Kornell, N., & Bjork, R. A. (2009). A stability bias in human memory: Overestimating remembering and underestimating learning. Journal of Experimental Psychology: General, 138(4), 449-468. https://doi.org/10.1037/a0017350

Leal, S., Vrij, A., Deeb, H., & Jupe, L. (2018). Using the Model Statement to elicit verbal differences between truth tellers and liars: The benefit of examining core and peripheral details. Journal of Applied Research in Memory and Cognition, 7(4), 610-617. https://doi.org/10.1016/j.jarmac.2018.07.001

Leal, S., Vrij, A., Deeb, H., & Kamermans, K. (2019). Encouraging interviewees to say more and deception: The Ghostwriter method. Legal and Criminological Psychology, 24(2), 273-287. https://doi.org/10.1111/lcrp.12152

Leal, S., Vrij, A., Warmelink, L., Vernham, Z., & Fisher, R. (2015). You cannot hide your telephone lies: Providing a model statement as an aid to detect deception in insurance telephone calls. Legal and Criminological Psychology, 20(1), 129-146. https://doi.org/10.1111/lcrp.12017

Masip, J., & Herrero, C. (2015). Police detection of deception: Beliefs about behavioral cues to deception are strong even though contextual evidence is more useful. Journal of Communication, 65(1), 125-145. https://doi.org/10.1111/jcom.12135

Masip, J., Sporer, S., Garrido, E., & Herrero, C. (2005). The detection of deception with the reality monitoring approach: A review of the empirical evidence. Psychology, Crime, & Law, 11(1), 99-122. https://doi.org/10.1080/10683160410001726356

Nahari, G., & Vrij, A. (2014). Can I borrow your alibi? The applicability of the verifiability approach to the case of an alibi witness. Journal of Applied Research in Memory and Cognition, 3(2), 89-94. https://doi.org/10.1016/j.jarmac.2014.04.005

Nahari, G., Vrij, A., & Fisher, R. P. (2014). The verifiability approach: Countermeasures facilitate its ability to discriminate between truths and lies. Applied Cognitive Psychology, 28(1), 122-128. https://doi.org/10.1002/acp.2974 [ Links ]

Rosenfeld, J. P. (Ed.). (2018). Detecting concealed information and deception: Recent developments. Academic Press. [ Links ]

Roth, A., & Dodd, V. (2018, September 13). Salisbury Novichok suspects say they were only visiting cathedral. The Guardian. https://www.theguardian.com/uk-news/2018/sep/13/russian-television-channel-rt-says-it-is-to-air-interview-with-skripal-salisbury-attack-suspectsLinks ]

Sporer, S. L. (2004). Reality monitoring and detection of deception. In P. A. Granhag & L. A. Strömwall (Eds.), Deception detection in forensic contexts (pp. 64-102). Cambridge University Press. https://doi.org/10.1017/CBO9780511490071 [ Links ]

Sporer, S., Manzanero, A. L., & Masip, J. (2020). Optimizing CBCA and RM research: Recommendations for analyzing and reporting data on content cues to deception. Psychology, Crime, & Law. https://doi.org/10.1080/1068316X.2020.1757097 [ Links ]

Strömwall, L. A., & Willén, R. M. (2011). Inside criminal minds: Offenders' strategies when lying. Journal of Investigative Psychology and Offender Profiling, 8(3), 271-281. https://doi.org/10.1002/jip.148 [ Links ]

Vrij, A., & Fisher, R. P. (2016). Which lie detection tools are ready for use in the criminal justice system? Journal of Applied Research in Memory and Cognition, 5(3), 302-307. https://doi.org/10.1016/j.jarmac.2016.06.014 [ Links ]

Vrij, A., Fisher, R., & Blank, H. (2017). A cognitive approach to lie detection: A meta-analysis. Legal and Criminological Psychology, 22(1), 1-21. https://doi.org/10.1111/lcrp.12088 [ Links ]

Vrij, A., Leal, S., Deeb, H., Chan, S., Khader, M., Chai, W., & Chin, J. (2020). Lying about flying: The efficacy of the information protocol and model statement for detecting deceit. Applied Cognitive Psychology, 34(1), 241-255. https://doi.org/10.1002/acp.3614 [ Links ]

Vrij, A., Leal, S., & Fisher, R. P. (2018). Verbal deception and the Model Statement as a lie detection tool. Frontiers in Psychiatry, section Forensic Psychiatry, 9, 492. https://doi.org/10.3389/fpsyt.2018.00492 [ Links ]

Vrij, A., Leal, S., Fisher, R. P., Mann, S., Deeb, H., Jo, E., Castro Campos, C., & Hamzeh, S. (2020). The efficacy of using countermeasures in a Model Statement interview. European Journal of Psychology Applied to Legal Context, 12(1), 23-34. https://doi.org/10.5093/ejpalc2020a3 [ Links ]

Vrij, A., Leal, S., Mann, S., Dalton, G. Jo, E., Shaboltas, A., Khaleeva, M., Granskaya, J., & Houston, K. (2017). Using the Model Statement to elicit information and cues to deceit in interpreter-based interviews. Acta Psychologica, 177, 44-53. https://doi.org/10.1016/j.actpsy.2017.04.011 [ Links ]

Vrij, A., Leal, S., Mann, S., Vernham, Z., & Brankaert, F. (2015). Translating theory into practice: Evaluating a cognitive lie detection training workshop. Journal of Applied Research in Memory and Cognition, 4(2), 110-120. https://doi.org/10.1016/j.jarmac.2015.02.002 [ Links ]

Vrij, A., Mann, S., Leal, S., Fisher, R. P., & Deeb, H. (2020). Sketching while narrating as a tool to detect deceit. Applied Cognitive Psychology, 34(3), 628-642. https://doi.org/10.1002/acp.3646 [ Links ]

Vrij, A., & Nahari, G. (2019). The verifiability approach. In J. J. Dickinson, N. S. Compo, R. Carol, B. L. Schwartz, & M. McCauley, Evidence-based investigative interviewing: Applying cognitive principles (pp. 116-133). Routledge. [ Links ]

Vrij, A., & Vrij, S. (2020). Proportion of complications in interpreters-absent and interpreter-present interviews. Psychiatry, Psychology and Law, 27(1), 155-164. https://doi.org/10.1080/13218719.2019.1705197 [ Links ]

Appendix

The Selected Variables and Manipulations in the Five Experiments

Leal et al. (2015). Experiment 2

Dependent variables

Details refers to the total number of visual and contextual details reported during the interview.

Complications refers to the total number of complications reported during the interview.

Plausibility refers to the plausibility ratings of the interview.

Manipulation

Participants were or were not exposed to a Model Statement.

Leal et al. (2019)

Dependent variables

Details refers to the total number of visual and contextual details reported during the interview.

Complications refers to the total number of complications reported during the interview.

Plausibility refers to the plausibility ratings of the interview.

Verifiable sources refers to the total number of verifiable sources reported during the interview.

Manipulation

Participants were exposed to a Ghostwriter condition, a Be Detailed condition, or neither.

Vrij, Leal, Deeb, et al. (2020). Experiment 2

Dependent variables

Details refers to the total number of unique details reported throughout the different parts of the interview.

Complications refers to the total number of unique complications reported throughout the different parts of the interview.

Plausibility refers to the plausibility ratings of the entire interview.

Verifiable sources refers to the total number of unique sources reported throughout the different parts of the interview (witness and digital sources combined).

Manipulations

Participants were or were not exposed to a Model Statement and were or were not given an Information Protocol.

Vrij, Leal, Fisher, et al. (2020)

Dependent variables

Details refers to the total number of unique details reported throughout the different parts of the interview.

Complications refers to the total number of unique complications reported throughout the different parts of the interview (complications 'low' and 'medium/high' combined).

Plausibility refers to the plausibility ratings of the entire interview.

Manipulations

Participants were or were not given information to read about the Model Statement technique or about the dependent variables 'complications', 'common knowledge details', and 'self-handicapping strategies'.

Deeb et al. (2020)

Dependent variables

Details refers to the total number of unique details reported throughout the three interviews (core and peripheral details combined).

Complications refers to the total number of unique complications reported throughout the three interviews.

Plausibility refers to the plausibility ratings of the three interviews combined.

Manipulation

Participants were or were not asked to sketch while narrating.

Received: June 15, 2020; Accepted: October 22, 2020

Correspondence: aldert.vrij@port.ac.uk (S. Leal).

Conflict of Interest

The authors of this article declare no conflict of interest.

Creative Commons License This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.