Expert Systems With Applications 136 (2019) 145–158
Contents lists available at ScienceDirect: Expert Systems With Applications
Journal homepage: www.elsevier.com/locate/eswa

Comprehension of business process models: Insight into cognitive strategies via eye tracking

Miles Tallon a,d, Michael Winter b, Rüdiger Pryss b, Katrin Rakoczy c, Manfred Reichert b, Mark W. Greenlee a, Ulrich Frick d

a Institute for Experimental Psychology, University of Regensburg, Regensburg, Germany
b Institute of Databases and Information Systems, Ulm University, Ulm, Germany
c Leibniz Institute for Research and Information in Education, Frankfurt/Main, Germany
d HSD Research Centre, HSD – University of Applied Sciences, Cologne, Germany

Article history: Received 3 February 2019; Revised 30 May 2019; Accepted 15 June 2019; Available online 17 June 2019

Keywords: Visual literacy; Business process model; Eye tracking; Latent class analysis; Cognitive workload

Abstract

Process Models (PM) are visual documentations of the business processes within or across enterprises. Activities (tasks) are arranged together into a model (i.e., similar to flowcharts). This study aimed at understanding the underlying structure of PM comprehension. Though standards for describing PM have been defined, the cognitive workload they evoke, their structure, and the efficacy of information transmission are only partially understood. Two studies were conducted to better differentiate the concepts of visual literacy (VL) and logical reasoning in interpreting PM.

Study I: A total of 1047 students from 52 school classes were assessed. Three different process models of increasing complexity were presented on tablets. Additionally, written labels of the models' elements were randomly allocated to scholars in a 3-group between-subjects design. Comprehension of process models was assessed by a series of 3 × 4 (= 12) dichotomous test items.
Latent Class Analysis of solved items revealed 6 qualitatively differing solution patterns, suggesting that a single test score is insufficient to reflect participants' performance.

Study II: Overall, 21 experts and 15 novices with respect to visual literacy were presented the same set of PMs as in Study I, while wearing eye tracking glasses. The fixation duration on relevant parts of the PM and on questions was recorded, as well as the total time needed to solve all 12 test items. The number of gaze transitions between process model and comprehension questions was measured as well. Being an expert in visual literacy did not alter the capability of correctly understanding graphical logical PMs. Presenting PMs labelled with single letters significantly reduced the time spent on irrelevant model parts but did not affect the fixation duration on relevant areas of interest. Participants in both samples required longer response times with increasing model complexity. The number of toggles (i.e., gaze transitions between model and statement areas of interest) was predictive of membership in one of the latent classes. Contrary to expectations, denoting the PM events and decisions not with real-world descriptions but with single letters led to lower cognitive workload in responding to comprehension questions and to better results. Visual literacy experts could outperform neither novices nor high-school students in comprehending PM.

© 2019 Elsevier Ltd. All rights reserved.

Abbreviations: PM, Process Model; VL, Visual Literacy.
Corresponding author: M. Tallon, Institute for Experimental Psychology, Universitätsstraße 31, 93053 Regensburg, Germany.

1. Introduction

1.1. What are process models?
A process model (PM) is a textual or visual representation which documents all steps of an entire process (Schultheiss & Heiliger, 1963). Visual process models, inter alia, allow the depiction of complex algorithms, business steps, or logistical operations in a descriptive form (Aguilar-Saven, 2004; Bharathi et al., 2008; Rojas, Munoz-Gama, Sepúlveda & Capurro, 2016).
PMs should be designed such that practitioners can apply them to their tasks at hand (Roehm, Tiarks, Koschke & Maalej, 2012; Ungan, 2006). Moreover, PMs have to be understandable by all practitioners (Reggio, Ricca, Scanniello, Di Cerbo & Dodero, 2015; Zimoch, Pryss, Probst, et al., 2017). Existing research on process model comprehension has therefore considered two groups of factors: (1) subjective capability (e.g., model reader expertise) should be distinguished from (2) objective characteristics of the model itself (e.g., process model complexity).

Regarding objective factors, a framework has been proposed (Moody, Sindre, Brasethvik & Sølvberg, 2002) to evaluate the quality of process models. Notational deficiencies (e.g., lack of semantic transparency) and their influence on the comprehension of process models have been reported by Figl, Mendling and Strembeck (2013). Regarding subjective factors, Recker and Dreiling (2007) compared two popular process modeling languages (business process model notation, BPMN, and event-driven process chains, EPC). These studies focus on subjective aspects of PM comprehension, since they conclude that subjective factors have a greater impact than objective factors.
A recent overview of studies investigating subjective as well as objective factors of PM comprehension is provided by Figl (2017).

Understanding PMs may not only be regarded as an endpoint depending on both factors described above, but also as a key competence for a multitude of cognitive tasks that have in common the classification and ordering of events and decisions into meaningful sequences (Dumas, La Rosa, Mendling & Reijers, 2013). As PMs are mostly presented as charts following specific rules of formalization in a standardized notation, it seems to be of interest to analyse the interplay between the visual inspection of charts representing PMs and their comprehension (Barthet & Hanachi, 1991; Dumas et al., 2012).

1.2. Semantic notation of PM

After a series of experiments with both subjective (i.e., cognitive load; Sweller, Ayres & Kalyuga, 2011) and objective factors (i.e., semiotic theory), Mendling, Strembeck and Recker (2012) conclude that additional semantic information impedes syntax comprehension, whereas theoretical knowledge facilitates syntax comprehension.

The study at hand tries to widen the perspective on PM comprehension from pure graphical notation to semantic notions (real-world problem descriptions versus symbolic notation) as well as to personal capacities necessary for model comprehension (psychometric measurement of competence types or levels). Recker and Dreiling (2011) also highlight the importance of understanding subjective factors to enable the development of understandable PMs.

1.3. Visual literacy

Subjective factors play a key role in the understanding of PMs. It is therefore of interest to take a closer look at the ability of attentively analysing and interpreting images, an ability that has been coined Visual Literacy (VL; see Avgerinou & Petterson, 2011). From the review by Figl (2017), it becomes clear that the construct of VL has not yet been used to analyse potential interactions between subjective and objective factors with respect to model comprehension.
To the best of our knowledge, with the exception of a recent study (Bačić & Fadlalla, 2016), whose authors focused more on visual intelligence than on literacy, no study has yet been published dealing with the concept of Visual Literacy and its impact on PM comprehension. This is even more astonishing considering that VL has been postulated as a basic competence underlying the precise deciphering of images (the receptive component of VL), the production of such images, as well as reflection on the constituent processes (Wagner & Schönau, 2016). Images guide our perception of the world, our preferences, and our decisions, and VL is considered a central goal of arts education (Wagner & Schönau, 2016). Whether or not a good capability of analysing, memorizing, and envisaging visual stimuli is helpful for the comprehension or production of PMs (Brumberger, 2011) has yet to be determined.

It also remains unclear whether VL can be measured like an IQ score on a continuum of homogeneous tasks representing the same, continuously distributed latent trait, best assessed by a "Rasch scale" (see Boy, Rensink, Bertini & Fekete, 2014, for an example in the field of visualization capability). By contrast, VL might also follow a categorical model (Brill & Maribe Branch, 2007), in which different groups of people have specific gifts and talents in common, qualitatively differing from each other without the possibility of representing these differences by a single score (latent class model; see McCutcheon, 1987).

1.4. Eye tracking as measurement for PM comprehension

Eye tracking methods help to understand and visualize underlying cognitive processes in problem solving (Bednarik & Tukiainen, 2006). Thus, eye tracking can help to externally validate the measurement method of VL. Eye tracking has been established in the investigation of competence and competence acquisition (Jarodzka, Gruber & Holmqvist, 2017).
Conclusions about strategies or procedural knowledge can be drawn by analysing the processing of visual tasks that otherwise could not have been verbalized, or could only be partially verbalized, by the subjects retrospectively (Reingold & Sheridan, 2011; Sheridan & Reingold, 2014). The underlying cognitive processes thus may be better understood (Lai et al., 2013). Eye tracking measures have provided insights into differences between experts and novices (Gegenfurtner, Lehtinen & Säljö, 2011; Vogt & Magnussen, 2007), the prediction of fluid intelligence (Laurence, Mecca, Serpa, Martin & Macedo, 2018), as well as the distinction between strategies in spatial problem solving (Chen & Yang, 2014).

PM comprehension has been studied by means of eye tracking (Figl, 2017; Hogrebe, Gehrke & Nüttgens, 2011; Petrusel & Mendling, 2013; Zimoch, Mohring, et al., 2017, 2018), but not from the viewpoint of VL. It could be shown that subjects providing correct responses to comprehension questions after regarding a graphical model had fixated longer on relevant parts of the respective PM than on irrelevant parts (Petrusel & Mendling, 2013; Zimoch et al., 2018).

Cognitive strategies analysed via eye movements have been studied for graphically oriented intelligence tests (Hayes, Petrov & Sederberg, 2011; Vakil & Lifshitz-Zehavi, 2012). A recent study by Laurence et al. (2018) could predict from eye movement indicators approximately 45% of the variance of "Wiener Matrizen Test 2" (Formann, Waldherr & Piswanger, 2011) test results. Toggling (gaze transition between two areas of interest) has been shown to be the most reliable measure in this context (Laurence et al., 2018). Other typical measurements include pupillometry (Van Der Meer et al., 2010) or fixation distribution (Bucher & Schumacher, 2006; Najemnik & Geisler, 2005).
Based on previous results on the analysis of matrix-based cognitive tests, the present study enhances the spectrum of visual tasks and tries to compare similar output measures for the comprehension of PMs.

To conclude, this study contributes to further analysing the comprehension of PMs by using eye tracking data. Previous studies have shown that experts in their professional domain (e.g., art, medicine, chess) fixate longer on task-relevant parts and shorter on task-redundant parts (Gegenfurtner et al., 2011). It has yet to be determined how the comprehension of graphically presented logical models is influenced by VL.
1.5. Research goals and objectives

This study aims to apply psychometric concepts to the field of PM research. Moreover, we try to corroborate these efforts by using innovative technology (i.e., eye tracking measurements). Notably, the role of expertise in VL for solving visual tasks seems unclear, and even questionable for comprehending PMs.

Based on previous research on process model comprehension, this paper aims to contribute empirically to the understanding of influences on process model comprehension. Methodologically, this is accomplished by means of (1) latent class analysis (LCA) and (2) eye tracking. Through LCA, we are able to determine whether the answers given by students follow a homogeneous latent trait or should better be interpreted as qualitatively differing solution patterns. The use of eye tracking helps to identify potential differences in participants' understanding by analysing where and for how long subjects fixate PM aspects. Cognitive load theory (Sweller et al., 2011) interprets these measurements as indicators of cognitive workload.

In summary, three major research questions are addressed in this paper:

(1) How can the comprehension of PMs be measured in a population of students? More specifically, do answering patterns follow a homogeneous latent trait or should they be interpreted as qualitatively differing solution patterns?
(2) How do features of PMs affect general PM comprehension?
a. Do students successfully decipher the graphical notation (e.g., logical symbols like arrows, "x" or "+")?
b. How does the semantic notation of PMs influence the response time and PM comprehension?
c. What effect does model complexity have on response time and comprehension?
(3) How does the competence level in analysing and interpreting images (VL) covary with PM comprehension?
a. How do VL experts and novices differ in fixation duration on relevant resp. redundant parts of the PMs?
b.
How does expertise in VL covary with the volatility of gaze transitions?

2. Materials and methods

2.1. Subjects

Sample I comprised 1047 high-school students from 52 classes (9th to 13th grade: 21, 28, 1, 1, and 1 classes, respectively) in 29 schools in Germany. Overall, 52.5% were female; the average age was 15.27 years (SD = 0.94). Schools were recruited in the federal states of Hessen, North-Rhine Westphalia, Schleswig-Holstein, and Rhineland-Palatinate via leaflets, letters, and personal recommendations. The test was conducted in regular classrooms. Up to 30 students were able to participate in the test simultaneously. In Sample I, understanding PMs was one segment of a longer (duration: 45 min) test on Visual Literacy. All answers were given via touchscreen input by the participants. School classes were offered a lump sum of €100 as collective compensation.

Participants in Sample II were enrolled as experts in visual literacy (n = 21) if they were members of the European Network of Visual Literacy (ENViL) or worked in professions requiring high visual competence (photographer, gallerist, art educator, art designer, art student, or self-employed artist). Novices (n = 15) in visual literacy were adults from the clerical and academic staff of various educational settings who declared themselves as not overwhelmingly talented in or familiar with arts and visual design. The age span ranged from 16 to 66 years (M = 29.5). All participants had normal or corrected-to-normal vision. Student participants in Sample II received €20 each as compensation.
Other participants, including the expert group, who were intrinsically interested in the topic of Visual Literacy and eye tracking, participated without further compensation.

The study was conducted according to the guidelines for human research outlined in the Declaration of Helsinki and was approved by the Ethics Committee of Research of the Leibniz Institute for Research and Information in Education (DIPF, 01JK1606A). All subjects (and their legal representatives, respectively) had given written informed consent.

2.2. Materials and procedure

The assessment in both samples was conducted on Android A6 tablets with a 10.1-inch screen. All test items were programmed specifically for the assessment tool (Andrews et al., 2018). The process models were created in BPMN 2.0 (OMG, 2011). This language serves as an industry standard and constitutes the most widely used process modeling language (Allweyer, 2016).

All participants were given identical instructions on the tablet screen: "In the following, different processes are presented in the form of process models. A process model visualizes the sequence of events and decisions. Try to understand the process in the process model and select all correct statements (multiple statements can be correct)."

Participants were required to inspect three subsequently presented PMs and to evaluate 4 statements based on the respective model, representing a within-subject factor with three factor levels (Fig. 1). Statements were balanced for affirmation and rejection indicating the correct response. The models were ordered by increasing complexity, where each new model included more activities (boxes) and gateways (inclusive, exclusive, or parallel paths). Furthermore, in order to ensure a proper increase in process model complexity, the process models were created using the guidelines from Becker, Rosemann and Von Uthmann (2000) and the adapted cognitive complexity measure proposed by Gruhn and Laue (2006).
The comprehension statements as well as the activity labels in the respective "boxes" of each process model were randomly allocated to each subject in one of three different verbal frames, representing a between-subjects factor with the following factor levels: Letters (L), Sentences (S), and Pseudo Sentences (P). This manipulation means that events in the process models as well as in the comprehension test items were either denoted with a single letter (e.g., "execute F"), a meaningful sentence describing an everyday situation (e.g., "read Facebook message"), or a pseudo sentence (e.g., "An ecap with mistives cannot be handed over") using meaningless artificial nouns to describe the events.

For Sample II, SMI eye tracking glasses were used (SMI ETG 2w Analysis Pro). The glasses were positioned on the subject's head, and subjects were free to move their heads during task completion. Subjects were seated 50–80 cm away from the tablet screen. All eye tracking data were recorded at 60 Hz. Saccades and fixations (as well as blinks) were recorded binocularly and computed by the SMI event detection algorithm. Each session started with a 3-point calibration following the standard procedures for SMI iView™. The default eye movement parameters from SMI BeGaze™ version 3.7 were used. A fixation cross was displayed between each trial for 2 s. More details on the procedure and on data processing for eye tracking measurements are given in a supplementary e-appendix.
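The SMI event detection algorithm itself is proprietary. As a rough illustration of how fixations can be segmented from 60 Hz gaze samples, the following dispersion-threshold (I-DT) sketch may help; the function name and all thresholds are illustrative assumptions of ours, not SMI's parameters.

```python
# Illustrative I-DT (dispersion-threshold) fixation detection sketch.
# Thresholds (100 ms minimum duration, 30 px dispersion) are assumptions
# for demonstration only, not the parameters used by the SMI software.

def _dispersion(points):
    """Sum of horizontal and vertical extent of a gaze-sample window."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, hz=60, min_dur_s=0.1, max_disp=30.0):
    """samples: list of (x, y) gaze points in pixels, recorded at `hz`.
    Returns a list of (start_idx, end_idx, duration_s) fixations."""
    min_len = max(1, int(min_dur_s * hz))  # 0.1 s at 60 Hz -> 6 samples
    fixations = []
    i = 0
    while i + min_len <= len(samples):
        if _dispersion(samples[i:i + min_len]) <= max_disp:
            # grow the window while dispersion stays under the threshold
            j = i + min_len
            while j < len(samples) and _dispersion(samples[i:j + 1]) <= max_disp:
                j += 1
            fixations.append((i, j - 1, (j - i) / hz))
            i = j
        else:
            i += 1
    return fixations
```

A stable sequence of samples yields one fixation covering the whole span; a fast-moving sequence yields none, which is the qualitative behaviour any event-detection algorithm of this family shares.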
Fig. 1. Process Models (PM1, PM2, PM3) in the letter condition. PMs were presented to respondents in increasing complexity. The boxes (activities) include actions to be performed, the arrows (sequence flow) define the execution order of activities, the "x" (an exclusive gateway) splits the sequence flow into exactly one of the outgoing branches. The "+" symbolizes a parallel gateway that is used to activate all outgoing branches simultaneously.

2.3. Measurement and data analysis

The vector of 12 responses given on the tablets was transformed into 12 dichotomous items x, each representing a correct judgement of the underlying verbal statement (1 = correct). The vector x_ν of judgements was then analysed by latent class models (Dayton & Macready, 2006) describing typical solution patterns among the participants:

$$p(\mathbf{x}_\nu) = \sum_{g=1}^{G} \pi_g \prod_{i=1}^{k} \pi_{ixg}, \qquad \text{where} \quad \sum_{g=1}^{G} \pi_g = 1 \tag{1}$$

with g: index of the latent class (1…G), x: response chosen on item i (1…k), x_ν: vector of correct judgements, π_g: relative size
of class g, and π_{ixg}: probability of choosing response x on item i given class g.

Fig. 2. AOI distribution for PM2 (parallel paths, 1 loop): irrelevant PM parts (blue), relevant PM parts (red), and relevant parts of answers 1–4 (green). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Model parameters (π_g, π_{ixg}) were estimated with MPLUS (6.0) software for all LCA solutions between 2 and 8 latent classes. The best number of latent classes was decided on model fit criteria (AIC, BIC) and the Vuong-Lo-Mendell-Rubin Likelihood Ratio Test, as well as the Lo-Mendell-Rubin adjusted LR test implemented in MPLUS (Asparouhov & Muthén, 2012). In order to prevent local maxima of the likelihood function of the estimated parameters, the number of initial stage random starts was set to 1000, and the number of final stage optimizations to 50 for each number of classes. The estimated model parameters (π_g, π_{ixg}) can be used to calculate membership probabilities for each participant in every latent class g in the following way (see equation 37, Rost and Langeheine (1997), p. 29):

$$p(g \mid \mathbf{x}_\nu) = \frac{\pi_g \prod_{i=1}^{k} \pi_{ixg}}{\sum_{h=1}^{G} \pi_h \prod_{i=1}^{k} \pi_{ixh}} \tag{2}$$

Based on the modal value, each participant was classified into his/her most probable latent class. Participants from Sample II were also classified using their response patterns and the item parameters estimated from Sample I.
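As a numerical illustration of Eq. (2), the sketch below computes posterior class membership probabilities for a single dichotomous response vector. The π values are invented toy parameters for two hypothetical classes, not the estimates reported in this study.

```python
# Toy sketch of Eq. (2): posterior class membership for one response vector.
# pi_g[g] is the relative size of class g; pi_ixg[g][i] is the probability
# of a correct answer on item i given class g. All numbers are invented.

def class_posteriors(x, pi_g, pi_ixg):
    """x: 0/1 vector of correct judgements; returns p(g | x) per class."""
    likes = []
    for g, size in enumerate(pi_g):
        like = size  # pi_g * product over items of pi_ixg (or 1 - pi_ixg)
        for i, xi in enumerate(x):
            p = pi_ixg[g][i]
            like *= p if xi == 1 else (1.0 - p)
        likes.append(like)
    total = sum(likes)  # denominator of Eq. (2)
    return [like / total for like in likes]

pi_g = [0.4, 0.6]                 # two hypothetical latent classes
pi_ixg = [[0.9, 0.8, 0.7],        # class 0: high solution probabilities
          [0.2, 0.3, 0.1]]        # class 1: low solution probabilities
post = class_posteriors([1, 1, 0], pi_g, pi_ixg)
```

A respondent solving the first two of three items is then assigned to the class with the larger posterior (the modal value), mirroring the classification rule described in the text.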
Additional measurements in Sample II were based on the following eye tracking characteristics:

a) response latency: the time spent on each trial in seconds;
b) fixation duration on PM: the sum of all fixation durations on the model;
c) fixation time on statements: the time spent fixating the four response statements;
d) number of toggles: the number of transitions between model and responses; and
e) toggling rate: the number of toggles between model and responses divided by response latency.

Transitions between model and responses were counted each time the subject's gaze moved from the model area of interest (AOI) to any statement AOI or vice versa. Whenever the gaze stopped to fixate on regions that were not defined by any AOI ("white space"), the transition was not counted as a toggle.

Fixations for each trial were mapped onto corresponding reference images by a single rater (MT) using SMI fixation-by-fixation semantic gaze mapping. For a comparison to frame-by-frame mapping, see Vansteenkiste, Cardon, Philippaerts and Lenoir (2015). Independent ratings were performed (by MW) based on the complete datasets of two randomly chosen subjects. In our study we reached a high inter-rater reliability (Cohen's Kappa = 0.94 for all PMs). Fig. 2 shows the AOIs of the second PM. Relevant parts of the graphical model (coloured in red) that were necessary for correctly accepting/rejecting a statement were determined a priori by process modeling experts from Ulm University (Zimoch, Pryss, Schobel, et al., 2017). The wording of all test items (in German) was also a result of expert discussions within the same group.

All gaze data were acquired by SMI iView ETG™ software. The analyses were carried out with the SMI eye tracking software BeGaze 3.7.
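The toggle count described above can be made concrete in a few lines. The AOI labels below are hypothetical, and the rule that a white-space fixation breaks a model-statement transition follows our reading of the definition in the text.

```python
# Counting toggles between the model AOI and any statement AOI.
# Per the definition in the text, a fixation on an undefined region
# ("white space") breaks the transition, so model -> white space ->
# statement is NOT counted as a toggle. AOI labels are hypothetical.

MODEL = "model"
STATEMENTS = {"s1", "s2", "s3", "s4"}

def count_toggles(aoi_sequence):
    """aoi_sequence: AOI label per fixation, in order; None = white space."""
    toggles = 0
    prev = None
    for aoi in aoi_sequence:
        if aoi == MODEL:
            region = "model"
        elif aoi in STATEMENTS:
            region = "statement"
        else:
            region = None  # white space: resets the model<->statement chain
        if prev is not None and region is not None and region != prev:
            toggles += 1
        prev = region
    return toggles
```

The toggling rate of the text would then simply be this count divided by the response latency of the trial.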
Further information on the eye tracking equipment, technical settings, and calibration procedure can be found in the e-appendix of this article.

Differences between PMs were analysed using repeated measurement ANOVA models for all eye movement indicators. Due to the relatively small sample size, differences between groups of respondents on the same indicators (e.g., status of expertise) were tested using univariate GLM models. In order to test significant associations between latent class membership and eye movement indicators, dummy variables for the larger groups (LC4, LC5, and LC6, see Section 3.2) were constructed. In separate models, response latency, fixation duration on redundant or relevant parts of PM2 (second model in order of appearance), fixation duration on response statements, and number of toggles between PM2 and answering statements were tested as predictors of class membership via logistic regression models. All subjects not classified into one of the three larger groups were incorporated as part of the respective reference group, against which the impact of, for example, toggles was tested to predict membership. Again, due to the small sample size, these calculations were performed only as univariate analyses (only one predictor), omitting multivariate relationships and interaction effects during these explorative analyses. All statistical tests beyond the experimental variation of conditions are regarded as purely explorative and therefore not subject to measures against inflation of Type-I error risk.

3. Results

3.1. Solution patterns in scholars in Sample I

Both criteria (AIC and BIC) displayed substantial improvement of model fit until the introduction of a sixth latent class to be estimated. A seventh class resulted in deterioration of the BIC index, and no statistically significant differences could be demonstrated compared to the more parsimonious model with 6 latent classes in either the Vuong-Lo-Mendell-Rubin Likelihood Ratio Test or the Lo-Mendell-Rubin adjusted LR test. Therefore, six latent classes were chosen as the final solution.

Table 1. Process model complexity and latent class parameters in Sample I. Table 1 gives model parameters for all conditions.

Table 1 gives an overview of the item parameters π_{ixg}, which denote the probability of a correct solution in each of the six latent classes for each comprehension item. Red-shaded cells in Table 1 depict below-average probabilities (at least 10% below the sample average) of solving the respective item in each latent class. Green-shaded cells signify above-average probabilities (more than 10% above) of correctly solved items.

Interpretation of latent class 1 (LC1) and latent class 6 (LC6) seems straightforward: LC1 represents a group of persons with rather poor chances of solving each of the comprehension items. Members display probabilities at least 10% below the chance rates of the whole sample. This group comprised about 13% of the sample and was called "under-performers". On the contrary, LC6 consists of about 31% of the participants, with excellent performance: members had no comprehension probability below the sample average, and most items were solved with slightly or clearly better (green cells: > 10%) probabilities than the total sample. LC6 was called "logic champions".

LC2 (24%) closely resembles LC1, except that participants are most likely able to respond correctly to items 1 and 2 of the "parallel paths – 1 loop" model (PM2), which had zero probability in LC1.
On the other hand, the group LC5 (10%) is quite similar to the largest group, the "logic champions" (LC6), but fails to recognize the correct solutions for questions 1, 2, and 4 of the "parallel paths – 1 loop" model (PM2). LC2 can be labelled as "under-performers with understanding of simultaneous tasks", and LC5 as "logically correct thinking with misinterpretation of parallel paths".

LC3 represents a typical response pattern (12%) performing at an average level for all test items requiring a comparison of not more than two activities. But when 3 or more information units have to be combined for a correct solution, LC3 strongly underperforms (e.g., "After the execution of D, C takes place" (PM1, Q1) vs. "After the execution of F and G, H takes place immediately" (PM3, Q1)). This group was therefore called the "binary thinking group". Finally, the solution probabilities in LC4 (size 10%) display an excellent understanding of parallel paths (but a misunderstanding of the "x" notation of loops), and slightly below-average comprehension of PM1 and PM3.
Accordingly, this group was called the "multi-tasking group".

Both the numerous intersections of solution profiles in Table 1 and a formal model test of a Rasch scale (Andersen LR test: score = 104.99; df = 11; p < 0.0001) reject a homogeneous latent trait as an adequate psychometric model of PM comprehension as measured by the given 12 items (see Andersen, 1973; Rost, 1988). It is therefore not meaningful to interpret the sum of correctly solved items as a simple measure quantifying a latent, continuous ability of high-school students to understand graphical models. Instead, it seems necessary to compare the interrelations of the typical comprehension patterns as qualitatively differing groups according to other variables such as sociocultural background and task-relevant eye movements.

When events and decisions were presented under the "P" condition (pseudo sentences), latent classes 3 (binary thinking group) and 4 (multi-tasking group) were more prevalent (each by 12%) than expected under the assumption of no association between model condition and problem-solving pattern (see Table 2), while the better performing groups LC5 and LC6 were under-represented. Thus, describing processes with pseudo sentences seems to prohibit correct deciphering of more complex loop structures. When PMs were presented with meaningful sentences (condition "S"), latent classes 2 (under-performers with understanding of simultaneous tasks) and 5 (misinterpretation of parallel paths) were clearly over-represented (by 15% and 26%, respectively). Finally, under the condition of solely mentioning
Table 2. Number of latent class members by model condition in Sample I. Rows: latent classes 1–6; columns: conditions Letter (L), Sentence (S), Pseudo Sentence (P).