Chapter 11 Abilities and individual differences
In studying tracking, we have so far discussed only the classic experimental approach of manipulating factors within participants. That approach has led to our present understanding of the roles of spatial interference and temporal interference, and of some of the relationships to the processes underlying other tasks. However, a different approach, the study of individual differences, is also valuable. In the individual-differences approach, one examines the pattern of scores on different tests to see which abilities tend to go together in the natural variation among people. Abilities that co-vary strongly are more likely to share underlying processes than abilities that do not.
11.1 Do people vary much in how many objects they can track?
Generally in psychology, documenting a difference among people requires more than ten times as many participants as a within-participants experimental design (Schönbrodt and Perugini 2013), yet some studies have not used large samples. Beyond this shortcoming of the literature, there are two other common pitfalls of MOT and MIT individual-differences studies.
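Before turning to those pitfalls, it may help to see why individual-differences work needs such large samples. The sketch below simulates many studies of two tasks whose true correlation is fixed, purely for illustration, at 0.4; the sample sizes are likewise invented rather than taken from any study discussed here.

```python
import numpy as np

rng = np.random.default_rng(0)
true_r = 0.4                               # assumed true correlation, for illustration only
cov = [[1.0, true_r], [true_r, 1.0]]

for n in (20, 50, 200, 800):               # hypothetical sample sizes
    # Simulate 2000 studies of size n and record each study's estimated correlation
    estimates = [
        np.corrcoef(rng.multivariate_normal([0.0, 0.0], cov, size=n).T)[0, 1]
        for _ in range(2000)
    ]
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    print(f"n = {n:3d}: 95% of estimates fall between {lo:.2f} and {hi:.2f}")
```

With twenty participants the estimate can land anywhere from near zero to around 0.7, whereas with several hundred it stabilizes, which is the general point made by Schönbrodt and Perugini (2013).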
One pitfall is failing to consider that differences in motivation, rather than ability, can explain some individual-difference findings. Meyerhoff and Papenmeier (2020) tested fifty participants and, for each, calculated the effective number of items tracked in a display with four targets and four distractors. The modal effective number of items tracked was around two, but a substantial proportion of participants came in at three or at one target tracked, and a few scored close to zero. Meyerhoff and Papenmeier (2020) concluded that some participants could track only one target or none, while others could track more. Unfortunately, there is no way to know how much of the variation between individuals was due to motivation rather than ability. Measuring motivation reliably is very difficult, but researchers can include attention checks or catch trials to allow the exclusion of participants who show clear evidence of not reading the instructions carefully or of frequently not paying attention.
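For readers unfamiliar with how an “effective number of items tracked” is derived from accuracy, here is a minimal sketch of one commonly used guessing correction. It assumes a single-probe design in which the probed object is equally likely to be a target or a distractor, together with a simple high-threshold model of guessing; whether this matches the exact estimator used by Meyerhoff and Papenmeier (2020) is not assumed.

```python
def effective_targets_tracked(p_correct, n_targets):
    """Guessing-corrected estimate of the number of targets tracked.

    Assumes a single-probe design in which the probe is equally likely to be
    a target or a distractor, and a high-threshold model in which the observer
    answers "target" only for objects actually being tracked.  Under that
    model, p_correct = 0.5 + 0.5 * (m / n_targets), which rearranges to the
    formula below.
    """
    m = n_targets * (2 * p_correct - 1)
    return max(0.0, min(float(n_targets), m))   # clip to a sensible range

# Example: 75% correct with four targets implies roughly two targets tracked.
print(effective_targets_tracked(0.75, 4))       # -> 2.0
```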
Oksama and Hyönä (2004) were also interested in how many objects people can track. They managed to test over two hundred participants, and like Meyerhoff and Papenmeier (2020) they found what appeared to be substantial variation in capacity, with some people able to track six objects, while many could track only two or even just one. Their participants, provided by an air pilot recruitment program, consisted entirely of people who had scored in the top 11% of a larger group on intelligence tests. This provides some confidence that the participants were motivated. The study, however, suffers from what I think of as the second pitfall: the failure to assess task reliability. On any test, a participant will tend to get somewhat different scores when tested on two different occasions, even if they did not learn anything from their first experience with the test. The extent to which participants’ scores are similar when they take a test twice is known as test-retest reliability. Ideally, this is measured with two tests administered at very different times, but a more limited assessment is provided by dividing a single session’s trials into two groups and calculating the correlation between them, known as split-half reliability. Knowing the reliability allows us to calculate how much of the variation in scores between participants is expected simply from the noisiness of the test. Without knowing the reliability, there remains the possibility that the extreme variation in scores, with some participants’ data indicating that they could track only one target, is due to limited reliability: extensive testing of those participants might reveal that their low score was merely a fluke.
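As a concrete illustration, the sketch below computes split-half reliability by correlating each simulated participant’s accuracy on odd-numbered trials with their accuracy on even-numbered trials and then applying the standard Spearman-Brown correction. The data are invented, and the studies discussed may have split their trials differently.

```python
import numpy as np

def split_half_reliability(trial_scores):
    """Split-half reliability from a participants x trials matrix of scores.

    Correlates each participant's mean on odd-numbered trials with their mean
    on even-numbered trials, then applies the Spearman-Brown correction
    2r / (1 + r) to estimate the reliability of the full-length test.
    """
    scores = np.asarray(trial_scores, dtype=float)
    odd = scores[:, 0::2].mean(axis=1)
    even = scores[:, 1::2].mean(axis=1)
    r_half = np.corrcoef(odd, even)[0, 1]
    return 2 * r_half / (1 + r_half)

# Toy data: 30 participants x 40 trials, each trial scored correct (1) or not (0).
rng = np.random.default_rng(1)
ability = rng.uniform(0.5, 0.95, size=(30, 1))   # stable person-level ability
data = rng.binomial(1, ability, size=(30, 40))   # trial-level noise around it
print(round(split_half_reliability(data), 2))
```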
Subsequent studies have assessed reliability. Happily, the reliabilities found for MOT are very high: 0.96 (Huang, Mo, and Li 2012), 0.85 (Wilbiks and Beatteay 2020), 0.92 (Treviño et al. 2021), and 0.87 (Eayrs and Lavie 2018), near the highest of all the tests administered in those studies (although only split-half reliability was calculated, rather than testing on separate days). This looks especially good when one considers that many basic cognitive and attentional tasks have notoriously low reliabilities (Hedge, Powell, and Sumner 2018). Tasks with low reliabilities are poorly suited to individual-differences studies: as mentioned above, such studies largely depend on measuring the pattern of correlations between tasks to reveal the relationships among abilities, and the lower the reliability of a task, the harder it is to measure its correlation with another task accurately.
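The classical-test-theory attenuation formula makes this concrete: the expected observed correlation between two tasks is roughly the true correlation multiplied by the square root of the product of their reliabilities. A minimal sketch, with reliabilities chosen only for illustration:

```python
import math

def max_observed_correlation(rel_x, rel_y, r_true=1.0):
    """Ceiling on the observed correlation implied by the two tasks' reliabilities."""
    return r_true * math.sqrt(rel_x * rel_y)

# Two tasks with reliabilities 0.9 and 0.4: even a perfect true relationship
# could not be expected to produce an observed correlation above about 0.6.
print(round(max_observed_correlation(0.9, 0.4), 2))   # -> 0.6
```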
What do these high reliabilities mean for tracking? First, they suggest that the large individual differences observed by Oksama and Hyönä (2004) and others are real; possibly, some young, healthy, high-intelligence people truly can track only one target. Second, the high reliability of MOT means that individual-differences studies are a viable avenue for gaining new insights about tracking and its relation to other mental abilities.
In the general population, ageing is likely a major source of individual differences in MOT: older participants perform much worse than younger participants (Trick, Perl, and Sethi 2005; Sekuler, McLaughlin, and Yotsumoto 2008; Roudaia and Faubert 2017). Using a multiple trajectory tracking task, in which participants detect which of several objects changed its trajectory, Kennedy, Tripathy, and Barrett (2009) found a steep performance decline between 30 and 60 years of age: the effective number of trajectories tracked dropped by about 20% with each decade of ageing, a decline that could not be explained by reduced visual acuity. This is something that theories of ageing and attention ought to explain. This result must also color our interpretation of individual-differences studies that use samples with a wide age range: some of the correlations with other tasks will likely be due to abilities declining together with age rather than to the abilities being linked in people of the same age. That’s still useful for drawing inferences, but the inferences should perhaps be different from those drawn from individual-differences studies of undergraduates.
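To put that per-decade rate in perspective, if the roughly 20% drop compounded multiplicatively (an assumption made here purely for illustration; the decline could instead be roughly linear), capacity at 60 would be about half of capacity at 30.

```python
# Illustration only: a 20% drop per decade, compounded, roughly halves capacity
# over three decades (0.8 ** 3 is about 0.51).
capacity_at_30 = 4.0                    # hypothetical starting capacity
for decades in range(4):
    age = 30 + 10 * decades
    print(age, round(capacity_at_30 * 0.8 ** decades, 2))
```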
The MOT individual-differences literature has mostly taken a fairly wide-angle approach: participants are tested with a variety of tasks to see which mental abilities are linked. The first large-scale study, however, concentrated on tasks typically thought of as attentional (Huang, Mo, and Li 2012). Liqiang Huang and his colleagues used tests of conjunction search, configuration search, counting, feature access, spatial pattern, response selection, visual short-term memory, change blindness, Raven’s test of intelligence, visual marking, attentional capture, consonance-driven orienting, inhibition of return, task switching, mental rotation, and Stroop. In their sample of Chinese university students, many of these tasks showed high reliabilities of over 0.9, meaning that there was potential for high inter-task correlations (inter-task correlations are limited by the reliabilities of the two tasks involved). However, the highest correlation of any task with MOT was 0.4. That task was counting, which required judging whether the number of dots in a brief (400 ms) display was odd or even (3, 4, 13, and 14 dots were used, so the task included the subitizing range). Change blindness, feature access, visual working memory, and visual marking were runners-up, with correlations with MOT of around 0.3.
That no task correlated with MOT above 0.4 is very interesting, but also disappointing. It’s interesting because it suggests that MOT involves abilities distinct from those tapped by several other tasks that have previously been lumped together with MOT as “attentional”. It’s disappointing because it suggests that our theoretical understanding of these tasks is sorely lacking, and also because low correlations make it hard to discern the pattern of correlations: when the highest correlation is 0.4, one needs very narrow confidence intervals to be confident of the ordering of the tasks.
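To see why, consider the standard Fisher z approximation for the confidence interval of a correlation. With an assumed sample of 200 participants (the actual sample sizes of the studies are not assumed here), correlations of 0.4 and 0.3 have heavily overlapping intervals:

```python
import math

def correlation_ci(r, n, z_crit=1.96):
    """Approximate 95% confidence interval for a correlation (Fisher z method)."""
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

for r in (0.4, 0.3):
    lo, hi = correlation_ci(r, 200)       # n = 200 is a hypothetical sample size
    print(f"r = {r}: 95% CI [{lo:.2f}, {hi:.2f}]")
```

Securing the ordering of two tasks whose correlations with MOT differ by only 0.1 therefore requires either much larger samples or explicit tests of the difference between the correlations.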
Treviño et al. (2021) reported data from a web-based test of an opportunity sample of more than 400 participants aged 18 to 89. The battery included cognitive, attentional, and common neuropsychological tasks: arithmetic word problems, the trail-making task, digit span, digit symbol coding, letter cancellation, spatial span, approximate number sense, flanker interference, gradual-onset continuous performance, spatial configuration visual search, and visual working memory, as well as MOT. MOT had among the highest reliabilities, at 0.92. MOT performance correlated little with performance on the task designed to measure sustained attention over an extended period (about five minutes), the gradual-onset continuous performance task (Fortenbaugh et al. 2015). This supports the tentative conclusion that the ability to sustain attention without lapses is not an important determinant of tracking performance.
In the Treviño et al. (2021) battery, the task that most resembled the counting task found by Huang, Mo, and Li (2012) to have a high correlation with MOT was an approximate number sense task, which had a moderate correlation of 0.3. The approximate number sense task differed from the counting task of Huang, Mo, and Li (2012) in not testing the subitizing range (fewer than five items), which might help explain the apparent discrepancy. Indeed, Eayrs and Lavie (2018) found, using hierarchical regression, that subitizing made a contribution to predicting MOT performance that was somewhat separate from that of an estimation task using larger set sizes.
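For readers unfamiliar with the logic of hierarchical regression, here is a minimal sketch using synthetic data (the variable names and effect sizes are invented, not taken from Eayrs and Lavie 2018): an estimation predictor is entered first, and the question is how much additional variance in MOT performance a subitizing predictor explains.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

# Synthetic data: two partly overlapping predictors of MOT performance.
estimation = rng.normal(size=n)
subitizing = 0.4 * estimation + rng.normal(size=n)
mot = 0.5 * estimation + 0.4 * subitizing + rng.normal(size=n)

def r_squared(y, predictors):
    """R-squared from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_step1 = r_squared(mot, [estimation])
r2_step2 = r_squared(mot, [estimation, subitizing])
print(f"Step 1 (estimation only): R^2 = {r2_step1:.2f}")
print(f"Step 2 (+ subitizing):    R^2 = {r2_step2:.2f}, increment = {r2_step2 - r2_step1:.2f}")
```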
The tasks with the highest correlations with MOT in the data of Treviño et al. (2021) were visual working memory, spatial span, letter cancellation, and digit symbol coding, all at around 0.5. As the authors pointed out, the letter cancellation and digit symbol coding tasks are complex tasks believed to reflect a number of abilities. This makes it hard to interpret their correlation with MOT. Spatial span and visual working memory are quite different from MOT, but similar to each other in that they both involve short-term memory for multiple visual stimuli.
Overall, there is a reasonable level of agreement across these individual-differences studies, as well as others not reviewed here, such as Trick, Mutreja, and Hunt (2012). Visual working memory has a robust correlation with MOT performance, which is interesting because, superficially, MOT imposes little to no memory demand. Many researchers conceive of tracking as simply the simultaneous allocation of multifocal attention to multiple objects, with a process independent of memory causing the foci of attention to move along with the moving targets.
From the consistently strong correlation of MOT performance with visual working memory, it is tempting to conclude that the two tasks are tightly linked mechanistically. It must be remembered, however, that working memory tasks are among the best predictors of performance on a wide range of tasks, including the Stroop task, spatial cuing, and task switching, as well as of intelligence (e.g. Redick and Engle 2006).
11.2 Going deeper
Variation in multiple object tracking is unlikely to be caused by variation in just one ability. We now understand that tracking performance can be limited by spatial interference and temporal interference, as well as by less task-specific factors such as lapses of attention.
Unfortunately, no individual-differences study to date appears to have used task variations to partial out components of MOT and determine whether they show different patterns of correlations with other tasks. For other tasks, ones using static stimuli, studies have revealed substantial individual differences in spatial interference (Petrov and Meleshkevich 2011), such as larger crowding zones in some types of dyslexia (Joo et al. 2018). It’s possible that these differences are responsible for a large part of the inter-individual differences in MOT. There is also evidence that training with action video games can reduce spatial interference and improve reading ability (Bertoni et al. 2019), which makes it especially important that spatial interference be investigated further.
With the growth of online testing, the sample sizes required for individual-differences studies have become easier to obtain, making individual differences a promising direction for future work. Researchers should, however, be mindful of the issues particular to individual-differences studies, such as the pitfalls described at the beginning of this chapter.