Pre-attentive Processing: Implications for Product Design
March 2021 · 10 min read
Tara Sethi
Introduction
The human visual system collects information about the environment through pre-attentive processing. During this process, the visual system groups different kinds of information into visual channels. These channels allow humans to focus attention on specific information when necessary. Pre-attentive processing occurs subconsciously and takes less than 500 milliseconds [Preattentive Visual, 2019]. This function evolved over time, as early humans used pre-attentive processing to identify potential threats in the environment [Öhman, 1997]. Today, this evolutionary advantage helps humans react quickly, even in environments with high information density.
Through an understanding of pre-attentive processing, product designers can better evaluate how well a design supports the rapid collection of visual information. The following sections will examine the neurological foundation of pre-attentive processing, as well as the various theoretical models that build upon this knowledge. In addition, this paper will outline key organizational structures that assist humans in the process of information grouping. The applications of these insights will then be demonstrated through the analysis of a mobile financial management platform: the Fidelity Investments app.
The Science of Pre-attentive Processing
Connecting the Eye and Brain
After light reaches the human eye and retinal processing occurs, neural signals in the axons of the retinal ganglion cells (RGCs) follow pathways that connect the eye to an area called the optic chiasm [Erskine & Herrera, 2014]. Here, the optic nerve from each eye splits to form the right and left optic tract. About 90 percent of axons in the optic tract travel to the lateral geniculate nucleus (LGN) in the thalamus, which sends signals to the primary visual cortex through optic radiations [Schwartz & Krantz, 2016]. The remaining 10 percent of axons enter other structures, such as the superior colliculus, which helps to control rapid eye movement [Schwartz & Krantz, 2016].
The LGN consists of several layers, each with individual functions that assist in visual processing. The three major categories of layers are magnocellular, parvocellular, and koniocellular. Magnocellular layers carry information about motion and flicker, while parvocellular layers carry information about color, texture, form, and depth [Yantis & Abrams, 2017]. The functions of koniocellular layers are less well understood, although studies suggest they may also carry information about color [Nassi & Callaway, 2009]. Despite these differences in functional specialization, the layers share key response properties. Like retinal ganglion cells, neurons in the LGN have receptive fields with center-surround organization [Irvin et al., 1993; Xu et al., 2002]. This means they can transmit information not only about the amount of light but also about differences in light (i.e., contrast). These neurons also respond to visual stimuli that excite RGCs in the retina, forming pathways that extend from the retina through the layers of the LGN to the primary visual cortex [Xu et al., 2002; Casagrande, 1994].
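Center-surround organization is commonly modeled as a difference of Gaussians: an excitatory center minus a broader inhibitory surround. The Python sketch below illustrates the idea; the kernel size and widths are arbitrary illustrative values, not parameters from the cited studies.

```python
import numpy as np

def difference_of_gaussians(size=21, sigma_center=1.0, sigma_surround=3.0):
    """2-D difference-of-Gaussians kernel: an excitatory center minus a
    broader inhibitory surround, a textbook model of center-surround
    receptive fields."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_center**2)) / (2 * np.pi * sigma_center**2)
    surround = np.exp(-r2 / (2 * sigma_surround**2)) / (2 * np.pi * sigma_surround**2)
    return center - surround

kernel = difference_of_gaussians()
# Uniform illumination: center and surround nearly cancel, so the summed
# response is ~0. A spot or edge of contrast drives the center without the
# surround, producing a large output: the cell signals differences in light.
print(round(float(kernel.sum()), 4))
```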
Operations of the Primary Visual Cortex
Leaving the LGN, information reaches the primary visual cortex, also known as visual area 1 (V1), where the work of pre-attentive processing occurs. V1 is organized retinotopically: neurons are arranged two-dimensionally based on “the position of each retinotopic neuron to the point in the visual field corresponding to the center of its receptive field” [Warnking et al., 2002]. However, retinotopic maps in V1 are distorted. Due to cortical magnification, V1 does not exactly reproduce the information gathered by the retina; instead, some sensory receptors are allotted more cortical space than others [Schwartz & Krantz, 2016, p. 91]. For example, more V1 neurons respond to stimuli from the fovea than from the periphery because the density of foveal RGCs is much higher. Thus, many V1 cells serve a small area of the fovea while few V1 cells respond to an equivalent area in the periphery, leading to strong visual acuity in the fovea and poor visual acuity in the periphery [Yantis & Abrams, 2017].
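To give a sense of scale, one widely cited approximation of linear cortical magnification in human V1 is M(E) ≈ 17.3 / (E + 0.75) millimeters of cortex per degree of visual angle, where E is eccentricity in degrees (Horton & Hoyt, 1991). The small sketch below only illustrates the steep fovea-to-periphery falloff; the precise parameters are secondary here.

```python
# One widely cited approximation of linear cortical magnification in human V1
# (Horton & Hoyt, 1991): millimeters of cortex representing one degree of
# visual field at eccentricity E (degrees from the fovea).
def cortical_magnification_mm_per_deg(eccentricity_deg):
    return 17.3 / (eccentricity_deg + 0.75)

for e in (0.5, 2, 10, 40):
    print(f"{e:>4} deg -> {cortical_magnification_mm_per_deg(e):5.1f} mm of cortex per degree")
# Near the fovea (~0.5 deg) each degree gets ~14 mm of cortex; at 40 deg it
# gets ~0.4 mm, matching the acuity difference described above.
```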
Neurons in V1 receive information from receptive fields and are tuned to respond to certain stimulus features, including motion, color, depth, direction, length, and size. There are two major categories of neurons in V1: simple and complex. Both simple and complex cells are tuned to respond to edges[1] of a preferred orientation [Yantis & Abrams, 2017]. The two differ in that a simple cell responds only when a stimulus of its preferred orientation falls at a particular position within its receptive field [Yantis & Abrams, 2017]. Complex cells, the most numerous cell type, respond well to an appropriately oriented stimulus regardless of its exact location in the receptive field or the background environment. Many V1 neurons are also “end-stopped,” meaning they have “inhibitory regions outside the excitatory receptive field area” [Zarei Eskikand et al., 2016]. These cells help humans determine the boundaries of an object, as well as the location of an object through motion detection [Zarei Eskikand et al., 2016].
[1] See Jain et al. (1995) Chapter 5 for a description of edge and edge detection.
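Computationally, orientation-tuned cells of this kind are often modeled as Gabor filters: a sinusoidal grating windowed by a Gaussian envelope. The sketch below, using illustrative parameter values rather than fitted physiological ones, shows how a filter tuned to vertical structure responds strongly to a vertical edge while an orthogonally tuned filter barely responds.

```python
import numpy as np

def gabor(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    """2-D Gabor kernel: a sinusoidal grating under a Gaussian envelope,
    a common computational model of an orientation-tuned V1 simple cell."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    x_rot = xx * np.cos(theta) + yy * np.sin(theta)   # rotate the coordinate frame
    y_rot = -xx * np.sin(theta) + yy * np.cos(theta)
    envelope = np.exp(-(x_rot**2 + y_rot**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * x_rot / wavelength)

# A vertical luminance edge: dark on the left, bright on the right.
vertical_edge = np.tile(np.r_[np.zeros(8), np.ones(7)], (15, 1))

v_cell = gabor(theta=0.0)          # carrier varies along x: prefers vertical structure
h_cell = gabor(theta=np.pi / 2)    # carrier varies along y: prefers horizontal structure
print(abs((vertical_edge * v_cell).sum()))   # strong response to the matching orientation
print(abs((vertical_edge * h_cell).sum()))   # much weaker response to the mismatched one
```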
Key Pathways
Following V1, information lands in the secondary visual cortex, also known as visual area 2 (V2). A major pathway connects V1 to the adjacent V2 region, where cells are sensitive to color, motion, shape, and position. Together, V1 and V2 send information in parallel to further regions of the occipital cortex such as V3 and V4, areas that have been found to respond to color, motion, and orientation [Arcaro & Kastner, 2015]. These parallel pathways, known as the dorsal and ventral pathways, connect V1 to other regions of the cortex, and within them information is divided into channels. The dorsal pathway sends information from V1 through V2 and represents properties related to an object’s motion or location, information that is also used to guide action [Yantis & Abrams, 2017]. Meanwhile, the ventral pathway sends information from V1 and V2 into V4, representing properties that relate to an object’s identity, such as color and shape [Yantis & Abrams, 2017].
Contributing Theory in Visual Search
Feature Integration Theory
In 1980, Anne Treisman developed Feature Integration Theory to hypothesize how parallel processing occurs. In her framework, Treisman suggests that human vision consists of a set of feature maps, each corresponding to a specific visual feature. She proposed that humans have separate maps for color, orientation, shape, texture, and other pre-attentive features [Treisman & Gelade, 1980]. These maps resemble the neurological finding of retinotopic maps. Treisman argues that feature maps are processed in parallel during the early stages of perception [Treisman & Gelade, 1980]. Furthermore, Treisman found that some visual characteristics are more easily perceived than others among distractors, stating that “segregation is easy when areas differ in simple visual properties like shape and color” [Treisman, 1985]. Supporting Treisman’s findings, Julesz (1981) found that the visual system could detect groups of features, which he called “textons,” and suggested that differences between these textons could be processed pre-attentively [Julesz, 1981].
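A toy sketch can make the parallel-maps idea concrete. Here each display item is reduced to two numbers, one per feature; the numeric encoding and the deviation-from-the-mean salience measure are illustrative simplifications, not Treisman’s formal model.

```python
import numpy as np

# A toy display: one red vertical bar (the target) among green vertical bars.
# The target differs in a single feature, so it pops out on one map alone.
colors       = np.array([0, 0, 0, 1, 0, 0])   # 0 = green, 1 = red
orientations = np.array([0, 0, 0, 0, 0, 0])   # 0 = vertical, 1 = horizontal

def feature_map(values):
    """Per Feature Integration Theory, each map is computed in parallel and
    signals how much each item differs from the others on that feature alone."""
    return np.abs(values - values.mean())

color_map = feature_map(colors)
orientation_map = feature_map(orientations)
print(color_map.argmax())      # -> 3: the odd item pops out on the color map
print(orientation_map.max())   # -> 0.0: no signal at all on the other map
```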
Guided Search
Treisman’s findings suggest that parallel search occurs only for basic features such as color, size, and orientation; other features require focused attention, also known as serial search [Treisman & Gelade, 1980]. Treisman also argues that serial search is required when combinations of pre-attentive features, referred to as conjunctions, are present in the image. Guided Search Theory challenges these findings and suggests that parallel processes may influence the serial processes that follow [Wolfe et al., 1989]. Having participants search for conjunctions of color and form or color and orientation, Wolfe et al. (1989) found that conjunctions did not necessarily require a serial search. Instead, they propose that “parallel processes guide the ‘spotlight of attention’ toward likely targets” [Wolfe et al., 1989].
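Extending the toy model above, guided search can be sketched as a weighted sum of feature maps into a single activation map that ranks candidate targets; the weights and encoding are again illustrative, not Wolfe’s implemented model.

```python
import numpy as np

# Conjunction display: the target is the only item that is both red and horizontal.
colors       = np.array([1, 1, 0, 0, 1, 0])   # 1 = red
orientations = np.array([0, 0, 1, 1, 1, 0])   # 1 = horizontal

# Top-down weighting of "how red" and "how horizontal" each item is, summed
# into one activation map that guides attention toward likely targets.
activation = 1.0 * colors + 1.0 * orientations

for index in np.argsort(activation)[::-1]:    # attend to the highest peaks first
    if colors[index] == 1 and orientations[index] == 1:
        print("target found at item", index)  # found without exhaustive serial search
        break
```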
Similarity Theory
Also contrary to some findings of Treisman and Julesz, Duncan and Humphreys (1989) conducted four visual search experiments and found no evidence that the presence of a single feature versus a conjunction of features determines whether a visual display is processed in parallel or serially. Instead, the authors claim that the degree of similarity among features determines search difficulty, regardless of the search materials [Duncan & Humphreys, 1989]. They determined that search efficiency decreased as targets and non-targets became more similar, and as non-targets became less similar to one another [Duncan & Humphreys, 1989]. On this basis, the authors present their Similarity Theory.
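Their qualitative claim can be condensed into a toy index: difficulty rises with target/non-target similarity and falls as non-targets become more homogeneous. The ratio below is purely illustrative and is not a formula from Duncan and Humphreys.

```python
# A toy index of search difficulty in the spirit of Duncan & Humphreys (1989).
# Inputs are similarity scores in [0, 1]; the specific ratio is illustrative only.
def search_difficulty(target_nontarget_sim, nontarget_nontarget_sim):
    return target_nontarget_sim / max(nontarget_nontarget_sim, 1e-9)

print(search_difficulty(0.2, 0.9))  # distinct target, uniform distractors: easy search
print(search_difficulty(0.8, 0.3))  # similar target, varied distractors: hard search
```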
Perceptual Organization
Pre-attentive processes enable perceptual organization, through which humans group certain environmental elements together. The visual system supports this process by allowing humans to combine elements into a common unit or object that can be distinguished from the background. In the early 1900s, Gestalt psychologists were able to predict how perceptual grouping occurs under different circumstances [Yantis & Abrams, 2017]. These predictions were based on figure-ground organization, which assumes that humans can distinguish between a figure, the object of focus, and the background. The resulting principles of proximity, similarity, common motion, symmetry and parallelism, and good continuation demonstrate how features are grouped into wholes, and they continue to be tested and refined by modern research [Wagemans et al., 2012].
The principle of proximity suggests that elements that are closer together are more likely to be perceived as related. Proximity “can overpower competing visual cues such as similarity of color or shape” [Harley, 2020]. As a result, white space can be a powerful tool for separating elements [Harley, 2020]. The principle of similarity states that similar elements tend to be grouped, regardless of spacing; elements can be similar in terms of factors such as color, size, or orientation. Likewise, the principle of common motion states that elements that move in unison are also seen as a unified group. However, in a psychophysical study, Nothdurft (1993) found that color had a stronger impact on figure-ground discrimination than orientation or motion. Looking at alignment, the principle of symmetry and parallelism holds that elements that are symmetrically similar or parallel are often grouped together. In addition, several new principles of grouping have emerged since the Gestalt period, one being the principle of common region, which suggests that items within a boundary are perceived as a group and may share common functionality [Harley, 2020]. These grouping principles, enabled by processes in the visual cortex, help designers illustrate relationships among elements that users can effortlessly recognize.
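As a rough computational analogue of the proximity principle, interface elements can be grouped by merging any pair closer than a distance threshold, which is exactly why generous white space between groups keeps them perceptually separate. The coordinates and threshold below are hypothetical.

```python
from itertools import combinations

# Hypothetical element centers (x, y) on a screen, in pixels.
elements = [(10, 10), (14, 12), (12, 16),      # cluster A: tightly spaced
            (80, 82), (84, 80)]                # cluster B: far away

def group_by_proximity(points, threshold=15.0):
    """Single-linkage grouping: any two elements closer than `threshold`
    merge into one group, mirroring the Gestalt principle of proximity."""
    groups = [{i} for i in range(len(points))]
    for i, j in combinations(range(len(points)), 2):
        dist = ((points[i][0] - points[j][0])**2 +
                (points[i][1] - points[j][1])**2) ** 0.5
        if dist < threshold:
            gi = next(g for g in groups if i in g)
            gj = next(g for g in groups if j in g)
            if gi is not gj:
                gi |= gj           # merge the two groups
                groups.remove(gj)
    return groups

print(group_by_proximity(elements))   # -> two groups: {0, 1, 2} and {3, 4}
```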
Product Review
With an understanding of pre-attentive processing in place, the effectiveness of the Fidelity Investments mobile app can be examined. This application allows users to track spending, deposit checks, pay bills, and make trades online [Fidelity Investments, 2021]. Despite the inherent complexity of financial information, the goal of the Fidelity Investments app is to make managing money easy and efficient for everyday consumers who are not financial experts. For the purposes of this review, we will analyze three major pages of the tablet-compatible design. These pages are information-dense and contain highly variable data.
Throughout the application, the user interface (UI) design leverages pre-attentive features to support the automatic grouping of information. Across all pages, a grey border separates the bottom navigation from the rest of the UI, creating a common region that suggests the inner elements share similar functionality. Likewise, on the Accounts page shown in Figure 1, common regions established through the use of white tiles help group related information. The user can also distinguish the Accounts sidebar because the listed accounts are unaligned with elements in the All Accounts section. Employing the principle of proximity, the design uses white space to separate unrelated information (See Figure 1).
Figure 1
Fidelity Investments Mobile App: Accounts Page

Note. From Fidelity Investments [Photograph], by Fidelity Investments, 2021.
Furthermore, color provides an indication that certain items are actionable—shown via the green hue used on the selected button and navigation icon. In contrast, the darker grey informs the user which elements have already been selected (See Figure 1).
The Watch List page, which presents detailed stock market information, has a high level of information density. Here, the design follows the principle of common region to separate information that is close in proximity but not necessarily related. Using a table format, grid lines help group data with the corresponding row labels in the left-hand column. Red and green hues are also used to distinguish gains and losses, allowing the user to quickly identify market performance without needing to search through each row individually (See Figure 2).
Figure 2
Fidelity Investments Mobile App: Watch List Page

Note. From Fidelity Investments [Photograph], by Fidelity Investments, 2021.
However, the effect of these colors may be limited for users with color blindness. Shifting the red/green palette toward red-orange/blue-green or even pink/green would help ensure accessibility while still conforming to the color connotations widely used in financial contexts [Kirk, 2019].
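As one possible implementation of that recommendation, the sketch below swaps the default hues for two colors drawn from the Okabe-Ito colorblind-safe palette; the mapping itself is a design suggestion, not Fidelity’s actual style values.

```python
# Illustrative gain/loss palettes. The replacement hues come from the
# Okabe-Ito colorblind-safe palette; the mapping is hypothetical.
DEFAULT_PALETTE = {"gain": "#008000", "loss": "#FF0000"}      # green / red
ACCESSIBLE_PALETTE = {"gain": "#009E73", "loss": "#D55E00"}   # bluish green / vermilion

def color_for_change(change, palette=ACCESSIBLE_PALETTE):
    """Pick a hue for a price change, keeping the familiar financial
    connotation (greenish = gain, reddish = loss) while separating the
    two hues for red-green color-blind users."""
    return palette["gain"] if change >= 0 else palette["loss"]

print(color_for_change(+1.25))   # -> #009E73
print(color_for_change(-0.40))   # -> #D55E00
```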
On the Quote page, saturated colors draw the eye toward various charts illustrating financial information. Each chart leverages alignment to demonstrate relationships among the variables examined. The bars used in all three charts share similar sizes, shapes, and color palettes to convey that they are related. Across the page, the dashboard layout creates common regions defined by white tiles, separating different financial information related to the chosen stock. By using more white space, the Quote page minimizes the lines and grids that risk overwhelming the user. For example, the attributes and data shown in Quote Details are separated solely by an adequate amount of space and a variation in text color (See Figure 3).
Figure 3
Fidelity Investments Mobile App: Quote Page

Note. From Fidelity Investments [Photograph], by Fidelity Investments, 2021.
Despite strong conformance to grouping principles, many pages have low contrast between the background and dashboard elements. Although this design choice may align with a contemporary minimalist style, low contrast may impair users’ ability to distinguish common regions pre-attentively and have an adverse effect on search and click efficiency [Nothdurft, 1992; Michalski & Grobelny, 2008]. Hence, users may benefit from a darker background color or the use of borders to ensure regions are well-defined.
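This concern can be checked quantitatively with the contrast-ratio formula from the WCAG 2.x guidelines, which compares the relative luminance of two colors. The tile and background hex values below are hypothetical stand-ins for the app’s palette, not sampled from the actual product.

```python
def relative_luminance(hex_color):
    """WCAG 2.x relative luminance of an sRGB color given as '#RRGGBB'."""
    channels = [int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4)]
    linear = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
              for c in channels]
    r, g, b = linear
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(hex_a, hex_b):
    """WCAG contrast ratio, from 1:1 (identical) up to 21:1 (black on white)."""
    la, lb = sorted((relative_luminance(hex_a), relative_luminance(hex_b)), reverse=True)
    return (la + 0.05) / (lb + 0.05)

# Hypothetical values: a white tile on a very light grey page background.
print(round(contrast_ratio("#FFFFFF", "#F5F5F5"), 2))  # ~1.09: regions barely distinguishable
print(round(contrast_ratio("#FFFFFF", "#D9D9D9"), 2))  # ~1.41: a darker background separates regions better
```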
Conclusion
Pre-attentive processing plays an important role in visual perception. Allowing humans to process information subconsciously, pre-attentive processing requires no focused effort and occurs rapidly. Designers can leverage pre-attentive features and the principles of grouping to optimize the design of both physical and digital interfaces. Adhering to best practices, information can be displayed in a way that supports the functions of the visual system. Moreover, by understanding the neural basis of grouping principles, designers can better justify decisions that may otherwise be viewed as purely aesthetic.
References
Arcaro, M. J., & Kastner, S. (2015). Topographic organization of areas V3 and V4 and its relation to supra-areal organization of the primate visual system. Visual neuroscience, 32, E014. https://doi.org/10.1017/S0952523815000115
Casagrande, V. A. (1994). A third parallel visual pathway to primate area V1. Trends in neurosciences, 17(7), 305-310.
Duncan, J., & Humphreys, G. W. (1989). Visual search and stimulus similarity. Psychological review, 96(3), 433.
Erskine, L., & Herrera, E. (2014). Connecting the retina to the brain. ASN neuro, 6(6), 1759091414562107. https://doi.org/10.1177/1759091414562107
Fidelity Investments. (2021). Fidelity investments. Retrieved March 06, 2021, from https://apps.apple.com/us/app/fidelity-investments/id348177453#?platform=ipad
Harley, A. (2020, August 2). Proximity principle in visual design. Retrieved March 06, 2021, from https://www.nngroup.com/articles/gestalt-proximity/
Harley, A. (2020, July 12). The principle of common region: containers create groupings. Retrieved March 06, 2021, from https://www.nngroup.com/articles/common-region/
Irvin, G. E., Casagrande, V. A., & Norton, T. T. (1993). Center/surround relationships of magnocellular, parvocellular, and koniocellular relay cells in primate lateral geniculate nucleus. Visual neuroscience, 10(2), 363-373.
Jain, R., Kasturi, R., & Schunck, B. G. (1995). Machine vision (Vol. 5, pp. 140-145). New York: McGraw-hill.
Julesz, B. (1981). Textons, the elements of texture perception, and their interactions. Nature, 290(5802), 91-97.
Kirk, A. (2019, August 07). Five ways to... design for red-green colour-blindness. Retrieved March 07, 2021, from https://www.visualisingdata.com/2019/08/five-ways-to-design-for-red-green-colour-blindness/
Michalski, R., & Grobelny, J. (2008). The role of colour preattentive processing in human–computer interaction task efficiency: A preliminary study. International Journal of Industrial Ergonomics, 38(3-4), 321-332.
Nassi, J. J., & Callaway, E. M. (2009). Parallel processing strategies of the primate visual system. Nature reviews. Neuroscience, 10(5), 360–372. https://doi.org/10.1038/nrn2619
Nothdurft, H. C. (1992). Feature analysis and the role of similarity in preattentive vision. Perception & psychophysics, 52(4), 355-375.
Nothdurft, H. C. (1993). The role of features in preattentive vision: Comparison of orientation, motion and color cues. Vision research, 33(14), 1937-1958.
Öhman, A. (1997). As fast as the blink of an eye: Evolutionary preparedness for preattentive processing of threat. Attention and orienting: Sensory and motivational processes, 165-184.
Schwartz, B. L., & Krantz, J. H. (2016). Sensation and perception. SAGE Publications.
Treisman, A. (1985). Preattentive processing in vision. Computer vision, graphics, and image processing, 31(2), 156-177.
Treisman, A. M., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive psychology, 12(1), 97-136.
Warnking, J., Dojat, M., Guérin-Dugué, A., Delon-Martin, C., Olympieff, S., Richard, N., ... & Segebarth, C. (2002). fMRI retinotopic mapping—step by step. NeuroImage, 17(4), 1665-1683.
Wagemans, J., Elder, J. H., Kubovy, M., Palmer, S. E., Peterson, M. A., Singh, M., & von der Heydt, R. (2012). A century of Gestalt psychology in visual perception: I. Perceptual grouping and figure-ground organization. Psychological bulletin, 138(6), 1172–1217. https://doi.org/10.1037/a0029333
Wolfe, J. M., Cave, K. R., & Franzel, S. L. (1989). Guided search: an alternative to the feature integration model for visual search. Journal of Experimental Psychology: Human perception and performance, 15(3), 419.
Yantis, S., & Abrams, R. A. (2017). Sensation and perception. New York: Worth Publishers.
Zarei Eskikand, P., Kameneva, T., Ibbotson, M. R., Burkitt, A. N., & Grayden, D. B. (2016). A Possible Role for End-Stopped V1 Neurons in the Perception of Motion: A Computational Model. PloS one, 11(10), e0164813. https://doi.org/10.1371/journal.pone.0164813