1. Prologue
Given the many flaming arrows that have been shot at work in the digital humanities (DH)—from the charges of cryptotheology (Fish) to neoliberal complicity (Allington et al.)—it’s tempting to feel at ease with the charges of theoretical deficiency (Warwick). With this paper, we want to elaborate on this stance and make the case for three larger points: i) the partial validity of the ‘untheory’ charge for DH (sec. 1), ii) the defensive relevance of the theory of distributed cognition (sec. 2-5), but also iii) the potential emergence of countercharges, arising from expected revisions of text-based theory concepts by visual perspectives in DH (sec. 6-8).
First, we concur insofar as we do not see the creation of theoretical texts at the center of DH practices. Rather, we understand them as complementary, computational approaches to cultural data and topics through the development and use of digital tools—and through the evaluation of their contributions to humanities research (Gold; Piper). At first sight, this focus on a largely supportive role to a largely interpretive and theoretical discourse field appears as a feature: Given that a perceived lack of practical or technological relevance is a prominent topos in the humanities’ worsening struggle for reproduction (Plumb; Reitter and Wellmon), DH could be welcomed as a counterbalancing force. Building up tool and technology expertise for more ‘transformative’ (Epstein) humanities departments, however, comes with some costs, including a reduced engagement in existing theory wars.[1] In exchange, DH adds a stronger development focus to the traditional humanities (TH) practice portfolio, and against this background of a division of labor, the ‘untheory’ charges would rather confirm a subdivision’s corrective course.
Sometimes, this shift also feels like a cyclical trend, as a field can arguably suffer from both—shortage and oversupply. If the prolific decades of high theory and critique in the 20th century soon morphed into rather unproductive wars of discursive attrition—or even into poisonous, post-factual discourse derivates (Drolet and Williams; MacMullen, “What Is ‘Post-Factual’ Politics?”)—a reassessment of the whole genre seems to be indicated (Felski; Latour, “Why Has Critique Run out of Steam?”). Given all of its own cryptotheological issues, even obituaries of the whole field have mustered a certain plausibility (During; Felsch; Knapp and Michaels, “Against Theory”; Knapp and Michaels, “Here Is a Wave Poem”).
It is our understanding though that digital humanists rarely subscribe to such a general narrative of decline—but prefer a material and technological remediation perspective: “Boiled down blithely, the theory is in the tool, and we code tools” (Bianco 99). Working on such translations from elaborate to computational code takes time, a circumstance that has sometimes been condensed into feisty slogans such as “more hack, less yack” (Nowviskie), or “mak(ing) things, as opposed to talking or writing about them” (Warwick 539). Yet even the originators of these distinctions are not known for taking an anti-theoretical stance but rather for arguing for mutually complementing efforts of ‘more hack and yack’ (Bauer; Cecire; Nowviskie; Warwick).
Notwithstanding this widespread symbiotic stance, DH can draw on a whole range of autonomous or affiliated theories, co-created in and around its labs. As such, “nothing ‘needs to be theorized’ in a vague transitive way. […] DH is an intensely interdisciplinary field that already juggles several different kinds of theory, and actively reflects on the social significance of its endeavors” (Underwood).
In the following, we turn to one of these circulating theories—the theory of the extended mind—which provides a luminous lens to reflect on technology-driven times from an anthropological and socio-cognitive point of view (Hayles). The seminal discourse in Cognitive Science has referred to itself with varying emphases on embodied, embedded, extended or enactive aspects of cognition, so for pragmatic reasons we will refer to the whole cluster as distributed cognition (DC). In their book on DC and the humanities, Anderson states that DC “casts a new light on issues that are central to the humanities and enables us to better explain the nature of forms of human culture and how and why they emerge and evolve” (11). Going beyond TH, we think that distributed cognition has an even higher potential to illuminate and theorize DH constellations due to its relevance for work in human-computer interaction and for the corresponding assessment of digital methods and artifacts.[2]
We elaborate on this with an example from our research field, which is the visualization of cultural heritage data. Based on our expectations for this field, we wrap up this paper with an outlook on a possible reversal of the theory deficiency charges, as visual methods in DH are paving the way for the arrival of novel kinds of “theories”. More in line with the ancient visual practice of “theoria”, these might more blithely tap into the potential of multimodal and visual cognition, which arguably has been rashly neglected in logocentric, theoretical times.
2. Theories of Distributed Cognition
Theories on the extended, situated mind provide a strong theoretical basis and a scholarly rationale for a keen interest and investment in tool and technology development—both from a cognitive and anthropological point of view. Extending the traditional view of cognition as happening exclusively in the human mind, theories of 4E cognition (i.e., embodied, embedded, extended and enactive; Anderson et al.) consider the activities within nervous systems of individual organisms to provide just one component of effective, intelligent behavior, which essentially depends on the productive interplay with further, external entities to form effectively cognizing and problem-solving hybrids. This basic idea of a material and social ecology of the mind (Bateson) has many implications and a complex history, and it has caused substantial discussion in cognitive science (Norman, Things)—even though many of its findings are eminently hard to argue with. To cope with the multiple facets of distributed cognition, “philosophers and cognitive scientists who work in this area often adopt what might appear to be a mix-and-match approach: they accept some ‘distributed’ claims but reject others” (Anderson et al. 10), so let’s dive into some of these claims and related debates.
While he built on prior work by other researchers, Edwin Hutchins is credited with coining the term distributed cognition (Cognition in the Wild; Hutchins, “The DC Perspective”). Coming from cognitive anthropology, Hutchins studied the navigation processes on a navy ship and observed that its officers and their cognitive activities inescapably depended on continuous social interaction and on the methodical use of navigational tools. Based on these observations he argued that cognition ‘in the wild’ cannot be well understood as computation of information in the individual human mind (as emphasized by the ‘symbolic paradigm’), but only as a systemic activity that is fundamentally distributed a) across humans and their physical environment, including tools, b) across multiple interacting persons, and c) across time—as it constantly builds on and uses (cultural) artifacts, which resulted from prior cognitive processes (see figure 1). Therefore, he claimed that the study of cognition should not center its concepts on the realm of individual brains and minds but should widen its focus to extended socio-technical systems, i.e., to the crucial border-crossing interaction flows and cycles, and thus to hybrid systems extending from individuals into their social, technical or material environments. “Rather than assuming a boundary for the unit of analysis a priori, distributed cognition follows Bateson’s advice and attempts to put boundaries on its unit of analysis in ways that do not leave important things unexplained or unexplainable” (Hutchins, “The DC Perspective” 376).
Hutchins’ concept of cognition as a distributed endeavor influenced many other researchers. With their notion of an extended mind, Andy Clark and David Chalmers proceeded to argue that humans and their environment constitute a coupled system with two-way causal interaction. As such, they attribute an active role to the environment, which causes specific forms of cognitive activities, and argue that the mind is an extended phenomenon per se, which cannot be reduced to individual mental activities. Clark (Supersizing the Mind) further assumes that humans not only use external tools to extend cognitive processes, but also incorporate these tools into their cognitive activities. In his sense, culturally transmitted tools include not only material artifacts (such as compasses, clocks, or calendars), but also the symbols, concepts, and expressions of human language (Clark, “Language, Embodiment, and the Cognitive Niche”). Language not only structures how we interact with others (in communication processes) and with our material environment (via associated verbal concepts and the language-basis of different kinds of media), but also how we structure and manipulate concepts and conceptual networks in our thoughts during activities of problem solving and self-reflection.
Akin to the idea of an extended mind, Lucy Suchman (Plans) argued that cognition is essentially situated in our environments rather than our brains, as intelligent action is constantly and dynamically adapted based on the interactions with our physical and social world. Central to her work on situated action are plans, which direct the interaction with the environment and which are frequently revised based on the restrictions and options it presents.
Influenced by DC theories, Donald Norman took a closer look at the technological side of distributed cognition and argued that both external representations and tools (“cognitive artefacts”) can make us smart—or not—depending on their match with our respective cognitive tasks and processes. He popularized the term affordance to describe that some tools are well suited for or even trigger some cognitive tasks, but not others. Text processing programs, for instance, either foster or hinder collaborative work on a journal article by (not) enabling simultaneous writing and editing of the text on the web, by offering comments and a suggestion mode, or by providing awareness of others’ presence and activities. Each of these features also triggers specific cognitive activities, like critique, reflection, or discussion, which can fundamentally change the result of a writing process.
One central tenet of all theories related to distributed cognition is that humans benefit from ‘offloading’ cognitive processes to the environment and thus from unburdening their cognitive systems through their interaction with both artifacts and other individuals. Sense-making (thinking, reasoning, problem solving) does not happen in individuals alone, but by perceiving, exchanging, and processing information collectively and by manipulating and utilizing artifacts in hybrid assemblies of human-thing or human-technology interaction. Yet, while theories of distributed cognition strengthen and emphasize environmental, technological, social, and cultural factors, no advocate of distributed cognition believes that the brain is somehow unimportant. Rather, their proposal is that “to understand properly what the brain does, we need to take proper account of the subtle, complex and often surprising ways in which that venerable organ is enmeshed with, and often depends on, non-neural bodily and environmental factors, in what is the co-generation of thought and experience” (Anderson et al. 3).
While these ideas have developed and interconnected mainly in a cognitive-scientific discourse, they resonate with conceptual and theoretical equivalents in other disciplines and their discourses, especially in the fields of media theory and science and technology studies, where various aspects of humans’ fundamental co-evolutionary dependency on technology and nootechnology[3] have been elaborated and analyzed (Latour, We Have Never Been Modern; McLuhan). One main takeaway from these frameworks—and from this first dive into DC concepts—would be that neither human cognition nor culture can be meaningfully understood without taking into account the tools and networks which make them effective and evolutionarily successful (Richerson and Boyd). The results of human sense-making and cultural activities—including the works of humanist scholarship—emerge from the deeply entwined processes of cognitive, social, and technical co-creation.[4]
3. Distributed Cognition in the Wild and throughout History
In his seminal book Cognition in the Wild, Hutchins distinguishes three dimensions along which cognition extends into the environment and thus operates distributedly: a) it colonizes and instrumentalizes objects and artifacts in material environments, b) it extends into the coordinating minds of social composites, and c) it stretches out in concatenated, adaptive processes over time.
3.1. Material Distribution – Interaction with the Material Environment
When DC speaks of the material environments in which cognition is situated, it centrally refers to the mind-bending arsenal of materialized (traditional and digital) tools and media infrastructures in our surroundings. The artificial, cultural and technological characteristics of individual environments thus enable or hinder specific ways of cognition and (inter)action by the specific design and availability of artifacts and technologies.
A very short history of the co-evolution of cognition and nootechnology (also referred to as technogenesis in Hayles) can only be invoked with basic pointers here (see also Anderson et al.): The evolution of homo sapiens correlates directly with the creation and use of tools that amplify both physical strength and cognitive skills. In the macrohistorical context, an elegant affordance of distributed cognition is its invitation to look at both—the homo symbolicus line with its development of the ‘tools’ of human language, abstract concepts, and also theories[5], and the homo faber line that proceeds from basic toolmaking in prehistoric periods to the advanced creation of cognitive artifacts (i.e., from calendars and compasses to satellites, smartphones, and search engines). Both lines coalesce and amplify each other to enable the complex thinking, planning, realization, and organization activities of modern times. In and around these lines, sign systems and media develop as tools for temporal and spatial transmission: from cave paintings and notches in sticks, to alphanumeric symbols, printed texts, and (interactive) graphical representations (McLuhan). The evolution of media technology picked up speed in the last century, especially with the advent of digital technology. “Digital media and contemporary technogenesis constitute a complex adaptive system, with the technologies constantly changing as well as bringing about change in those whose lives are enmeshed with them” (Hayles 18).
Modern information technologies in particular appear as a game changer across the board as they significantly extend the spectrum of distributed cognitive processes: Computers provide new means to unburden human minds from repetitive, tedious activities (like counting, remembering, searching, calculating, or transmitting)—ideally to free the human mind for its more ambitious, creative or entertaining strivings (Licklider; Grier). But from a DC perspective, computers do not only free the human mind, but further extend and empower it by enabling new kinds of cognitive processes. From a functional perspective, humans and computers build unprecedented hybrid ensembles: “Computers are incredibly fast, accurate and stupid; humans are incredibly slow, inaccurate and brilliant; together they are powerful beyond imagination”.[6]
With the emergence of computational technology, a trifurcation of possible couplings appears (figure 2), with implications for all kinds of societal practices, including humanities scholarship: Obviously, cognition can operate on internal means only and refrain from the use of external information technology (right). Digitally distributed or mediated cognition then happens in constellations where cycles of perception, cognition, and action include computers and software tools as intelligence-amplifying artifacts (center). If, thirdly, humans engineer systems that do not keep a human in the loop, the realm of automated and artificial intelligence begins, where cognition is fully ‘externalized’ and where autonomous, computational systems act on behalf of their algorithmic specifications (figure 2, left).
While the project of artificial or autonomous cognition continues to accelerate (Jiang et al.), our further reflections will mainly revolve around the distributed center scenario and related cycles of “intelligence amplification”. Brooks refers to the overall computational constellation as “AI vs. IA”—and contends that especially for addressing rather complex and ill-defined problems, intelligence-amplifying systems with humans in the loop perform better than pure AI systems. “That is, a machine and a mind can beat a mind-imitating machine working by itself” (Brooks 64). Hayles (116) seconds that AI methods and systems will continue to evolve and to become more sophisticated, but humans in distributed systems can bring in characteristics (e.g., curiosity, intuition or wisdom, see also Braga and Logan) which extend the behavioral scope of AI systems and which allow them to (re)act with increased behavioral variety in dynamic environments (Ashby).
3.2. Social Distribution – Interaction with Social Others
In addition to its material extensions, the social distribution of cognitive processes is the second pillar of Hutchins’ DC theory (“The DC Perspective”). Early on, humans started to coordinate their labor in groups and to allocate collective tasks based on the individual skills of their members. This division of labor brings about cognitive specialists and experts, who have the best knowledge and skills for the use of different tools. Groups as organized cognition-and-action-systems know more and are better skilled than their individual members—which is mirrored by different conceptualizations of group cognition for cognitive processes located at the social rather than the individual level (Akkerman et al.). On the macrosocial level, the evolutionary success of distributedly cognizing and acting collectives then directly drives an ever finer division of labor and the self-amplifying societal megatrends of specialization, individualization, industrialization, and finally globalization, which sociologies of differentiation reflect upon (Rosa et al.; Ziemann).
In the context of digital humanities research and practice, the social dimension of distributed cognition theory seems to gain similar relevance as the cognitive tool dimension. In comparison to traditional humanities (TH) practice, where scholars are widely used to working individually, DH work is known to essentially depend on social distribution and collaboration (Fitzpatrick; Nyhan and Duke-Williams; Poole; Schnapp et al.). “Traditional humanities scholarship rewards the solitary endeavor (such as the single-authored monograph) and looks askance at collaboration (e.g., edited volumes), but many digital humanities projects are often collaborative in nature. This translates to an ethos of sharing and collegiality in these environments, but the multi-author aspect of these digital projects may cause problems during evaluation” (Koh). Like Koh or McCarty (“Collaborative Research”), we deem it essential to balance the appreciation of the strengths of socially distributed cognition in DH fields with a sharp awareness of omnipresent challenges. We see such social cognition challenges in the need to understand the potential users of DH tools (see sec. 4), in the need to create better cognitive ecologies with traditional humanities scholars (see sec. 8), and obviously in the daily need to collaborate with members of interdisciplinary DH teams in sustainable projects (sec. 7, as well as Reed; Siemens). To perform and to communicate effectively, socially distributed cognition is known to require a certain amount of shared knowledge (common ground, cf. Clark, Brennan, et al.), and the daily challenge of finding this common ground for language, terminology, methods, theories, tasks, workflows and values makes DH teams the unique environment they are (Siemens).
3.3. Temporal Distribution – Interaction of the Present with the Past
Finally, cognition is also distributed over time by interacting in the present with knowledge and culture from the past. The evolution of sapiens collectives is strongly driven by building on the knowledge of our ancestors—via oral transmission, via externalized knowledge representations as artifacts (like machines or books as delineated earlier), or via culturally transmitted social practices. Theories of distributed cognition emphasize the importance of culture for the way we think (Norton) by focusing on the cultural context and the cultural history inherent to artifacts and social practices (Hutchins, Cognition in the Wild). This cognitive niche contains “the incrementally, trans-generationally structured socio-cultural environment that provides human organisms with epistemic resources for the completion of cognitive tasks” (Fabry 350). Our cultural background not only forms cognitive activities (like reading or calculating), but also our social behavior and usage of tools. Take the use of digital technologies for literature search as an example: Whereas digital natives have been enculturated with omnipresent access to the world wide web as a constantly available information and literature resource, former generations were used to searching through card catalogues to find relevant resources, fetching books from the library, and reading through them to figure out whether they contained relevant information. In line with countless TH initiatives dedicated to the creation, preservation, and investigation of the contents of libraries, archives and museums, many DH projects actually work on the development of tools to further extend and facilitate temporally distributed cognition with regard to different types of cultural heritage collections (sec. 5 and 7).
With these dimensions of analysis, DC provides a generative theoretical lens to reflect on the interwoven dynamics of human cognition, culture, and technology throughout history. On the one hand, it makes obvious that a vast amount of human sense-making and problem-solving depends on distributed architectures and socio-technical extensions. On the other hand, as the next section details, it sharpens our awareness that the specific design, quality and efficiency of our socio-technical extensions—together with their goodness of fit—determine our overall performance in countless areas of human activity, but especially in the fields of technology development and technology-driven research.
4. Distributed Cognition & Human-Computer Interaction
Since their initial conception as ‘thinking machines’ (Turing), computers have been hypostasized as external, electronic brains (Carello et al.). From a DC point of view, digital technologies rather appear as unprecedented nootechnological options to extend and augment the mental activities of human brains with ever more complex processes of computation that previously depended on the cognition in other human brains (cp. the human roots of the concept of ‘computer’, Grier). However, such augmentations can only be effective when a frictionless coupling of human minds and cognitive artifacts can be established. Consequently, DC often provides a theoretical framework for the design and the evaluation of digital technologies in the area of Human Computer Interaction (Hollan et al.; Suchman, Human-Machine). Digital approaches, for instance, can link and model historical information structurally and relationally in knowledge graphs to extend human mental models (Mayr and Windhager; Mayr et al., “Reasoning with Knowledge Graph Visualizations”) and generate external graphical representations for data on complex topics, with which an observer’s internal representation can interact in different ways (Liu and Stasko). Proponents of DC-based tool design focus on maximizing the fit between the external and the internal representation, to make sure that tools and visualizations become both useful and joyful extensions of their users’ perception-cognition-action cycles.
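To make the notion of relational, graph-based modeling more concrete, the following minimal sketch expresses a single historical fact as linked triples and queries it with the Python library rdflib. The namespace, entity names, and properties are illustrative assumptions for this example only, not the data model of any project or tool cited above.

```python
# Minimal sketch: modeling one historical fact as a small knowledge graph.
# All URIs and properties below are invented for illustration.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import FOAF

EX = Namespace("http://example.org/heritage/")

g = Graph()
g.bind("ex", EX)
g.bind("foaf", FOAF)

artist = EX["AlbrechtDuerer"]
work = EX["YoungHare"]

# Assert typed entities and the relations between them.
g.add((artist, RDF.type, FOAF.Person))
g.add((artist, FOAF.name, Literal("Albrecht Dürer")))
g.add((work, RDF.type, EX.Artwork))
g.add((work, EX.createdBy, artist))
g.add((work, EX.creationYear, Literal(1502)))

# Query the graph: which works were created by which (named) artists?
query = """
SELECT ?work ?artistName WHERE {
    ?work ex:createdBy ?artist .
    ?artist foaf:name ?artistName .
}
"""
for row in g.query(query, initNs={"ex": EX, "foaf": FOAF}):
    print(row.work, "was created by", row.artistName)
```

Externalizing scattered statements in this form is what allows graph visualizations to render them as an inspectable ‘bigger picture’ with which an observer’s internal model can interact.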
In this context, multiple models for the user-centered design of (digital) technology have been developed. Norman himself builds on the basic DC assumption that people can act smart because they combine knowledge in their minds with (materialized) knowledge in their environment (Things; Norman, Everyday Things). How they use tools depends on affordances, that is, the relationships between the clues and qualities of a tool and the expectations and abilities of an actor. Based on the perceived affordances, users build up a conceptual model of how the tool works and plan their distributed action. “For us to function in this social, technological world, we need to develop internal models of what things mean, of how they operate. […] If we are fortunate, thoughtful designers provide the clues for us” (Norman, Everyday Things 16). Tool design directly influences the perceived affordance and the conceptual model by using signifiers (hints on appropriate use) and constraints (physical, cultural, semantic, and logical hints on restrictions of use), by applying intuitive mapping strategies (natural or culturally transmitted analogies, e.g., regarding reading direction, which can be understood immediately), and by feedback that communicates the result of a user’s action—immediately, unobtrusively, and informatively. To develop good tools, Norman calls for “human-centered design” (HCD), an approach that puts human needs, capabilities, and behavior first, and then develops and implements designs which can accommodate those needs, capabilities, and ways of behaving: “Good design starts with an understanding of psychology and technology. Good design requires good communication, especially from machine to person, indicating what actions are possible, what is happening, and what is about to happen” (Everyday Things 8). User-centered design starts with an observation phase for the assessment and specification of user needs, followed by the iterative generation of design ideas, their development, and testing (see figure 3).
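Purely as a schematic illustration of this iterative cycle, the toy loop below runs through observation, ideation, prototyping, and testing until a candidate design fits the observed needs. All functions and the ‘needs’ model are invented placeholders, not an actual design-method API.

```python
# Schematic sketch of an iterative human-centered design cycle.
# Every function below is a toy placeholder standing in for real design work.
def observe(users):
    """Assess user needs, e.g. via interviews or observation 'in the wild'."""
    return {"needs_search": True, "needs_close_reading": True}

def ideate(needs):
    """Generate candidate features that address the observed needs."""
    return [feature for feature, wanted in needs.items() if wanted]

def prototype(features):
    """Implement a testable version of the selected ideas."""
    return {"features": features, "version": 1}

def test(design, needs):
    """Check the prototype against the needs; real projects run user studies."""
    return all(feature in design["features"] for feature in needs)

users = ["historian", "curator"]
needs = observe(users)
design = prototype(ideate(needs))
while not test(design, needs):      # iterate until the design fits the needs
    needs = observe(users)
    design = prototype(ideate(needs))
print("accepted design:", design)
```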
In a similar fashion, Hollan et al. see the main difference between DC and other HCI approaches in a different perception of the technology—not as an input to internal cognition, but rather as a central part of the distributed cognitive system. They “make a deep commitment to the importance of observation of human activity ‘in the wild’ and analysis of distributions of cognitive processes […] across members of social groups, coordination between internal and external structure, and how products of earlier events can transform the nature of later events” (193). To do so, they begin with an ethnographic observation of the phenomena of interest, for which a certain amount of domain knowledge is beneficial. This knowledge informs the design and development of different technology variants, which are then experimentally tested by users—again with observational methods focusing on distributed cognitive processes. Such observations “in the wild” result not only in deeper and richer data than standard (often quick-and-dirty) HCI methods, but also in a deeper understanding of the distributed cognitive processes. Hutchins makes a further case for observational methods, as they can also capture unconscious cognitive processes, which constitute a large part of human as well as distributed cognition.
Distributed cognition-based, user-centered design aims to develop tools and technologies which seamlessly and intuitively extend human perception-cognition-action cycles. While many agree with Norman’s claims on the design of things, why should we bother with time-consuming user-centered design processes in a DH context? We consider in-depth observation of established cognitive processes, immersion in domain culture, and co-design workflows with the participation of domain experts to be prerequisites for tool acceptance in TH fields (Lamqaddam et al., “When the Tech Kids Are Running Too Fast”). Particularly novices, who experience failure with badly designed tools, easily generalize their lack of success and tend to fall back on their proven practices. To us, it seems important to avoid such reactions—especially in fields where technological skepticism is quite common—and to aim for technologies and designs with a maximized goodness of fit to established cognition and action cycles.[7] However, it has frequently been established that UCD strategies do not have to hamper (radical) innovation in technology development (Lettl et al.; Radnejad et al.). As such, we consider the effective mediation of innovation and design-driven approaches (Hinrichs et al.; Verganti) and user-oriented strategies as the actual challenge for tool development across the board.
5. Distributed Cognition and the Digital Humanities
Regarding the spectrum of internal, distributed and artificial cognition (figure 2), digital humanities obviously focus on the development and study of technologies of the latter two types (i.e., AI and IA) for humanist purposes. Due to large-scale digitization initiatives—from libraries and image archives to music, film, and art collections—countless cultural materials have moved into the operating range of digital processing methods. From a distributed cognition perspective, the related digital tools and methods extend humanities practices on various levels in sweeping ways: Firstly, they transform all kinds of support processes for humanities scholarship, such as archiving and searching for sources, writing, publishing, and teaching, as well as collaboration with students and peers (Hayles). Regarding image-oriented humanities, Drucker summarizes the suddenness with which daily practices have become transformed: “Almost overnight, it seems, the inventories (…) have been digitized. We are suddenly able to avail ourselves of the great corpus of art historical, architectural, archaeological, and other cultural artifacts through a Google image search, snapping our PowerPoints into place in a fraction of the time it took to make our slide-table lectures in the visual resources rooms of an earlier era. Ease, convenience, and availability are signs that an economy of plenty has replaced that of scarcity” (“Is There?” 7). However, the omnipresence of new media practices across all humanities departments does not imply that the core activities of humanities fields have become digital.
Making use of a simplified conception of the TaDiRAH taxonomy[8] (Borek et al.; see figure 4, left), the outlined practices would appear as a preceding and succeeding periphery of the central activities of humanist scholarship. As such, DH technologies enable, augment and support humanist core activities (marked with an asterisk in figure 4), including the multi-faceted practices of analyzing cultural materials and ultimately activities of interpretation, including theorization, contextualization, evaluation, and critique. Digital humanists thus harness “digital toolkits in the service of the Humanities’ core methodological strengths: attention to complexity, medium specificity, historical context, analytical depth, critique and interpretation” (Schnapp et al. 2). To outline this hybrid research service design space, figure 4 also offers a makeshift extension of the TaDiRAH taxonomy of DH practices (left) with a provisional taxonomy of TH practices (right), to show how the activity chains of current humanist inquiries (from research questions at the top to the publication of results at the bottom) can draw from methods and tools from both sides (Windhager 148).[9]
In this context, it is our working hypothesis that DH processes rarely substitute TH practices in a binary fashion. Rather, they can a) support or augment them to a certain degree, just as they can b) impair or obfuscate existing TH workflows and their epistemologies.[10] They can c) do something new that has no equivalent in TH, and most often DH tools have d) combined, transactional effects on humanist research chains, which requires a nuanced understanding of the strengths and limitations of activities on both sides to find convincing hybrid research solutions.
However, another widely shared working hypothesis is that the core processes of humanist inquiry by and large remain located on the TH (i.e., right-hand) side. No doubt—computational methods are making steep inroads on these core processes too, yet—until further notice—digital methods remain restricted to rather simple practices in these core areas, and to the remediation of the rather low-hanging fruits of humanist scholarship (Windhager and Mayr; Windhager et al., “Uncertainty”). The picture seems to be the same across the whole range of humanities disciplines: For textual artifacts, it has been stated that the “digital revolution, for all its wonders, has not penetrated the core activity of literary studies, which, despite numerous revolutions of a more epistemological nature, remains mostly concerned with the interpretive analysis of written cultural artifacts” (Ramsay 2). For visual materials, Drucker adds: “To date no research breakthrough has made the field of art history feel its fundamental approaches, tenets of belief, or methods are altered by digital work” (“Is There?” 5).
It seems that the core activities of humanist interpretation and sense-making depend on (and remain tied back to) a different set of ‘tools’ that are rooted in the complex ecologies of conceptual cognition and propositional reasoning, including hermeneutical or theory-guided methods of interpretation. If we follow DC authors and refer to these dynamic, conceptual networks also as cognitive artifacts, they are structures which—until further notice—resist approaches of direct technological remediation but remain located in the realm of the “ultimate hermeneutic machine, the human mind” (Meister 269).
From a distributed cognition point of view, we consider these assessments to provide interim reports of relevance: While DH tools and methods have started to support and assist humanist research processes, they are far from fully remediating, disrupting, or substituting non-digital tool and method chains. Rather, the need of the hour seems to be the development of a nuanced understanding of traditional and digital tools, so as to foster circular couplings and symbiotic inter-tool relations. Until further notice, we are working in and on circles: We build HCI ensembles, such as “scalable” or “differential reading” (see sec. 8)—using DH tools to augment, support, and amplify human(istic) core activities in the human mind (“IA > AI”, cp. Brooks). This is particularly true for visualization technologies, which largely subscribe to the paradigm of intelligence amplification or cognition support (Card et al.; Arias-Hernandez), and which utilize the multimodal architecture of distributed cognitive systems for that end.
6. On Visual and Multimodally Distributed Cognition
Our specific interest in DC is in multimodally distributed cognitive operations and corresponding tools, which include the use of graphical representations—and in visually remediating analytical activities that were alphanumerical, verbal and largely hermeneutical before. For many DH tools, the principles of “multimodality and interactivity are not cosmetic enhancements but integral parts of their conceptualization” (Hayles 40). Theories from learning psychology posit that large parts of human cognition operate on a ‘bimodal’ cognitive architecture, which processes iconic (image-based) and symbolic (text-based) content in different but interconnected ways (Schnotz and Kürschner, see figure 5). Based on Paivio’s dual-coding theory, they suggest that multimodal information is processed in parallel in (1) a verbal–propositional and (2) a visual–spatial system and leads to the construction and elaboration of two types of internal representations: visual mental models on the one side, and propositional representations or conceptual networks on the other side, which can be transformed and translated into each other by exchange processes of ‘model construction’ and ‘model inspection’.
While this model is not explicitly rooted in the DC discourse, we regard it as largely compatible with its tenets and as highly relevant, as it adds the essential dimension of the visual-propositional distribution of human cognition. Complex phenomena can be processed either verbally or visually—or in multimodal combinations—especially in times of “more media” (Manovich, Visualizing Cultural Patterns in the Era of ‘More Media’).
Another strength of this model of multimodal information processing is that it allows for transmodal interaction and translations (diagonal and vertical arrows). Thus, verbal information can be used to construct visual mental models, or, vice versa, visual material can be inspected and processed semantically in a propositional way. Thereby, the visual processing layer, which builds on the evolutionarily older system of visual perception and its capabilities of preattentive processing (Healey and Enns), can offer a more natural and intuitive access to DH data collections than text-based interfaces, which require elaborate semantic processing or the acquisition of a more technical (programming) language first.
Arguably, visualizations are one of the most prominent and visible innovations that have found their way into the DH methods portfolio. “Visual representations and interaction techniques take advantage of the human eye’s broad bandwidth pathway into the mind to allow users to see, explore, and understand large amounts of information at once. Information visualization focused on the creation of approaches for conveying abstract information in intuitive ways” (Thomas and Cook 30). We read them as one of the genuine changes in the field of nootechnology, as they bring the theoretical castaways—our eyes and our visual-perceptual system—back into play beyond their linear symbol-scanning duties of print-based scholarship (see sec. 8). Even though diagrams have a long and venerable history, interactive visualizations are one of the most fascinating tools to augment cognitive processes and practices in the face of modern-day complexities. Due to its focus on the augmentation or amplification of cognition, the theory of distributed cognition provides a strong foundation for visualization research (Arias-Hernandez; Liu and Stasko; Windhager and Mayr). Visualizations can augment cognitive processes in several ways: (a) as external storage of information, (b) by organizing information, (c) by offloading cognition onto perception, and (d) by offloading cognition onto (inter)action (Hegarty).
In the context of the (digital) humanities, manifold developments and experiments have taken place in the field of visualization (Benito-Santos and Therón Sánchez; Bradley et al.): Visualizations can support distant reading of large visual corpora (Arnold and Tilton), of large text collections (Alharbi and Laramee; Jänicke et al.), or of cultural collections (Glinka et al.; Windhager et al., “Visualization of CH”). Further authors explored how visualization impacts thinking in the (digital) humanities and in visualization research (Bradley, El-Assady, et al.; Bradley et al.; Drucker, Visualization and Interpretation; Hinrichs et al.; Kleymann; Lamqaddam et al., “Introducing Layers of Meaning”).
7. Case Study: On Visualizing, Curating and Communicating In/Tangible Cultural Heritage
From a distributed cognition perspective, the digital humanities project InTaVia (https://intavia.eu) is situated on the rather complex end of cognitive distributions (figure 6).
This project develops data infrastructure and tools for the visual analysis, curation and communication of intangible and tangible cultural assets, and it follows a user-centered design approach to ensure that the intended tool suite supports complex cognitive activities in a distributed large-scale set-up (Mayr and Windhager; Mayr et al., “The Multiple Faces of Cultural Heritage”). While still under development, the project design illustrates our cognitively grounded approach to user-centered design with its focus on distributed cognition (Hutchins, “The DC Perspective”) across the material and technological environment, across social environments, and over time.
To begin at the back, with a focus on cognition that is distributed over time: InTaVia is situated in the context of digital history and cultural heritage technologies. One of its main objectives is the preservation and remediation of historical knowledge on cultural actors and their works, i.e., to foster the coupling of present-day cognition, reasoning and aesthetic appreciation with cultural achievements of the past. For that matter, the project draws together large stocks of European object collections and archives (e.g., as aggregated by Wikidata or the Europeana platform) and knowledge about cultural biographies and histories (e.g., as collected by the national biographical lexica of Austria, Slovenia, Finland, and the Netherlands). By means of three user-facing modules, including (i) a data curation lab, (ii) a visual analytics studio, and (iii) a visual storytelling suite, it fosters present-day activities of cultural experts and practitioners to search for cultural information, to create new information on the past, to compile and curate information, to visually analyze it, and to communicate it to a wide range of audiences (Windhager and Mayr; Windhager et al., “Visual Analysis”).
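To illustrate the kind of open knowledge bases such projects aggregate, the snippet below sketches a minimal query against Wikidata’s public SPARQL endpoint for painters and their life dates. It is a generic, hypothetical example of retrieving biographical data, not the InTaVia data pipeline or its data model.

```python
# Minimal sketch: retrieving biographical data on painters from Wikidata.
# The query and its scope are illustrative; real pipelines add paging, caching, etc.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"

QUERY = """
SELECT ?person ?personLabel ?birth ?death WHERE {
  ?person wdt:P31 wd:Q5 ;           # instance of: human
          wdt:P106 wd:Q1028181 ;    # occupation: painter
          wdt:P569 ?birth .         # date of birth
  OPTIONAL { ?person wdt:P570 ?death . }   # date of death, if recorded
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "dh-dc-article-sketch/0.1"},
)
response.raise_for_status()

for binding in response.json()["results"]["bindings"]:
    name = binding["personLabel"]["value"]
    birth = binding["birth"]["value"]
    death = binding.get("death", {}).get("value", "unknown")
    print(f"{name}: {birth} - {death}")
```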
This already touches upon the social distribution and coupling of cognitive processes, a second central aspect we aim to address: InTaVia harmonizes and connects information on the lives and works of historical figures (e.g. of painters, musicians, or writers from the 19th century) with the cognitive systems of various cultural heritage experts (e.g., historians and curators, but also teachers or tourist guides), and allows them to communicate relevant data and topics to a wide range of audiences by means of new media formats, including web-based visualizations and rich-media narrations (Kusnick et al.). These connection and development activities themselves are pursued by an interdisciplinary project team, involving traditional and digital humanities scholars, cognitive scientists specialized in HCI and DC, as well as computer scientists specialized in visualization, natural language processing, and AI—which requires a fair amount of continuous knowledge transfer and ongoing coordination. This interdisciplinary set-up and the guiding approach of user-centered design help to build a bridge between the expectations, requirements and traditional practices of humanities scholars and innovative technical developments.[11]
As for the material and technological dimension of distribution, the project aims to make tangible cultural objects available for cognitive systems that are distributed all over Europe, together with biographical records scattered over multiple national repositories. It does so by means of data integration and harmonization, but also by new approaches to DH interface development. To access and work with the transnational data collection, three interface modules (a data curation lab, a visual analytics studio, and a visual storytelling suite) will structure the human-computer interaction of future users. Across the board, visualizations will play a central role to support search, curation, analysis and communication activities. While such visualizations most often build on metadata to offer distant reading and viewing perspectives, humanities experts also require means for close reading of the original sources.
To enable both kinds of activities in a fluid, scalable fashion, the tool suite will offer means to integrate and mediate both ways of analysis in an intuitive manner. As such, the project builds on the user-centered creation of a multi-perspective working environment, which will also be able to address macro-analytical questions thanks to options for data aggregation, and to initiate a multitude of circles of distant and close reading. We consider this combinatory and scalable setup to also provide an outlook on future constellations of synoptic, multimodal theorizing.
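As a toy illustration of such circles of distant and close reading, the sketch below first aggregates invented object metadata into a ‘distant’ overview and then returns to the individual records behind a salient pattern for ‘close’ inspection. All records and fields are made up for the example.

```python
# Toy sketch of a distant/close reading circle on invented collection metadata.
from collections import Counter

collection = [
    {"id": 1, "title": "Portrait of a Lady", "decade": 1890, "medium": "oil"},
    {"id": 2, "title": "Study of Hands",     "decade": 1890, "medium": "chalk"},
    {"id": 3, "title": "River Landscape",    "decade": 1900, "medium": "oil"},
    {"id": 4, "title": "Self-Portrait",      "decade": 1900, "medium": "oil"},
    {"id": 5, "title": "Market Scene",       "decade": 1910, "medium": "ink"},
]

# Distant view: how is the collection distributed over decades and media?
print("Objects per decade:", dict(Counter(obj["decade"] for obj in collection)))
print("Objects per medium:", dict(Counter(obj["medium"] for obj in collection)))

# A pattern in the aggregate (here: the cluster of oil paintings) prompts a
# return to the individual objects for close inspection of the sources.
for obj in (o for o in collection if o["medium"] == "oil"):
    print(f"Close view -> #{obj['id']}: {obj['title']} ({obj['decade']}s)")
```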
8. Towards New Kinds of Dheory?
We began this paper by reflecting on a vector of critique that attributes a lack of theoretical activity to the digital humanities, due to their oversized investment into tool development. The theory of distributed cognition, in turn, guided us to reflect on the essential contribution of tool use to the thinking and reasoning processes of homo sapiens in general, and to trace the rich history of nootechnology up to the advent of digital tools—including those developed in DH and visualization projects. Due to the growing tendency of such projects to support the combined practices of algorithmic analysis and hermeneutic close reading, we arrive at a novel distributed cognition scenario that seems essential: a new kind of scholarly cognition that is artfully distributed across traditional and digital means of self-amplification. Arguably, when extrapolating from the last decades of DH and TH developments, we also arrive at a point where a revision of logocentric ‘theory’ concepts becomes plausible.[12]
For that matter, we can build on one of the main arguments for the use of digital methods—and against the carefree use of many TH reflections—which is the argument of scale, and on a related standard conception of DH+TH coupling. The scale-based argument for the use of digital methods is well established (Hayles 27–31; Piper): Traditional methods of analyzing, interpreting, and theorizing cultural artifacts (including methods of close reading or art-historical interpretation) limit humanities scholars to the study of a humbling fraction of what human culture has created—and keeps creating at an accelerating pace. To counter the panoply of biases that traditional strategies of complexity reduction (i.e., canonization) introduce to humanities scholarship, DH has argued that IA and AI methods and tools—including visualizations—are valuable allies, even though the limitations of their interpretive powers are well known (Drucker, Visualization and Interpretation; Jänicke et al.; Manovich, Cultural Analytics; Moretti).
There is also a standard conception for combining the strengths—and counterbalancing the limitations—of digital, scale-based and traditional, interpretive approaches by linking them sequentially and cyclically (see figure 7): While digital methods are known to lack the analytical depth of hermeneutic approaches to interpretation (Drucker, “Why”; Ramsay), they allow for directing certain questions to vast numbers of objects. Thus, computational and algorithmic approaches can help to “sort the information and make patterns visible. Once the patterns can be discerned, the work of interpretation can begin” (Hayles 33). Ensuing insights from the interpretive detail level can then further enhance the understanding of the macroanalytical patterns in large-scale data collections, and thus move scholarly sense-making forward in hybrid or circular patterns of “synergistic interaction” (Hayles 31).[13]
This circle has been mainly described for literary studies (Ramsay; Sinclair and Rockwell; Weitin et al.), but is equally relevant for the study of visual cultural materials (Arnold and Tilton; Windhager; Zaharieva et al.), where it mediates digital distant viewing perspectives and the close-up views of art-historical analysis: “(T)he main source of information in art history research remains the artwork itself. For that reason, developed visualizations should have a way to go back to the artwork representation” (Lamqaddam et al., “When the Tech Kids Are Running Too Fast” 3). The provision of ‘immersive’ movements into photographic detail views thus has become a standard feature of advanced visualization tools, together with mental map-preserving transitions (Bludau et al.; Glinka et al.). We consider this circle to be the current de facto standard for a quasi-ecumenical practice, reconnecting TH and DH across their “great divide” (Pfisterer).
However, most of the visualization-based interfaces with distant viewing functionality remain quite restricted regarding the actual viewing distance or the richness of context they provide: Current standard designs of distant views predominantly start from one given collection to draw up a ‘bigger picture’ (e.g., a histogram, a network graph, or a timeline, consisting of individual objects) and to contextualize individual objects within it. Activities of traditional theorizing and contextualizing in the humanities, though, are free in their choice of the scale, composition, and complexity of context—and it might be one of the main challenges for future DH work to also develop digital and visual representations for such contextual richness. To do so, related efforts will have to connect existing data collections (e.g., object and biography collections, as in the InTaVia project) and to mine relevant knowledge collections for contextual data points thereafter. Arguably, this will also include the foundational texts that revolve around cultural objects in the fields of cultural history and theory.
Theories in the traditional humanities context are complex beasts: They provide interpretive lenses, (onto)logical perspectives, and discursive frames for studying and reflecting on cultural materials. On the one hand, they instruct and guide close reading-practices and interpretations on the micro-level of scholarly activity, which cluster around relevant works, artists, schools, or periods of production. On the other hand, they also create bigger pictures with socio-historical, political, technological, and methodological dimensions, which emerge and draw from local observations, while also guiding and informing them.[14] As larger interpretive and normative frameworks, they also define and co-create the objects of study to begin with (e.g., “images”, “texts”, “authors”, “artists”) and help to prioritize, canonize, select, and reject objects—and to define criteria for which of their related entities might deserve closer analytical or critical attention.[15]
Figure 8 provides a sketch of how designers of future distant views in the DH context (left hand side) could benefit substantially from formalizing and utilizing these theoretical perspectives (right hand side).
With the InTaVia project, we aim to do what Giorgio Vasari (The Lives of the Artists) established centuries ago: to study the biographies of artists in conjunction with the works and artifacts they created—and vice versa (figure 8, first and second level from bottom). However, if scholars—due to their theoretical preferences—prefer to study and situate cultural objects in the larger stylistic formations of an ‘art history without names’ (a term coined by Wölfflin in 1915), linked data architectures should allow for a shift to the bigger pictures and ‘shapes of time’ (Salisu et al.) that result from taxonomic distinctions (figure 8, third level from bottom). If, by contrast, the reflection on larger external (i.e., socio-economic, political, technological, colonialist, racial, historical) realities is seen as a theoretical key to guide and complement an object’s close-up study, distant views should be able to also represent materialist, critical, postcolonial, or gender-theoretical perspectives and structures, and bring corresponding historical formations into the time-oriented perspectives of macro-level contextualizations (Mayr and Windhager 242; figure 8, top level). Figure 8 draws these layers of contextual magnitude together and outlines, with vertical and horizontal movements, what an advanced design space of computational-hermeneutic reasoning could look like.
Data-based visualization systems organize and represent data and topics differently than traditional, language-based texts and theories. They could be argued to operate in an orthogonal fashion to established means of qualitative information processing, and their views can augment and contrast established interpretive perspectives. To that end, “complementarity is key” (Bonfiglioli and Nanni)—and aside from mediating well-established information (most notably for pedagogic purposes), the relevance of visualizations lies in their potential to offer unprecedented macroscopic perspectives, which grant instant perceptual access to “what is at once too great, too slow, and too complex for our eyes” to see (Rosnay 4).[16] Advanced datafication and visualization approaches to cultural materials thus enable new investigation, contemplation, and communication practices, without simply replacing non-digital practices that have dominated the methods portfolio before. They bring about new ways for scholars to observe and perceive cultural complexity—and they tap into different cognitive faculties than the propositional meaning structures of academic prose (Tversky, “Visualizing Thought”).
With this, visualizations re-elevate the role of the scholarly senses of sight and promote them from line-oriented symbol-scanning tasks to the more natural callings of wideband vision, visual exploration, pattern recognition, and sense-making. To augment the (in)sights and counterbalance the cognitive challenges that are known to emerge from reading and logocentric reasoning, visualizations bring the highly evolved faculties of visual perception and sense-making (back) into play, so that a complementary system of image-oriented perception-cognition can join the language-oriented sense-making system of propositional processing (Schnotz and Kürschner). The resulting mental structures—whether as cognitive collages or mental models—are known to interweave aspects of both visual-spatial information and propositional information of language-based, theoretical thought in a multimodal fashion.[17]
Ironically, the concept of “theory”—whose alleged absence is admonished in DH contexts—has a deep cultural history, which goes back to the act of “seeing” (Nightingale). Before theory was defined to be the post- or non-empirical contemplation of ideas by the “blind” eye of philosophical reason[18], the term signified the practice of viewing and interpreting sacred rites, objects, and images (theôria as seeing, beholding, gazing, and viewing) in the Greek theatron, literally a “place for seeing” (Sennet 124). Before Plato cast doubt on the shadowy images of sensory perception—and called for their transcendence by the light of discursive-dialectical reasoning—the senses of sight had a major say in the theophanic perception and interpretation of the world (Sloterdijk 6). Against this background, we feel tempted to argue that the academic arc of logocentric and iconoclastic history is long but eventually bends back to multimodal justice. The late-modern rise of “Visualizationism” (Staley) as a wellspring for new kinds of epistemological images thus could be read as the late renaissance of a pre-traditional, pro-visual practice and interpretation of theory. We might even consider calling it “dheory”, to emphasize its significant scholarly potential with a straining but salient term.
“Dheory” in the realm of the arts and humanities then might serve as an aspirational term and a regulative idea, whose evocation might fall by the post-digital wayside, like a second installment of Wittgenstein’s ladder. Nevertheless, dheory—as a novel practice, and as indicated by its initial letters—would build on the recent achievements of digitization in the humanities, but it would emerge only from multiple further procedures of distributed, multimodal information processing.
9. Outlook
As a theoretical framework, distributed cognition can inform and guide the design, development, and evaluation of DH technologies, but also provide a valuable theoretical lens to reflect on the strengths and limitations of humanities practice in general, and on future work at the essential intersection of digital and non-digital scholarship. Its central concepts reach far below the thin historical layers of digital times and help to trace and understand how human sense-making and reasoning always depends on the art of artificial and socio-technical co-creation. Simultaneously, it directs our focus back to the cognitive processes at the very core of many DH projects and to the question how digital technologies can assist to distribute them across material environments (i.e., towards things and topics via tools), but also to the social environment (e.g., via computer-supported collaborative scenarios), and over time—allowing us to newly befriend the past (Liu).
Regarding the recurring critique of DH’s theoretical deficiency, we elaborated on one of the most interesting implications of distributed cognition: Once we acknowledge that cognition also works in a distributed fashion across a multimodal architecture (figure 5), it becomes obvious that traditional theories are largely built as one-sided (i.e., monomodal and text-based) instruments. Inspired by premodern “theory” conceptions more closely attuned to the visual workings of extended minds, we thus developed an epistemic narrative (Kleymann et al.) which reverses the theoretical deficiency charges and simultaneously shows how to compensate for them on the TH side. Due to their strengths in processing and visualizing language data, DH plays a key role in making text-based information architectures visually accessible and in closing related “comprehensibility gaps” (see footnote 14). We thus see DH as well positioned to augment and enhance the future reception and mediation of traditional theories and their claims with visualization-based perspectives.
As a widely distributed, evolving species, we make use of multiple types and generations of tools to reflect on ourselves. While binary and confrontational conceptions of the current tool titanomachy (the old deities of toolmaking vs. the new ones) provide a certain discursive and entertainment value, a distributed cognition perspective makes it plausible that the future of both lineages inevitably depends on hybrid epistemological joint ventures to break new ground in the synoptic analysis and interpretation of evolving cultural complexity.
Acknowledgments
This work has received funding from the European Union H2020 research and innovation program under grant agreement No. 101004825. We want to thank Hanna Risku for many years of theoretical dialog.
Arrows mostly aim for the resulting breach: “DH is barely worthy of the term scholarship because we do too much and think too little (Fish). There is also a strong implication that our field does not take sufficient cognisance of theory—in effect, that we are not sufficiently expert players in the game of theory wars—and that as a result ours is not a respectable discipline” (Warwick 540).
More specifically, this theory helps to both substantiate and subvert the ‘theory deficiency’ charges against DH from a generalized technology assessment perspective: While DH tools are not theories in the traditional sense, they can augment the complex processes of investigation, interpretation, and theorization in manifold ways. TH theories, on the other hand, count among the core instruments for augmenting expert cognition, but they come with their own costs and limitations. In the larger scheme of things, we consider the DC paradigm to sharpen tool users’ and builders’ view of both the strengths and limitations of their instruments—and to encourage non-partisan collaborations for the sake of synergetic and synoptic results (sec. 5-8).
We use the term nootechnology (from Greek “nous”, mind, and “techné”, art/craft/technique) as a shorthand for the sum total of cognitive artifacts and technologies that make humans smart (Norman, Things) in cycles of individual-technology-group interactions. Such collective cycles of doing and learning arguably also act as a main driver of cultural evolution (Henrich and McElreath).
Critical humanists of all stripes thus know to put artificially individualized concepts such as “author” and their “monographs” into quotation marks (Fish; Fitzpatrick). Theorists of distributed cognition would further add that progress in humanist authorship and scholarship essentially depends on the ongoing development of tools and would refer to both the conceptual ‘tools’ in humanists’ minds and all the tools enabling and augmenting the information processing of the world.
While the evolution of abstract concepts has recently been reframed as the first “cognitive revolution” and dated to 70,000 to 30,000 BCE by Harari, the “scientific revolution” with its practice of theory creation is commonly dated to the 16th century and exemplified by Copernicus’s theory of the movement of celestial bodies.
While it is frequently attributed to Einstein, the actual source of this quote is unknown (Shoemate). An attributable variant of the same symbiotic notion is laid out by Hayles: “The more one works with digital technologies, the more one comes to appreciate the capacity of networked and programmable machines to carry out sophisticated cognitive tasks, and the more the keyboard comes to seem an extension of one’s thoughts rather than an external device on which one types” (3).
To the best of our knowledge, DH teams and projects have rarely utilized DC-based design approaches so far. Recently, we explored such an approach for the development of visualizations in the cultural heritage realm (Mayr and Windhager; Mayr et al., Integrated Visualization of Space and Time: A Distributed Cognition Perspective).
For a more detailed description see https://vocabs.dariah.eu/tadirah/.
While no established taxonomy of TH practices exists—and given the heterogeneity of TH domains and their diversified methodologies—this figure can only be of a heuristic nature and act as an invitation for local TH/DH communities to collect and map their practices in a more detailed fashion.
“When humanities scholars turn to digital media, they confront technologies that operate […] in significantly different cognitive modes, than human understanding.” (Hayles 13)
Our approach of DC-based user-centered design aims to align complex technology designs with the established cognitive activities of future users. To that end, we invited scholars from the humanities and cultural heritage professions to a series of co-design workshops to better understand their actual practices, to refine the user requirements, and to gather feedback on the intended project architecture. In the next iteration, we will ask cultural heritage experts to test early prototypes and observe their interactions to understand what kinds of distributed cognitive activities the technologies afford. Overall, three cycles of testing and refinement are planned to iteratively develop DH tools that work in balance with their users’ cognition.
This line of reasoning is largely based on Windhager (2020).
In the absence of a centralized discourse or terminology, this movement has been given multiple names, including the interplay of “immersion and abstraction” (Dörk et al.), “rapid shuttling” (Kirschenbaum, according to Hayles 31), “screwmeneutic” or “hermenumerical” reasoning (Van Zundert), “differential reading” (Sinclair and Rockwell), “scalable reading” (Weitin et al.; Fickers and Clavert), or the basic operation of “algorithmic criticism” (Ramsay).
Bigger pictures in the TH sense of the word are obviously made almost exclusively from words, and from rather abstract ones at that. A rarely spelled-out challenge of theories—at least in academic (con)texts—thus is the enormous cognitive effort needed to decode and interpret their propositional complexity. In contrast to other media and modalities, text “is terribly cumbersome. It is dispersed, sequential rather than simultaneous, poorly structured and extremely bulky” (Miles and Huberman 11). Big TH pictures thus tend to become ‘visible’ and comprehensible only to experts or readers with considerable amounts of time, dedication, and education, whereas they remain anemic, hermetic, and largely inaccessible to everyone else.
Yet the ways in which these essential frameworks are generated often remain precarious from a ‘scale’-based cultural analytics point of view. Piper (5–7) summarizes related issues and derives countercharges from an inverted “theory gap” argument: TH theories rarely make their genesis transparent or account for gaps in their knowledge creation procedures. Starting from selective studies of cultural materials, they arrive on scholarly stages as “black boxes of charisma and insight”, yet also as constructs with generalized—and heavily conflicting—claims of validity. Debates in TH and in cultural criticism then create agonistic scenarios, where proponents contest and overturn each other’s views in a seemingly endless process without mediation. “As a cultural critic, one feels like the child of parents who argue incessantly out of a sense of sport or even boredom, all in the name of a higher principle.” (7) By contrast, the model of knowledge creation in DH and in cultural analytics—from data selection to exploration and from hypothesis creation to testing, but also regarding processes of tool creation—tends to be more transparent, explicit, and consensus-driven. Because these steps have been made more legible, others can share in them, correct and challenge them, or build on and refine them. As such, the study of culture becomes “more architectonic rather than agonistic, more social and collective.” (Piper 7)
Metaphorical suggestions to refer to such nootechnologies include “telescopes of the mind” (Masterman; McCarty, “A Telescope”), as well as “macroscopes” (Rosnay; Börner; Stefaner), which both draw analogies to established scientific observation technologies, whose optical apparatus brought formerly hidden data dimensions into a perceptually and cognitively accessible format.
Addressing the essential dual role of visualizations as models on our screens and in our minds, Tversky summarizes: “Models are necessary for thinking; by omitting, adding, and distorting the information they represent they can recraft the information into a multitude of forms that the mind can work with to understand extant ideas and create new ones. Models take elements and relations among them in the represented world and map them onto elements and relations in the representing world. In the case of tangible, diagrammatic, and gestural models, the elements and relations are spatial. The fundamental elements are dots and lines, nodes and links. A dot can represent any concept from a place in a route to an idea in a web of concepts. Lines represent relations, any relation, between dots. As such, spatial models rely on more direct and accessible mappings than language, which bears only arbitrary relations to meaning. These mappings can be put into the world and made visible or visceral in graphics and gesture.” (“Multiple Models” 63)
“The philosopher must accept the condition of blindness as the precondition for philosophic insight. He goes blind in order that he may see.” (Nightingale 104)