International Symposium “SIGN AND COGNITION” – February 9, 2026

On February 9, 2026, the International Symposium “SIGN AND COGNITION” was held in Room 33 of the General Education and Research Building, Nagasaki University.

Mario Verdicchio explored the problem of symbol grounding and its implications for understanding meaning in language, artificial intelligence, and philosophy in his lecture titled “Foundations of Language: Semantics, Signs, and Symbol Grounding.” Drawing on traditions from analytic philosophy, semiotics, cognitive neuroscience, and AI, he examined how symbols come to possess semantic content rather than merely formal structure. Central to the discussion was John Searle’s Chinese Room thought experiment, which illustrates the distinction between syntactic symbol manipulation and genuine semantic understanding. While critics argue that the experiment relies on the unrealistic assumption of a complete rulebook, this objection ultimately reinforces Searle’s point: natural language cannot be fully systematized by finite, rule-based systems, suggesting limits to computational models of meaning. The lecture then considered generative AI, noting that despite its flexibility and apparent creativity, it is physically and functionally continuous with conventional software and remains grounded in numerical and logical operations. Through examples such as vagueness (the concept of a “mountain”) and logical reasoning (modus ponens), he argued that meaning resists full formalization. Consequently, AI appears ill-suited for explaining the foundations of meaning, which may be better addressed through philosophical and cognitive approaches.
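For reference, modus ponens in its standard schematic form:

\[
\frac{P \rightarrow Q \qquad P}{Q}
\]

On the lecture’s argument, even this elementary rule fixes only the syntactic transition from premises to conclusion; what P and Q mean must be supplied from outside the formalism.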

Abdurrahman Gülbeyaz introduced a sign-centred model for making an individual’s language repertoire measurable in the lecture titled “Quantifying the Linguistic Sign Arsenal: A Model for Grounding the Language Repertoire.” Starting from lifelong brain plasticity, the model treats everyday “languaging” as dense environmental input that can contribute to cognitive reserve, with relevance for cognitive aging, cognitive decline, and dementia.
 Instead of counting named “languages,” the approach focuses on deployable linguistic signs. “Languages” are seen as largely institutional labels, while the repertoire is modelled as one internally structured inventory of signs at the Brain–Environment Interface; labels like “German” or “Turkish” may be used only as practical indices for data collection.
 Because a full inventory is impossible, the repertoire is quantified through modelling, especially via valence (how many form variants can express a concept), producing a valence distribution profile. An elicitation procedure uses 20 core nouns and a five-level competence scale to compute a continuous Language Repertoire Index (lr) that captures repertoire structure rather than simple counts.
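 As a minimal illustration, assuming (hypothetically) that lr averages competence-weighted valence across the elicited concepts, the computation might look as follows; the aggregation formula and data here are illustrative, not the model’s actual specification:

```python
# Illustrative sketch of a valence-based repertoire index.
# ASSUMPTION: lr is taken here as the mean competence-weighted valence
# across elicited core nouns; the model's actual aggregation may differ.
from statistics import mean

# Hypothetical elicitation data: for each core noun (concept), the
# self-rated competence (1-5 scale) of every form variant the speaker
# can deploy, regardless of which named "language" it is indexed to.
repertoire = {
    "water": [5, 4, 2],   # e.g. variants indexed as German, Turkish, English
    "house": [5, 3],
    "tree":  [4],
    # ... remaining core nouns elided
}

def valence(variants):
    """Valence of a concept: number of deployable form variants."""
    return len(variants)

def lr_index(repertoire, scale_max=5):
    """Continuous repertoire index: mean of competence-weighted
    valences (hypothetical aggregation)."""
    scores = [
        sum(c / scale_max for c in variants)   # competence-weighted valence
        for variants in repertoire.values()
    ]
    return mean(scores)

profile = {concept: valence(v) for concept, v in repertoire.items()}
print(profile)                       # valence distribution profile
print(round(lr_index(repertoire), 3))
```

On this toy aggregation, a speaker who can express each concept through many well-mastered variants scores higher than one with the same number of named “languages” but fewer deployable signs, which is the structural emphasis of the model.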
 Overall, the framework links theory to measurement and supports language-based prevention/intervention by strengthening everyday linguistic practice and the underlying sign inventory to help buffer cognitive decline.

Yasuko Nakamura and Wanwan Zheng offered a theoretical reframing of the transformation of writing systems from the perspective of the co-evolution of language and devices in “The Transformation of Writing Systems – Language, Media, and Anthropotechnics –” (title revised). Using Bourdieu’s concept of habitus, they examined the processes through which social structures are stabilized as embodied practices. Drawing on Friedrich Kittler’s Discourse Networks 1800/1900, they discussed the shift from an educational regime of inscription to a technological and mechanical regime of recording. The 1800 system established literacy as a condition for social success but did not accommodate subjects who deviated from this norm. The case of Daniel Paul Schreber and his father, situated in this transitional period, provides a concrete example of how bodily correction and technical apparatuses oriented toward “beauty” intervened in processes of subject formation. To examine how such educational correction becomes inscribed within the internal structure of thought, a corpus analysis of Freud’s writings was conducted using BERTopic, visualizing the temporal distribution of lexical clusters and the internal fault lines within his theoretical framework. Furthermore, by connecting Sloterdijk’s theory of anthropotechnics with Damasio’s theory of homeostasis, the writing system is reconceptualized as an apparatus for human self-formative training. At the same time, this training co-evolves with the development of technical devices and forms a new habitus as a socially stabilized structure of embodiment. Yet it remains an ever-evolving process that invariably leaves something unincorporated.
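As a rough illustration, a BERTopic pipeline over a dated corpus might look as follows; the corpus preparation and parameters here are hypothetical, not the study’s actual setup:

```python
# Minimal sketch of a BERTopic analysis over a dated corpus
# (hypothetical pipeline; not the study's actual configuration).
from bertopic import BERTopic

# Hypothetical corpus: dated passages from Freud's writings.
# BERTopic needs a corpus of substantial size; the three entries
# below stand in as placeholders only.
docs = ["Die Traumdeutung ...", "Jenseits des Lustprinzips ...", "Das Ich und das Es ..."]
years = [1900, 1920, 1923]  # one timestamp per passage

# Multilingual embeddings accommodate German-language text.
topic_model = BERTopic(language="multilingual")
topics, probs = topic_model.fit_transform(docs)

# Temporal distribution of the discovered lexical clusters.
topics_over_time = topic_model.topics_over_time(docs, years, nr_bins=10)
topic_model.visualize_topics_over_time(topics_over_time).show()
```

The resulting figure plots topic frequencies over time, one concrete way to visualize the temporal distribution of lexical clusters described above.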

Hideki Ohira gave a presentation titled “Generation of Subjective Experiences Based on Predictive Processing.” In predictive processing theories that have become prominent in recent cognitive neuroscience, the brain is understood not as a passive organ but as one that actively constructs experience by predicting signals from both the external world and the body itself, and by minimizing the prediction error between those signals and actual sensory input. This view has been supported by substantial empirical evidence in the domains of perception and action, and recent research suggests that interoception—the perception of bodily states—operates according to similar principles. If this perspective is correct, it implies that numerous prediction errors constantly arise within the brain. It is thought that the brain creates and maintains coherent, continuous experiences by flexibly modifying the hierarchical structure and precision weighting of prediction errors across all domains. Furthermore, such individual predictive processing is shared, maintained, and dynamically transformed among multiple others through symbols like language via collective predictive coding. Assuming this principle makes it possible to provide a unified explanation of human and societal phenomena, and more detailed examination of this principle is desirable.
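A toy illustration of the precision-weighting idea, assuming a single non-hierarchical belief (a minimal sketch, not a model from the talk): the belief moves toward each observation in proportion to the relative precision, i.e. the inverse variance, of the sensory signal.

```python
# Toy precision-weighted belief updating (minimal sketch of the
# predictive-processing idea; not a model presented in the talk).

def update_belief(mu, obs, prior_precision, sensory_precision):
    """Move the predicted value mu toward the observation obs,
    weighted by how precise (reliable) the sensory signal is."""
    error = obs - mu                                            # prediction error
    gain = sensory_precision / (sensory_precision + prior_precision)
    return mu + gain * error                                    # precision-weighted update

observations = [1.0, 1.2, 0.9, 1.1]

# High sensory precision: errors are trusted, belief tracks the input.
mu = 0.0
for obs in observations:
    mu = update_belief(mu, obs, prior_precision=1.0, sensory_precision=4.0)
print(round(mu, 3))  # close to the observed values

# Low sensory precision: errors are down-weighted, belief changes slowly.
mu = 0.0
for obs in observations:
    mu = update_belief(mu, obs, prior_precision=1.0, sensory_precision=0.1)
print(round(mu, 3))  # remains much closer to the prior
```

Varying sensory_precision reproduces the qualitative point: the same stream of prediction errors can reshape experience strongly or hardly at all depending on how it is weighted.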

Daisuke Ueno presented an integrative framework for organizing determinants of cognitive reserve in cognitive aging, with two timely extensions: multilingualism and large language models (LLMs), in a talk titled “An Integrative Review of Determinants of Cognitive Reserve: cognitive reserve, multilingualism, and LLMs.” Building on contemporary conceptual clarifications, reserve was divided into brain reserve (structural capacity), cognitive reserve (adaptive processing via efficiency, compensation, and flexibility), and brain maintenance (slower or reduced neurobiological change), and positioned as a moderator that weakens the link between pathology and clinical symptoms.
 He then reviewed commonly used proxies—education, occupational complexity, cognitively and socially stimulating activities, and physical activity—while emphasizing that proxies are not equivalent to causal mechanisms due to confounding and reverse causation. To address this, he proposed a translational step from “factors” to mechanisms (e.g., education → vocabulary/abstraction/learning habits → more efficient and compensatory processing) and classified determinants by whether they primarily contribute to brain reserve, brain maintenance, or cognitive reserve.
 Next, he discussed multilingualism as a potential contributor to cognitive reserve. While multilingual experience may strengthen language control (inhibition/switching) and semantic processing, the evidence remains mixed and appears strongly dependent on boundary conditions (e.g., proficiency, frequency of use, language distance, and code-switching), as well as confounding factors such as migration and socioeconomic status.
 Finally, he considered whether LLMs could contribute to cognitive reserve through three hypothesized pathways: increasing cognitive stimulation (complexity), supporting social connection, and providing compensation for everyday functioning. He argued that any benefit is conditional on avoiding overtrust, dependence, and misinformation, implying that future work should focus on designing and measuring calibrated trust in older adults—particularly in face-to-face, remote, and chatbot-mediated contexts. The talk concluded with discussion questions on whether LLM-based interventions function primarily as stimulation or compensation, how calibrated trust should be operationalized, and whether multilingual experience moderates the impact of LLM use.

Tetsuya Yamamoto delivered a lecture titled “Reconfiguring Signs and Cognition through Generative AI and Augmented Expression: Digital Embodiment and Well-being.” He examined how augmented expression technologies such as generative AI, AR/VR, and robots can reconfigure the relationship between “signs” and “cognition,” focusing on the perspectives of digital embodiment and well-being. Here, “signs” are not limited to language but refer to perceptible cues that direct the recipient’s attention, emotions, and interpretation—such as bodily movements, voice, light, and the otherness inherent in artifacts. Generative AI and augmented reality technologies hold the potential to transform symbols from fixed media of meaning transmission into “regulatory cues” that shift cognitive states through interactivity, physicality, and continuity.
 As key practical examples, embodied augmentation performances using projection mapping and AR technology were shown to induce high immersion and emotional arousal. He also presented data suggesting that continuous interaction with generative AI may be associated with reduced depression and improved self-esteem, and examined the formation of emotional bonds.
 Based on these findings, he discussed the potential for augmented symbols to influence cognitive states through the body, creating new psychological support and research methods, while also outlining safety and ethical considerations.

Hiroki Ozawa delivered a lecture titled “An Encounter between Eastern and Western Psychotherapies: Naikan Therapy and the Reconstruction of Meaning.” Drawing on his clinical experience with cases of alcohol dependence, he positioned Naikan therapy as an intervention that updates one’s self-narrative through the redistribution of salience and attention. He discussed its underlying mechanism as a bridge between Eastern and Western psychotherapeutic traditions. Naikan therapy transforms vague self-understanding into “structured introspective tasks” by repeatedly recalling, without interpretation, specific questions about a particular other: ① what they did for you, ② what you did in return, and ③ how you caused them trouble. From a neurocognitive perspective, he focused on the interaction between the default mode network (DMN), responsible for past reference; the central executive network (CEN), responsible for cognitive control; and the salience network (SN), which mediates between them. He proposed that “awareness” accompanied by interoceptive sensations and emotional responses can arise as network reorganization via the SN and as an update in precision and attention allocation within predictive processing. Furthermore, in conditions such as schizophrenia where salience hyperactivity is presumed, the “transparency” of meaning may serve as an aggravating factor; thus, careful assessment of suitability and implementation within a protective environment are critical. Based on the above, Naikan therapy can be redefined as a universal process involving attention, precision, and narrative updating while retaining its form as an Eastern practice. He demonstrated a framework that can be complementarily integrated with CBT and mindfulness.

Kazunori Hayanagi presented a lecture entitled “Between Labor Power and the Human Beings: The Double Valence of Signs in Max Frisch’s Lecture ‘Überfremdung II.’” The phrase most commonly associated with Max Frisch appears at the beginning of his prose text “Überfremdung I”: “We called for labor, but human beings came.” However, the first half of the same sentence states that the “Herrenvolk” (ruling population) of a small country becomes aware of its crisis. The complete sentence, including this first half, is rarely cited in migration studies discourse. He examined why Frisch employed the sign Herrenvolk to designate Swiss citizens by tracing the changing connotations of the sign Überfremdung. Herrenvolk was a Nazi term. In that context, non-Aryan fremd (foreign) groups were positioned as Knecht (slaves), with Jews placed at the very bottom. However, in postwar German-speaking regions, fremd primarily came to refer to “foreign workers.” In other words, Frisch deliberately employed Herrenvolk in his theory of Überfremdung in order to reinterpret the mentality of “exclusion of outsiders” shared by Swiss citizens and the Nazis within the historical layering of Überfremdung’s connotations—a shift from an excess of Jews to an excess of foreign workers.

List of symposium participants (presentation order)
– Mario VERDICCHIO, University of Bergamo
– Abdurrahman GÜLBEYAZ, Nagasaki University
– Yasuko NAKAMURA, Nagoya University
– Wanwan ZHENG, Nagoya University
– Hideki OHIRA, Nagoya University
– Daisuke UENO, Kyoto Women’s University
– Tetsuya YAMAMOTO, Tokushima University
– Hiroki OZAWA, Nagasaki University
– Kazunori HAYANAGI, Nagasaki University