Osservazioni sulla Filosofia della Linguistica Computazionale. Chiarificazione dei presupposti teorici del Natural Language Processing.

Luca Capone
Keywords: Semantics, Wittgenstein, Natural Language Processing, Computational Linguistics, Artificial Intelligence

Abstract

The article presents an overview of the literature on the relationship between recent Language Models and the concept of meaning. Technical advancements have prompted extensive reflection on the implications of these systems for the study of language and semantics. These implications are currently fueling a lively debate among scholars across various disciplines, who are engaging in speculative discussions about the nature of meaning and its representation. From a philosophical perspective, the theories of meaning that have emerged from these reflections often replicate misconceptions about the nature of language that Ludwig Wittgenstein diagnosed in his works. The literature exhibits many of the positions criticised by the Austrian philosopher: a psychological understanding of word comprehension, a logicist interpretation of how language functions, a referentialist view of the meaning of linguistic signs, and so forth. This article endeavours to clarify these misunderstandings by drawing on classic insights from Wittgenstein's work, in order to avoid the theoretical impasses scholars encounter when analysing Language Models (LMs). The benefits of this approach are twofold. Firstly, the phenomenon of meaning is placed in its natural context, namely language, avoiding interference from unrelated disciplinary fields (psychology, sociology, logic, cognitive science, etc.); secondly, the field of theoretical investigation into the performance of LMs is cleared of conceptual confusions, and it becomes possible to describe the relationship between meaning and the vector representations produced by such models.
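The abstract's closing remark concerns the «vector representations» of language models. For readers unfamiliar with the object under discussion, the following minimal Python sketch (not drawn from the article; the toy vectors are made up for illustration) shows what such a representation is in models of the word2vec family (Mikolov et al. 2013): each word is a point in a vector space, and «semantic similarity» is operationalised geometrically, for instance as the cosine of the angle between two vectors.

```python
# Illustrative sketch only: hypothetical 4-dimensional word embeddings.
# Real models (e.g. word2vec) learn vectors with hundreds of dimensions
# from distributional statistics; the numbers below are invented.
import numpy as np

embeddings = {
    "cat": np.array([0.8, 0.1, 0.6, 0.2]),
    "dog": np.array([0.7, 0.2, 0.5, 0.3]),
    "car": np.array([0.1, 0.9, 0.2, 0.8]),
}

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine_similarity(embeddings["cat"], embeddings["dog"]))  # ~0.98, "similar"
print(cosine_similarity(embeddings["cat"], embeddings["car"]))  # ~0.36, "dissimilar"
```

In contextual models such as BERT (Rogers et al. 2020; Ethayarajh 2019), the same word receives a different vector in each sentence, which is precisely what makes the relationship between these representations and meaning philosophically contentious.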

References

Andreas, Jacob (2019), «Measuring compositionality in representation learning», in arXiv: 1902.07181.

Bender, Emily, and Koller, Alexander (2020), «Climbing towards NLU: On meaning, form, and understanding in the age of data», in Proceedings of the 58th Annual Meeting of the ACL, pp. 5185–5198.

Bender, Emily, et al. (2021), «On the dangers of stochastic parrots: Can language models be too big?», in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 610–623.

Bengio, Yoshua (2009), «Learning deep architectures for AI», in Foundations and Trends in Machine Learning, 2(1), pp. 1–127.

Bisk, Yonatan, et al. (2020), «Experience grounds language», in arXiv: 2004.10151.

Buder-Gröndahl, Tommi (2023), «The ambiguity of BERTology: What do large language models represent?», in Synthese, 203(1).

Capone, Luca (2021), «Which theory of language for deep neural networks? Speech and cognition in humans and machines», in Technology and Language, 2(4).

Chomsky, Noam (1957), Syntactic Structures, The Hague, Mouton & Co. (Le strutture della sintassi, trans. F. Antinucci, Laterza, Roma-Bari, 1974).

Chomsky, Noam (2011), «Keynote panel: The golden age — a look at the original roots of artificial intelligence, cognitive science, and neuroscience», http://languagelog.ldc.upenn.edu/myl/PinkerChomskyMIT.html

De Mauro, Tullio (1969), Introduzione alla semantica, Roma, Laterza, 1999.

De Mauro, Tullio (1982), Minisemantica dei linguaggi non verbali e delle lingue, Roma, Laterza, 2019.

Dupre, Gabe (2021), «(What) Can deep learning contribute to theoretical linguistics?», in Minds and Machines, 31(4), pp. 617–635.

Eco, Umberto (1974), Trattato di semiotica generale, Milano, La nave di Teseo, 2016.

Eco, Umberto (1997), Kant e l’ornitorinco, Milano, La nave di Teseo, 2016.

Ethayarajh, Kawin (2019), «How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings», in Proceedings of the 2019 EMNLP-IJCNLP, pp. 55–65.

Fodor, Jerry (1987), Psychosemantics. The problem of meaning in the philosophy of mind, Cambridge, MIT Press, 1993.

Fodor, Jerry, and Pylyshyn, Zenon (1988), «Connectionism and cognitive architecture: A critical analysis», in Cognition, 28(1–2), pp. 3–71.

Friedman, Robert (2023), «Large Language Models and Logical Reasoning», in Encyclopedia, 3, pp. 687–697.

Gastaldi, Juan Luis (2021), «Why can computers understand natural language? The structuralist image of language behind word embeddings», in Philosophy & Technology, 34(1), pp. 149–214.

Harnad, Stevan (1990), «The symbol grounding problem», in Physica D: Nonlinear Phenomena, 42(1–3), pp. 335–346.

Jakobson, Roman (2002), Saggi di linguistica generale, Milano, Feltrinelli.

Kovaleva, Olga, et al. (2019), «Revealing the dark secrets of BERT», in arXiv: 1908.08593.

Kulmizev, Artur, and Nivre, Joakim (2021), «Schrödinger's tree — On syntax and neural language models», in arXiv: 2110.08887.

Lenci, Alessandro (2023), «Understanding natural language understanding systems», in Sistemi Intelligenti, 2, pp. 277–302.

Lo Piparo, Franco (2003), Aristotele e il linguaggio. Cosa fa di una lingua una lingua, Roma-Bari, Laterza.

Merrill, William, et al. (2021), «Provable limitations of acquiring meaning from ungrounded form: What will future language models understand?», in Transactions of the ACL, 9, pp. 1047–1060.

Mikolov, Tomas, et al. (2013), «Efficient Estimation of Word Representations in Vector Space», in arXiv: 1301.3781.

Mu, Jesse, and Andreas, Jacob (2020), «Compositional explanations of neurons», in Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS 2020).

Murty, Shikhar, et al. (2022), «Characterizing intrinsic compositionality in transformers with tree projections», in arXiv: 2211.01288.

Pater, Joe (2019), «Generative linguistics and neural networks at 60: Foundation, friction, and fusion», in Language, 95(1).

Piantadosi, Steven, and Hill, Felix (2022), «Meaning without reference in large language models», in arXiv: 2208.02957.

Pilehvar, Mohammad, and Camacho-Collados, Jose (2021), Embeddings in natural language processing: Theory and advances in vector representations of meaning, Cham, Springer.

Potts, Chris (2020), «Is it possible for language models to achieve language understanding?», https://chrisgpotts.medium.com/is-it-possible-for-language-models-to-achieve-language-understanding-81df45082ee2

Rogers, Anna, et al. (2020), «A primer in BERTology: What we know about how BERT works», in Transactions of the ACL, 8, pp. 842–866.

Sahlgren, Magnus, and Carlsson, Fredrik (2021), «The singleton fallacy: Why current critiques of language models miss the point», in arXiv: 2102.04310.

Saussure, Ferdinand de (1916), Cours de linguistique générale, Lausanne, Payot (Corso di linguistica generale, trans. T. De Mauro, Laterza, Roma-Bari, 2015).

Smilkov, Daniel, et al. (2016), «Embedding projector: Interactive visualization and interpretation of embeddings», in arXiv: 1611.05469.

Søgaard, Anders (2022), «Understanding models understanding language», in Synthese, 200(6).

Vaswani, Ashish, et al. (2017), «Attention is all you need», in arXiv: 1706.03762.

Wittgenstein, Ludwig (1953), Philosophische Untersuchungen (Ricerche filosofiche, trans. M. Trinchero, Einaudi, Torino, 2014).

Wittgenstein, Ludwig (1921), «Tractatus Logico-Philosophicus», in Annalen der Naturphilosophie (Tractatus Logico-Philosophicus e quaderni 1914–1916, trans. G. Conte, Einaudi, Torino, 2009).

Wolf, Thomas (2018), «Learning meaning in natural language processing – The semantics mega-thread», https://medium.com/huggingface/learning-meaning-in-natural-language-processing-the-semantics-mega-thread-9c0332dfe28e

Published
2024-11-02
How to cite
Capone, L. (2024) «Osservazioni sulla Filosofia della Linguistica Computazionale. Chiarificazione dei presupposti teorici del Natural Language Processing.», Rivista Italiana di Filosofia del Linguaggio. doi: 10.4396/SFL202302.