HyperAI

Noam Chomsky Discusses the True Capabilities and Risks of AI Chatbots Like ChatGPT

3 days ago

Artificial intelligence (AI) has been transforming many aspects of life while raising significant ethical concerns. ChatGPT, an AI-powered chatbot developed by OpenAI, is a prominent example: it relies on large language models (LLMs) to generate human-like responses and is increasingly prevalent on social media and in other fields. Noam Chomsky, one of the world's most respected linguists and public intellectuals, was interviewed by C.J. Polychroniou about the implications of AI, particularly in relation to human capabilities and ethical concerns.

Chomsky draws a crucial distinction between AI as pure engineering and AI as science. Early AI pioneers such as Alan Turing, Herbert Simon, and Marvin Minsky saw AI as a scientific endeavor aimed at understanding human and animal cognition. Over time, this scientific focus has shifted toward an engineering orientation, with the primary goal being the creation of useful products rather than deep understanding.

Chomsky clarifies that while programs can surpass human performance in certain tasks, such as a calculator outperforming human arithmetic or a chess-playing AI defeating human grandmasters, they do not necessarily represent a higher form of intelligence. He emphasizes that many biological organisms, such as desert ants and birds, have cognitive abilities that far exceed human performance in specific domains. These abilities, however, are fundamentally different from human capabilities, which include rich, innate knowledge and the capacity to navigate complex cognitive processes.

On language acquisition, Chomsky argues that LLMs, despite their impressive performance, provide no insight into the underlying biological endowment for language. Unlike scientific theories, which aim to explain and predict, LLMs merely simulate behavior based on vast amounts of data. They cannot distinguish between possible and impossible languages, a fundamental requirement for any theory of human language.
Chomsky illustrates this with an analogy: if a biologist proposed a theory that could not differentiate between real and mythical organisms, it would be dismissed immediately. LLMs, he suggests, similarly lack the scientific rigor needed to contribute meaningfully to our understanding of language.

Chomsky also addresses the ethical and societal threats posed by AI, especially LLMs. While these tools can be incredibly useful, they can also facilitate disinformation, defamation, and other harmful activities, and the combination of AI-generated text, artificial images, and synthetic voices exacerbates these risks. Recognizing the gravity of these issues, thousands of AI researchers have called for a moratorium on development to assess and mitigate potential dangers.

Chomsky further highlights the importance of distinguishing between linguistic competence (the internal cognitive system humans possess from birth) and performance (the observable use of language). While LLMs excel at simulating human performance, they fall short of explaining the deep internal processes that enable humans to acquire and use language. Performance data, such as the conversations generated by chatbots, is inherently limited and can be misleading; true scientific inquiry requires experimental methods to probe and refine our understanding of the underlying system.

Chomsky also discusses the uniquely human attributes, primarily language and thought, that set us apart from other species. These attributes are the focus of genuine scientific research into what makes humans unique. In contrast, the uncritical acceptance of LLMs and similar technologies, driven by their apparent performance, can overshadow these deeper scientific questions and contribute to skepticism about the value of scientific inquiry.
This is reminiscent of corporate campaigns that have sown doubt about scientific evidence, often with disastrous consequences, such as denial of the harmful effects of tobacco, lead, and fossil fuels.

Chomsky is cautious about the current enthusiasm surrounding LLMs, calling it "absurd" by the standards of normal science. He stresses that even with future advances, the basic design flaws of LLMs will prevent them from achieving a true understanding of human language, although future engineering projects may well surpass human performance in specific tasks, much as calculators or specialized robotic systems already do.

On ethical threats, Chomsky notes that AI, if not carefully controlled, can pose significant risks. Automating patient care, for example, could lead to errors that human judgment and empathy would otherwise prevent; removing human oversight from missile defense systems could lead to catastrophic failures. He supports regulatory efforts, such as the European Union's attempt to classify AI tools by risk level, but remains skeptical of their effectiveness, since malicious actors can likely find ways to circumvent safeguards, making the challenge formidable.

Chomsky's insights highlight the need for a balanced approach to AI development, one that recognizes both its potential benefits and its inherent limitations. Industry experts and researchers echo his concerns, emphasizing rigorous scientific methods and ethical considerations. OpenAI, the company behind ChatGPT, acknowledges the limitations of its models and is working to improve them while engaging in responsible AI practices. The broader tech community must prioritize understanding and safety to harness the full potential of AI without compromising human welfare.
