The Laws of Thought: A Review of Cognitive Research and Artificial Intelligence
In his book “The Laws of Thought: The quest for a mathematical theory of the mind,” Tom Griffiths delves into the longstanding debate in cognitive science between computationalism and connectionism. Computationalism posits that intelligence can be explained in terms of rules, symbols, and logic, while connectionism holds that intelligence emerges from networks of interconnected units resembling the brain’s neurons.
Griffiths traces the history of attempts to formalize thought through three mathematical frameworks: rules and symbols, neural networks, and probability. The first treats thinking as problem-solving: breaking a task into manageable steps and searching systematically for a solution. Neural networks, by contrast, learn from examples and interactions, producing complex behavior without explicit rules. Probability and statistics, finally, bring in uncertainty, capturing how humans weigh evidence and update their beliefs.
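To make that last framework concrete, here is a minimal sketch of Bayesian belief updating, my own illustration in Python rather than anything drawn from the book: an agent starts agnostic about a hypothesis and grows more confident as evidence accumulates.

```python
# A toy illustration of the probabilistic framework: Bayesian belief
# updating. The agent's confidence in a hypothesis H is revised each
# time a piece of evidence E arrives, via Bayes' rule:
#   P(H | E) = P(E | H) * P(H) / P(E)

def update_belief(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior P(H | E) given the prior P(H)."""
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

# Start at 50/50, then observe evidence three times in a row that is
# three times likelier if H is true (0.9 vs. 0.3).
belief = 0.5
for step in range(1, 4):
    belief = update_belief(belief, 0.9, 0.3)
    print(f"after observation {step}: belief = {belief:.3f}")
# after observation 1: belief = 0.750
# after observation 2: belief = 0.900
# after observation 3: belief = 0.964
```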
The author argues that a comprehensive understanding of intelligence requires a blend of all three. By tracing the historical attempts to describe the mind’s processes mathematically, Griffiths offers a detailed and engaging perspective on the development of artificial intelligence.
Neuroscientists Gaurav Suri and Jay McClelland present a different viewpoint in “The Emergent Mind: How intelligence arises in people and machines.” They propose that the mind emerges from interacting networks of neurons, whether biological or artificial, and that this collective activity generates thoughts, emotions, and decisions.
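To give a flavor of what emergence from simple units means in practice, here is another toy sketch of my own, not the authors’: a single artificial neuron learns the logical AND function from examples alone, with the rule never programmed in.

```python
# A toy illustration of the connectionist idea: a single threshold
# neuron learns logical AND purely from examples. Nothing resembling
# the rule "output 1 only if both inputs are 1" is ever written down;
# the behavior emerges from repeated small weight adjustments.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1 = w2 = bias = 0.0
rate = 0.1  # learning rate

def fire(x1, x2):
    """Threshold unit: fires (returns 1) if weighted input exceeds 0."""
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

for _ in range(20):  # classic perceptron learning rule
    for (x1, x2), target in data:
        error = target - fire(x1, x2)
        w1 += rate * error * x1
        w2 += rate * error * x2
        bias += rate * error

for (x1, x2), target in data:
    print(f"AND({x1}, {x2}) -> {fire(x1, x2)}  (expected {target})")
```

Since AND is linearly separable, the perceptron rule is guaranteed to converge; after a handful of passes the unit classifies all four cases correctly.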
While both books offer insights into the generative AI revolution, they diverge on where it should go next: Griffiths emphasizes a hybrid approach combining rules, neural networks, and probability, whereas Suri and McClelland advocate purely neural architectures as the path to autonomous, goal-driven AI.
Despite these differing perspectives, both books highlight the challenges and possibilities in AI research. Griffiths provides historical context for understanding the evolution of AI, while Suri and McClelland offer a provocative vision of its future.
Griffiths, in particular, leaves readers with a nuanced understanding of the diverse frameworks used to describe thought, and with the sense that no single one suffices on its own.
For further reading on machine intelligence, two additional books are recommended:
1. “Algorithms to Live By” by Brian Christian and Tom Griffiths: This non-technical exploration of computing concepts sheds light on everyday decision-making and the potential of algorithmic approaches to enhance human cognition.
2. “Rebooting AI: Building artificial intelligence we can trust” by Gary Marcus and Ernest Davis: This book advocates for hybrid AI systems that combine the strengths of neural networks with rules and symbols, offering a more robust and reliable approach to artificial intelligence.
In conclusion, the debate between computationalism and connectionism continues to shape the field of artificial intelligence. As researchers strive to unravel the complexities of human intelligence, an approach that integrates these mathematical frameworks may hold the key to unlocking AI’s full potential.

