The evolution of macroeconomic modeling has long been a subject of debate among economists. The rational expectations revolution was widely seen as a significant step forward in understanding the complexities of the macroeconomy, yet some experts, including the author of this article, remain skeptical about how much that theoretical innovation actually changed the field.
In the late 1970s, luminaries such as John Taylor and Stanley Fischer integrated rational expectations into sticky-wage and sticky-price models, launching the New Keynesian revolution. Since then, progress in macroeconomic modeling appears to have stagnated, with a few exceptions such as the Princeton School's work on the zero-lower-bound problem.
The author contrasts the rapid advancement typical of competitive fields like economics, science, and the arts with the slow progress in macroeconomics since the rational expectations revolution. They argue that in highly competitive environments, the most useful applications of a new conceptual approach tend to emerge quickly.
Recently, conversations with artificial intelligence experts have made the author skeptical about the future pace of improvement in large language models like ChatGPT. They question how much is gained by exposing these models to ever-larger datasets, citing the law of diminishing returns: just as a second or third textbook on a subject teaches far less than the first, each additional tranche of training data may add less knowledge than the one before it.
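To make the diminishing-returns intuition concrete, here is a minimal Python sketch, assuming the Chinchilla-style result that loss falls as a power law in dataset size; the constants E, B, and beta below are illustrative placeholders, not fitted values from any real model.

```python
# Illustrative sketch of diminishing returns from data scaling.
# Loss is modeled as a Chinchilla-style power law in dataset size D:
#   loss(D) = E + B / D**beta
# E, B, and beta are hypothetical placeholders chosen only to show
# the shape of the curve, not measured coefficients.

E = 1.7      # irreducible loss floor (hypothetical)
B = 410.0    # data-term coefficient (hypothetical)
beta = 0.28  # data-scaling exponent (hypothetical)

def loss(tokens: float) -> float:
    """Modeled training loss for a given number of training tokens."""
    return E + B / tokens**beta

# Each doubling of the dataset buys a smaller absolute improvement.
prev = loss(1e9)
for tokens in [2e9, 4e9, 8e9, 16e9, 32e9]:
    cur = loss(tokens)
    print(f"{tokens:8.0e} tokens: loss {cur:.4f} (gain {prev - cur:.4f})")
    prev = cur
```

Under any curve of this shape, the gain from each doubling shrinks toward zero, which is the formal version of the author's textbook analogy.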
A Bloomberg article highlights the difficulties leading AI companies face in developing more advanced models, suggesting diminishing returns on their efforts. This news raises questions about the timeline for achieving artificial superintelligence.
Despite these concerns, the author acknowledges the impressive advances in large language models and the transformative potential of AI for the economy. They grant that AI may eventually revolutionize many industries but caution that the path to artificial superintelligence may be slower than anticipated.
The discussion of methods for improving artificial intelligence other than expanding datasets offers a glimmer of hope for getting past the data wall. Still, the current lull in LLM development suggests the field may struggle to deliver its next major breakthrough.
Ultimately, how one reads these developments depends on one's view of the risks posed by artificial superintelligence. The ongoing debate and research in AI will continue to shape the future of the technology and its impact on society.