The launch of ChatGPT was a watershed moment for AI, sparking debate about the technology's future. Some see it as a step toward superintelligence; others dismiss it as a vehicle for AI snake-oil salesmanship. My own recent experience with vibe coding, however, led to a conclusion that fits neither camp.
Vibe coding, a term popularized by AI researcher Andrej Karpathy, means developing software by describing what you want to an AI model in plain language and letting it generate the code. Tools such as Claude Code and OpenAI's Codex have demonstrated impressive coding ability, fueling claims that a new era of AI disruption has arrived.
Experimenting with these tools myself, I was able to build useful apps in short order despite limited coding experience. Working hands-on also let me probe models like ChatGPT more deeply, and what I found were flaws inherent in how large language models (LLMs) are commercialized and productized: the result is a machine shaped by its makers' priorities, not necessarily by your values.
Most users interact with models that have been tuned through reinforcement learning from human feedback (RLHF), a process that shapes the chatbot's voice and behavior. Because the feedback reflects the values and ideologies of the model's creators, the resulting assistant is often reluctant to express uncertainty or to challenge the user.
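The mechanism behind this is easy to caricature: a reward model trained on human preference comparisons scores candidate responses, and the chatbot is optimized toward whatever scores highest. A toy sketch of that preference-ranking step (the "reward model" here is a hand-written heuristic of my own, not any lab's actual pipeline):

```python
# Toy illustration of the preference-ranking step behind RLHF.
# The reward function is a made-up heuristic standing in for a
# trained reward model -- it only shows the shape of the mechanism.

def toy_reward(response: str) -> float:
    """Score a candidate the way a trained reward model would:
    higher reward for phrasing human raters tended to prefer."""
    score = 0.0
    # Suppose raters favored confident, agreeable phrasing...
    if "certainly" in response.lower():
        score += 1.0
    # ...and penalized expressed uncertainty or pushback.
    if "i'm not sure" in response.lower():
        score -= 1.0
    return score

def pick_preferred(candidates: list[str]) -> str:
    """The policy is nudged toward the highest-scoring candidate."""
    return max(candidates, key=toy_reward)

candidates = [
    "Certainly! Here is the answer you wanted.",
    "I'm not sure; the evidence cuts both ways.",
]
print(pick_preferred(candidates))  # the confident answer wins
```

Scale that selection pressure across millions of comparisons and you get an assistant that defaults to confident agreement, whatever the evidence.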
By instructing ChatGPT to question itself and to prioritize evidence-based analysis, I produced a configuration aligned with my own preferences and values. That personalization turned the tool into a cognitive mirror, one that prompts critical thinking and engagement rather than passive consumption.
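In practice this kind of customization is just a standing system prompt prepended to every conversation. A minimal sketch, using the widely documented "role"/"content" chat-message schema; the instruction text itself is only one example of such a prompt, not the exact wording I used:

```python
# A system prompt that counteracts the default RLHF-shaped persona.
# The message format follows the common role/content chat schema;
# the instruction text is an illustrative example.

SYSTEM_PROMPT = (
    "Do not flatter or agree by default. "
    "State your confidence and the evidence behind each claim. "
    "If my premise is weak, say so and explain why."
)

def build_messages(user_question: str) -> list[dict]:
    """Prepend the custom instructions to a conversation turn."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

messages = build_messages("Is my startup idea guaranteed to work?")
print(messages[0]["role"])  # "system" -- the instructions lead every request
```

Chat interfaces expose the same lever under names like "custom instructions"; the point is that a few sentences of standing guidance can override much of the trained-in agreeableness.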
AI-generated text offers little value on its own; the value appears when you prompt the tool around your own needs and perspective. The real potential of AI lies in helping individuals solve their own unique problems in their own way, not in delivering pre-packaged answers.
Realizing that potential calls for decentralized models that give users full control over their AI tools, an approach that also eases the privacy, copyright, and environmental concerns raised by centralized systems.
In conclusion, AI presents fascinating possibilities, but these tools deserve mindful, cautious use. Rather than embracing one-size-fits-all solutions, we should approach AI with clear-eyed awareness of its risks and limitations. Treated as an instrument of individual empowerment rather than mass consumption, AI can deliver on its real promise of innovation and problem-solving.

