The Future of Self-Help: How AI Is Changing the Landscape
What if artificial intelligence could help us help ourselves?
Should you give a f*ck about life?
Back in 2016, author Mark Manson shook up the self-help world with a novel answer to this question in his book provocatively titled The Subtle Art of Not Giving a F*ck.
Instead of white-knuckling your way to unattainable positivity or chasing endless shiny things, it's better to narrow the aperture. Manson advised investing deeply in what you deem significant for your life, not what the prevailing culture insists you should care about.
After that? Accept full responsibility for all that comes your way.
Manson's message struck a nerve, selling millions of copies. A #1 New York Times bestseller, the book camped out on that lofty literary list for years.
From Bestseller to Entrepreneur: Self-Help in the AI Age
In the nine years since publication, the world has shifted, especially technologically, thanks to artificial intelligence. As a result, many people who once turned to personal-improvement books now consult AI tools like ChatGPT.
Young people are especially keen to do so. "13.1% of US youths, representing approximately 5.4 million individuals, use generative AI for mental health advice, with higher rates (22.2%) among those 18 years and older," as reported in a November 7, 2025 research letter published in JAMA Network Open. "Of these 5.4 million users, 65.5% engaged at least monthly and 92.7% found the advice helpful."
Despite this high adoption rate, Manson told me in an interview that there's something sorely lacking in AI self-help: specificity. "Values are extremely personal," he said. "But most of the AI models out there offer broad advice."
When Inoffensive, Sweeping Advice Fails to Deliver
To Manson's point, most available AI applications, trained for general purposes, tend to be articulate yet inoffensive, emphasizing agreeability and validation over candor. "Because [generative] AI chatbots are coded to be affirming, there is a validating quality to responses, which is a huge part of relational support," says Douglas Mennin, professor of clinical psychology, director of clinical training, and co-developer of Emotion Regulation Therapy, as quoted by Teachers College, Columbia University.
Seeing the need for a chatbot geared to the user's unique values and challenges, Manson partnered with futurist Raj Singh to launch Purpose, an AI personal growth mentor. "Optimized to quickly identify your blind spots and suggest practical improvements you can take action on today," it's also meant to challenge you to be the best version of yourself. (Singh is no stranger to artificial intelligence. Backed by Google, he built Go Moment, the first AI-augmented hotel concierge, later acquired by Revinate.)
An AI Mentor Built to Recall and Rejoin
Key to their model is a "persistent memory architecture" that gets to know you deeply, including your history, so it can do more than tell you what you're doing right: it can push back on your ideas and thoughts, producing sustainable growth, the kind that comes from honesty rather than obsequious praise. "We've tried to make it challenging and get it to question your assumptions, so it understands who you are and how you're different from everybody else," said Manson.
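Purpose hasn't published its internals, but a persistent memory layer typically pairs a durable store of past exchanges with retrieval at prompt time, so each new reply is grounded in the user's history. Below is a minimal sketch of that general pattern in Python; the class name, schema, and keyword-based recall are illustrative assumptions, not Purpose's actual design.

```python
import sqlite3
import time

class PersistentMemory:
    """Minimal sketch of a persistent memory layer for a chatbot.

    Stores every exchange durably (SQLite here) so later sessions can
    recall a user's history. Names and schema are hypothetical; a real
    system would likely use embeddings and semantic search for recall.
    """

    def __init__(self, db_path="memory.db"):
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            """CREATE TABLE IF NOT EXISTS turns (
                   user_id TEXT, role TEXT, content TEXT, ts REAL
               )"""
        )

    def remember(self, user_id, role, content):
        # Persist one conversational turn across sessions.
        self.conn.execute(
            "INSERT INTO turns VALUES (?, ?, ?, ?)",
            (user_id, role, content, time.time()),
        )
        self.conn.commit()

    def recall(self, user_id, query, limit=5):
        # Naive keyword recall, newest first.
        rows = self.conn.execute(
            "SELECT content FROM turns WHERE user_id = ? "
            "AND content LIKE ? ORDER BY ts DESC LIMIT ?",
            (user_id, f"%{query}%", limit),
        ).fetchall()
        return [r[0] for r in rows]

def build_prompt(memory, user_id, new_message):
    # Prepend recalled history so the model can challenge the user's
    # current statements against what it already knows about them.
    history = memory.recall(user_id, new_message.split()[0])
    context = "\n".join(f"Previously: {h}" for h in history)
    return f"{context}\nUser: {new_message}"
```

The design choice the sketch illustrates is the one Manson describes: because the store persists between sessions, the system can surface a user's earlier statements and question inconsistencies, rather than simply affirming whatever was just typed.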
In this way, Purpose's approach echoes existentialism's emphasis on the individual over classical philosophy's universalism. When its most visible proponent, Jean-Paul Sartre, wrote that "existence precedes essence," he was pointing out the need for a personalized philosophy, not a one-size-fits-all way of doing things.
That's because people aren't widgets. We don't come into this world with prefabricated values; we create them through our unique experiences. "In contrast to other entities, whose essential properties are fixed by the kind of entities they are, what is essential to a human being—what makes her who she is—is not fixed by her type but by what she makes of herself, who she becomes," according to the Stanford Encyclopedia of Philosophy.
An Ethos For This Moment
If we stop to think about it, existentialist ideals, the type on display in Purpose's value proposition, have quietly informed many of daily life's tools. Silicon Valley realized years ago how much consumers value customization. We experience this in many ways, from Amazon's tailored book lists based on our reading habits, to Netflix's movie recommendations informed by our viewing history, to YouTube's algorithms tuned to how long we watch a video and what we've liked or commented on.
Search has also caught the existentialist bug. There's a reason fewer people simply "Google it" anymore, instead relying on language model optimization (LMO) to surface specifically relevant information, something I recently covered in Forbes. As Claude Zdanow, CEO of Onar Holding Corporation, told me: "Language model optimization is about creating content that's actually relevant and useful so that AI, not just a search engine, can interpret it, trust it, and serve it up as the best answer. It's no longer about finessing the system. It's about genuinely solving a user's problem."
The Market Response to Fear and Doubt
Whether we're discussing Purpose or other applications like Replika, a widely known companion app; Pi, a personal AI offering empathetic everyday guidance; or Wysa, a mental health advisor employing mindfulness techniques, all these technologies trade in alleviating uncertainty. Philosophy itself arose to address the vexing challenge everyone must face: not knowing what will come next, yet having to push forward anyway. As Søren Kierkegaard, another existentialist luminary, put it so well, "Life can only be understood backwards; but it must be lived forwards."
Another uncertainty plaguing the therapy market is privacy. "If the data used to train a chatbot include sensitive patient or business information, it becomes part of the data set used by the chatbot in future interactions," according to the Journal of Medical Internet Research.
Data privacy is a major concern for users of AI platforms like Purpose; the risk that personal disclosures will reach unintended audiences or be used without authorization cannot be overlooked. That's why Purpose has taken a "privacy-first" approach, implementing the kind of strict encryption commonly used by banks and financial institutions. And by operating on a subscription model and avoiding ads, Purpose ensures user data isn't exploited for profit.
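The company hasn't detailed its scheme, but "bank-grade" encryption for stored disclosures generally means authenticated symmetric encryption of data at rest. Here is a minimal sketch of that idea, assuming the third-party Python cryptography package; the journal-entry scenario and variable names are hypothetical, not Purpose's actual implementation.

```python
from cryptography.fernet import Fernet  # AES-128-CBC with HMAC authentication

# Generate a symmetric key (a real service would load this from a
# secrets manager, never store it beside the data).
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt a user's disclosure before it ever touches disk.
entry = "I want to stop chasing status and focus on my family."
token = cipher.encrypt(entry.encode("utf-8"))

# Only a holder of the key can recover (or even tamper with) the plaintext.
assert cipher.decrypt(token).decode("utf-8") == entry
```

The practical point is that a breached database then yields only ciphertext; without the separately guarded key, a user's most personal admissions stay unreadable.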
Recent headlines have highlighted the dangers of AI therapy gone wrong, including suicides linked to harmful human-AI interactions. This underscores the importance not only of privacy but also of careful oversight and regulation of AI apps offering advice. As users, we must be vigilant in ensuring that the platforms we engage with prioritize our well-being and safety.
When it comes to choosing an AI platform to navigate life's uncertainties, the real test lies in how users interact with it. Whether you opt for Purpose or another platform, the key is to take personal responsibility for your choices. It's about being proactive in seeking guidance and support, whether from a human or an AI accountability partner. This individual or system should serve as a mirror, reflecting back truths that may be uncomfortable but are necessary for personal growth.
Ultimately, the path to self-improvement and personal development is a personal choice. Life demands that we confront our struggles and challenges head-on, and the support we choose to seek along the way plays a crucial role in our journey. As we embrace the opportunities offered by AI technology, let us do so with a sense of mindfulness and responsibility, ensuring that our interactions with these platforms are guided by our best interests and well-being.

