AI Struggles to Replace Knowledge Work: New Research Reveals Challenges
It’s been nearly two years since Microsoft CEO Satya Nadella predicted that AI would replace knowledge work: the white-collar jobs held by lawyers, investment bankers, librarians, accountants, IT workers, and others.
Despite significant progress made by foundation models, the transformation of knowledge work has been slow to materialize. While models have excelled in in-depth research and agentic planning, white-collar work has remained largely unaffected.
However, new research from training-data giant Mercor sheds light on why this transition has been challenging. The study examines how leading AI models perform tasks in consulting, investment banking, and law, resulting in a benchmark called APEX-Agents. The findings reveal that even the best AI models correctly answer no more than about a quarter of the questions posed by real professionals; the rest of the time, the models give incorrect answers or none at all.
One of the key findings of the research is that AI models struggle with multi-domain reasoning, a crucial aspect of many knowledge work tasks. Real professionals operate across various tools and platforms, requiring the ability to integrate information from different sources seamlessly.
The scenarios used in the benchmark were sourced from professionals on Mercor’s expert marketplace, setting a high standard for AI performance. The complexity of the tasks highlights the challenges that AI models face in emulating human professionals.
For instance, a question from the “Law” section reads:
During the first 48 minutes of the EU production outage, Northstar’s engineering team exported one or two bundled sets of EU production event logs containing personal data to the U.S. analytics vendor….Under Northstar’s own policies, it can reasonably treat the one or two log exports as consistent with Article 49?
The correct answer requires a detailed analysis of company policies and EU privacy laws, showcasing the depth of knowledge needed to tackle such tasks in the legal field.
While AI models have made progress, they are still far from being able to replace professionals in high-value fields like investment banking. The results show that some models, such as Gemini 3 Flash and GPT-5.2, performed better than others but still fell well short of professional-level accuracy.
Despite the initial challenges, the AI field has a track record of overcoming difficult benchmarks. With the APEX-Agents test now public, it presents an opportunity for AI labs to improve their models and strive for better performance in the future.
According to researcher Brendan Foody, the rapid improvement in AI capabilities suggests that the technology could have a significant impact on knowledge work in the near future. While current models may perform like interns with limited accuracy, ongoing advancements indicate a promising trajectory toward greater proficiency.