

Anyone who follows current news from science and technology can get the impression that everything revolves around AI and machine learning and their integration into as many areas of human life as possible. Amid the many announcements, pronouncements and promises, questions of a fundamental nature often remain unanswered. Anyone who wants an overview usually has to work through a great deal of research literature to painstakingly collect the individual aspects – or attend a conference and listen to people who have already done that work.

Apple held such a conference in July 2025 under the overarching theme “Reasoning & Planning”. The company is now publishing the recordings of eight lectures on its in-house machine learning blog. Seven months is a long time given the current pace of scientific publication in the field of machine learning and large language models (LLMs). Nevertheless, the videos recorded last summer are worth watching, as they provide an overview of key aspects of research and development.

The lecture by Melanie Mitchell from the Santa Fe Institute is dedicated to the question “How do LLMs work and why?” Her research area lies at the intersection of cognitive science and AI. She shows that, despite all the progress, it often remains unclear how LLMs arrive at (correct) results. LLMs specializing in visual tasks fail at graphical logic tasks that are relatively easy for text LLMs (and humans) to solve.

Optimal training for agents
Philipp Krähenbühl teaches at the University of Texas and also works as a researcher at Apple. His lecture addresses the question “How do you design efficient training methods for interactive agents?” The goal is a universal LLM that helps tackle complex questions. He highlights current findings on the reinforcement learning required to achieve this.

Links to 30 publications
In addition to the eight lectures by scientists from various US universities and by Apple developers – all under half an hour long – the blog post lists 30 publications that were discussed at the event. Apple conducts extensive basic research in machine learning and is increasingly focusing on practical applications: the current version of Xcode supports agentic coding using external LLMs, such as those from Anthropic and OpenAI.















