At WWDC 2025, Apple introduced new features for its Apple Intelligence platform, including ChatGPT integration in Image Playground, live translation across Messages, FaceTime, and the Phone app, and upgrades to Visual Intelligence. A new Foundation Models framework gives developers on-device access to Apple Intelligence in their apps. Despite a bumpy first year, Apple remains committed to improving its AI tools and features for users.
Apple has kicked off a new chapter for its AI tools at WWDC 2025, focusing on updates to its Apple Intelligence platform. The company acknowledged the mixed reception its AI offerings have received so far but remains committed to refining them. Among the notable announcements are enhancements to Genmoji and Image Playground, aimed at boosting user creativity in messaging apps.
One major new feature is Image Playground in Messages, which now lets users create vibrant backgrounds for group chats. The tool has also been integrated with ChatGPT, broadening its capabilities; if users choose to generate images through ChatGPT on their iPhones, their information is shared with OpenAI only with their explicit permission.
Genmoji, meanwhile, now lets users combine two emoji from the Unicode library into new, humorous characters. For instance, merging a sloth and a light bulb could convey being slow to get a joke.
The upgrades do not stop there. Apple has introduced live translation across its Messages, FaceTime, and Phone apps. A user typing in Messages will see their words translated into the recipient's preferred language as they type, and incoming replies are translated in real time as well. FaceTime users get live translated captions during calls, while phone calls offer spoken translations.
Visual Intelligence is also being enhanced. It can now work with what is on the iPhone's screen in addition to what the camera sees. Through the ChatGPT integration, users can ask questions about whatever they are looking at, and they can search for similar images or products on platforms like Google and Etsy. Intriguingly, if Visual Intelligence recognizes that a user is looking at an event, it will suggest adding the event to their calendar, a pretty handy feature if you ask me. Accessing all of this is as simple as pressing the same buttons normally used to take a screenshot.
Developers, too, have something to look forward to: Apple announced the Foundation Models framework, which gives apps direct access to the on-device Apple Intelligence models, with offline availability and privacy protection built in. Educational apps like Kahoot! could use it to generate personalized quizzes, and Apple says it is user-friendly enough to adopt with minimal code.
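To give a sense of how minimal that code can be, here is a rough Swift sketch of the kind of call Apple showed in its WWDC materials, using the framework's LanguageModelSession API. The Quiz type, the @Guide description, and the prompts are hypothetical illustrations, not Kahoot!'s actual implementation.

```swift
import FoundationModels

// Hypothetical quiz shape for guided generation: the @Generable macro
// lets the on-device model return a typed Swift value directly.
@Generable
struct Quiz {
    @Guide(description: "Three short questions about the given topic")
    var questions: [String]
}

func generateQuiz(about topic: String) async throws {
    // The session runs entirely on-device, so it works offline and
    // prompts never leave the user's iPhone.
    let session = LanguageModelSession()

    // Free-form text response.
    let idea = try await session.respond(to: "Suggest a fun angle for a quiz about \(topic).")
    print(idea.content)

    // Typed response matching the Quiz structure above.
    let quiz = try await session.respond(
        to: "Write a quiz about \(topic).",
        generating: Quiz.self
    )
    print(quiz.content.questions)
}
```

Because the model runs locally, there is no server, API key, or network setup involved, which is presumably what Apple means by minimal code.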
The Shortcuts app is set to receive upgrades on both iOS and macOS, adding actions driven by Apple Intelligence. A student could, for example, build a shortcut that compares a class lecture against their own notes, and here too ChatGPT can be brought in.
Additional improvements include a new Apple Wallet capability that summarizes order-tracking information from merchants and delivery carriers, consolidating everything in one convenient spot.

A year after Apple Intelligence debuted, questions persist about how well it actually works. Last year, Apple touted a smarter Siri as a highlight, yet users are still waiting for that upgraded assistant after the company postponed its release earlier this year, and some earlier features shipped with bugs, like the notification summaries that needed reworking for clarity. Apple still trails big competitors like Google, but today's announcements suggest a continued focus on making its features practically useful.