Key points
- Apple introduces Visual Intelligence in iOS 18, rivaling Google Lens.
- The feature identifies objects via the new Camera Control button.
- Visual Intelligence relies on on-device processing, with no images stored on Apple's servers.
- Apple hints at third-party model integration for more advanced queries.
Apple's latest addition to iOS 18 is a direct challenge to Google's AI capabilities: Visual Intelligence, a feature designed to make identifying real-world objects effortless.
This feature will debut as part of the Apple Intelligence suite later this year, offering iPhone users a seamless way to gather information about their surroundings through their device’s camera.
The Power of Apple’s Visual Intelligence
Apple’s Visual Intelligence allows users to identify objects by simply pointing their iPhone camera at them. It’s powered by Camera Control, a new capacitive button on the iPhone 16 and 16 Pro that adds more depth to the camera experience.
By clicking and holding this button, users can trigger Visual Intelligence, which will scan the object and provide relevant information immediately.
Whether it’s identifying a dog breed or learning about the architecture of a building, the technology promises real-time, useful insights.
The goal, according to Apple’s Craig Federighi, is to help users “instantly learn about everything you see.”
This sets the stage for Apple to compete directly with Google Lens, an already well-established object-recognition tool; Google Lens's official page gives a sense of what Apple is up against.
Privacy-Focused and Third-Party Integration in Visual Intelligence
One of the standout features of Visual Intelligence is Apple's commitment to privacy. The system combines on-device intelligence with Apple services, and no images are stored on Apple's servers.
This on-device processing ensures that personal data remains private, aligning with Apple’s overarching privacy-focused strategy. You can read more about Apple’s privacy policies here.
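Apple hasn't published a developer API for Visual Intelligence, but its existing Vision framework shows how on-device image classification works in principle. The sketch below is purely illustrative: it runs a built-in classifier locally, so the photo never leaves the device, and the 0.3 confidence cutoff is an arbitrary choice, not anything Apple has specified.

```swift
import Vision
import CoreGraphics

// Illustrative only: this uses Apple's existing Vision framework, not the
// (unpublished) Visual Intelligence API. The image is analyzed entirely
// on-device, which is the privacy model the article describes.
func classifyOnDevice(_ image: CGImage) throws -> [(label: String, confidence: Float)] {
    let request = VNClassifyImageRequest()                            // built-in on-device classifier
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])                                    // no network call involved

    // Keep only reasonably confident labels; the 0.3 threshold is arbitrary.
    return (request.results ?? [])
        .filter { $0.confidence > 0.3 }
        .map { ($0.identifier, $0.confidence) }
}
```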
Apple’s push toward third-party integration makes this feature even more exciting. Federighi hinted that Visual Intelligence would serve as a gateway to third-party models.
This could mean that users might soon search for a product on Google or use external models for more advanced queries directly from the Apple platform.
Imagine pointing your camera at a bike, and within moments, not only learning its brand but also getting purchase options from various online retailers.
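Apple hasn't said how that handoff to external models would work, so any concrete interface is guesswork. As a rough sketch under that assumption, an app today could forward a captured photo to a third-party vision service over HTTPS using standard URLSession calls; the endpoint and response format below are entirely hypothetical.

```swift
import Foundation

// Hypothetical sketch: the URL and plain-text response are invented for
// illustration; Apple has not documented a third-party handoff interface.
func queryExternalModel(jpegData: Data) async throws -> String {
    var request = URLRequest(url: URL(string: "https://example.com/vision/query")!) // hypothetical endpoint
    request.httpMethod = "POST"
    request.setValue("image/jpeg", forHTTPHeaderField: "Content-Type")
    request.httpBody = jpegData

    // Send the image and read back the service's description of the object.
    let (data, _) = try await URLSession.shared.data(for: request)
    return String(decoding: data, as: UTF8.self)
}
```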
Apple didn’t provide an exact release date for Visual Intelligence, only stating that it would arrive later this year. However, the mere mention of third-party integration has sparked excitement in the tech community.
A New Era of On-the-Go Information with Visual Intelligence
As tech enthusiasts eagerly await the rollout of iOS 18, Visual Intelligence promises to enhance how iPhone users interact with the world around them.
Apple’s built-in take on Google Lens appears to take user privacy a step further while promising integration with third-party models.
This sets a new benchmark for real-time object recognition and could pave the way for broader AI integrations within the Apple ecosystem.
To learn more about how Visual Intelligence works, keep an eye on upcoming updates from Apple.
Once iOS 18 rolls out, you’ll get a practical look at how this AI-driven feature changes your day-to-day smartphone experience.