3 ways Apple can win the AI race

Intro

I ended my last post by mentioning one of Apple’s advantages in the emerging AI race, but it’s a topic that deserves its own writeup. Apple’s been taking a lot of heat for lagging behind in AI, which is pretty funny when you consider Apple’s history. This is not a company that races to be first in a space; this is a company that waits until it can be the best in a space. They didn’t release the first computer, music player, smartphone, tablet, watch, headphones, or VR headset. They released the best version of those products — and I don’t expect AI to be any different.

It’s worth mentioning that Apple rarely uses the term “AI”, preferring to use “machine learning” instead. Technically speaking, AI is an umbrella term and machine learning is a subset, but when people talk about AI today they’re mostly referring to generative AI and large language models like ChatGPT. When it comes to certain AI / machine learning tasks like facial recognition and computational photography, you could argue that Apple is leading the race! But of course, the field of AI that everyone cares about right now is generative AI, and in that category Apple is currently behind.

With all that said, here are the three main reasons why I think Apple still wins this race.

Hardware

In 2017, Apple announced that the iPhone 8 would feature a “Neural Engine” — a dedicated coprocessor designed to accelerate machine learning tasks. They’ve continued to build on that technology ever since, rolling it out to iPads and Apple Watches in 2018, M-series computers in 2020, the Apple TV in 2021, and even the Studio Display in 2022. So far it’s been used for things like Face ID, speech and image recognition, computational photography, power optimization, and more.

The most significant part of Apple Silicon and its Neural Engine is efficiency — the ability to run complex machine learning instructions with extremely low power consumption. Apple’s hardware might not yet rival the performance of a high-end PC with a $2000 graphics card, but you can’t carry that desktop computer in your pocket. Efficient power consumption means that advanced machine learning tasks are possible on existing smartphones, tablets, laptops, and even watches…without heating up or eating through battery.

Whenever we use ChatGPT or a similar LLM, we’re sending requests to the cloud and waiting for a response. This can be slow, and in the case of GPT-4 (the “real” ChatGPT), it comes with a limit of 40-50 messages every 3 hours. During peak use periods, the speed and capability of ChatGPT are further throttled. The fact is, good AI is computationally expensive — which is how OpenAI can justify charging $20 USD/month for ChatGPT Plus.

A dreamy depiction of cloud computing made tangible, with decision trees emanating from floating islands.

As our personal computing devices get more powerful, it becomes downright wasteful to use the cloud for every task. I use ChatGPT almost every day, but a lot of the things I’m using it for are simple enough to be handled on my local device. As the cost of storage decreases, locally hosting large model files becomes relatively trivial without impacting normal usage. In terms of efficiency, ideally all requests should be triaged locally, processed on device if possible, and sent to the cloud only if more computation is needed. A good example of this is voice recognition; it doesn’t make sense to upload large audio files to the cloud for processing when even the lowly current version of Siri does a great job of processing voice locally. I can speak into my four-year-old Apple Watch while offline, and it works instantly and flawlessly.
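The local-first triage described above can be sketched in a few lines. This is purely illustrative — `estimate_complexity`, the word-count heuristic, and the budget threshold are all hypothetical stand-ins, not any real Apple or OpenAI API:

```python
# Hypothetical sketch of local-first request triage. The complexity
# heuristic (word count) and the budget threshold are illustrative
# assumptions, not a real routing algorithm.

def estimate_complexity(request: str) -> int:
    """Crude proxy for how much compute a request needs (word count here)."""
    return len(request.split())

def triage(request: str, local_budget: int = 50) -> str:
    """Route a request: handle it on device if it fits the local budget,
    otherwise escalate to the cloud."""
    if estimate_complexity(request) <= local_budget:
        return "on-device"
    return "cloud"
```

In practice the heuristic would be far more sophisticated (task type, model availability, battery state), but the shape of the decision — try locally first, fall back to the cloud — stays the same.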

There are a lot of rumours that Siri is due for a big upgrade at WWDC 2024, which seems to be supported by some of Apple’s recent actions. Apple recently published a research paper on running LLMs on devices with limited memory, which could provide a major boost to ML processing on smartphones. They also released a new open-source machine learning framework called MLX, which allows developers to build models that run more efficiently on Apple Silicon. In terms of image-based ML, Apple also released an open-source model called Ferret. It’s still early days, but the fact that Apple is releasing public research papers and open-source projects is great for the AI development community, and bodes well for future improvement.

Apple won’t be the only company providing on-device AI services, but it’s an area where they already have a head start. Between the huge existing proliferation of Apple devices with Neural Engine chips and their years of experience handling advanced Machine Learning tasks, Apple is well positioned from a hardware perspective.

Personalization

One of the frustrating things about working with LLMs is that they don’t know anything about you. ChatGPT Plus offers custom instructions, including a 1500-character field for providing background information. As useful as this is for improving context, it’s not nearly enough data to truly personalize your experience.

Even if these limits were removed, most people wouldn’t be comfortable uploading all of their personal information to the cloud. That also presumes there’s an easy way of exporting all your personal information, which there isn’t. This is where the Apple ecosystem has a major advantage: it has the ability to create a secure, private, local database of all your personal information. Every text, photo, email, and document you’ve ever created or received could feed a standalone personalization model. Same goes for your location and internet history, the apps you use regularly…everything you’ve ever seen or done could be used to power the ultimate personal assistant. With great power comes great responsibility, which is where Apple’s focus on privacy and security comes into play.

An artistic depiction of AI personalization, represented by a tree with icons for each type of data and roots stretching to the horizon in every direction.

I’m sure some people would cringe at the idea of giving all their personal information to a virtual assistant, even if it doesn’t leave their device. For that reason, I’d imagine the personalization component to be an opt-in feature, with the ability to choose which pieces of data the assistant has access to. As ChatGPT has shown, there’s plenty that can be done without personalization, but those of us willing to opt-in would gain a ton of new functionality.
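The opt-in model described above is simple to sketch: the assistant only ever sees the data sources a user has explicitly enabled. Everything here — the source names, the registry, the function — is hypothetical, just a shape for the idea:

```python
# Illustrative sketch of opt-in personalization: an assistant builds its
# local context only from explicitly enabled data sources. All names
# here are hypothetical assumptions, not a real Apple API.

ALL_SOURCES = {
    "messages": "recent texts",
    "photos": "photo library metadata",
    "location": "location history",
    "browsing": "internet history",
}

def build_context(opted_in: set) -> dict:
    """Return only the data sources the user has explicitly opted in to."""
    return {name: data for name, data in ALL_SOURCES.items() if name in opted_in}
```

A user who enables nothing gets an assistant with no personal context at all, which matches how ChatGPT works today; each source they toggle on adds a slice of local data the assistant can draw from.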

Data

One of the latest developments in AI is that the New York Times is suing OpenAI and Microsoft for copyright infringement. There’s also an ongoing copyright claim by Thomson Reuters against multiple tech companies. Meanwhile, Apple has recently started negotiating with major news and publishing organizations, offering multiyear data licensing deals starting at $50 million. For traditional content companies that have spent decades struggling to monetize, making tens to hundreds of millions from their archived data is a pretty appealing offer.

One of the more common and valid criticisms of LLMs is that they’re essentially ripping off content creators without compensation. With their licensing offers, Apple is effectively positioning itself as a company that pays fair value for the data it uses. If these AI lawsuits are successful, this becomes a position of strength from a legal standpoint. If the lawsuits fail, Apple is still positioning itself favourably from a morality standpoint.

An artistic depiction of the role of data in large language models, with a meadow of trees. One of the trees sits atop a circuit board with apples on the ground, loosely representing Apple's deals with content publishers.

Now, this isn’t to say that OpenAI won’t be able to offer their own licensing deals. Just recently they partnered with Axel Springer for access to data from Politico, Business Insider, and more. Where Apple has the potential for a competitive advantage is their massive amount of cash on hand. They’ve been sitting on a multi-billion-dollar rainy-day fund for years, and we’re approaching monsoon season.

Another interesting potential advantage for Apple is their ability to provide favourable App Store or Apple News placement in exchange for exclusive licensing rights. In an industry that’s all about attention, Apple has an unfair advantage in terms of promoting apps and content. It remains to be seen whether they’ll leverage that advantage, but it’s worth pointing out. We haven’t yet seen any deals for exclusive data rights, but as the AI race goes on it’s a very real possibility. Unfortunately the biggest losers in this licensing war will be the open-source models, since the open-source community can’t compete on licensing budgets.

One more thing

This wouldn’t be a proper Apple post without one more thing. In this case, the bonus competitive advantage I want to highlight is the same one that Apple has leveraged in all of its other products: quality.

While I won’t pretend to know what Apple has up its sleeve, as a thought experiment it’s worth considering what ‘high quality’ would look like in the context of AI. Here’s a rundown of my current thinking:

Fully replacing Siri with a more capable version

Siri has been around for 13 years, and is reportedly built on a clunky database that takes weeks to be updated with basic features. A ground-up redesign seems inevitable, and the current AI revolution seems like the perfect time to rip off the band-aid and start fresh. Thanks to ChatGPT, people’s expectations of an AI assistant have increased dramatically. Being able to perform complex multi-step tasks has become table stakes.

A robo-depiction of a new powerful version of Siri.

Cross-platform integration

Similar to how Microsoft is building its GPT-powered Copilot into the Office suite, a quality AI assistant should work seamlessly across applications. Apple’s overall app ecosystem is much more well-rounded than Microsoft’s Office suite, which means more becomes possible. When interacting with a virtual assistant, you shouldn’t have to think about constraints.

Higher quality training data

More trusted sources of data, with better associated weights, and lower-quality sources that could pollute output scrubbed out. This is easier said than done, but ultimately you need high-quality inputs to get high-quality outputs.

Built-in safeguards to verify information

Hallucination is one of the major hurdles for LLMs, so it’s reasonable to expect that Apple would put a lot of effort here. For any information that’s ‘verifiable’, this would mean performing online lookups and citing sources before returning results. If offline, flagging unproven information would go a long way toward building trust.
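The cite-or-flag behaviour above reduces to a simple rule: attach a source when a lookup succeeds, attach a warning when it can’t. The sketch below is entirely hypothetical — `lookup_source` is a stub, and the placeholder URL stands in for a real citation:

```python
# Hedged sketch of the "cite sources or flag as unverified" safeguard.
# lookup_source is a hypothetical stub, not a real fact-checking API.

def lookup_source(claim: str, online: bool):
    """Stub for an online fact lookup; returns a source URL or None."""
    if online:
        return "https://example.com/source"  # placeholder, not a real citation
    return None

def answer_with_verification(claim: str, online: bool) -> str:
    """Return the claim with a citation if verifiable, or a warning if not."""
    source = lookup_source(claim, online)
    if source:
        return f"{claim} [source: {source}]"
    return f"{claim} [unverified: could not check sources offline]"
```

The interesting design choice is that the assistant never silently asserts an unchecked claim; every answer carries either a citation or an explicit caveat.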

Copilot for Xcode & new no-code/low-code app building tools

Xcode is in desperate need of a copilot upgrade. GPT-4 is great at writing SwiftUI code, but coding assistants work best inside a code editor. We’ve seen some amazing things in the world of AI-based no-code tools, from simple sketching interfaces like makereal to complex but powerful apps like FlutterFlow. There’s a massive opportunity for Apple to combine its world-class design with AI to build a no-code app platform for building SwiftUI apps.

Wrap-up

A depiction of a giant Apple VOLTRON robot surrounded by Apple devices.

Apple has all of the pieces in place to create the VOLTRON of AI assistants, and I have no reason to believe they won’t. This is a company that’s been at the leading edge of machine learning for over a decade; just because they haven’t rushed to put out their own LLM in 2023 doesn’t mean they haven’t been working on something big. There have been multiple reports that Apple has been spending millions of dollars a day training AI, so I’m inclined to believe there’s something there.

Apple has always chosen the slower and more deliberate approach, and for something as nascent and volatile as LLMs it’s easy to understand why. There’s a lot at stake to get this right, and I’m sure Apple is just as tired of hearing all the Siri jokes as the rest of us.

During a recent CNBC interview with senior Apple execs Johny Srouji and John Ternus, they were asked about Apple falling behind in AI. Ternus’s reaction was pretty telling:

Apple SVP John Ternus responding to a question about Apple lagging behind in AI by laughing and saying 'Not too worried. Not too worried'.

So yeah, don’t sleep on the giant fruit company.