
Apple is building a high-security OS to run its AI data centers - here's what we know so far

Apple plans to make the hardened derivative of iOS and macOS that runs on its Private Cloud Compute servers available to security researchers, so they can verify the company's privacy and security promises.
Written by Tiernan Ray, Senior Contributing Writer

During last week's introduction of Apple Intelligence, Apple software engineering head Craig Federighi announced that the company will run some generative AI models in a secure cloud computing environment.

Called Private Cloud Compute (PCC), the service will be subject to scrutiny by outside security experts. Federighi said: "Just like your iPhone, independent experts can inspect the code that runs on these servers to verify this privacy promise." Those promises include that user data will never be stored on PCC servers and will be expunged from memory once a request is fulfilled.

Also: Here's how Apple's keeping your cloud-processed AI data safe (and why it matters)

Federighi did not go into detail about how security researchers will inspect or audit the PCC servers, but a subsequent Apple blog post states that the servers will run a distinct version of the company's operating system software, which researchers will be allowed to inspect.

"When we launch Private Cloud Compute, we'll take the extraordinary step of making software images of every production build of PCC publicly available for security research," the Apple Security Engineering and Architecture and collaborating teams wrote.

The blog post goes on to say that Apple will "periodically also publish a subset of the security-critical PCC source code, [and] in a first for any Apple platform, PCC images will include the sepOS firmware and the iBoot bootloader in plaintext, making it easier than ever for researchers to study these critical components."

Apple emphasizes that its devices "will be willing to send data only to PCC nodes that can cryptographically attest to running publicly listed software" as a means to ensure its privacy and security guarantees. 
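To picture how such a gate might work, here is a minimal Swift sketch, assuming a hypothetical attestation record and a published set of build measurements; the type and function names are illustrative, not Apple's actual API.

import Foundation
import CryptoKit

// Hypothetical attestation record for a PCC node: the node presents a
// measurement (hash) of the software image it claims to be running.
// These names are illustrative, not Apple's actual API.
struct NodeAttestation {
    let softwareMeasurement: SHA256Digest
}

// Client-side gate, sketched: release a request only to a node whose
// attested measurement matches one of the publicly listed PCC builds.
func shouldSendRequest(to attestation: NodeAttestation,
                       publishedMeasurements: Set<Data>) -> Bool {
    publishedMeasurements.contains(Data(attestation.softwareMeasurement))
}

// Example with made-up values standing in for published software images.
let publishedImage = Data("pcc-production-build".utf8)
let publishedMeasurements: Set<Data> = [Data(SHA256.hash(data: publishedImage))]
let node = NodeAttestation(softwareMeasurement: SHA256.hash(data: publishedImage))
print(shouldSendRequest(to: node, publishedMeasurements: publishedMeasurements)) // true

In practice, attestation of this kind would involve hardware-backed signatures rather than a bare hash comparison, but the principle the blog post describes is the same: the device checks the node's claimed software against a public list before any data leaves the device.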

Apple makes various promises about the safety and security of using Private Cloud Compute to process some AI tasks. (Image: Apple)

Apple provided little detail about the nature of the server software, other than that it is derived from the iOS and macOS operating systems.

The servers will run on Apple's own chips, as the iPhone, iPad, and Mac do, and will be powered by "a new operating system: a hardened subset of the foundations of iOS and macOS tailored to support large language model (LLM) inference workloads while presenting an extremely narrow attack surface. This allows us to take advantage of iOS security technologies such as Code Signing and sandboxing."

Also: Apple's AI extravaganza left out 2 key advances - maybe next time?

Apple's iOS and macOS are based on a combination of open-source technologies, such as the Darwin operating system developed at Apple in the 1990s and FreeBSD, and closed-source software developed at Apple.

It's unclear when researchers will get a look at the new software. In the blog post, Apple says it will give security researchers a "first look" at the software "soon." A note on Apple's developer site says Apple Intelligence will be available "in an upcoming beta," without mentioning anything specific about PCC timing.

ZDNET's Maria Diaz speculates that iOS 18 betas will become available in July, although Apple's website states in a footnote that "Apple Intelligence will be available in beta on iPhone 15 Pro, iPhone 15 Pro Max, and iPad and Mac with M1 and later, with Siri and device language set to US English, as part of iOS 18, iPadOS 18, and macOS Sequoia this fall."
