Apple shares how its M1 Mac chips bring a lot of iPhone security tech to its computers
The tech giant says it's been building privacy protections into the iPhone for years, but it couldn't bring all of them to the Mac until the M1.
Ian Sherr, Contributor and Former Editor at Large / News
For years, Apple has touted the security built into its iPhones and iPads. More than a decade ago, it added ways to encrypt information on the iPhone. In 2011, it introduced encrypted messaging with iMessage. And in 2013, it introduced Touch ID biometric sensors to help people unlock their phones. Over the years, it's brought those technologies to the Mac too -- but now, with its new M1 chips for the Mac mini, MacBook Air and MacBook Pro, it can supercharge those efforts.
On its website Thursday, Apple updated its Platform Security documents, describing how Mac computers now work much more like their iPhone counterparts. The documents dive into the nitty-gritty details of how various security systems within computers and phones talk to one another, and how they're designed to protect an Apple user's privacy.
"Secure software requires a foundation of security built into hardware," Apple said in its security update, which came in at nearly 200 pages long. "That's why Apple devices -- running iOS, iPadOS, macOS, tvOS, or watchOS -- have security capabilities designed into silicon."
It may seem odd for a company as secretive as Apple to share so much detail about nearly anything it does. The tech giant is as well known for its marketing as for its devices, and while the company does share some technical details about its products on its website, that information is meant for general audiences.
The Platform Security information, though, is different. Apple said it began publishing this information for business customers more than a decade ago. But the company soon learned that the security researchers it works with to identify vulnerabilities in its devices found it helpful too. That's part of why you'll find terms like "kernel integrity protection" and "pointer authentication codes," both of which are part of the company's various security systems.
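To give a rough sense of what a term like "pointer authentication codes" means: on Apple's ARM-based chips, the CPU stores a short cryptographic signature in the unused high bits of a pointer, so a pointer that has been tampered with fails verification before it's used. The sketch below is a simplified illustration of that idea in Python, not Apple's actual implementation -- the key name, PAC width, and 48-bit pointer assumption are all made up for the example; the real scheme runs in silicon with per-process keys held in CPU registers.

```python
import hmac
import hashlib

KEY = b"per-process-secret-key"  # hypothetical; real keys never leave the CPU
PAC_BITS = 16                    # assumed signature width for this sketch
PTR_BITS = 48                    # assumes pointers fit in the low 48 bits

def sign(pointer: int, context: int) -> int:
    """Embed a truncated MAC of (pointer, context) in the pointer's high bits."""
    msg = pointer.to_bytes(8, "little") + context.to_bytes(8, "little")
    mac = hmac.new(KEY, msg, hashlib.sha256).digest()
    pac = int.from_bytes(mac[:2], "little") & ((1 << PAC_BITS) - 1)
    return (pac << PTR_BITS) | pointer

def authenticate(signed: int, context: int) -> int:
    """Strip the PAC and verify it; fail if pointer or context was altered."""
    pointer = signed & ((1 << PTR_BITS) - 1)
    if sign(pointer, context) != signed:
        raise ValueError("pointer authentication failed")
    return pointer
```

A corrupted return address or an attacker-controlled pointer would carry the wrong signature for its context, so the check in `authenticate` rejects it -- that's the property the hardware feature provides, at far lower cost.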
Apple isn't the only company that works with security researchers, of course. Over the past decade, the tech industry at large has instituted "bug bounty" programs that pay outside researchers to help identify vulnerabilities in their products. Companies including Microsoft, Google and Facebook have paid out large sums and have publicly thanked security experts for identifying issues before they can be widely exploited by hackers. Apple itself pays up to $1.5 million for such bounties.
Apple said it designs its security systems in part to encourage people to use them, or to have them run in the background without people needing to know how they work.
For example, iMessage has encryption built in -- users don't have to turn it on. Apple also built its Touch ID fingerprint sensor and Face ID face-unlock system to encourage people to use its encryption systems, which are activated when people set a passcode. Before Touch ID, for example, Apple said only 49% of people used passcodes on their phones. After its introduction, 92% did.
"This is important because a strong passcode or password forms the foundation for how a user's iPhone, iPad, Mac, or Apple Watch cryptographically protects that user's data," Apple said in its security document.