What Is Apple’s Neural Engine and How Does It Work?
Your iPhone, iPad, Mac, and Apple TV include a specialized neural processing unit called the Apple Neural Engine (ANE) that's far faster and more energy-efficient at machine learning tasks than the CPU or GPU.
ANE makes possible advanced on-device features such as natural language processing and image analysis without tapping into the cloud or using excessive power.

Let’s explore how ANE works and its evolution, including the inference and intelligence it powers across Apple platforms and how developers can use it in third-party apps.
What Is Apple Neural Engine (ANE)?
Apple Neural Engine is a marketing name for a cluster of highly specialized compute cores optimized for the energy-efficient execution of deep neural networks on Apple devices. It accelerates machine learning (ML) and artificial intelligence (AI) algorithms, offering tremendous speed, memory, and power advantages over the main CPU or GPU.
ANE is a big part of why the latest iPhones, iPads, Macs, and Apple TVs stay responsive and don't get hot during heavy ML and AI computations. Unfortunately, not all Apple devices have an ANE: the Apple Watch, Intel-based Macs, and devices from 2016 or earlier lack one.

The first ANE, which debuted in Apple's A11 chip inside 2017's iPhone X, was powerful enough to support Face ID and Animoji. By comparison, the ANE in the A15 Bionic chip is 26 times faster than that first version. Nowadays, ANE enables features like offline Siri, and developers can use it to run previously trained ML models, freeing up the CPU and GPU to focus on tasks that are better suited to them.
How Does Apple’s Neural Engine Work?
ANE provides control and arithmetic logic optimized for massively repetitive operations like multiplication and accumulation (multiply-accumulate, or MAC), the core computations behind ML and AI algorithms such as image classification, media analysis, and machine translation.
According to Apple's patent titled “Multi-Mode Planar Engine for Neural Processor,” ANE consists of several neural engine cores and one or more multi-mode planar circuits.

The design is optimized for parallel computing, where many operations, like matrix multiplications running in trillions of iterations, must be carried out simultaneously.
ANE is built to accelerate inference, that is, running previously trained predictive models against new data. In addition, ANE has its own cache and supports just a few data types, which helps maximize performance.
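To make that concrete, here is a minimal Swift sketch of the multiply-accumulate pattern that a neural-network layer boils down to. It is purely illustrative: developers never program the ANE at this level (they go through Core ML, covered below), and the function names are invented for the example.

```swift
// Illustrative only: the multiply-accumulate (MAC) pattern that neural
// networks reduce to. Hardware like the ANE performs vast numbers of these
// operations in parallel; apps never write this code for the ANE directly.

// One neuron's output: a dot product of inputs and weights, plus a bias.
func neuronOutput(inputs: [Float], weights: [Float], bias: Float) -> Float {
    var accumulator = bias
    for i in 0..<inputs.count {
        accumulator += inputs[i] * weights[i]  // a single multiply-accumulate
    }
    return accumulator
}

// A fully connected layer is many such dot products, i.e. a matrix-vector
// multiply. Deep models chain millions of them, which is why dedicated,
// highly parallel MAC hardware pays off.
func denseLayer(inputs: [Float], weights: [[Float]], biases: [Float]) -> [Float] {
    var outputs: [Float] = []
    for (row, bias) in zip(weights, biases) {
        outputs.append(neuronOutput(inputs: inputs, weights: row, bias: bias))
    }
    return outputs
}
```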

AI Features Powered by ANE
ANE powers on-device features you may already be familiar with, such as Face ID, Animoji, offline Siri, natural language processing, and image analysis and recognition. Some of these features, like image recognition, also function without an ANE present but will run much slower and tax your device's battery.

A Brief History of the Apple Neural Engine: From iPhone X to M2 Macs
In 2017, Apple deployed its very first ANE in the form of two specialized cores within the iPhone X’s A11 chip. By today’s standards, it was relatively slow, at just 600 billion operations per second.
The second-generation ANE appeared inside the A12 chip in 2018, sporting four times the cores. Rated at five trillion operations per second, this ANE was almost nine times faster and used one-tenth of the power of its predecessor.
2019's A13 chip had the same eight-core ANE but ran 20% faster while using 15% less power, a product of TSMC's enhanced 7nm semiconductor node. TSMC (Taiwan Semiconductor Manufacturing Company) fabricates Apple-designed chips.
The Evolution of Apple Neural Engine
| Apple Silicon | Semiconductor Process Node | Launch Date | Operations Per Second | Additional Notes |
| --- | --- | --- | --- | --- |
| A11 Bionic | 10nm TSMC FinFET | 2017 | 600 billion | Apple's first ANE |
| A12 Bionic | 7nm TSMC FinFET | 2018 | 5 trillion | 9x faster than A11, 90% lower power consumption |
| A13 Bionic | 7nm TSMC N7P | 2019 | 6 trillion | 20% faster than A12, 15% lower power consumption |
| A14 Bionic | 5nm TSMC N5 | 2020 | 11 trillion | Nearly 2x faster than A13 |
| A15 Bionic | 5nm TSMC N5P | 2021 | 15.8 trillion | 40% faster than A14 |
| A16 Bionic | 5nm TSMC N4 | 2022 | 17 trillion | 8% faster than A15, better power efficiency |
| M1 / M1 Pro / M1 Max | 5nm TSMC N5 | 2020-2021 | 11 trillion | Same ANE as A14 Bionic |
| M1 Ultra | 5nm TSMC N5 | 2022 | 22 trillion | 2x faster than M1/M1 Pro/M1 Max |
| M2 | 5nm TSMC N5P | 2022 | 15.8 trillion | 40% faster than M1 |
| M2 Pro / M2 Max | 5nm TSMC N5P | 2023 | 15.8 trillion | Same ANE as M2 |
In 2020, the Apple A14 nearly doubled ANE performance to 11 trillion operations per second by increasing the number of ANE cores from 8 to 16. In 2021, the A15 Bionic benefited from TSMC's second-generation 5nm process, which further boosted ANE performance to 15.8 trillion operations per second without adding more cores.
The first Mac-bound M1, M1 Pro, and M1 Max chips had the same ANE as the A14, bringing advanced, hardware-accelerated ML and AI to the macOS platform for the first time.
In 2022, the M1 Ultra combined two M1 Max chips in a single package using Apple’s custom interconnect dubbed UltraFusion. With twice the ANE cores (32), the M1 Ultra doubled ANE performance to 22 trillion operations per second.
The Apple A16 in 2022 was fabricated using TSMC’s enhanced N4 node, bringing about 8% faster ANE performance (17 trillion operations per second) versus the A15’s ANE.
The first ANE-enabled iPads were the fifth-generation iPad mini (2019), the third-generation iPad Air (2019), and the eighth-generation iPad (2020). All iPads released since have an ANE.
How Can Developers Use ANE in Apps?
Many third-party apps use ANE for features that otherwise wouldn’t be feasible. For example, the image editor Pixelmator Pro provides tools such as ML Super Resolution and ML Enhance. And in djay Pro, ANE separates beats, instrumentals, and vocal tracks from a recording.
However, third-party developers don't get low-level access to ANE. Instead, all ANE calls go through Apple's machine learning framework, Core ML. With Core ML, developers can build, train, and run ML models directly on the device; such a model is then used to make predictions based on new input data.
“Once a model is on a user’s device, you’re able to use Core ML to retrain or fine-tune it on-device, with that user’s data,” according to the Core ML overview on the Apple website.
To accelerate ML and AI algorithms, Core ML leverages not just ANE but also the CPU and GPU. This allows Core ML to run a model even if no ANE is available. But with an ANE present, Core ML will run much faster, and the battery won’t be drained as quickly.
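As a rough sketch of what this looks like in practice, the Swift snippet below loads a compiled Core ML model and lets Core ML schedule it on the best available engine. The model name "ImageClassifier" is a placeholder for whatever .mlmodelc file an app actually bundles.

```swift
import Foundation
import CoreML

// A minimal sketch: load a bundled, compiled Core ML model (the name
// "ImageClassifier" is a placeholder) and let Core ML pick the compute units.
func loadClassifier() throws -> MLModel {
    let config = MLModelConfiguration()
    // .all lets Core ML choose among ANE, GPU, and CPU at runtime;
    // .cpuAndNeuralEngine (iOS 16/macOS 13 and later) skips the GPU entirely.
    config.computeUnits = .all

    guard let url = Bundle.main.url(forResource: "ImageClassifier",
                                    withExtension: "mlmodelc") else {
        throw CocoaError(.fileNoSuchFile)
    }
    return try MLModel(contentsOf: url, configuration: config)
}
```

On a device without an ANE, the same code simply falls back to the GPU or CPU, which is exactly the behavior described above.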
Many Apple Features Wouldn’t Work Without ANE
Many on-device features wouldn’t be possible without the fast processing of AI and ML algorithms and the minimized memory footprint and power consumption that ANE brings to the table. Apple’s magic is having a dedicated coprocessor for running neural networks privately on-device instead of offloading those tasks to servers in the cloud.
With ANE, both Apple and developers can implement deep neural networks and reap the benefits of accelerated machine learning for various predictive models like machine translation, object detection, image classification, etc.