Apple’s A11 Bionic Chip

With the A10 Fusion, Apple introduced what it calls “Fusion” technology. That chip had four processor cores: two high-performance cores and two high-efficiency cores. The advantage was twofold: the chip delivered improved performance for apps that needed it, yet when only lightweight tasks were running on the phone, it could power down the more power-hungry high-performance cores, resulting in less battery drain.
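From an app’s perspective, there is no direct way to pick a core: software describes how urgent its work is, and the scheduler maps that onto the hardware. As a rough illustration, here is a minimal Swift sketch using Grand Central Dispatch quality-of-service classes, which on asymmetric chips like the A10 and A11 hint whether work is a candidate for the efficiency or the performance cores (the tasks themselves are placeholders):

```swift
import Dispatch

let group = DispatchGroup()

// Lightweight housekeeping: a low QoS class marks this as a natural
// candidate for the high-efficiency cores.
DispatchQueue.global(qos: .utility).async(group: group) {
    print("background maintenance finished")
}

// Latency-sensitive, user-facing work: a high QoS class tells the
// scheduler it may be worth waking a high-performance core.
DispatchQueue.global(qos: .userInteractive).async(group: group) {
    print("user-facing work finished")
}

group.wait()  // keep the process alive until both tasks complete
```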

Basic Understanding of the A11 Bionic Processor

The Apple A11 Bionic is a 64-bit ARM-based system on a chip (SoC) designed by Apple Inc. and manufactured by TSMC. It first appeared in the iPhone 8, iPhone 8 Plus, and iPhone X, which were introduced on September 12, 2017. It has two high-performance cores that are up to 25% faster than those in the Apple A10, and four high-efficiency cores that are up to 70% faster than the A10’s energy-efficient cores.

Design Specifications

The A11 features an Apple-designed 64-bit ARMv8-A six-core CPU, with two high-performance cores at 2.39 GHz, called Monsoon, and four energy-efficient cores, called Mistral. The A11 uses a new second-generation performance controller, which permits the A11 to use all six cores simultaneously, unlike its predecessor, the A10. The A11 also integrates an Apple-designed three-core graphics processing unit (GPU) with 30% faster graphics performance than the A10. Embedded in the A11 is the M11 motion coprocessor. The chip also includes a new image signal processor that supports computational photography functions such as lighting estimation, wide color capture, and advanced pixel processing.
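To make the six-core claim concrete: where the A10 could not run its performance and efficiency pairs at the same time, the A11’s performance controller lets data-parallel code fan out across all six cores at once. A hedged sketch of what that looks like from user code, using GCD’s concurrentPerform (the array size and six-way chunking are illustrative, chosen to mirror the A11’s core count):

```swift
import Foundation
import Dispatch

let values = Array(0..<6_000)
let coreCount = 6                       // illustrative: matches the A11's core count
let chunk = values.count / coreCount
var partialSums = [Int](repeating: 0, count: coreCount)

// Fan a summation out over six chunks; GCD schedules the iterations
// across whatever cores the performance controller makes available.
partialSums.withUnsafeMutableBufferPointer { buffer in
    DispatchQueue.concurrentPerform(iterations: coreCount) { i in
        let slice = values[(i * chunk)..<((i + 1) * chunk)]
        buffer[i] = slice.reduce(0, +)  // each index is written by exactly one iteration
    }
}

print("total:", partialSums.reduce(0, +))
```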
The A11 is manufactured by TSMC using a 10 nm FinFET process and contains 4.3 billion transistors on a die 87.66 mm² in size, 30% smaller than the A10’s. It is manufactured in a package-on-package (PoP) together with 2 GB of LPDDR4X memory in the iPhone 8 and 3 GB of LPDDR4X memory in the iPhone 8 Plus and iPhone X.

Neural Engine

The A11 also includes dedicated neural network hardware that Apple calls a “Neural Engine”. This hardware can perform up to 600 billion operations per second and is used for Face ID, Animoji, and other machine learning tasks. This means that in addition to applying sophisticated effects to still photos, as previous generations of Apple’s ISP did, the chip can now apply such effects to live video. Beyond effects, this also appears to be what enables the camera system to identify objects and their composition in a scene, allowing it to track and focus on the subject you are filming. The Neural Engine lets Apple run neural networks and machine learning workloads more energy-efficiently than either the main CPU or the GPU could.
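Apps do not program the Neural Engine directly; on iOS, Core ML decides at runtime whether a model’s layers run on the CPU, the GPU, or the Neural Engine. A minimal Swift sketch, assuming a compiled Core ML model named MyClassifier.mlmodelc bundled with the app (the model name is a placeholder):

```swift
import Foundation
import CoreML

// Request all available compute units. This makes the Neural Engine
// *eligible*; there is no public API that forces a model onto it.
let config = MLModelConfiguration()
config.computeUnits = .all
// config.computeUnits = .cpuOnly   // useful fallback when debugging numeric issues

// "MyClassifier" is a hypothetical model name used for illustration.
guard let url = Bundle.main.url(forResource: "MyClassifier", withExtension: "mlmodelc") else {
    fatalError("MyClassifier.mlmodelc not found in the app bundle")
}

do {
    let model = try MLModel(contentsOf: url, configuration: config)
    print("inputs:", model.modelDescription.inputDescriptionsByName.keys)
} catch {
    print("failed to load model:", error)
}
```

The key design point is that placement is the framework’s decision, not the app’s: requesting .all simply declares that any available hardware, including the Neural Engine, may be used.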
