
4 Ways Deep Learning Is Changing iOS And iPhones

Bobby Gill | February 21, 2020

Machine learning (ML) and artificial intelligence (AI) have been coming in hot over the past few years – and this year, iOS deep learning and iPhone deep learning, driven largely by iOS CoreML, will shape the changes we’ll see in future incarnations of the iOS platform. The next iterations of the smart devices we use – including Android, Amazon devices and more – will be heavily influenced by ML.

It’s important to understand how this technology will modify our everyday devices for a few reasons. For one, there is a lot of fear of AI, and while some concerns are valid, there’s a good amount of misinformation running amok on the web. So, let’s look at deep learning, explore the ways it’s changing iOS development for the better, and see how components such as the A13 Bionic chip are contributing.

Answering “What is deep learning?” and “How do neural networks work?”

The most common answer you’ll find when researching deep learning is that it’s a machine learning process that can be either supervised, where you feed labeled information into the system, or unsupervised, meaning the system is configured to “learn how to learn” mostly on its own.

Think about when you’re asked to select the images containing crosswalks in a reCAPTCHA test designed to make sure you’re a real-life human – this is actually a sneaky way you’ve been unwittingly training AI for Google over the past several years.

For Google, which offers one of the most trusted mapping and direction systems on the market, this makes sense: the company is continually evolving the Google Maps platform, so it’s a bit like double-dipping. Thanks to the open API, solutions like Apple Maps are also learning from Google data, which is a plus no matter where you sit in the Android vs. iOS debate. The ironic part is that this method of supervised input will eventually lead to a machine (which surely already exists) that can differentiate between crosswalks, hippos, traffic lights, armored tanks and so on.
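To make the distinction concrete, here’s a toy sketch in Swift – the feature values and labels are entirely made up for illustration. Supervised data pairs each example with a human-provided answer (like the crosswalk tiles above); unsupervised data has no answers attached, so the system must find structure on its own.

```swift
// Illustrative only: the shape of supervised vs. unsupervised data.
typealias ImageFeatures = [Double]

// Supervised: every example comes with the answer attached.
let labeledTiles: [(features: ImageFeatures, label: String)] = [
    ([0.12, 0.88, 0.31], "crosswalk"),
    ([0.75, 0.20, 0.84], "traffic light"),
    ([0.11, 0.91, 0.29], "crosswalk"),
]

// Unsupervised: the same kind of examples, but with no labels at all –
// the system has to discover groupings (clusters) by itself.
let unlabeledTiles: [ImageFeatures] = [
    [0.13, 0.86, 0.33],
    [0.72, 0.22, 0.80],
]
```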

However, there is a more sophisticated way we can train AI systems through ML, and that’s with a neural network. A neural network is an interconnected system, loosely modeled on the human brain, that’s designed to heuristically learn a task (read section 1.0 followed by section 5.1 if interested), often with the intent of enabling the system to learn on its own.

Think about teaching a child to read. You start with flashcards where, for example, a card shows the word ‘duck’ and you have the child work through the phonetics until they can read the word. Eventually, the child can do the process themselves once they have “learned how to learn.”
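As a rough illustration of that feedback loop, here’s a minimal single-neuron sketch in Swift with made-up toy data – nothing Apple-specific, just the “predict, compare, adjust” cycle a real neural network repeats at enormous scale.

```swift
import Foundation

// A single artificial neuron: weighted sum of inputs plus a bias,
// squashed through a sigmoid activation.
struct Neuron {
    var weights: [Double]
    var bias: Double

    func predict(_ inputs: [Double]) -> Double {
        let sum = zip(weights, inputs).reduce(bias) { $0 + $1.0 * $1.1 }
        return 1.0 / (1.0 + exp(-sum))
    }

    // One gradient-descent step: nudge the weights toward the target –
    // the machine equivalent of the flashcard feedback described above.
    mutating func train(_ inputs: [Double], target: Double, learningRate: Double = 0.5) {
        let output = predict(inputs)
        let gradient = (target - output) * output * (1 - output)
        for i in weights.indices {
            weights[i] += learningRate * gradient * inputs[i]
        }
        bias += learningRate * gradient
    }
}

// Teach the neuron a toy rule (logical OR) from labeled examples.
var neuron = Neuron(weights: [0.0, 0.0], bias: 0.0)
let samples: [([Double], Double)] = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
for _ in 0..<5000 {
    for (inputs, target) in samples {
        neuron.train(inputs, target: target)
    }
}
print(neuron.predict([1, 0]))  // close to 1.0 after training
```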

How neural networks fit into iOS deep learning

If you think about the number of devices connected to the web, this opens a huge opportunity for growing platforms like iOS. It opens the door to iPhone deep learning, as each device (iPads too) acts as a node in the network that can send non-personally identifiable information (non-PII) to the learning engine for processing.

Quite simply, iOS deep learning is facilitated by the various devices we’re already using. The core AI is using ML to essentially “learn how to learn” like a child who diligently studies their flashcards to develop and improve their language skills.

The top 4 ways deep learning is changing iOS and iPhones

There are a handful of interesting ways that this vast neural network we’re all now a part of – whether we like it or not – will be responsible for changing platforms like iOS.

Unsupervised A/B testing. One of the processes we go through when developing applications is A/B testing, which allows us to determine which layout works best for the UX design. The specs for this current and future era of iOS devices don’t exactly detail how those devices will ultimately be used. By making slight deviations to a layout, the right programming (i.e. a learning algorithm) in the ML engine can determine how a user is interacting with a device and tweak subtle features – nudging toward, say, the layout that’s most conducive to how that user interacts with the system. In time, we’ll likely see slight variants between individual devices, as each user could effectively have their own personalized version of iOS running on their device, thanks to deep learning.
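One way such an adaptive, on-device A/B test could work is an epsilon-greedy approach: mostly serve the best-performing layout, occasionally explore the alternatives. The sketch below is purely hypothetical – the type, variant names and engagement signal are invented for illustration, not an Apple API.

```swift
import Foundation

// Hypothetical on-device experiment that gradually favors whichever
// layout variant users engage with most.
struct LayoutExperiment {
    let variants: [String]            // e.g. ["compactLayout", "spaciousLayout"] – made-up names
    let exploreRate: Double
    private var plays: [Int]
    private var rewards: [Double]

    init(variants: [String], exploreRate: Double = 0.1) {
        self.variants = variants
        self.exploreRate = exploreRate
        self.plays = Array(repeating: 0, count: variants.count)
        self.rewards = Array(repeating: 0, count: variants.count)
    }

    // Pick a layout: usually the best average performer, sometimes a random one.
    func chooseVariant() -> Int {
        if Double.random(in: 0..<1) < exploreRate {
            return Int.random(in: 0..<variants.count)
        }
        let averages = (0..<variants.count).map { i in
            plays[i] == 0 ? 0 : rewards[i] / Double(plays[i])
        }
        return averages.firstIndex(of: averages.max() ?? 0) ?? 0
    }

    // Record an engagement signal (taps, time on screen, etc.) for a variant.
    mutating func record(variant: Int, engagement: Double) {
        plays[variant] += 1
        rewards[variant] += engagement
    }
}

// Example usage: choose a layout for this session, then record how it did.
var experiment = LayoutExperiment(variants: ["compactLayout", "spaciousLayout"])
let variant = experiment.chooseVariant()
// ... present variants[variant] and measure engagement for the session ...
experiment.record(variant: variant, engagement: 3.0)
```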

Better use of peripherals like the camera. Google has been making headway in using AI to improve image recognition and cameras, beginning with its image search after the acquisition of DNNresearch, and iOS is right on its tail. This works like a supervised training model: the search itself, in conjunction with which page a user ultimately visits, helps both train the computer vision engine and improve the search engine itself by introducing a visual way to search for information. It’s further aided by a new onboard peripheral called a neural processing unit (NPU) that works alongside other processors such as the device’s primary processor and its image and graphics processing units.

Android is already incorporating machine learning into the camera, and iPhone deep learning will be applied in much the same way. Features like stabilization, “portrait-style” pics (i.e. a sharp central subject and a blurred background) and many features yet to be seen will emerge throughout the year.

The most relevant examples on the market are platforms like Snapchat, Facebook and Pokémon Go – these tweaks to the camera system will open doors for augmented reality (AR) platforms and more. We’ll see more realistic applications in business: for example, merchants will be able to accurately superimpose an image of their product on you or in a space. Real estate apps will benefit too, as users will be able to accurately capture images of their furniture that can be stitched together so buyers can see how furnishings fit in a living space.
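For a sense of how developers already tap this on-device vision capability, here’s a minimal Swift sketch using Apple’s Vision framework to run a bundled Core ML image classifier. `FlowerClassifier` is a placeholder for whatever .mlmodel ships with the app; the Vision and Core ML calls themselves are real APIs.

```swift
import Vision
import CoreML
import UIKit

// Classify a photo entirely on-device with Vision + Core ML.
func classify(_ image: UIImage) throws {
    // "FlowerClassifier" stands in for any Xcode-generated model class.
    let coreMLModel = try FlowerClassifier(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let best = results.first else { return }
        print("\(best.identifier): \(best.confidence)")
    }

    guard let cgImage = image.cgImage else { return }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
}
```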

A better Siri. One of the most prolific ways AI helps modern devices is through language recognition – we used an example in Natasha’s blog about mental healthcare apps, where we pointed out some of the metadata that ended up in a contractor’s voicemail transcription, as seen in the image just after paragraph seven of that post.

Recognizing dialects, syntax, colloquialisms and other linguistic features is an ongoing training process, with machine learning adapting in its own way to (hopefully) master the morphology of multiple languages. Siri is already learning through AI, which is why it (and search engines in general) can understand more natural language, compared to how we’d search a couple of decades ago, when we’d just plug in the right keywords and hope for the best.
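A small taste of this kind of on-device language analysis is available to any iOS developer through Apple’s NaturalLanguage framework. The sketch below detects the dominant language of a phrase and tags each word’s part of speech – the phrase itself is just an example input.

```swift
import NaturalLanguage

let phrase = "Remind me to pick up the ducklings after work"

// Detect which language the user is speaking/typing.
let recognizer = NLLanguageRecognizer()
recognizer.processString(phrase)
print((recognizer.dominantLanguage ?? .undetermined).rawValue)

// Tag each word with its part of speech – one small step toward
// understanding natural phrasing instead of bare keywords.
let tagger = NLTagger(tagSchemes: [.lexicalClass])
tagger.string = phrase
tagger.enumerateTags(in: phrase.startIndex..<phrase.endIndex,
                     unit: .word,
                     scheme: .lexicalClass,
                     options: [.omitPunctuation, .omitWhitespace]) { tag, range in
    if let tag = tag {
        print("\(phrase[range]): \(tag.rawValue)")
    }
    return true
}
```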

Using the A13 Bionic chip and iOS CoreML to optimize hardware usage. Dynamic data sets have been used to tweak how internal peripherals on iOS and Android devices behave. While this is easier than manually tuning performance for the various onboard hardware, iOS deep learning powered by the A13 Bionic chip and the iOS CoreML framework – over the past two years and beyond – will improve the architecture as part of the system’s neural network. Data compiled by Apple will allow the system to improve not only the device itself but other users’ devices as well. The A13 Bionic chip, in conjunction with iOS CoreML, affects areas ranging from the display and camera to system speed and battery life, as indicated in Apple’s benchmark tests.

The CoreML framework further extends the power of server-based frameworks like TensorFlow and PyTorch. Models built with either (or both) libraries can be converted to run on the device, allowing developers to build, test and ship AI-powered features without requiring connections to the backend servers that would typically be responsible for the heavy lifting.
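In practice, a model trained in TensorFlow or PyTorch is converted to the Core ML format (for example with Apple’s coremltools) and then loaded and run entirely on the device. Here’s a minimal sketch of that last step – `SentimentClassifier` and its feature names are placeholders, while `MLModel` and `MLDictionaryFeatureProvider` are the actual Core ML APIs.

```swift
import CoreML

// Run a converted model locally – no round trip to a backend server.
func predictSentiment(for text: String) throws -> String? {
    // Load the compiled model bundled with the app.
    guard let url = Bundle.main.url(forResource: "SentimentClassifier",
                                    withExtension: "mlmodelc") else { return nil }
    let model = try MLModel(contentsOf: url)

    // Wrap the input in a feature provider and run inference on-device.
    let input = try MLDictionaryFeatureProvider(dictionary: ["text": text])
    let output = try model.prediction(from: input)
    return output.featureValue(for: "label")?.stringValue
}
```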

Blue Label Labs engages with modern ML for ideal app performance

As developers, we know AI will change the underlying system, so having knowledge of iOS deep learning is crucial as we help build the next wave of apps. We keep up to date to ensure our apps perform at their peak by incorporating modern features like new chipsets and neural networks. Get ahold of us at Blue Label Labs and we’ll explain how we use AI to improve your project – and go into more detail if you’re still asking yourself, “What is iOS deep learning?”

Bobby Gill
Co-Founder & Chief Architect at BlueLabel
