Build more intelligent apps with machine learning.

by Hiran Stephan

on August 14, 2017

Take advantage of Core ML, a new foundational machine learning framework used across Apple products, including Siri, Camera, and QuickType. Core ML delivers blazingly fast performance and easy integration of machine learning models, letting you build apps with intelligent new features using just a few lines of code.

One of the most underrated announcements at Apple’s Worldwide Developers Conference in June was the company’s unveiling of Core ML, a programming framework designed to make it easier to run machine learning models on the company’s mobile devices.

Core ML will be part of iOS 11, which is expected to launch later this year. It allows developers to load trained machine learning models onto an iPhone or iPad and then use them to make predictions inside applications. While it was possible for developers to do that on their own in the past, the new framework is designed to make it easier for apps to process data locally with machine learning, without sending user information to the cloud.

In addition, the framework is designed to optimize models for Apple’s mobile devices, which should reduce RAM use and power consumption, both important for computationally intensive tasks like machine learning inference.

Processing machine learning data on-device provides a number of benefits. Apps don’t need an internet connection to get the benefits of machine learning models, and they may also be able to process data faster, without waiting for information to pass back and forth over a network. Users gain privacy too, since data never has to leave the device to produce intelligent results.

Apple isn’t the only company working on bringing machine learning to mobile devices. Google announced a new TensorFlow Lite programming framework at its I/O developer conference in May that’s supposed to make it easier for developers to build models that run on lower-powered Android devices.

Developers have to convert trained models into a special format that works with Core ML. Once that’s done, they can load the model into Apple’s Xcode development environment and deploy it to an iOS device. The company released four pre-built machine learning models based on popular open source projects, and also made a converter available so that developers can port their own.

The converter works with popular frameworks like Caffe, Keras, scikit-learn, XGBoost and LibSVM. If a model was created with a framework that isn’t supported, Apple has made it possible to write a custom converter.

It’s the latest in Apple’s set of Core frameworks, which include Core Location, Core Audio and Core Image. They’re all designed to help developers create more advanced applications by abstracting out complicated tasks.

Core ML could also hold the key to Apple’s future hardware moves. The company is rumored to be working on a dedicated chip to handle machine learning tasks, and it’s possible this framework would be developers’ portal for using that silicon.

Core ML

Core ML got most of the attention at WWDC and it’s easy to see why: this is the framework that most developers will want to use in their apps.

The API is pretty simple. The only things you can do are:

  1. load a trained model
  2. make predictions
  3. profit!!

This may sound limited, but in practice loading a model and making predictions is usually all you’d want to do in your app anyway.
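To make those two steps concrete, here’s the load-and-predict cycle in a minimal sketch. In an app you’d call the Swift class that Xcode generates from the model file, but the same workflow can be tried out in Python, since the coremltools package can also run a converted model on a Mac running macOS High Sierra. This assumes Apple’s pre-trained ResNet50 model; the file and feature names are placeholders for whatever your model uses.

    import coremltools
    from PIL import Image

    # Load a trained .mlmodel file (in an iOS app, Xcode generates a
    # typed class for this; from Python, coremltools does it on a Mac).
    model = coremltools.models.MLModel('ResNet50.mlmodel')

    # Resize the input image to the size the model expects.
    image = Image.open('cat.jpg').resize((224, 224))

    # Make a prediction. 'image', 'classLabel' and 'classLabelProbs' are
    # the feature names Apple's classifier models use; yours may differ.
    output = model.predict({'image': image})
    print(output['classLabel'])       # the most likely class
    print(output['classLabelProbs'])  # a probability for every class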

Core ML decides for itself whether to run the model on the CPU or the GPU, which allows it to make optimal use of the available resources. It can even split up a model, performing the compute-heavy parts on the GPU and the memory-heavy parts on the CPU.

Core ML’s ability to use the CPU has another big benefit for developers: you can run it in the iOS simulator, something that isn’t possible with Metal (which also doesn’t play well with unit tests).

What models does Core ML support?

Core ML can handle several different types of models, such as:

  • support vector machines (SVM)
  • tree ensembles such as random forests and boosted trees
  • linear regression and logistic regression
  • neural networks: feed-forward, convolutional, recurrent

All of these can be used for regression as well as classification. In addition, your model can contain typical ML preprocessing steps such as one-hot encoding, feature scaling, imputation of missing values, and so on.

Apple makes a number of trained models available for download, such as Inception v3, ResNet50, and VGG16, but you can also convert your own models with the Core ML Tools Python library.

Currently you can convert models trained with Keras, Caffe, scikit-learn, XGBoost, and LibSVM. The conversion tool is a little particular about which versions it supports: for example, Keras 1.2.2 works but 2.0 doesn’t. Fortunately, the tool is open source, so no doubt it will support more training toolkits in the future.
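To give you an idea of what that looks like, here is a hypothetical conversion of a Keras image classifier; all file names, feature names, and metadata below are placeholders.

    import coremltools

    # Hypothetical sketch: convert a trained Keras 1.2.2 image
    # classifier into Core ML format.
    coreml_model = coremltools.converters.keras.convert(
        'my_classifier.h5',            # the trained Keras model
        input_names='image',
        image_input_names='image',     # treat this input as an image
        class_labels='labels.txt')     # one class name per line

    # Fill in some metadata, then write out the .mlmodel file for Xcode.
    coreml_model.author = 'Your Name'
    coreml_model.short_description = 'Classifies an image into my labels'
    coreml_model.save('MyClassifier.mlmodel')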

And if all else fails, you can always write your own converter. The mlmodel file format is open and fairly straightforward to use (it’s in protobuf format and the specs are published by Apple).
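A good way to see what a converter needs to produce is to peek at the spec of an existing model, which coremltools can load for you (a sketch; the file name is a placeholder):

    import coremltools

    # Inspect the protobuf description of an existing .mlmodel file to
    # see what a hand-written converter would have to generate.
    spec = coremltools.utils.load_spec('MyClassifier.mlmodel')
    print(spec.description)   # the model's inputs, outputs and metadata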

Limitations

Core ML is great for quickly getting a model up and running in your apps. However, with such a simple API there are bound to be some limitations.

  • The supported model types are for supervised machine learning only. No unsupervised learning algorithms or reinforcement learning. (Although there is support for a “generic” neural network type, so you might be able to use that.)
  • There is no training on the device. You need to train your models using an offline toolkit and then convert the model to Core ML format.
  • If Core ML does not support a certain layer type, you can’t use it. At this point it’s impossible to extend Core ML with your own compute kernels. Whereas tools like TensorFlow can describe general-purpose computational graphs, the mlmodel file format is nowhere near that flexible.
  • The Core ML conversion tools only support specific versions of a limited number of training tools. If you trained a model in TensorFlow, for example, you can’t use these tools and you’ll have to write your own conversion script. And as I just mentioned: if your TensorFlow model does something the mlmodel format does not support, you can’t use your model with Core ML.
  • You cannot look at the output produced by intermediate layers; you only get the prediction that comes out of the last layer of the network.
  • I’m not 100% sure, but it seems that downloading a model update could be problematic. If you need to re-train often and you don’t want to push out a new version of your app every time you update the model, then maybe Core ML is not for you.
  • Core ML hides whether it runs on the CPU or the GPU, which is convenient, but you have to trust that it does the right thing for your app. You can’t force Core ML to run on the GPU, even if you really, really want it to.

If you can live with these limitations, then Core ML is the right framework for you.

If not, or if you want full control, you’re going to have to roll your own with Metal Performance Shaders or the Accelerate framework — or both!

