Apple aims to simplify AI models with CreateML and Core ML 2


Ryan Daws is a senior editor at TechForge Media with over a decade of experience in crafting compelling narratives and making complex topics accessible. His articles and interviews with industry leaders have earned him recognition as a key influencer by organisations like Onalytica. Under his leadership, publications have been praised by analyst firms such as Forrester for their excellence and performance. Connect with him on X (@gadget_ry) or Mastodon (@gadgetry@techhub.social)


During its annual WWDC event, Apple announced the launch of its CreateML tool alongside the second version of its Core ML framework.

CreateML aims to simplify the creation of AI models. Because it’s built in Swift, developers can train models interactively inside Xcode Playgrounds, even dragging training data straight into the playground to kick off training.
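As a rough sketch of what this looks like in practice (the file paths below are hypothetical, and training requires macOS Mojave), a playground can train an image classifier in a few lines of Swift using CreateML’s `MLImageClassifier`:

```swift
import Foundation
import CreateML

// Train an image classifier from a folder of labelled images,
// where each subdirectory's name is treated as a class label.
// The paths here are assumptions for illustration only.
let trainingDir = URL(fileURLWithPath: "/Users/me/Datasets/Animals")
let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir)
)

// Export the trained model as an .mlmodel file for use with Core ML.
try classifier.write(to: URL(fileURLWithPath: "/Users/me/Animals.mlmodel"))
```

The same `.mlmodel` file can then be dropped into an Xcode project and queried through the standard Core ML APIs.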

Core ML, Apple’s machine learning framework, was first introduced at WWDC last year. This year, the company has focused on making it leaner and meaner.

Apple claims Core ML 2 delivers inference up to 30 percent faster thanks to a technique called batch prediction, while quantization enables the framework to shrink models by up to 75 percent.
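Batch prediction is exposed through a new API on `MLModel` in Core ML 2; a minimal sketch (the model and its prepared inputs here are hypothetical) looks like this:

```swift
import CoreML

// Hypothetical helper: `model` is any compiled Core ML model and
// `inputs` is an array of MLFeatureProvider values prepared for it.
func classify(batch inputs: [MLFeatureProvider],
              with model: MLModel) throws -> MLBatchProvider {
    // Wrapping the inputs in a single batch lets Core ML 2 pipeline
    // work across the whole set instead of paying per-call overhead,
    // which is where the claimed speed-up comes from.
    let batch = MLArrayBatchProvider(array: inputs)
    return try model.predictions(fromBatch: batch)
}
```

The returned `MLBatchProvider` contains one prediction per input, in the same order they were supplied.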

This is how Apple describes Core ML:

“Core ML lets you integrate a broad variety of machine learning model types into your app. In addition to supporting extensive deep learning with over 30 layer types, it also supports standard models such as tree ensembles, SVMs, and generalised linear models.

Because it’s built on top of low-level technologies like Metal and Accelerate, Core ML seamlessly takes advantage of the CPU and GPU to provide maximum performance and efficiency.

You can run machine learning models on the device so data doesn’t need to leave the device to be analysed.”

Apple is evidently keen to reiterate that no information leaves the device, as people become ever more wary of how their data is collected and used.

Google launched ML Kit at its I/O developer conference last month. Most of its features can run offline, but they are more limited than when connected to Google’s cloud. For example, the on-device version of the API can detect that a dog is in a photo, but when connected to the internet it can recognise the specific breed.

Apple says the developers of Memrise, a language-learning app, previously took 24 hours to train a model using 20,000 images. CreateML and Core ML 2 reduced it to 48 minutes on a MacBook Pro and 18 minutes on an iMac Pro. Furthermore, the size of the model was reduced from 90MB to just 3MB.

For developers who like Core ML but use TensorFlow, Google released a tool in December 2017 which converts TensorFlow models into the Core ML format. You can find it on the now Microsoft-owned GitHub.

What are your thoughts on Apple’s machine learning strategy? Let us know in the comments.

Interested in hearing industry leaders discuss subjects like this and sharing their use-cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo and Cyber Security & Cloud Expo so you can explore the future of enterprise technology in one place.
