Apple rarely shares much about its plans for generative AI. Now, with the release of a family of open-source large language models, the Cupertino tech giant appears to want AI running natively on Apple devices. Apple's LLMs, which the company calls OpenELM (Open-source Efficient Language Models), are designed to run on the device itself rather than on cloud servers. The models are available on the Hugging Face Hub, a central platform for sharing AI code and datasets.
In its tests, Apple observes that OpenELM delivers performance comparable to other open language models while being trained on less data. Image Courtesy: Hugging Face
As described in the white paper, there are eight OpenELM models in total. Four were pretrained using the CoreNet library, while the other four are instruction-tuned variants. To improve overall accuracy and efficiency, Apple uses a layer-wise scaling strategy in these open-source LLMs.
"To this end, we release OpenELM, a state-of-the-art open language model. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. For example, with a parameter budget of approximately one billion parameters, OpenELM exhibits a 2.36% improvement in accuracy compared to OLMo while requiring 2x fewer pre-training tokens." – Apple
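To make the idea of layer-wise scaling concrete, here is a minimal sketch in Python. It is not Apple's actual implementation: all the constants (the scaling ranges, model width, head size) are made-up illustrative values, and the real per-layer formulas are defined in Apple's white paper. The sketch only shows the general principle, that instead of giving every transformer layer the same width, the attention and feed-forward sizes grow with depth so the parameter budget is allocated non-uniformly.

```python
# Illustrative sketch of layer-wise (non-uniform) parameter scaling,
# the general idea behind OpenELM's strategy. All constants here are
# hypothetical; see Apple's white paper for the real configuration.

def layerwise_widths(num_layers, d_model, head_dim,
                     alpha=(0.5, 1.0), beta=(2.0, 4.0)):
    """Return (num_heads, ffn_dim) for each layer.

    alpha linearly scales the attention width across depth and beta
    scales the feed-forward expansion ratio, so early layers get fewer
    parameters and later layers get more.
    """
    configs = []
    for i in range(num_layers):
        t = i / (num_layers - 1) if num_layers > 1 else 0.0
        a = alpha[0] + (alpha[1] - alpha[0]) * t   # attention scale at depth t
        b = beta[0] + (beta[1] - beta[0]) * t      # FFN expansion at depth t
        num_heads = max(1, round(a * d_model / head_dim))
        ffn_dim = round(b * d_model)
        configs.append((num_heads, ffn_dim))
    return configs

# Example: a 4-layer toy model with d_model=256 and head_dim=64.
for layer, (heads, ffn) in enumerate(layerwise_widths(4, 256, 64)):
    print(f"layer {layer}: heads={heads}, ffn_dim={ffn}")
```

With these toy numbers, the first layer gets 2 attention heads and a 512-wide FFN while the last gets 4 heads and a 1024-wide FFN, even though a uniform model of the same budget would give every layer identical widths.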