Reports have suggested that Apple may use Google's Gemini to power some of the new AI features in iOS 18.
But that doesn't mean that the company has stopped working on its own AI models. In fact, Apple has revealed more details about the development of its new MM1 AI model.
The tech giant says MM1 is trained on a diverse mix of data that includes interleaved image-text documents, image-caption pairs, and text-only data.
According to Apple, this method should let MM1 make breakthroughs in captioning images, answering visual questions, and performing natural-language inference, with the goal of reaching the highest accuracy possible.
The approach lets Apple get the most out of multiple types of training data and model architectures. It also gives the model greater scope to understand and generate language from both linguistic and visual cues.
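One way to picture this mixed-data recipe is as weighted sampling across the three data types the article mentions. The sketch below is purely illustrative: the type names and mixture ratios are assumptions for demonstration, not Apple's published training configuration.

```python
import random

# Hypothetical mixture of the three data types mentioned in the article.
# The weights are illustrative, not Apple's actual recipe.
MIXTURE = {
    "interleaved_image_text": 0.45,  # documents with images interleaved in text
    "image_caption_pairs": 0.45,     # (image, caption) pairs
    "text_only": 0.10,               # plain text corpora
}

def sample_data_type(rng: random.Random) -> str:
    """Pick which data type the next training example is drawn from."""
    types = list(MIXTURE)
    weights = [MIXTURE[t] for t in types]
    return rng.choices(types, weights=weights, k=1)[0]

# Draw many samples and tally how often each type appears.
rng = random.Random(0)
counts = {t: 0 for t in MIXTURE}
for _ in range(10_000):
    counts[sample_data_type(rng)] += 1
# counts will roughly follow the mixture weights above
```

In a real training pipeline each draw would fetch a batch from the corresponding dataset, but the core idea is the same: the mixture weights control how much each data type shapes the model.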
Apple hopes that by combining training methods from other AIs with its own methods, it will be able to produce better results and achieve competitive performance.
The company lags behind competitors such as Google and OpenAI in AI development, and it clearly wants to catch up.
So far, Apple has built a reputation for forging its own path, continuing to innovate on the same problems other companies face, in both its hardware and software design.
Apple's ongoing efforts to create competitive AI will always come with a unique approach, and now the company seems to have found a way to make progress in this area.
