If, like me, you’ve been following Microsoft’s generative AI investment strategy ever since it first announced its partnership with ChatGPT creator OpenAI, you might be wondering about MAI-1.
For some time now, Microsoft’s generative AI apps and tools, including its range of Copilot generative AI assistants, have been developed with support from external AI experts.
In fact, Microsoft has invested more than $11 billion into its partnership with OpenAI alone. However, similar to its competitor Google, the technology giant is reluctant to rely exclusively on partner support. That’s why, as of 2024, it’s working on its own AI model: MAI-1.
Similar to Google’s “Gemini” collection of AI frameworks, and OpenAI’s GPT offerings, MAI-1 is a large language model (LLM) designed to power generative AI experiences.
So, how does it work, what makes it different, and why is Microsoft moving AI development in-house?
What is Microsoft MAI-1? The Basics
Information about MAI-1 is still sparse, as Microsoft hasn’t officially announced that it’s working on an LLM of its own. All we currently know, based on the circulating rumors, is that MAI-1 will be a new “in-house” model, created by Microsoft’s own AI experts.
Some reports suggest that the model will rival competing solutions like GPT-4 (the OpenAI technology Microsoft already uses for tools like Copilot), and Google’s Gemini Ultra.
That makes sense, as the new model’s creation is reportedly being overseen by Mustafa Suleyman, the former CEO of the AI startup Inflection and a co-founder of DeepMind, the AI lab Google acquired.
Inflection sold its IP rights to Microsoft a few months ago for $650 million, and most of the company’s staff also moved into roles with Microsoft.
Despite this, according to a report from “The Information”, MAI-1 won’t just be a rebrand of one of Inflection’s AI models.
It will be a new solution designed entirely by the Microsoft team, although Microsoft may take advantage of some of the training data and technologies Inflection has to offer.
While little is known about how Microsoft will train the model, some rumors suggest the training data could come from various sources, including Inflection’s models, text generated by GPT-4, and content scraped from the web.
How Does Microsoft MAI-1 Stand Out?
Since we don’t know much about Microsoft’s new model yet, it’s hard to accurately compare it to competing tools.
Microsoft seems to be developing the solution on a large cluster of servers powered by Nvidia graphics cards, like many of its competitors.
According to estimates, the model will feature around 500 billion parameters, making it far larger than the smaller open-source models Microsoft has previously trained.
For instance, Microsoft’s Phi-3 Mini model, launched in April 2024, only has 3.8 billion parameters (making it more of a small language model). Competitor Meta’s Llama 2 model, on the other hand, tops out at about 70 billion parameters.
However, MAI-1 may still fall slightly short of GPT-4, which is reported to have more than 1 trillion parameters. Typically, more parameters in an LLM means better performance, so GPT-4 may still have an edge in certain tasks that require more nuanced understanding.
That said, a slightly smaller configuration should still allow MAI-1 to provide highly accurate responses to queries, while using less power than OpenAI’s LLM. This could mean Microsoft doesn’t have to spend as much to run and manage the model.
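For a rough sense of why a smaller parameter count can translate into lower running costs, here’s a simple back-of-envelope sketch in Python. It estimates the memory needed just to hold each model’s weights, using the rumored or reported parameter counts mentioned above and an assumed 16-bit weight format; these are illustrative assumptions, not official figures from Microsoft or OpenAI.

```python
# Rough, back-of-envelope sketch (not official figures): estimate the memory
# needed just to store a model's weights, based on its parameter count.

def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Approximate memory (in GB) needed to store model weights.

    bytes_per_param = 2 assumes 16-bit (fp16/bf16) weights; serving the model
    in 8-bit or 4-bit quantized form would roughly halve or quarter this.
    """
    return num_params * bytes_per_param / 1e9

# Rumored/reported parameter counts cited in this article.
models = {
    "MAI-1 (rumored)": 500e9,   # ~500 billion parameters
    "GPT-4 (reported)": 1e12,   # ~1 trillion parameters
    "Llama 2 70B": 70e9,
    "Phi-3 Mini": 3.8e9,
}

for name, params in models.items():
    print(f"{name:>18}: ~{weight_memory_gb(params):,.0f} GB of weights at 16-bit")
```

Even on this crude math, a 500-billion-parameter model needs roughly half the memory (and correspondingly fewer GPUs) of a trillion-parameter one, which is where the potential savings in power and infrastructure would come from.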
Why is Microsoft Creating MAI-1?
There’s a chance that Microsoft’s experiments with MAI-1 are nothing more than a rumor, but it does make sense that the company would want to get involved in LLM development.
Generative AI is predicted to become a $1.3 trillion market by 2032, and Microsoft is already seeing the benefits of creating its own AI tools, in the form of its “Copilot” apps.
Though Microsoft already has plenty of resources from partners like OpenAI and the French startup Mistral, creating an in-house model will give the company more scope to compete in the generative AI landscape.
This is particularly true now that competitors like Google have shown they can produce their own LLMs. If Microsoft debuts MAI-1 during its Build developer conference, as some industry analysts predict, this will further cement its position in the Gen AI industry.
Plus, creating an in-house model gives Microsoft a lot more control over its technology, meaning it can make more granular changes to its model frameworks, and rely less on the support of its partners. This could be important as conflict continues to arise in the AI landscape.
The decision to create a new model could also help Microsoft sidestep some reputation issues, given the scrutiny regulators are giving the company’s current AI deals.
Microsoft’s Ongoing AI Journey
Clearly, Microsoft has been investing heavily in the potential of generative AI for some time now. Not only are most of the company’s products, from Microsoft Teams to its Office apps and the Edge browser, infused with some form of AI, but the company has also been creating dedicated solutions for teams in the finance, security, and customer service space.
Plus, the release of Copilot Studio indicates the company wants to give its customers more control over the generative AI apps they create and customize. Creating a new model would definitely support this approach.
If Microsoft is planning on promoting MAI-1 at its Build conference, then we won’t have long to wait until we have more information about what this solution can do.
The conference will take place at the end of May 2024. In the meantime, it might be a good idea for business leaders to start updating their knowledge of large language models and generative AI applications, in preparation for the new tool.
If you haven’t gotten to grips with concepts like generative AI prompting, or strategies for preserving data privacy and ethics when using AI, now’s the time to make a start.