Discover how Intel IT achieves push-button productization of AI models, enabling them to deploy AI faster and at scale.
Intel IT's large AI group works across Intel to transform critical work, optimize processes, eliminate scalability bottlenecks and generate significant business value (more than USD 1.5B return on investment in 2020). Our efforts unlock the power of data to make Intel’s business processes smarter, faster and more innovative, from product design to manufacturing to sales and pricing.
Intel IT’s AI group includes over 200 data scientists, machine-learning (ML) engineers and AI product experts. We work systematically across Intel’s core activities to deliver AI solutions that optimize processes and eliminate scalability bottlenecks. We use AI to deliver high business impact and transform Intel’s internal operations, including engineering, manufacturing, hardware validation, sales, performance and Mobileye. Over the last decade, we have deployed more than 500 ML models to production, over 100 of them in the last year alone.
To enable this operation at scale, we developed Microraptor, a set of machine-learning operations (MLOps) capabilities. MLOps is the practice of efficiently developing, testing, deploying and maintaining ML in production. It automates and monitors the entire ML lifecycle and enables seamless collaboration across teams, resulting in faster time to production and reproducible results.
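One piece of the automated monitoring that MLOps implies is checking deployed models for degradation and flagging them for retraining. The sketch below is illustrative only (the threshold logic and function name are assumptions, not Microraptor's actual code):

```python
# Minimal sketch of automated model monitoring: flag a deployed model for
# retraining when its live accuracy drops too far below the accuracy that
# was measured at deployment time. The tolerance value is illustrative.

def needs_retraining(baseline_accuracy: float,
                     live_accuracy: float,
                     tolerance: float = 0.05) -> bool:
    """Return True when live accuracy has degraded beyond `tolerance`."""
    return (baseline_accuracy - live_accuracy) > tolerance

print(needs_retraining(0.92, 0.90))  # small dip, within tolerance -> False
print(needs_retraining(0.92, 0.80))  # degraded -> True
```

In a production pipeline, a check like this would run on a schedule against live prediction logs, with the result feeding an alerting or automated-retraining step rather than a print statement.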
To enable MLOps, we build an AI productization platform for each business domain that we work with, such as sales or manufacturing. Models and AI services are delivered, deployed, managed and maintained on top of the AI platforms.
Our MLOps capabilities are reused across all of our AI platforms. Microraptor enables world-class MLOps that accelerates and automates the development, deployment and maintenance of ML models. Our approach to model productization avoids the logistical hurdles that often prevent other companies’ AI projects from reaching production: we deploy AI models to production at scale through continuous integration/continuous delivery (CI/CD), automation, reuse of building blocks and business process integration.
Microraptor uses many open-source technologies to enable the full MLOps lifecycle while abstracting the complexity of those technologies away from data scientists. Data scientists do not have to know anything about Kubernetes or Elasticsearch; they can focus their efforts on finding or developing the best ML model. Once the model is ready, a data scientist simply registers it in MLflow (an open-source platform for managing the end-to-end ML lifecycle) while complying with a few basic coding standards. Everything else, from building to testing to deploying, happens automatically. The model is first deployed as a release candidate that can later be activated, with another push of a button, into the relevant business domain’s AI platform.
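The push-button flow above can be sketched as follows. This is a hypothetical toy model of the workflow, not Microraptor's actual API: the class names, the model name and the pipeline steps are all illustrative stand-ins for an MLflow-style registry wired to CI/CD hooks.

```python
# Hypothetical sketch of the push-button productization flow: registering a
# model triggers an automated build/test/deploy pipeline that produces a
# release candidate; a second action activates it in the AI platform.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ReleaseCandidate:
    name: str
    version: int
    stage: str = "registered"
    pipeline_log: List[str] = field(default_factory=list)

class ModelRegistry:
    """Toy stand-in for an MLflow-style model registry with CI/CD hooks."""

    def __init__(self) -> None:
        self._versions: Dict[str, int] = {}

    def register(self, name: str) -> ReleaseCandidate:
        # Each registration of the same model name gets the next version.
        version = self._versions.get(name, 0) + 1
        self._versions[name] = version
        rc = ReleaseCandidate(name, version)
        # Registration kicks off the automated pipeline: no manual steps.
        for step in ("build", "test", "deploy"):
            rc.pipeline_log.append(step)
        rc.stage = "release-candidate"
        return rc

    def activate(self, rc: ReleaseCandidate) -> None:
        # The second "push of a button": promote the release candidate
        # into the relevant business domain's AI platform.
        rc.stage = "production"

registry = ModelRegistry()
rc = registry.register("pricing-forecast")  # hypothetical model name
print(rc.stage)        # release-candidate
registry.activate(rc)
print(rc.stage)        # production
```

Keeping registration as the only manual step is the design point: everything downstream of `register` is deterministic pipeline work, which is what makes deployment repeatable at the scale of hundreds of models.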
Our MLOps methodology provides many advantages:
- The AI platforms abstract deployment details and business process integration so that data scientists can concentrate on model development.
- We can deploy a new model in less than half an hour, compared to days or weeks without MLOps.
- Our systematic quality metrics minimize the cost and effort required to maintain the hundreds of models we have in production.