
AI Frameworks

Get performance gains ranging from 10x to 100x for popular deep-learning and machine-learning frameworks through drop-in Intel® optimizations.

AI frameworks give data scientists, AI developers, and researchers the building blocks to architect, train, validate, and deploy models through a high-level programming interface. All major frameworks for deep learning and classical machine learning have been optimized using oneAPI libraries that provide optimal performance across Intel® CPUs and XPUs. These Intel® software optimizations help deliver orders-of-magnitude performance gains over stock implementations of the same frameworks. As a framework user, you can reap all of the performance and productivity benefits through drop-in acceleration, without the need to learn new APIs or low-level foundational libraries.

 

Performance Gains

Deep-Learning Frameworks

  

  

Intel® Optimization for TensorFlow*

TensorFlow* is a widely used deep-learning framework that's based on Python*. It's designed for flexible implementation and extensibility on modern deep neural networks. 

Intel collaborates with Google* to optimize TensorFlow's performance on platforms based on Intel® Xeon® processors. The optimizations use the Intel® oneAPI Deep Neural Network Library (oneDNN), an open-source, cross-platform performance library for deep-learning applications. They are upstreamed directly into the official TensorFlow release and enabled via a simple flag update, so developers can seamlessly benefit from the Intel® optimizations.
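
In recent stock TensorFlow releases, the flag mentioned above is the TF_ENABLE_ONEDNN_OPTS environment variable (on Linux x86 it is on by default starting with TensorFlow 2.9). The sketch below only illustrates the drop-in nature of the change; the model itself is a placeholder.

```python
import os

# Opt in to the oneDNN-backed kernels before TensorFlow is imported.
# (In recent releases on Linux x86 this is already the default.)
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

import tensorflow as tf

# Any standard Keras model now runs on the oneDNN-accelerated kernels
# without further code changes.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.summary()
```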

The latest version of Intel® Optimization for TensorFlow* is included as part of the Intel® oneAPI AI Analytics Toolkit (AI Kit). This kit provides a comprehensive and interoperable set of AI software libraries to accelerate end-to-end data science and machine-learning workflows. 

oneDNN

AI Kit

Download as Part of the AI Kit

  

Additional Download Options

Anaconda*

Docker*

PIP*

 

Documentation

Installation Guide

Performance Guide

Intel® Optimization for PyTorch*

PyTorch* is a Python* package that provides one of the fastest implementations of dynamic neural networks, combining speed and flexibility. Intel and Facebook* have collaborated extensively to:

  • Include many Intel optimizations in this popular framework 
  • Provide superior PyTorch performance on Intel® architectures, most notably Intel® Xeon® Scalable processors

The optimizations are built using oneDNN to provide cross-platform support and acceleration. 

Intel also provides Intel® Extension for PyTorch* for more capabilities that have not yet been upstreamed, including:

  • Support for automatic mixed precision
  • Customized operators
  • Fusion patterns 

The extension also adds bindings for the Intel® oneAPI Collective Communications Library (oneCCL) to enable efficient distributed training, and it ships as a consolidated package that provides the best out-of-box experience for getting all of the performance benefits from PyTorch. The package includes the latest versions of:

  • Stock PyTorch with Intel® optimizations
  • Intel Extension for PyTorch
  • oneCCL
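
A minimal sketch of how the extension is typically applied on top of stock PyTorch, assuming the intel_extension_for_pytorch package is installed; the model and data below are placeholders, not part of the official examples.

```python
import torch
import intel_extension_for_pytorch as ipex

# A stand-in model; any torch.nn.Module works the same way.
model = torch.nn.Sequential(
    torch.nn.Linear(784, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# ipex.optimize applies the extension's operator and graph optimizations;
# dtype=torch.bfloat16 prepares the model for automatic mixed precision.
model.train()
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

data = torch.randn(64, 784)
target = torch.randint(0, 10, (64,))

# Run the forward pass under CPU autocast to use bfloat16 mixed precision.
with torch.cpu.amp.autocast(dtype=torch.bfloat16):
    loss = torch.nn.functional.cross_entropy(model(data), target)
loss.backward()
optimizer.step()
```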

Intel® Optimization for PyTorch* is made available as part of the AI Kit, which provides a comprehensive and interoperable set of AI software libraries to accelerate end-to-end data science and machine-learning workflows.

Intel Extension for PyTorch

oneCCL

AI Kit

Download as Part of the AI Kit

  

Additional Download Options

Anaconda

Docker

 

Documentation

Installation Guide

Apache MXNet*

This open-source, deep-learning framework is highly portable, lightweight, and designed to offer efficiency and flexibility through imperative and symbolic programming. MXNet* includes built-in support for Intel optimizations to achieve high performance on Intel Xeon Scalable processors.
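
Because the optimizations ship inside the standard MXNet build, ordinary imperative code picks them up automatically. The feature check below assumes the mxnet.runtime API of the stock pip package; it is a sketch, not an official recipe.

```python
import mxnet as mx
from mxnet.runtime import Features

# Check whether this MXNet build was compiled with the oneDNN/MKL-DNN backend.
print("MKLDNN enabled:", Features().is_enabled("MKLDNN"))

# Ordinary imperative Gluon code; convolutions dispatch to the accelerated
# kernels automatically when the backend is present.
x = mx.nd.random.uniform(shape=(64, 3, 224, 224))
conv = mx.gluon.nn.Conv2D(channels=16, kernel_size=3)
conv.initialize()
y = conv(x)
print(y.shape)
```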

  

Additional Download Options

Docker

PIP

Documentation

Installation Guide

Optimization Techniques

Performance Tips

PaddlePaddle*

This open-source, deep-learning Python* framework from Baidu* is known for user-friendly, scalable operations. Built using oneDNN, this popular framework provides fast performance on Intel Xeon Scalable processors and a large collection of tools to help AI developers.
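
For CPU inference, the oneDNN kernels are typically switched on through the Paddle Inference configuration. The sketch below is an assumption about that API, and the model file names are hypothetical placeholders for an exported inference model.

```python
import paddle.inference as paddle_infer

# Hypothetical paths to an exported inference model.
config = paddle_infer.Config("model.pdmodel", "model.pdiparams")

# Route supported operators to the oneDNN (MKL-DNN) kernels on CPU.
config.enable_mkldnn()
config.set_cpu_math_library_num_threads(4)

predictor = paddle_infer.create_predictor(config)
print(predictor.get_input_names())
```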

Download Options

Docker

PIP

Documentation

Get Started Guide

Machine-Learning Frameworks

  

  

Intel® Extension for Scikit-learn*

Scikit-learn* is one of the most widely used Python packages for data science and machine learning. Intel provides a seamless way to speed up the many algorithms of scikit-learn on Intel® CPUs and GPUs through the Intel® Extension for Scikit-learn*. This extension package dynamically patches scikit-learn estimators to use the Intel® oneAPI Data Analytics Library (oneDAL) as the underlying solver. It delivers these speedups for machine-learning algorithms on Intel architectures in both single-node and multi-node configurations.
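
The patching described above is usually a two-line change made before the normal scikit-learn imports; a minimal sketch, assuming the scikit-learn-intelex package is installed and using synthetic stand-in data:

```python
# Patch scikit-learn so that supported estimators dispatch to oneDAL.
from sklearnex import patch_sklearn
patch_sklearn()

# Import estimators *after* patching so the accelerated versions are used.
from sklearn.cluster import KMeans
import numpy as np

X = np.random.rand(10_000, 20)
kmeans = KMeans(n_clusters=8, random_state=0).fit(X)  # oneDAL-backed solver
print(kmeans.inertia_)
```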

The latest version of Intel Extension for Scikit-learn is also included as part of the AI Kit. It provides a comprehensive and interoperable set of AI software libraries to accelerate end-to-end data science and machine-learning workflows. 

Intel Extension for Scikit-learn

oneDAL

AI Kit

Download as Part of the AI Kit

  

Additional Download Options

Anaconda

Conda*-Forge

PIP

Documentation

Installation Guide

More Details

XGBoost Optimized by Intel

This is a well-known machine-learning package for gradient-boosted decision trees. It includes seamless, drop-in acceleration for Intel architectures to significantly speed up model training and improve accuracy for better predictions. In collaboration with the XGBoost community, Intel has been directly upstreaming many optimizations to provide superior performance on Intel CPUs.
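
Because the optimizations are upstreamed, stock xgboost already uses the accelerated code paths on Intel CPUs; no special API is required. A minimal sketch with the histogram-based tree method, the CPU training path these optimizations target; the data is synthetic:

```python
import numpy as np
import xgboost as xgb

# Synthetic data as a stand-in for a real training set.
X = np.random.rand(100_000, 50)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

dtrain = xgb.DMatrix(X, label=y)
params = {
    "objective": "binary:logistic",
    "tree_method": "hist",  # histogram-based training; optimized CPU path
    "max_depth": 6,
}
booster = xgb.train(params, dtrain, num_boost_round=100)
```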

The latest Intel-optimized version of XGBoost is included as part of the AI Kit, which provides a comprehensive and interoperable set of AI software libraries to accelerate end-to-end data science and machine-learning workflows.

AI Kit

Download as Part of the AI Kit

  

Additional Download Options

Anaconda

 

Documentation

Installation Guide

Explore Our Comprehensive Portfolio of End-to-End AI Tools

Get Access to Our Development Sandbox to Test and Run Workloads

Browse Our Production-Quality AI Containers and Solutions Catalog
