optimum 1.23.1

Optimum Library is an extension of the Hugging Face Transformers library, providing a framework to integrate third-party libraries from Hardware Partners and interface with their specific functionality.

Stars: 2513, Watchers: 2513, Forks: 449, Open Issues: 413

The huggingface/optimum repo was created 3 years ago and the last code push was 34 minutes ago.
The project is very popular, with an impressive 2513 GitHub stars!

How to Install optimum

You can install optimum using pip:

pip install optimum

or add it to a project with Poetry:

poetry add optimum
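
After installing, you can confirm which version was resolved:

pip show optimum

Hardware-specific backends are published as extras; for example, ONNX Runtime support can be pulled in with:

pip install "optimum[onnxruntime]"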

Package Details

  • Author: HuggingFace Inc. Special Ops Team
  • License: Apache
  • Homepage: https://github.com/huggingface/optimum
  • PyPI: https://pypi.org/project/optimum/
  • GitHub Repo: https://github.com/huggingface/optimum

Classifiers

  • Scientific/Engineering/Artificial Intelligence

Errors

A list of common optimum errors.

Code Examples

Here are some optimum code examples and snippets.
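
As an illustration, one common pattern is loading a Transformers checkpoint through optimum's ONNX Runtime classes so that inference runs on ONNX Runtime instead of PyTorch. The sketch below is only an example: it assumes the onnxruntime extra is installed and uses the public distilbert-base-uncased-finetuned-sst-2-english checkpoint purely for demonstration.

# Sketch: export a Transformers checkpoint to ONNX and run it with ONNX Runtime.
# Assumes `pip install "optimum[onnxruntime]"`; the checkpoint is an example.
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"

# export=True converts the PyTorch weights to ONNX on the fly
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The ORT model plugs into the regular transformers pipeline API
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Optimum makes ONNX Runtime inference straightforward."))

The same conversion can also be done ahead of time from the command line with optimum-cli export onnx, which writes the exported ONNX files to a local directory for later reuse.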

GitHub Issues

The optimum package has 413 open issues on GitHub.

  • optimum for onnx pipelines do not work with tranformers >=4.30
  • Corrupted-tflite-weights while getting a model from huggingface
  • Add ViT to ORTConfigManager
  • Lower GPU memory requirements at ONNX export
  • add diffusers extra
  • Pix2struct to ONNX execution error
  • Onnxruntime support for multiple modalities model types
  • IO Binding for ONNX Non-CUDAExecutionProviders
  • SDPA using the C++ path (math) in fp16 may yield nans
  • Adamlouly/fix unwrap model eval
  • Installation issue on Openvino NNcf
  • T5 & gpt_neox cannot be exported with with opset 9
  • Gradients greatly change after BetterTransformer.transform application
  • Update: fix typo from_pretained to from_pretrained
  • Enable use_io_binding = True on CPU

See more issues on GitHub

Related Packages & Articles

PennyLane 0.38.0

PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.

farm-haystack 1.26.3

LLM framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data.

onnx 1.17.0

Open Neural Network Exchange (ONNX) is an open ecosystem that empowers AI developers to choose the right tools as their project evolves. ONNX provides an open source format for AI models, both deep learning and traditional ML. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types. Currently we focus on the capabilities needed for inferencing (scoring).

deepspeed 0.15.2

DeepSpeed is a Python package developed by Microsoft that provides a deep learning optimization library designed to scale across multiple GPUs and servers. It is capable of training models with billions or even trillions of parameters, achieving excellent system throughput and efficiently scaling to thousands of GPUs.

DeepSpeed is particularly useful for training and inference of large language models, and it falls under the category of Machine Learning Frameworks and Libraries. It is designed to work with PyTorch and offers system innovations such as Zero Redundancy Optimizer (ZeRO), 3D parallelism, and model-parallelism to enable efficient training of large models.