
scalene 1.5.38

Scalene: A high-resolution, low-overhead CPU, GPU, and memory profiler for Python with AI-powered optimization suggestions

Stars: 194, Watchers: 194, Forks: 10, Open Issues: 0

The emeryberger/scalene repo was created 2 years ago, and the last code push was 8 months ago.
The project is popular, with 194 GitHub stars.

How to Install scalene

You can install scalene using pip:

pip install scalene

or add it to a project with Poetry:

poetry add scalene
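
Once installed, you can profile a script straight from the command line (your_program.py is a placeholder for your own script):

scalene your_program.py

If the scalene entry point is not on your PATH, the module form works as well:

python -m scalene your_program.py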

Package Details

Author: Emery Berger
License: Apache License 2.0
Homepage: https://github.com/plasma-umass/scalene
PyPI: https://pypi.org/project/scalene/
GitHub Repo: https://github.com/emeryberger/scalene

Classifiers

  • Software Development
  • Software Development/Debuggers

Code Examples

Here is a short scalene example. It is a minimal sketch based on the programmatic start/stop API described in the Scalene README; busy_work and profile_me.py are hypothetical placeholders for your own code and script name.
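
from scalene import scalene_profiler

def busy_work():
    # Hypothetical workload standing in for the code you want to profile.
    total = 0
    for i in range(1_000_000):
        total += i * i
    return total

# Profile only the region between start() and stop().
scalene_profiler.start()
busy_work()
scalene_profiler.stop()

Run the script with profiling initially off, so that only the bracketed region is measured:

scalene --off profile_me.py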

Related Packages & Articles

keras-flops 0.1.2

FLOPs calculator with tf.profiler for neural network architecture written in tensorflow 2.2+ (tf.keras)

h2o 3.46.0.1

H2O, Fast Scalable Machine Learning, for Python

gpustat 1.1.1

A utility to monitor NVIDIA GPU status and usage

fastai 2.7.14

fastai simplifies training fast and accurate neural nets using modern best practices

deepspeed 0.14.0

DeepSpeed is a Python package developed by Microsoft that provides a deep learning optimization library designed to scale across multiple GPUs and servers. It is capable of training models with billions or even trillions of parameters, achieving excellent system throughput and efficiently scaling to thousands of GPUs.

DeepSpeed is particularly useful for training and inference of large language models, and it falls under the category of Machine Learning Frameworks and Libraries. It is designed to work with PyTorch and offers system innovations such as the Zero Redundancy Optimizer (ZeRO), 3D parallelism, and model parallelism to enable efficient training of large models.