Contents

bitsandbytes 0.44.1


k-bit optimizers and matrix multiplication routines.
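The "k-bit" in the description refers to low-bit quantization of optimizer states and weights. As a rough illustration of the underlying idea, here is a pure-Python sketch of blockwise absmax int8 quantization; this is a conceptual example only, not the library's actual implementation.

```python
# Sketch of blockwise absmax int8 quantization: each block of values is
# scaled by its largest absolute value, then rounded to an 8-bit code.
# One float scale is kept per block for dequantization.

def quantize_blockwise(values, block_size=4):
    """Quantize floats to int8 codes per block, keeping one scale per block."""
    quantized, scales = [], []
    for i in range(0, len(values), block_size):
        block = values[i:i + block_size]
        absmax = max(abs(v) for v in block) or 1.0  # avoid division by zero
        scales.append(absmax)
        quantized.extend(round(v / absmax * 127) for v in block)
    return quantized, scales

def dequantize_blockwise(quantized, scales, block_size=4):
    """Reconstruct approximate floats from int8 codes and block scales."""
    return [q / 127 * scales[i // block_size] for i, q in enumerate(quantized)]

weights = [0.12, -0.5, 0.03, 0.9, -2.0, 1.5, 0.0, -0.25]
codes, scales = quantize_blockwise(weights)
restored = dequantize_blockwise(codes, scales)
print(max(abs(a - b) for a, b in zip(weights, restored)))  # small rounding error
```

Storing int8 codes plus one scale per block is what lets 8-bit optimizers cut optimizer-state memory to roughly a quarter of 32-bit storage, at the cost of a small rounding error per block.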

Stars: 6153, Watchers: 6153, Forks: 616, Open Issues: 205

The bitsandbytes-foundation/bitsandbytes repo was created 3 years ago and the last code push was 4 days ago.
The project is very popular, with 6,153 GitHub stars.

How to Install bitsandbytes

You can install bitsandbytes using pip:

pip install bitsandbytes

or add it to a project with poetry:

poetry add bitsandbytes

Package Details

Author
Tim Dettmers
License
MIT
Homepage
https://github.com/TimDettmers/bitsandbytes
PyPI:
https://pypi.org/project/bitsandbytes/
GitHub Repo:
https://github.com/TimDettmers/bitsandbytes

Classifiers

  • Scientific/Engineering/Artificial Intelligence

Errors

A list of common bitsandbytes errors.

Code Examples

Here are some bitsandbytes code examples and snippets.
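As a starting point, here is a minimal sketch of swapping `torch.optim.Adam` for the library's 8-bit `bnb.optim.Adam8bit` optimizer. It assumes PyTorch is installed; since bitsandbytes requires a CUDA-capable GPU, the sketch falls back to the standard optimizer when the import fails, so it still runs on CPU.

```python
import torch
import torch.nn as nn

# bitsandbytes needs a CUDA-capable GPU; fall back to torch.optim.Adam
# when the package is unavailable so this sketch still runs anywhere.
try:
    import bitsandbytes as bnb
    AdamClass = bnb.optim.Adam8bit  # 8-bit optimizer states, drop-in API
except ImportError:
    AdamClass = torch.optim.Adam

model = nn.Linear(16, 4)
optimizer = AdamClass(model.parameters(), lr=1e-3)

# One training step on random data.
x = torch.randn(8, 16)
target = torch.randn(8, 4)
loss = nn.functional.mse_loss(model(x), target)
loss.backward()
optimizer.step()
optimizer.zero_grad()
print(loss.item())
```

The 8-bit optimizer keeps Adam's first and second moments in quantized form, which cuts optimizer-state memory substantially while exposing the same constructor and `step()`/`zero_grad()` interface as `torch.optim.Adam`.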

GitHub Issues

The bitsandbytes package has 205 open issues on GitHub:

  • Getting errors when attempting to run on remote cluster
  • UserWarning: WARNING: Compute capability < 7.5 detected! Only slow 8-bit matmul is supported for your GPU! warn(msg) Detected 8-bit loading: activating 8-bit loading for this model
  • RuntimeError: Something when wrong when trying to find file. Maybe you do not have a linux system?
  • add sort to get_compute_capabilities
  • Update cuda_install.sh
  • Added scipy to requirements.txt
  • error: identifier "__cudaDeviceSynchronizeDeprecationAvoidance" is undefined
  • Need help with train error after install
  • AdamW8 bit and Lion8bit not working - CUDA_SETUP: WARNING! libcudart.so not found in any environmental path.
  • Cuda ERROR
  • Example throws Tokenizer class LLaMATokenizer does not exist
  • 3.5x slow 4bit inference for tiiuae/falcon-7b-instruct on RTX 3060 as compared to 8bit
  • CUDA exception! Error code: no CUDA-capable device is detected
  • AttributeError: module 'bitsandbytes.nn' has no attribute 'Linear8bitLt'
  • what's the differences between torch.optim.Adam and bnb.nn.Adam32bit?

See more issues on GitHub