auto-gptq 0.7.1
An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.
Contents
Stars: 4372, Watchers: 4372, Forks: 467, Open Issues: 251. The AutoGPTQ/AutoGPTQ repo was created 1 year ago and the last code push was 4 days ago.
The project is very popular, with an impressive 4372 GitHub stars!
How to Install auto-gptq
You can install auto-gptq using pip:
```
pip install auto-gptq
```
or add it to a project with Poetry:
```
poetry add auto-gptq
```
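After installing, you can confirm the package imports and check the installed version. This is a minimal sketch; it uses only the standard library's `importlib.metadata`, which works for any installed distribution.
```python
# Verify the installation and report the installed version of auto-gptq.
from importlib.metadata import version

import auto_gptq  # raises ImportError if the package is not installed

print(version("auto-gptq"))  # e.g. 0.7.1
```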
Package Details
- Author: PanQiWei
- License:
- Homepage: https://github.com/PanQiWei/AutoGPTQ
- PyPI: https://pypi.org/project/auto-gptq/
- GitHub Repo: https://github.com/PanQiWei/AutoGPTQ
Code Examples
Here are some auto-gptq code examples and snippets.
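The sketch below follows the basic flow from the project's README: load a float model, quantize it with a small calibration set, save the quantized weights, and reload them for inference. The model name `facebook/opt-125m` is just the README's example; any supported Transformers causal LM will do, and a CUDA device is assumed for the `from_quantized` step.
```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

pretrained_model_dir = "facebook/opt-125m"  # example model; swap in your own
quantized_model_dir = "opt-125m-4bit"

tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir, use_fast=True)

# GPTQ needs calibration examples; a single short sentence is the bare
# minimum to make the flow runnable (use more, varied text in practice).
examples = [tokenizer("auto-gptq is an easy-to-use model quantization library.")]

quantize_config = BaseQuantizeConfig(
    bits=4,          # quantize weights to 4-bit
    group_size=128,  # quantization group size
    desc_act=False,  # disabling activation order speeds up inference
)

# Load the float model, run GPTQ quantization, and save the result.
model = AutoGPTQForCausalLM.from_pretrained(pretrained_model_dir, quantize_config)
model.quantize(examples)
model.save_quantized(quantized_model_dir, use_safetensors=True)

# Reload the quantized model (assumes a CUDA device) and generate.
model = AutoGPTQForCausalLM.from_quantized(quantized_model_dir, device="cuda:0")
inputs = tokenizer("auto_gptq is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs)[0]))
```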
GitHub Issues
The auto-gptq package has 251 open issues on GitHub. A sample of recent issue titles follows; a workaround sketch for the fused-attention issue appears after the list.
- 2 bit quant Quip
- xformers integration
- Will AutoGPTQ support Lora training for llama2?
- Apple silicon cannot install the AutoGPT
- Llama2-70b to autogptq error.
- [Feature] Modular quantization using AutoGPTQ
- Add exllama q4 kernel
- [BUG] Auto_GPTQ is not recognized in python script
- [BUG] gibberish text inside last oobabooga/text-generation-webui
- [BUG] RuntimeError: expected scalar type Half but found Float when I try to load LoRA to AutoGPTQ base model
- Llama 2 70B (with GQA) + inject_fused_attention = "Not enough values to unpack (expected 3, got 2)"
- [FEATURE] Merge Peft Adapter to base model
- CUDA extension are not installing
- [BUG] desc_act is not support for BaiCuan Model?
- [BUG]torch._C._LinAlgError: linalg.cholesky: The factorization could not be completed because the input is not positive-definite (the leading minor of order 18163 is not positive-definite).
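Several of the issues above concern loading models after quantization. For example, the Llama 2 70B "Not enough values to unpack (expected 3, got 2)" error is tied to grouped-query attention not matching the library's fused-attention injection, and the commonly reported workaround is to disable injection at load time. A minimal sketch, assuming a hypothetical local checkpoint path `llama-2-70b-gptq`:
```python
from auto_gptq import AutoGPTQForCausalLM

# "llama-2-70b-gptq" is a placeholder path to an already-quantized checkpoint.
model = AutoGPTQForCausalLM.from_quantized(
    "llama-2-70b-gptq",
    device="cuda:0",
    use_safetensors=True,
    inject_fused_attention=False,  # fused attention assumes MHA, not GQA
)
```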