open-clip-torch 2.26.1
Open reproduction of contrastive language-image pretraining (CLIP) and related methods.
Stars: 9985, Watchers: 9985, Forks: 962, Open Issues: 116. The mlfoundations/open_clip
repo was created 3 years ago and the last code push was 3 days ago.
The project is extremely popular, with a mind-blowing 9985 GitHub stars!
How to Install open_clip_torch
You can install open_clip_torch using pip
pip install open_clip_torch
or add it to a project with poetry
poetry add open_clip_torch
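After installing, a quick import check confirms the package is available (a minimal sketch; list_pretrained is part of the library's public API, though the exact set of returned entries depends on your installed version):

import open_clip

# Print a few of the (architecture, pretrained-tag) pairs the library knows about.
print(open_clip.list_pretrained()[:5])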
Package Details
- Author
- Gabriel Ilharco, Mitchell Wortsman, Romain Beaumont
- License
- MIT
- Homepage
- https://github.com/mlfoundations/open_clip
- PyPI:
- https://pypi.org/project/open-clip-torch/
- GitHub Repo:
- https://github.com/mlfoundations/open_clip
Classifiers
- Scientific/Engineering
- Scientific/Engineering/Artificial Intelligence
- Software Development
- Software Development/Libraries
- Software Development/Libraries/Python Modules
Related Packages
Errors
A list of common open_clip_torch errors.
Code Examples
Here are some open_clip_torch code examples and snippets.
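For instance, the project's documentation shows a zero-shot classification flow along these lines (a minimal sketch; the laion2b_s34b_b79k pretrained tag and the image.png path are placeholders you would substitute with your own choices):

import torch
from PIL import Image
import open_clip

# Load a pretrained model together with its matching image preprocessing transform.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
model.eval()
tokenizer = open_clip.get_tokenizer("ViT-B-32")

image = preprocess(Image.open("image.png")).unsqueeze(0)  # placeholder image path
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize embeddings so the dot product below is a cosine similarity.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", text_probs)

The probabilities indicate which caption best matches the image, which is the core zero-shot use case CLIP models are trained for.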
GitHub Issues
The open_clip_torch package has 116 open issues on GitHub.
- Support for training on spot instances
- Pretrained weights not found
- Convnext not found
- Intermediate Checkpoints
- This machine is not connected to the Internet; how to adapt the code to prevent the pretrained model from being downloaded online (see the sketch after this list)
- Save intermediate checkpoints when sampling without replacement (take 2)
- Table in README
- Implement Locking of Text Tower for CLIP Models
- Entry Not Found Error for JSON file when using OpenCLIP ViT-H/14
- How to use the open_clip model I have trained on my own dataset?
- Add text-text (audio) CLIP
- [WIP] Testing the lion optimizer
- Add video support
- [WIP] Support FSDP
- Add TextTextCLIP
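Regarding the offline question above: one common workaround is to pass a local checkpoint path as the pretrained argument, which the library accepts in place of a hub tag so nothing is fetched over the network (a hedged sketch; the checkpoint path is a placeholder for weights you have already downloaded):

import open_clip

# Load weights from a local file instead of downloading a pretrained tag.
# "/path/to/checkpoint.pt" is a placeholder for a checkpoint you already have on disk.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="/path/to/checkpoint.pt"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")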