Contents

ludwig 0.10.4

Declarative machine learning: End-to-end machine learning pipelines using data-driven configurations

Stars: 11133, Watchers: 11133, Forks: 1193, Open Issues: 362

The ludwig-ai/ludwig repo was created 5 years ago and the last code push was 5 days ago.
The project is extremely popular, with 11133 GitHub stars.

How to Install ludwig

You can install ludwig using pip:

pip install ludwig

or add it to a project with Poetry:

poetry add ludwig

Package Details

Author
Piero Molino
License
Apache 2.0
Homepage
https://github.com/ludwig-ai/ludwig
PyPI:
https://pypi.org/project/ludwig/
GitHub Repo:
https://github.com/ludwig-ai/ludwig

Errors

A list of common ludwig errors.

Code Examples

Here are some ludwig code examples and snippets.
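Ludwig is declarative: instead of writing model code, you describe the dataset's input and output features in a configuration file and Ludwig assembles and trains the model. A minimal sketch of such a config follows; the column names `text` and `label`, the file names, and the epoch count are illustrative assumptions, not taken from this page:

```yaml
# config.yaml — minimal Ludwig configuration sketch (hypothetical column names)
input_features:
  - name: text        # dataset column used as model input
    type: text
output_features:
  - name: label       # dataset column to predict
    type: category
trainer:
  epochs: 5           # keep training short for a quick first run
```

With the package installed, a config like this can be trained from the command line with `ludwig train --config config.yaml --dataset data.csv`, or programmatically via the `LudwigModel` class in `ludwig.api`.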

GitHub Issues

The ludwig package has 362 open issues on GitHub.

  • Update comment for predict to update Ludwig docs
  • [bug] Support preprocessing datetime.date date features
  • Add effective_batch_size to auto-adjust gradient accumulation
  • Lamma2 training on dataset downloaded from Huggingface.
  • Implement batch size tuning in for None type LLM trainer (used for batch inference)
  • [WIP] Enable strict schema enforcement
  • Bug in the Tutorial of Tabular Data Classification
  • ValueError: Unexpected keyword arguments: top_k
  • Re-enable Horovod installation and unit tests for torch nightly.
  • Missing documentation for Ludwig Explainer
  • [llm_text_generation] RuntimeError: Expected all tensors to be on the same device,
  • Image Classification: Config
  • Initial implementation of DaftDataFrameEngine
  • refactor: Remove dict support for initializer fields. (2/2)
  • Not uploading confusion_matrix (and others) figure to Comet ML

See more issues on GitHub

Related Packages & Articles

deeplake 3.9.26

Deep Lake is a Database for AI powered by a unique storage format optimized for deep-learning and Large Language Model (LLM) based applications. It simplifies the deployment of enterprise-grade LLM-based products by offering storage for all data types (embeddings, audio, text, videos, images, pdfs, annotations, etc.), querying and vector search, data streaming while training models at scale, data versioning and lineage for all workloads, and integrations with popular tools such as LangChain, LlamaIndex, Weights & Biases, and many more.

onnx 1.17.0

Open Neural Network Exchange (ONNX) is an open ecosystem that empowers AI developers to choose the right tools as their project evolves. ONNX provides an open source format for AI models, both deep learning and traditional ML. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types. Currently we focus on the capabilities needed for inferencing (scoring).

kornia 0.7.3

Open Source Differentiable Computer Vision Library for PyTorch

horovod 0.28.1

Horovod is a powerful distributed training framework for Python that allows you to train deep learning models across multiple GPUs and servers quickly and efficiently. It falls under the category of distributed computing libraries. Built on top of TensorFlow, PyTorch, and other popular deep learning frameworks, Horovod simplifies the process of scaling up your model training by handling the complexities of distributed training under the hood.

datasets 3.0.1

HuggingFace community-driven open-source library of datasets

spacy 3.8.2

Industrial-strength Natural Language Processing (NLP) in Python