dgl 2.1.0

Deep Graph Library

Stars: 12959, Watchers: 12959, Forks: 2950, Open Issues: 500

The dmlc/dgl repo was created 5 years ago, and the last code push was an hour ago.
The project is extremely popular, with 12959 GitHub stars.

How to Install dgl

You can install dgl with pip:

pip install dgl

or add it to a project with Poetry:

poetry add dgl

Package Details

Author:
License: Apache-2.0
Homepage: https://github.com/dmlc/dgl
PyPI: https://pypi.org/project/dgl/
GitHub Repo: https://github.com/dmlc/dgl

Classifiers

No classifiers are listed for dgl on PyPI yet.

Errors

A list of common dgl errors.

Code Examples

Here are some dgl code examples and snippets.

GitHub Issues

The dgl package has 500 open issues on GitHub.

  • OSError: Too many open files. Occurs when running DistDGL with 'num_samplers' = 0
  • [NN] Fix GINConv
  • [Feature] Add check for aggregator_type enum in SAGEConv init
  • DGL Enter
  • [Feature] Launch Long Live Servers and Multiple Client Groups
  • Fix dist example padding problem
  • Building from source failed
  • Error should be raised if aggregator_type provided is outside allowed values
  • How to download the guide and tutorials in pdf?
  • [NN] HeteroLinearLayer and HeteroEmbeddingLayer
  • [Feature] CUDA UVA sampling for MultiLayerNeighborSampler
  • [Feature] Add reverse edge types
  • wrong if conditions in dist_graph
  • [Bug or feature request?] Semantics of ndata and edata is confusing when updates are involved
  • dgl.add_reverse_edges duplicates self-loops

See more issues on GitHub

Related Packages & Articles

deepspeed 0.14.2

DeepSpeed is a Python package developed by Microsoft that provides a deep learning optimization library designed to scale across multiple GPUs and servers. It is capable of training models with billions or even trillions of parameters, achieving excellent system throughput and efficiently scaling to thousands of GPUs.

DeepSpeed is particularly useful for training and inference of large language models, and it falls under the category of Machine Learning Frameworks and Libraries. It is designed to work with PyTorch and offers system innovations such as Zero Redundancy Optimizer (ZeRO), 3D parallelism, and model-parallelism to enable efficient training of large models.

deepdish 0.3.7

Deep Learning experiments from University of Chicago.

datasets 2.18.0

HuggingFace community-driven open-source library of datasets

clearml 1.15.1

ClearML - Auto-Magical Experiment Manager, Version Control, and MLOps for AI

barbar 0.2.1

Progress bar for deep learning training iterations

albumentations 1.4.4

An efficient library for image augmentation, providing extensive transformations to support machine learning and computer vision tasks.