taichi 1.6.0


The Taichi Programming Language

Stars: 23532, Watchers: 23532, Forks: 2213, Open Issues: 710

The taichi-dev/taichi repo was created 6 years ago and the last code push was an hour ago.
The project is extremely popular with a mind-blowing 23,532 GitHub stars!

How to Install taichi

You can install taichi using pip:

pip install taichi

or add it to a project with Poetry:

poetry add taichi

Package Details

Author: Taichi developers
License: Apache Software License
GitHub Repo: https://github.com/taichi-dev/taichi


  • Games/Entertainment/Simulation
  • Multimedia/Graphics
  • Software Development/Compilers


A list of common taichi errors.

Code Examples

Here are some taichi code examples and snippets.

GitHub Issues

The taichi package has 710 open issues on GitHub.

  • [ci] Fix concurrent run issue
  • [Refactor] Expose runtime/snode ops properly
  • [Refactor] Remove KernelDefError, KernelArgError, and InvalidOperationError
  • [Refactor] Rename and move scoped profiler info under ti.profiler
  • [MISC] Add new issue template for bug reporting
  • Improve the error message when the type of an argument of the kernel doesn't match the type hint
  • [refactor] Remove dependency on get_current_program() in lang::Ndarray
  • [ci] Properly clean up self-hosted runners
  • [refactor] Remove bit_vectorize from top level.
  • [doc] Major revision to the field (advanced) document
  • GGUI example not working
  • [autodiff] Optimize the IB checker for global atomics and purely nested loops
  • Improve the image resize implementation in
  • [Refactor] Add require_version configuration in ti.init()
  • Incorrect implementation of polar decomposition in taichi._funcs.polar_decompose2d

See more issues on GitHub

Related Packages & Articles

scalene 1.5.23

Scalene: A high-resolution, low-overhead CPU, GPU, and memory profiler for Python


H2O, Fast Scalable Machine Learning, for Python

gpustat 1.1

A utility to monitor NVIDIA GPU status and usage

fastai 2.7.12

fastai simplifies training fast and accurate neural nets using modern best practices

deepspeed 0.10.0

DeepSpeed is a Python package developed by Microsoft that provides a deep learning optimization library designed to scale across multiple GPUs and servers. It is capable of training models with billions or even trillions of parameters, achieving excellent system throughput and efficiently scaling to thousands of GPUs.

DeepSpeed is particularly useful for training and inference of large language models, and it falls under the category of Machine Learning Frameworks and Libraries. It is designed to work with PyTorch and offers system innovations such as Zero Redundancy Optimizer (ZeRO), 3D parallelism, and model-parallelism to enable efficient training of large models.