ydata-profiling 4.10.0

Generate profile report for pandas DataFrame

Stars: 12449, Watchers: 12449, Forks: 1678, Open Issues: 258

The ydataai/ydata-profiling repo was created 8 years ago and the last code push was 1 week ago.
The project is extremely popular, with a mind-blowing 12,449 GitHub stars!

How to Install ydata-profiling

You can install ydata-profiling using pip

pip install ydata-profiling

or add it to a project with poetry

poetry add ydata-profiling
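
After installation, a quick sanity check is to import the package and print its version. This assumes the package exposes a __version__ attribute, which recent releases do.

import ydata_profiling

# Should print the installed release, e.g. 4.10.0.
print(ydata_profiling.__version__)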

Package Details

  • Author: YData Labs Inc
  • License: MIT
  • Homepage: https://github.com/ydataai/ydata-profiling
  • PyPI: https://pypi.org/project/ydata-profiling/
  • GitHub Repo: https://github.com/ydataai/ydata-profiling

Classifiers

  • Scientific/Engineering
  • Software Development/Build Tools

Errors

A list of common ydata-profiling errors.

Code Examples

Here are some ydata-profiling code examples and snippets.
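
The snippet below is a minimal sketch of the package's core workflow: build a pandas DataFrame, create a ProfileReport, and export it to HTML. The column names and the random example data are placeholders invented for illustration; ProfileReport, to_file, to_widgets, and to_notebook_iframe are part of the documented ydata-profiling API.

import numpy as np
import pandas as pd
from ydata_profiling import ProfileReport

# Small synthetic DataFrame to profile (placeholder data).
df = pd.DataFrame({
    "age": np.random.randint(18, 90, size=500),
    "income": np.random.normal(50_000, 15_000, size=500),
    "segment": np.random.choice(["a", "b", "c"], size=500),
})

# Build the report; the title argument is optional.
profile = ProfileReport(df, title="Example Profiling Report")

# Save a standalone HTML report. In a notebook you can instead call
# profile.to_widgets() or profile.to_notebook_iframe() to render inline.
profile.to_file("example_report.html")

For very wide or very large datasets, passing minimal=True to ProfileReport disables the more expensive correlation and interaction computations, which keeps report generation fast.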

GitHub Issues

The ydata-profiling package has 258 open issues on GitHub.

  • fix: ignore none alias name when render categorical
  • chores(deps): upgrade to pydantic-2
  • MemoryError for particular input WITHOUT large outliers
  • Feature Request: box plots
  • feat: fist version of the gap analysis tab for ts
  • chore(deps): update dependency pydantic to v2
  • chore(deps): update dependency coverage to v7
  • Bug Report: profile.to_widgets() or .to_html() hangs with ndarray-type field from BQ repeated record
  • Bug Report
  • Bug Report
  • Bug Report: Colab tuto doesn't work anymore
  • fix: {{ file_name }} error in HTML wrapper
  • TypeError: type object got multiple values for keyword argument 'visible'
  • Comparing datetime and str columns crashes with TypeError
  • Further analysis

See more issues on GitHub

Related Packages & Articles

sweetviz 2.3.1

A pandas-based library to visualize and compare datasets.

pyoptimus 23.5.0b0

PyOptimus is a Python library that brings together the power of various data processing engines like Pandas, Dask, cuDF, Dask-cuDF, Vaex, and PySpark under a single, easy-to-use API. It offers over 100 functions for data cleaning and processing, including handling strings, processing dates, URLs, and emails. PyOptimus also provides out-of-the-box functions for data exploration and quality fixing. One of the key features of PyOptimus is its ability to handle large datasets efficiently, allowing you to use the same code to process data on your laptop or on a remote cluster of GPUs.

optimuspyspark 2.2.32

Optimus is the missing framework for cleaning and pre-processing data in a distributed fashion with PySpark.

fiftyone 1.0.0

FiftyOne: the open-source tool for building high-quality datasets and computer vision models