Distributed Machine Learning

Description

Training machine learning models across multiple servers and/or GPUs.

Projects (3)



| Project | Size Score | Trend Score | Byline |
| --- | --- | --- | --- |
| Analytics Zoo | 5.0 | 8.25 | Distributed TensorFlow, Keras, and PyTorch on Apache Spark/Flink & Ray |
| Horovod | 5.25 | 5.0 | Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet. |
| Ray | 8.0 | 9.0 | An open source framework that provides a simple, universal API for building distributed applications. Ray is packaged with RLlib, a scalable reinforcement learning library, and Tune, a scalable hyperparameter tuning library. |