Distributed Machine Learning

Description

Training machine learning models across multiple servers and/or GPUs.

Projects: 3

[Lines Committed vs. Age chart]

Project          Size Score   Trend Score   Byline
Analytics Zoo    4.25         1.75          Distributed TensorFlow, Keras, and PyTorch on Apache Spark/Flink & Ray.
Horovod          6.0          4.75          Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet (usage sketch below).
Ray              9.0          8.75          An open source framework that provides a simple, universal API for building distributed applications; packaged with RLlib, a scalable reinforcement learning library, and Tune, a scalable hyperparameter tuning library (usage sketch below).
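
Horovod's byline names the frameworks it wraps but not the workflow, so here is a minimal sketch of the usual data-parallel pattern with Horovod's PyTorch binding. The linear model, random tensors, learning rate, and step counts are placeholder assumptions, not taken from the source; the Horovod calls themselves (init, DistributedOptimizer, parameter broadcast) are the library's standard API.

import torch
import torch.nn as nn
import horovod.torch as hvd

hvd.init()  # set up the communicator; one process per worker/GPU
device = torch.device(
    f"cuda:{hvd.local_rank()}" if torch.cuda.is_available() else "cpu"
)

model = nn.Linear(10, 1).to(device)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Wrap the optimizer so gradients are averaged across workers via allreduce.
optimizer = hvd.DistributedOptimizer(
    optimizer, named_parameters=model.named_parameters()
)

# Start every worker from identical weights and optimizer state.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)

for step in range(100):
    # Stand-in for one shard of a real dataset; each rank would read its own shard.
    x = torch.randn(32, 10, device=device)
    y = torch.randn(32, 1, device=device)

    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()

    if hvd.rank() == 0 and step % 20 == 0:
        print(f"step {step}: loss {loss.item():.4f}")

Each worker runs this same script on its own data shard, launched for example with `horovodrun -np 4 python train.py`, and Horovod keeps the replicas in sync by allreducing gradients at each step.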
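
The "simple, universal API" in Ray's byline refers to its remote task (and actor) primitives: ordinary Python functions become distributed units of work via a decorator. The toy `square` function and the input values below are illustrative assumptions; Tune and RLlib, mentioned in the byline, are built on these same primitives.

import ray

ray.init()  # starts a local Ray runtime; on a cluster, ray.init(address="auto") joins it

@ray.remote
def square(x):
    # Executes in a Ray worker process, which may live on another machine.
    return x * x

# .remote() returns object references (futures) immediately; ray.get() waits for the values.
futures = [square.remote(i) for i in range(8)]
print(ray.get(futures))  # -> [0, 1, 4, 9, 16, 25, 36, 49]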