Description
This library aims to be a complete deep learning framework with extreme flexibility, written in Rust. The goal is to satisfy researchers as well as practitioners, making it easier to experiment with, train, and deploy your models.
**Disclaimer**
Burn is currently in active development, and there will be breaking changes. While any resulting issues are likely to be easy to fix, there are no guarantees at this stage.
Sections
- Features
- Get Started
- Components
- License
Features
- Flexible and intuitive custom neural network module 🔥
- Training with full support for `metric`, `logging` and `checkpointing` 📈
- Tensor crate with backends as plugins 🔧
- Dataset crate with multiple utilities and sources 📚
Get Started
The best way to get started with `burn` is to clone the repo and play with the examples. It may also be a good idea to take a look at the main components of `burn` to get a quick overview of the fundamental building blocks.
Examples
For now there is only one example, but more are on the way 💪.
- MNIST: train a model on CPU/GPU using different backends.
Components
Knowing the main components will be of great help when starting to play with `burn`.
Backend
Almost everything is based on the `Backend` trait, which allows you to run tensor operations with different implementations without having to change your code. A backend does not necessarily have autodiff capabilities; the `ADBackend` trait is there to specify when autodiff is required.
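As a quick preview of that pattern (the full example follows in the Tensor section below), a backend without autodiff capabilities can be wrapped in a decorator to obtain one that implements `ADBackend`. This minimal sketch only reuses types that appear later in this README:

```rust
use burn_autodiff::ADBackendDecorator;
use burn_ndarray::NdArrayBackend;

// NdArrayBackend<f32> implements Backend but not ADBackend.
// Wrapping it in ADBackendDecorator yields a backend with autodiff,
// so the same generic code can also compute gradients.
type ADNdArrayBackend = ADBackendDecorator<NdArrayBackend<f32>>;
```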
Tensor
The `Tensor` struct is at the core of the `burn` framework. It takes two generic parameters: the `Backend` and the number of dimensions `D`.
Backpropagation is also supported on any backend by making it auto differentiable using a simple decorator.
```rust
use burn::tensor::backend::{ADBackend, Backend};
use burn::tensor::{Distribution, Tensor};
use burn_autodiff::ADBackendDecorator;
use burn_ndarray::NdArrayBackend;
use burn_tch::TchBackend;

fn simple_function<B: Backend>() -> Tensor<B, 2> {
    let x = Tensor::<B, 2>::random([3, 3], Distribution::Standard);
    let y = Tensor::<B, 2>::random([3, 3], Distribution::Standard);

    x.matmul(&y)
}

fn simple_function_grads<B: ADBackend>() -> B::Gradients {
    let z = simple_function::<B>();

    z.backward()
}

fn main() {
    let _z = simple_function::<NdArrayBackend<f32>>(); // Compiles
    let _z = simple_function::<TchBackend<f32>>(); // Compiles

    let _grads = simple_function_grads::<NdArrayBackend<f32>>(); // Doesn't compile
    let _grads = simple_function_grads::<TchBackend<f32>>(); // Doesn't compile

    type ADNdArrayBackend = ADBackendDecorator<NdArrayBackend<f32>>;
    type ADTchBackend = ADBackendDecorator<TchBackend<f32>>;

    let _grads = simple_function_grads::<ADNdArrayBackend>(); // Compiles
    let _grads = simple_function_grads::<ADTchBackend>(); // Compiles
}
```
Module
The `Module` derive lets you create your own neural network modules, similar to PyTorch.
```rust
use burn::nn;
use burn::module::{Module, Param};
use burn::tensor::backend::Backend;

#[derive(Module, Debug)]
struct MyModule<B: Backend> {
    my_param: Param<nn::Linear<B>>,
    repeat: usize,
}
```
Note that only the fields wrapped inside `Param` are updated during training; the other fields should implement `Clone`.
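For instance, here is a minimal sketch (the `MyParent` name and the `name` field are illustrative, not part of the library) showing that sub-modules compose by being wrapped in `Param` as well, while plain fields stay ordinary Rust data:

```rust
use burn::module::{Module, Param};
use burn::tensor::backend::Backend;

#[derive(Module, Debug)]
struct MyParent<B: Backend> {
    // Trainable: wrapped in Param, so its parameters are updated during training.
    child: Param<MyModule<B>>,
    // Not trainable: plain data that only needs to implement Clone.
    name: String,
}
```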
Forward
The `Forward` trait can also be implemented by your module.
```rust
use burn::module::Forward;
use burn::tensor::backend::Backend;
use burn::tensor::Tensor;

impl<B: Backend> Forward<Tensor<B, 2>, Tensor<B, 2>> for MyModule<B> {
    fn forward(&self, input: Tensor<B, 2>) -> Tensor<B, 2> {
        let mut x = input;

        for _ in 0..self.repeat {
            x = self.my_param.forward(x);
        }

        x
    }
}
```
Note that you can implement the `Forward` trait multiple times with different inputs and outputs.
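As an illustration, here is a minimal sketch of a second implementation on the same module, this time taking a pair of tensors as input; combining them with `matmul` is only an assumption for the example:

```rust
// Sketch: another Forward implementation for MyModule with a different
// input type. The pairwise matmul before the linear layer is illustrative.
impl<B: Backend> Forward<(Tensor<B, 2>, Tensor<B, 2>), Tensor<B, 2>> for MyModule<B> {
    fn forward(&self, (lhs, rhs): (Tensor<B, 2>, Tensor<B, 2>)) -> Tensor<B, 2> {
        self.my_param.forward(lhs.matmul(&rhs))
    }
}
```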
Config
The `Config` derive lets you define serializable and deserializable configurations or hyper-parameters for your modules or any components.
```rust
use burn::config::Config;

#[derive(Config)]
struct MyConfig {
    #[config(default = 1.0e-6)]
    pub epsilon: f64,
    pub dim: usize,
}
```
The derive also adds useful methods to your config.
```rust
fn main() {
    let config = MyConfig::new(100);
    println!("{}", config.epsilon); // 1.0e-6
    println!("{}", config.dim); // 100

    let config = MyConfig::new(100).with_epsilon(1.0e-8);
    println!("{}", config.epsilon); // 1.0e-8
}
```
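Since the point of these configurations is (de)serialization, they can be persisted alongside your training artifacts. A hypothetical sketch, assuming the derive exposes `save` and `load` helpers (the method names, signatures, and file path are assumptions, not confirmed by this README):

```rust
fn persist() -> std::io::Result<()> {
    let config = MyConfig::new(100);

    // Assumed helpers: write the config to a file and read it back.
    config.save("/tmp/my_config.json")?;
    let _restored = MyConfig::load("/tmp/my_config.json");

    Ok(())
}
```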
Learner
The `Learner` is the main struct that lets you train a neural network with support for `logging`, `metric`, `checkpointing` and more. In order to create a learner, you must use the `LearnerBuilder`.
```rust
use burn::train::LearnerBuilder;
use burn::train::metric::{AccuracyMetric, LossMetric};

fn main() {
    let dataloader_train = ...;
    let dataloader_valid = ...;

    let model = ...;
    let optim = ...;

    let learner = LearnerBuilder::new("/tmp/artifact_dir")
        .metric_train_plot(AccuracyMetric::new())
        .metric_valid_plot(AccuracyMetric::new())
        .metric_train(LossMetric::new())
        .metric_valid(LossMetric::new())
        .with_file_checkpointer::<f32>(2)
        .num_epochs(10)
        .build(model, optim);

    let _model_trained = learner.fit(dataloader_train, dataloader_valid);
}
```
See the MNIST example for real-world usage.
License
Burn is distributed under the terms of both the MIT license and the Apache License (Version 2.0). See [LICENSE-APACHE](./LICENSE-APACHE) and [LICENSE-MIT](./LICENSE-MIT) for details. Opening a pull request is assumed to signal agreement with these licensing terms.