Commit: Fixing cargo doc

coreylowman committed Oct 25, 2023
1 parent 929602e commit 2ac9dc9
Showing 24 changed files with 41 additions and 42 deletions.
26 changes: 13 additions & 13 deletions dfdx/src/lib.rs
@@ -5,10 +5,10 @@
//! that a model is stored on.
//!
//! For example, a linear model has a couple pieces:
-//! 1. The architecture configuration type: [LinearConfig]
-//! 2. The actual built type that contains the parameters: [Linear]
+//! 1. The architecture configuration type: [nn::LinearConfig]
+//! 2. The actual built type that contains the parameters: [nn::Linear]
//!
-//! There's a third piece for convenience: [LinearConstConfig], which lets you specify dimensions at compile time.
+//! There's a third piece for convenience: [nn::LinearConstConfig], which lets you specify dimensions at compile time.
//!
//! For specifying architecture, you just need the dimensions for the linear, but not the device/dtype:
//! ```rust
@@ -22,11 +22,11 @@
//! ```
//! **Note** that we don't have any idea on what device or what dtype this will be.
//!
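A minimal sketch of what such an architecture declaration can look like, assuming `LinearConstConfig` can be constructed via `Default` (the constructor details are an assumption, not taken from this commit):

```rust
use dfdx::prelude::*;

fn main() {
    // Input and output dimensions are fixed at compile time; note that
    // no device and no dtype appear anywhere in the type.
    let _arch: LinearConstConfig<3, 5> = Default::default();
}
```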
-//! When we build this configuration into a [Linear] object, it will be placed on a device and have a certain dtype.
+//! When we build this configuration into a [nn::Linear] object, it will be placed on a device and have a certain dtype.
//!
//! # Building a model from an architecture
//!
-//! We will use [BuildModuleExt::build_module()], an extension trait on devices, to actually construct a model.
+//! We will use [nn::BuildModuleExt::build_module()], an extension trait on devices, to actually construct a model.
//!
//! ```rust
//! # use dfdx::prelude::*;
@@ -40,7 +40,7 @@
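A rough sketch of that build step, under the same assumptions as above (the exact `build_module` signature is inferred from the surrounding text):

```rust
use dfdx::prelude::*;

fn main() {
    let dev: Cpu = Default::default();
    let arch: LinearConstConfig<3, 5> = Default::default();
    // The device (Cpu) and dtype (f32) are chosen here, when the config
    // is turned into an actual module with allocated parameters.
    let _model = dev.build_module::<f32>(arch);
}
```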
//!
//! # Using a model
//!
-//! There are many things you can do with models. The main action is calling [Module::forward()] and [Module::forward_mut()]
+//! There are many things you can do with models. The main action is calling [nn::Module::forward()] and [nn::Module::forward_mut()]
//! during inference and training.
//!
//! ```rust
@@ -63,9 +63,9 @@
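A sketch of both entry points, assuming the model built above and that input tensors can be created with `sample_normal`:

```rust
use dfdx::prelude::*;

fn main() {
    let dev: Cpu = Default::default();
    let arch: LinearConstConfig<3, 5> = Default::default();
    let mut model = dev.build_module::<f32>(arch);

    let x: Tensor<Rank1<3>, f32, Cpu> = dev.sample_normal();
    // Inference: forward() only needs &self.
    let _y = model.forward(x.clone());
    // Training: forward_mut() takes &mut self, which layers like
    // Dropout and BatchNorm rely on to update internal state.
    let _y = model.forward_mut(x);
}
```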
//!
//! Under the hood, the code generated for Sequential vs tuples is identical.
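For example, a small MLP can be written directly as a tuple of configs (a sketch under the same assumptions as the snippets above):

```rust
use dfdx::prelude::*;

fn main() {
    // A tuple of configs is itself a config; the layers run in order.
    type Mlp = (LinearConstConfig<3, 5>, ReLU, LinearConstConfig<5, 2>);

    let dev: Cpu = Default::default();
    let arch: Mlp = Default::default();
    let model = dev.build_module::<f32>(arch);

    let x: Tensor<Rank1<3>, f32, Cpu> = dev.sample_normal();
    let _y: Tensor<Rank1<2>, f32, Cpu> = model.forward(x);
}
```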
//!
-//! ## Deriving [Sequential]
+//! ## Deriving [nn::Sequential]
//!
-//! See [Sequential] for more detailed information.
+//! See [nn::Sequential] for more detailed information.
//!
//! ```rust
//! # use dfdx::prelude::*;
@@ -114,16 +114,16 @@
//!
//! # Optimizers and Gradients
//!
-//! *See [optim] for more information*
+//! *See [nn::optim] for more information*
//!
//! dfdx-nn supports a number of the standard optimizers:
//!
//! | Optimizer | dfdx | pytorch |
//! | --- | --- | --- |
-//! | SGD | [optim::Sgd] | `torch.optim.SGD` |
-//! | Adam | [optim::Adam] | `torch.optim.Adam` |
-//! | AdamW | [optim::Adam] with [optim::WeightDecay::Decoupled] | `torch.optim.AdamW` |
-//! | RMSprop | [optim::RMSprop] | `torch.optim.RMSprop` |
+//! | SGD | [nn::optim::Sgd] | `torch.optim.SGD` |
+//! | Adam | [nn::optim::Adam] | `torch.optim.Adam` |
+//! | AdamW | [nn::optim::Adam] with [nn::optim::WeightDecay::Decoupled] | `torch.optim.AdamW` |
+//! | RMSprop | [nn::optim::RMSprop] | `torch.optim.RMSprop` |
//!
//! You can use optimizers to optimize neural networks (or even tensors!). Here's
//! a simple example of how to do this:
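As an illustration only, here is a rough sketch of a single optimization step; the `Sgd::new` constructor, the `SgdConfig` fields, and the gradient/tracing calls are assumptions carried over from earlier dfdx releases rather than taken from this commit:

```rust
use dfdx::prelude::*;

fn main() {
    let dev: Cpu = Default::default();
    type Mlp = (LinearConstConfig<3, 5>, ReLU, LinearConstConfig<5, 2>);
    let arch: Mlp = Default::default();
    let mut model = dev.build_module::<f32>(arch);

    // Assumed constructor: an optimizer is built from the model and a config.
    let mut opt = dfdx::nn::optim::Sgd::new(
        &model,
        SgdConfig {
            lr: 1e-2,
            momentum: Some(Momentum::Nesterov(0.9)),
            weight_decay: None,
        },
    );

    let mut grads = model.alloc_grads();
    let x: Tensor<Rank2<8, 3>, f32, Cpu> = dev.sample_normal();

    // Trace the input so gradients are recorded, run the model,
    // and reduce to a scalar loss.
    let y = model.forward_mut(x.traced(grads));
    let loss = y.square().mean();

    // Backprop, apply the update, then reset gradients for the next step.
    grads = loss.backward();
    opt.update(&mut model, &grads).unwrap();
    model.zero_grads(&mut grads);
}
```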
2 changes: 1 addition & 1 deletion dfdx/src/nn/layers/abs.rs
@@ -1,6 +1,6 @@
use crate::prelude::*;

-/// Calls [dfdx::tensor_ops::abs()]
+/// Calls [crate::tensor_ops::abs()]
#[derive(Default, Debug, Clone, Copy, crate::nn::CustomModule)]
pub struct Abs;
impl<S: Shape, E: Dtype, D: Device<E>, T: Tape<E, D>> crate::nn::Module<Tensor<S, E, D, T>>
4 changes: 2 additions & 2 deletions dfdx/src/nn/layers/batch_norm1d.rs
@@ -12,8 +12,8 @@ use crate::prelude::*;
/// # Training vs Inference
///
/// BatchNorm1D supports the following cases (see sections below for more details):
-/// 1. **Training**: [crate::Module::forward_mut()] and [OwnedTape] on the input tensor
-/// 2. **Inference**: [crate::Module::forward()] and [NoneTape] on the input tensor.
+/// 1. **Training**: [crate::nn::Module::forward_mut()] and [OwnedTape] on the input tensor
+/// 2. **Inference**: [crate::nn::Module::forward()] and [NoneTape] on the input tensor.
///
/// Examples:
/// ```rust
4 changes: 2 additions & 2 deletions dfdx/src/nn/layers/batch_norm2d.rs
@@ -12,8 +12,8 @@ use crate::prelude::*;
/// # Training vs Inference
///
/// BatchNorm2D supports the following cases (see sections below for more details):
-/// 1. **Training**: [crate::Module::forward_mut()] and [OwnedTape] on the input tensor
-/// 2. **Inference**: [crate::Module::forward()] and [NoneTape] on the input tensor.
+/// 1. **Training**: [crate::nn::Module::forward_mut()] and [OwnedTape] on the input tensor
+/// 2. **Inference**: [crate::nn::Module::forward()] and [NoneTape] on the input tensor.
///
/// *NOTE: ModuleMut/NoneTape, and Module/OwnedTape will fail to compile.*
///
2 changes: 1 addition & 1 deletion dfdx/src/nn/layers/conv2d.rs
@@ -19,7 +19,7 @@ use crate::prelude::*;
/// };
/// ```
///
-/// To create a biased conv, combine with [crate::Bias2D].
+/// To create a biased conv, combine with [crate::nn::Bias2D].
///
/// Generics:
/// - `InChan`: The number of input channels in an image.
2 changes: 1 addition & 1 deletion dfdx/src/nn/layers/conv_trans2d.rs
@@ -4,7 +4,7 @@ use crate::prelude::*;
///
/// **Pytorch Equivalent**: `torch.nn.ConvTranspose2d(..., bias=False)`
///
-/// To create a biased conv, combine with [crate::Bias2D].
+/// To create a biased conv, combine with [crate::nn::Bias2D].
///
/// Generics:
/// - `InChan`: The number of input channels in an image.
2 changes: 1 addition & 1 deletion dfdx/src/nn/layers/cos.rs
@@ -1,6 +1,6 @@
use crate::prelude::*;

-/// Calls [dfdx::tensor_ops::cos()].
+/// Calls [crate::tensor_ops::cos()].
#[derive(Default, Debug, Clone, Copy, CustomModule)]
pub struct Cos;
impl<S: Shape, E: Dtype, D: Device<E>, T: Tape<E, D>> Module<Tensor<S, E, D, T>> for Cos {
5 changes: 2 additions & 3 deletions dfdx/src/nn/layers/dropout.rs
@@ -1,6 +1,6 @@
use crate::prelude::*;

-/// Calls [dfdx::tensor_ops::dropout()] with `p = 1.0 / N` in [Module::forward_mut()], and does nothing in [Module::forward()].
+/// Calls [crate::tensor_ops::dropout()] with `p = 1.0 / N` in [Module::forward_mut()], and does nothing in [Module::forward()].
///
/// Generics:
/// - `N`: p is set as `1.0 / N`
@@ -44,12 +44,11 @@ impl<const N: usize, S: Shape, E: Dtype, D: Device<E>, T: Tape<E, D>> Module<Ten
}
}

-/// Calls [dfdx::tensor_ops::dropout()] in [Module::forward_mut()], and does nothing in [Module::forward()].
+/// Calls [crate::tensor_ops::dropout()] in [crate::nn::Module::forward_mut()], and does nothing in [crate::nn::Module::forward()].
///
/// Examples:
/// ```rust
/// # use dfdx::prelude::*;
-/// # use dfdx::*;
/// # let dev: Cpu = Default::default();
/// let mut dropout = Dropout { p: 0.5 };
/// let grads = dropout.alloc_grads();
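A short sketch of the training/inference distinction described above, continuing in the same spirit as the example (the `traced` and `sample_normal` calls are assumptions):

```rust
use dfdx::prelude::*;

fn main() {
    let dev: Cpu = Default::default();
    let mut dropout = Dropout { p: 0.5 };
    let grads = dropout.alloc_grads();

    let x: Tensor<Rank2<4, 8>, f32, Cpu> = dev.sample_normal();
    // forward_mut(): elements are randomly zeroed with probability p.
    let _train = dropout.forward_mut(x.clone().traced(grads));
    // forward(): dropout is a no-op during inference.
    let _eval = dropout.forward(x);
}
```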
2 changes: 1 addition & 1 deletion dfdx/src/nn/layers/exp.rs
@@ -1,6 +1,6 @@
use crate::prelude::*;

-/// Calls [dfdx::tensor_ops::exp()].
+/// Calls [crate::tensor_ops::exp()].
#[derive(Default, Debug, Clone, Copy, CustomModule)]
pub struct Exp;
impl<S: Shape, E: Dtype, D: Device<E>, T: Tape<E, D>> Module<Tensor<S, E, D, T>> for Exp {
4 changes: 2 additions & 2 deletions dfdx/src/nn/layers/gelu.rs
@@ -1,6 +1,6 @@
use crate::prelude::*;

-/// Calls [dfdx::tensor_ops::fast_gelu()].
+/// Calls [crate::tensor_ops::fast_gelu()].
#[derive(Default, Debug, Clone, Copy, CustomModule)]
pub struct FastGeLU;
impl<S: Shape, E: Dtype, D: Device<E>, T: Tape<E, D>> Module<Tensor<S, E, D, T>> for FastGeLU {
@@ -11,7 +11,7 @@ impl<S: Shape, E: Dtype, D: Device<E>, T: Tape<E, D>> Module<Tensor<S, E, D, T>>
}
}

-/// Calls [dfdx::tensor_ops::accurate_gelu()].
+/// Calls [crate::tensor_ops::accurate_gelu()].
#[derive(Default, Debug, Clone, Copy, CustomModule)]
pub struct AccurateGeLU;
impl<S: Shape, E: Dtype, D: Device<E>, T: Tape<E, D>> Module<Tensor<S, E, D, T>> for AccurateGeLU {
2 changes: 1 addition & 1 deletion dfdx/src/nn/layers/ln.rs
@@ -1,6 +1,6 @@
use crate::prelude::*;

-/// Calls [dfdx::tensor_ops::ln()].
+/// Calls [crate::tensor_ops::ln()].
#[derive(Default, Debug, Clone, Copy, CustomModule)]
pub struct Ln;
impl<S: Shape, E: Dtype, D: Device<E>, T: Tape<E, D>> Module<Tensor<S, E, D, T>> for Ln {
2 changes: 1 addition & 1 deletion dfdx/src/nn/layers/log_softmax.rs
@@ -1,6 +1,6 @@
use crate::prelude::*;

-/// Calls [dfdx::tensor_ops::log_softmax()].
+/// Calls [crate::tensor_ops::log_softmax()].
#[derive(Default, Debug, Clone, Copy, CustomModule)]
pub struct LogSoftmax;
impl<S: Shape, E: Dtype, D: Device<E>, T: Tape<E, D>> Module<Tensor<S, E, D, T>> for LogSoftmax {
2 changes: 1 addition & 1 deletion dfdx/src/nn/layers/prelu.rs
@@ -1,6 +1,6 @@
use crate::prelude::*;

-/// Calls [dfdx::tensor_ops::prelu()] with learnable value.
+/// Calls [crate::tensor_ops::prelu()] with learnable value.
#[derive(Debug, Clone, Copy)]
pub struct PReLUConfig(pub f64);

2 changes: 1 addition & 1 deletion dfdx/src/nn/layers/prelu1d.rs
@@ -1,6 +1,6 @@
use crate::prelude::*;

-/// Calls [dfdx::tensor_ops::prelu()] with learnable values along second dimension.
+/// Calls [crate::tensor_ops::prelu()] with learnable values along second dimension.
#[derive(Debug, Clone, Copy)]
pub struct PReLU1DConfig<C: Dim> {
pub a: f64,
2 changes: 1 addition & 1 deletion dfdx/src/nn/layers/relu.rs
@@ -1,6 +1,6 @@
use crate::prelude::*;

-/// Calls [dfdx::tensor_ops::relu()].
+/// Calls [crate::tensor_ops::relu()].
#[derive(Default, Debug, Clone, Copy, CustomModule)]
pub struct ReLU;
impl<S: Shape, E: Dtype, D: Device<E>, T: Tape<E, D>> Module<Tensor<S, E, D, T>> for ReLU {
2 changes: 1 addition & 1 deletion dfdx/src/nn/layers/sigmoid.rs
@@ -1,6 +1,6 @@
use crate::prelude::*;

-/// Calls [dfdx::tensor_ops::sigmoid()].
+/// Calls [crate::tensor_ops::sigmoid()].
#[derive(Default, Debug, Clone, Copy, CustomModule)]
pub struct Sigmoid;
impl<S: Shape, E: Dtype, D: Device<E>, T: Tape<E, D>> Module<Tensor<S, E, D, T>> for Sigmoid {
2 changes: 1 addition & 1 deletion dfdx/src/nn/layers/sin.rs
@@ -1,6 +1,6 @@
use crate::prelude::*;

-/// Calls [dfdx::tensor_ops::sin()].
+/// Calls [crate::tensor_ops::sin()].
#[derive(Default, Debug, Clone, Copy, CustomModule)]
pub struct Sin;
impl<S: Shape, E: Dtype, D: Device<E>, T: Tape<E, D>> Module<Tensor<S, E, D, T>> for Sin {
2 changes: 1 addition & 1 deletion dfdx/src/nn/layers/softmax.rs
@@ -1,6 +1,6 @@
use crate::prelude::*;

-/// Calls [dfdx::tensor_ops::softmax()] on the last axis of the input.
+/// Calls [crate::tensor_ops::softmax()] on the last axis of the input.
#[derive(Default, Debug, Clone, Copy, CustomModule)]
pub struct Softmax;
impl<S: Shape, E: Dtype, D: Device<E>, T: Tape<E, D>> Module<Tensor<S, E, D, T>> for Softmax {
2 changes: 1 addition & 1 deletion dfdx/src/nn/layers/sqrt.rs
@@ -1,6 +1,6 @@
use crate::prelude::*;

-/// Calls [dfdx::tensor_ops::sqrt()].
+/// Calls [crate::tensor_ops::sqrt()].
#[derive(Default, Debug, Clone, Copy, CustomModule)]
pub struct Sqrt;

2 changes: 1 addition & 1 deletion dfdx/src/nn/layers/square.rs
@@ -1,6 +1,6 @@
use crate::prelude::*;

-/// Calls [dfdx::tensor_ops::square()].
+/// Calls [crate::tensor_ops::square()].
#[derive(Default, Debug, Clone, Copy, CustomModule)]
pub struct Square;
impl<S: Shape, E: Dtype, D: Device<E>, T: Tape<E, D>> Module<Tensor<S, E, D, T>> for Square {
2 changes: 1 addition & 1 deletion dfdx/src/nn/layers/tanh.rs
@@ -1,6 +1,6 @@
use crate::prelude::*;

-/// Calls [dfdx::tensor_ops::tanh()].
+/// Calls [crate::tensor_ops::tanh()].
#[derive(Default, Debug, Clone, Copy, CustomModule)]
pub struct Tanh;
impl<S: Shape, E: Dtype, D: Device<E>, T: Tape<E, D>> Module<Tensor<S, E, D, T>> for Tanh {
2 changes: 1 addition & 1 deletion dfdx/src/nn/optim/adam.rs
@@ -23,7 +23,7 @@ use crate::{
/// });
/// ```
///
-/// See module level documentation at [crate::optim] for examples of how to actually use an optimizer.
+/// See module level documentation at [crate::nn::optim] for examples of how to actually use an optimizer.
#[derive(Debug, Clone)]
pub struct Adam<M, E: Dtype, D: Storage<E>> {
/// Hyperparameter configuration
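For reference, a hedged sketch of constructing this optimizer; the `Adam::new` constructor and the `AdamConfig` field names follow earlier dfdx releases and are assumptions here:

```rust
use dfdx::prelude::*;

fn main() {
    let dev: Cpu = Default::default();
    let arch: LinearConstConfig<3, 5> = Default::default();
    let model = dev.build_module::<f32>(arch);

    // Plain Adam; swapping weight_decay for Some(WeightDecay::Decoupled(..))
    // gives AdamW-style decoupled weight decay, as in the table above.
    let _opt = dfdx::nn::optim::Adam::new(
        &model,
        AdamConfig {
            lr: 1e-3,
            betas: [0.9, 0.999],
            eps: 1e-8,
            weight_decay: None,
        },
    );
}
```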
4 changes: 2 additions & 2 deletions dfdx/src/nn/optim/mod.rs
@@ -10,8 +10,8 @@
//!
//! # Updating network parameters
//!
-//! This is done via [crate::Optimizer::update()], where you pass in a mutable [crate::Module], and
-//! the [dfdx::tensor::Gradients]:
+//! This is done via [crate::nn::Optimizer::update()], where you pass in a mutable [crate::nn::Module], and
+//! the [crate::tensor::Gradients]:
//!
//! ```rust
//! # use dfdx::prelude::*;
2 changes: 1 addition & 1 deletion dfdx/src/nn/optim/sgd.rs
@@ -27,7 +27,7 @@ use crate::{
/// });
/// ```
///
-/// See module level documentation at [crate::optim] for examples of how to actually use an optimizer.
+/// See module level documentation at [crate::nn::optim] for examples of how to actually use an optimizer.
#[derive(Debug, Clone)]
pub struct Sgd<M, E: Dtype, D: Storage<E>> {
pub cfg: SgdConfig,
