[Breaking] Rewrite of nn to enable runtime layer sizes, proc macro declarations, and more #854

Merged · 62 commits · Oct 25, 2023
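For readers skimming the diff: the headline features in the title — runtime layer sizes and proc-macro module declarations — look roughly like this in the rewritten API. This is a minimal sketch based on the derive macros and config types this PR introduces; exact field and constructor names are assumptions where the diff below does not show them:

```rust
use dfdx::prelude::*;

// A network is declared as a plain config struct; the `Sequential`
// proc macro derives the runtime module type named by `#[built(...)]`.
#[derive(Default, Clone, Sequential)]
#[built(Mlp)]
struct MlpConfig {
    // Input dim fixed at compile time, hidden dim chosen at runtime:
    linear1: LinearConfig<Const<784>, usize>,
    act1: ReLU,
    linear2: LinearConfig<usize, Const<10>>,
}

fn main() {
    let dev: Cpu = Default::default();
    // Pick the hidden size at runtime -- not possible with the old
    // `(Linear<784, 512>, ReLU, Linear<512, 10>)` type-level API.
    let arch = MlpConfig {
        linear1: LinearConfig::new(Const, 512),
        act1: ReLU,
        linear2: LinearConfig::new(512, Const),
    };
    let model = dev.build_module::<f32>(arch);
    let x: Tensor<(usize, Const<784>), f32, _> = dev.sample_normal_like(&(16, Const));
    let _y = model.forward(x);
}
```

Commits: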
5cfc8ab
Initial commit of workspaces
coreylowman Aug 18, 2023
f5bd91e
Styling
coreylowman Aug 19, 2023
696fc63
Adding AvgPool2D and MinPool2D
coreylowman Aug 19, 2023
cea1857
Adding Max/Min PoolGlobal
coreylowman Aug 19, 2023
0947184
Adding Adam
coreylowman Aug 19, 2023
7e77b14
Adding RMSprop
coreylowman Aug 19, 2023
57c012d
Adding SGD docstring
coreylowman Aug 19, 2023
455300c
Adding Dropout and DropoutOneIn
coreylowman Aug 19, 2023
117af4b
Reorg optimizers
coreylowman Aug 19, 2023
c41aaee
Adding all activations
coreylowman Aug 19, 2023
9ab7b1a
Adding BatchNorm1D
coreylowman Aug 19, 2023
897b319
Adding ConvTrans2D
coreylowman Aug 19, 2023
099188d
Adding Embedding
coreylowman Aug 19, 2023
74be3dc
Partial updates on examples
coreylowman Aug 19, 2023
1f17d9d
Adds AddInto and SplitInto
coreylowman Aug 22, 2023
84e64ad
Adding Upscale2D
coreylowman Aug 22, 2023
8c11a05
Format
coreylowman Aug 22, 2023
b650682
Moving nn benches
coreylowman Aug 22, 2023
bbb1d0e
Sketching examples
coreylowman Aug 22, 2023
ae9d603
Filling out build-module example
coreylowman Aug 22, 2023
2ca4f37
Filling out module-forward example
coreylowman Aug 22, 2023
98f8e6a
Adding Sequential example
coreylowman Aug 22, 2023
4c881ba
Adding debug calls to module fields
coreylowman Aug 22, 2023
6c82d1a
Adding 04-gradients example
coreylowman Aug 22, 2023
1dfae77
Updates to 05-optim
coreylowman Aug 22, 2023
275afae
Adding comments to nn layers
coreylowman Aug 29, 2023
1e61b9a
Documenting CustomModule
coreylowman Aug 30, 2023
13ccd08
Adding docstring to Sequential
coreylowman Aug 30, 2023
a273df0
Fixing cargo doc
coreylowman Aug 30, 2023
9057311
Fixing all doctests in dfdx-nn
coreylowman Aug 30, 2023
b794d9e
Adding tests back for optim
coreylowman Aug 31, 2023
68979c3
Adding some tests
coreylowman Aug 31, 2023
8444b90
Adding more tests
coreylowman Aug 31, 2023
1255690
Updating tests
coreylowman Aug 31, 2023
761c2eb
Updating safetensors dependency
coreylowman Sep 5, 2023
39063ac
Adding ResidualMul and GeneralizedMul
coreylowman Sep 5, 2023
e946645
Adding nightly wrappers around dfdx-nn layers
coreylowman Sep 5, 2023
a083496
Move layers to dfdx-nn/src/layers
coreylowman Sep 5, 2023
ad27e6b
Using save_safetensors in mnist example
coreylowman Sep 5, 2023
a6e4619
Changing Linear to use weight/bias instead of MatMul/Add
coreylowman Sep 5, 2023
d8ff510
Merge branch 'main' into nn-rewrite
coreylowman Sep 14, 2023
ecd9a68
Adding conv1d layer
coreylowman Sep 14, 2023
fbe96e5
sqrt before converting to E
coreylowman Sep 14, 2023
239c21d
Fixing clippy errors
coreylowman Sep 14, 2023
3ac5946
updating nn examples
coreylowman Sep 14, 2023
0c9a01c
Fixing f64 tests
coreylowman Sep 14, 2023
22e5cc6
Merge branch 'main' into nn-rewrite
coreylowman Sep 14, 2023
346891f
Merge remote-tracking branch 'origin/main' into nn-rewrite
coreylowman Sep 14, 2023
cdd602a
Fixing documentation
coreylowman Sep 14, 2023
8b54609
Fixing doc tests
coreylowman Sep 14, 2023
16fec9b
Ignoring doctests in nn-derives
coreylowman Sep 14, 2023
f27a692
Fixing nightly feature propagation
coreylowman Sep 14, 2023
4281e57
Rename dfdx -> dfdx-core, dfdx-nn -> dfdx. Move dfdx-nn-core to dfdx-…
coreylowman Oct 25, 2023
01c01d3
Moving benches/examples to dfdx
coreylowman Oct 25, 2023
2ac1a1d
Change all versions to be the same
coreylowman Oct 25, 2023
4044697
Match dfdx and dfdx-core features
coreylowman Oct 25, 2023
25c7ef0
Update examples
coreylowman Oct 25, 2023
341ee78
Fixing prelude/exports
coreylowman Oct 25, 2023
929602e
Fixing doctests
coreylowman Oct 25, 2023
2ac9dc9
Fixing cargo doc
coreylowman Oct 25, 2023
a4db3d5
Moving feature flags & top level documentation
coreylowman Oct 25, 2023
aef72b0
Update no-std
coreylowman Oct 25, 2023
92 changes: 8 additions & 84 deletions Cargo.toml
@@ -1,87 +1,11 @@
[package]
name = "dfdx"
version = "0.13.0"
edition = "2021"
license = "MIT OR Apache-2.0"
rust-version = "1.65"
[workspace]
members = ["dfdx-core", "dfdx-derives", "dfdx"]
resolver = "2"

description = "Ergonomic auto differentiation in Rust, with pytorch like apis."
homepage = "https://github.com/coreylowman/dfdx"
documentation = "https://docs.rs/dfdx"
repository = "https://github.com/coreylowman/dfdx"
readme = "README.md"

keywords = [
"deep-learning",
"neural-network",
"backprop",
"tensor",
"autodiff",
]

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[package.metadata.docs.rs]
features = ["nightly", "numpy", "safetensors", "cuda", "ci-check"]

[dependencies]
no-std-compat = { version = "0.4.1", default-features = false, features = [ "alloc", "compat_hash" ], optional = true }
spin = { version = "0.9.8", default-features = false, features = ["spin_mutex", "rwlock", "portable_atomic"], optional = true }
[workspace.dependencies]
num-traits = { version = "0.2.15", default-features = false }
safetensors = { version = "0.3.3", default-features = false }
memmap2 = { version = "0.5", default-features = false }
rand = { version = "0.8.5", default-features = false, features = ["std_rng"] }
rand_distr = { version = "0.4.3", default-features = false }
zip = { version = "0.6.6", default-features = false, optional = true }
cudarc = { version = "0.9.13", default-features = false, optional = true, features = ["driver", "cublas", "nvrtc"] }
num-traits = { version = "0.2.15", default-features = false }
safetensors = { version = "0.3", default-features = false, optional = true }
memmap2 = { version = "0.5", default-features = false, optional = true }
half = { version = "2.3.1", optional = true, features = ["num-traits", "rand_distr"] }
gemm = { version = "0.15.4", default-features = false, optional = true }
rayon = { version = "1.7.0", optional = true }
libm = "0.2.7"

[dev-dependencies]
tempfile = "3.3.0"
mnist = "0.5.0"
indicatif = "0.17.3"

[build-dependencies]
glob = { version = "0.3.1", optional = true }

[features]
default = ["std", "fast-alloc", "cpu"]
nightly = ["half?/use-intrinsics", "gemm?/nightly"]

std = ["cudarc?/std", "rand_distr/std_math", "gemm?/std"]
fast-alloc = ["std"]
no-std = ["no-std-compat", "dep:spin", "cudarc?/no-std", "num-traits/libm"]

cpu = ["dep:gemm", "dep:rayon"]
cuda = ["dep:cudarc", "dep:glob"]
cudnn = ["cuda", "cudarc?/cudnn"]

f16 = ["dep:half", "cudarc?/f16"]

numpy = ["dep:zip", "std"]
safetensors = ["dep:safetensors", "std", "dep:memmap2"]

test-f16 = ["f16"]
test-amp-f16 = ["f16"]
test-f64 = []
test-integrations = []
ci-check = ["cudarc?/ci-check"]

[[bench]]
name = "batchnorm2d"
harness = false

[[bench]]
name = "conv2d"
harness = false

[[bench]]
name = "sum"
harness = false

[[bench]]
name = "softmax"
harness = false
libm = "0.2.7"
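The net effect: the root manifest stops being a package and becomes a pure workspace definition, with shared version pins under `[workspace.dependencies]`. Member crates then inherit those pins with `workspace = true`, as the new dfdx-core/Cargo.toml below does. A minimal sketch of the pattern, using names from this diff:

```toml
# Root Cargo.toml: pin each shared dependency once.
[workspace.dependencies]
num-traits = { version = "0.2.15", default-features = false }
safetensors = { version = "0.3.3", default-features = false }

# Member crate (e.g. dfdx-core/Cargo.toml): inherit the pin,
# layering member-specific flags such as `optional` on top.
[dependencies]
num-traits = { workspace = true }
safetensors = { workspace = true, optional = true }
```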
72 changes: 72 additions & 0 deletions dfdx-core/Cargo.toml
@@ -0,0 +1,72 @@
[package]
name = "dfdx-core"
version = "0.13.0"
edition = "2021"
license = "MIT OR Apache-2.0"
rust-version = "1.65"

description = "Ergonomic auto differentiation in Rust, with pytorch like apis."
homepage = "https://github.com/coreylowman/dfdx"
documentation = "https://docs.rs/dfdx"
repository = "https://github.com/coreylowman/dfdx"
readme = "README.md"

keywords = [
"deep-learning",
"neural-network",
"backprop",
"tensor",
"autodiff",
]

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[package.metadata.docs.rs]
features = ["nightly", "numpy", "safetensors", "cuda", "ci-check"]

[dependencies]
no-std-compat = { version = "0.4.1", default-features = false, features = [ "alloc", "compat_hash" ], optional = true }
spin = { version = "0.9.8", default-features = false, features = ["spin_mutex", "rwlock", "portable_atomic"], optional = true }
rand = { workspace = true }
rand_distr = { workspace = true }
zip = { version = "0.6.6", default-features = false, optional = true }
cudarc = { version = "0.9.13", default-features = false, optional = true, features = ["driver", "cublas", "nvrtc"] }
num-traits = { workspace = true }
safetensors = { workspace = true, optional = true }
memmap2 = { workspace = true, optional = true }
half = { version = "2.3.1", optional = true, features = ["num-traits", "rand_distr"] }
gemm = { version = "0.15.4", default-features = false, optional = true }
rayon = { version = "1.7.0", optional = true }
libm = { workspace = true }

[dev-dependencies]
tempfile = "3.3.0"
mnist = "0.5.0"
indicatif = "0.17.3"

[build-dependencies]
glob = { version = "0.3.1", optional = true }

[features]
default = ["std", "fast-alloc", "cpu"]
nightly = ["half?/use-intrinsics", "gemm?/nightly"]

std = ["cudarc?/std", "rand_distr/std_math", "gemm?/std"]
no-std = ["no-std-compat", "dep:spin", "cudarc?/no-std", "num-traits/libm"]

cpu = ["dep:gemm", "dep:rayon"]
fast-alloc = ["std"]

cuda = ["dep:cudarc", "dep:glob"]
cudnn = ["cuda", "cudarc?/cudnn"]

f16 = ["dep:half", "cudarc?/f16"]

numpy = ["dep:zip", "std"]
safetensors = ["dep:safetensors", "std", "dep:memmap2"]

test-f16 = ["f16"]
test-amp-f16 = ["f16"]
test-f64 = []
test-integrations = []
ci-check = ["cudarc?/ci-check"]
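Note the feature syntax carried over here: `dep:gemm` enables the optional dependency without auto-creating a like-named feature, while the `?` in `cudarc?/std` marks a weak feature — it forwards `std` to cudarc only if cudarc was already enabled by some other feature (here, `cuda`). A small illustration with entries from the manifest above:

```toml
[features]
# Enables the optional gemm/rayon dependencies explicitly.
cpu = ["dep:gemm", "dep:rayon"]
# Forwards `std` to cudarc only if the `cuda` feature already
# activated it; without the `?` this would force cudarc on.
std = ["cudarc?/std", "rand_distr/std_math", "gemm?/std"]
cuda = ["dep:cudarc", "dep:glob"]
```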
File renamed without changes.
4 changes: 2 additions & 2 deletions src/data/arange.rs → dfdx-core/src/data/arange.rs
@@ -11,15 +11,15 @@ pub trait Arange<E: Dtype>: Storage<E> + ZerosTensor<E> + TensorFromVec<E> {
///
/// Const sized tensor:
/// ```rust
/// # use dfdx::{prelude::*, data::Arange};
/// # use dfdx_core::{prelude::*, data::Arange};
/// # let dev: Cpu = Default::default();
/// let t: Tensor<Rank1<5>, f32, _> = dev.arange(Const::<5>);
/// assert_eq!(t.array(), [0.0, 1.0, 2.0, 3.0, 4.0]);
/// ```
///
/// Runtime sized tensor:
/// ```rust
/// # use dfdx::{prelude::*, data::Arange};
/// # use dfdx_core::{prelude::*, data::Arange};
/// # let dev: Cpu = Default::default();
/// let t: Tensor<(usize, ), f32, _> = dev.arange(5);
/// assert_eq!(t.as_vec(), [0.0, 1.0, 2.0, 3.0, 4.0]);
6 changes: 3 additions & 3 deletions src/data/batch.rs → dfdx-core/src/data/batch.rs
@@ -80,14 +80,14 @@ pub trait IteratorBatchExt: Iterator {
///
/// Const batches:
/// ```rust
/// # use dfdx::{prelude::*, data::IteratorBatchExt};
/// # use dfdx_core::{prelude::*, data::IteratorBatchExt};
/// let items: Vec<[usize; 5]> = (0..12).batch_exact(Const::<5>).collect();
/// assert_eq!(&items, &[[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]);
/// ```
///
/// Runtime batches:
/// ```rust
/// # use dfdx::{prelude::*, data::IteratorBatchExt};
/// # use dfdx_core::{prelude::*, data::IteratorBatchExt};
/// let items: Vec<Vec<usize>> = (0..12).batch_exact(5).collect();
/// assert_eq!(&items, &[[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]);
/// ```
@@ -104,7 +104,7 @@ pub trait IteratorBatchExt: Iterator {
///
/// Example:
/// ```rust
/// # use dfdx::{prelude::*, data::IteratorBatchExt};
/// # use dfdx_core::{prelude::*, data::IteratorBatchExt};
/// let items: Vec<Vec<usize>> = (0..12).batch_with_last(5).collect();
/// assert_eq!(&items, &[vec![0, 1, 2, 3, 4], vec![5, 6, 7, 8, 9], vec![10, 11]]);
/// ```
2 changes: 1 addition & 1 deletion src/data/collate.rs → dfdx-core/src/data/collate.rs
@@ -93,7 +93,7 @@ pub trait IteratorCollateExt: Iterator {
///
/// Example implementations:
/// ```rust
/// # use dfdx::data::IteratorCollateExt;
/// # use dfdx_core::data::IteratorCollateExt;
/// let data = [[('a', 'b'); 10], [('c', 'd'); 10], [('e', 'f'); 10]];
/// // we use collate to transform each batch:
/// let mut iter = data.into_iter().collate();
File renamed without changes.
File renamed without changes.
src/data/one_hot_encode.rs → dfdx-core/src/data/one_hot_encode.rs
@@ -16,7 +16,7 @@ pub trait OneHotEncode<E: Dtype>: Storage<E> + ZerosTensor<E> + TensorFromVec<E>
///
/// Const class labels and const n:
/// ```rust
/// # use dfdx::{prelude::*, data::OneHotEncode};
/// # use dfdx_core::{prelude::*, data::OneHotEncode};
/// # let dev: Cpu = Default::default();
/// let class_labels = [0, 1, 2, 1, 1];
/// let probs: Tensor<Rank2<5, 3>, f32, _> = dev.one_hot_encode(Const::<3>, class_labels);
@@ -31,7 +31,7 @@ pub trait OneHotEncode<E: Dtype>: Storage<E> + ZerosTensor<E>
///
/// Runtime class labels and const n:
/// ```rust
/// # use dfdx::{prelude::*, data::OneHotEncode};
/// # use dfdx_core::{prelude::*, data::OneHotEncode};
/// # let dev: Cpu = Default::default();
/// let class_labels = [0, 1, 2, 1, 1];
/// let probs: Tensor<(Const<5>, usize), f32, _> = dev.one_hot_encode(3, class_labels);
@@ -46,7 +46,7 @@ pub trait OneHotEncode<E: Dtype>: Storage<E> + ZerosTensor<E> + TensorFromVec<E>
///
/// Const class labels and runtime n:
/// ```rust
/// # use dfdx::{prelude::*, data::OneHotEncode};
/// # use dfdx_core::{prelude::*, data::OneHotEncode};
/// # let dev: Cpu = Default::default();
/// let class_labels = std::vec![0, 1, 2, 1, 1];
/// let probs: Tensor<(usize, Const<3>), f32, _> = dev.one_hot_encode(Const, class_labels);
@@ -61,7 +61,7 @@ pub trait OneHotEncode<E: Dtype>: Storage<E> + ZerosTensor<E> + TensorFromVec<E>
///
/// Runtime both:
/// ```rust
/// # use dfdx::{prelude::*, data::OneHotEncode};
/// # use dfdx_core::{prelude::*, data::OneHotEncode};
/// # let dev: Cpu = Default::default();
/// let class_labels = std::vec![0, 1, 2, 1, 1];
/// let probs: Tensor<(usize, usize), f32, _> = dev.one_hot_encode(3, class_labels);
2 changes: 1 addition & 1 deletion src/data/stack.rs → dfdx-core/src/data/stack.rs
@@ -29,7 +29,7 @@ pub trait IteratorStackExt: Iterator {
///
/// Example implementations:
/// ```rust
/// # use dfdx::{data::IteratorStackExt, prelude::*};
/// # use dfdx_core::{data::IteratorStackExt, prelude::*};
/// # let dev: Cpu = Default::default();
/// let a: Tensor<Rank1<3>, f32, _> = dev.zeros();
/// let data = [[a.clone(), a.clone(), a]];
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
82 changes: 3 additions & 79 deletions src/lib.rs → dfdx-core/src/lib.rs
@@ -59,7 +59,7 @@
//! There are two options for this currently, with more planned to be added in the future:
//!
//! 1. [tensor::Cpu] - for tensors stored on the heap
//! 2. [tensor::Cuda] - for tensors stored in GPU memory

Check warning ×2 (GitHub Actions / cargo-check) on line 62 in dfdx-core/src/lib.rs: unresolved link to `tensor::Cuda`
//!
//! Both devices implement [Default], you can also create them with a certain seed
//! and ordinal.
@@ -67,7 +67,7 @@
//! Here's how you might use a device:
//!
//! ```rust
//! # use dfdx::prelude::*;
//! # use dfdx_core::prelude::*;
//! let dev: Cpu = Default::default();
//! let t: Tensor<Rank2<2, 3>, f32, _> = dev.zeros();
//! ```
@@ -85,8 +85,8 @@
//! | Unary Operations | `a.sqrt()` | `a.sqrt()` | `a.sqrt()` |
//! | Binary Operations | `a + b` | `a + b` | `a + b` |
//! | gemm/gemv | [tensor_ops::matmul] | `a @ b` | `a @ b` |
//! | 2d Convolution | [tensor_ops::TryConv2D] | - | `torch.conv2d` |

Check warning ×2 (GitHub Actions / cargo-check) on line 88 in dfdx-core/src/lib.rs: unresolved link to `tensor_ops::TryConv2D`
//! | 2d Transposed Convolution | [tensor_ops::TryConvTrans2D] | - | `torch.conv_transpose2d` |

Check warning ×2 (GitHub Actions / cargo-check) on line 89 in dfdx-core/src/lib.rs: unresolved link to `tensor_ops::TryConvTrans2D`
//! | Slicing | [tensor_ops::slice] | `a[...]` | `a[...]` |
//! | Select | [tensor_ops::SelectTo] | `a[...]` | `torch.select` |
//! | Gather | [tensor_ops::GatherTo] | `np.take` | `torch.gather` |
@@ -100,78 +100,6 @@
//! | Concat | [tensor_ops::TryConcat] | `np.concatenate` | `torch.concat` |
//!
//! and **much much more!**
//!
//! # Neural networks
//!
//! *See [nn] for more information.*
//!
//! Neural networks are composed of building blocks that you can chain together. In
//! dfdx, sequential neural networks are represented by **tuples**! For example,
//! the following two networks are identical:
//!
//! | dfdx | pytorch |
//! | --- | --- |
//! | `(Linear<3, 5>, ReLU, Linear<5, 10>)` | `nn.Sequential(nn.Linear(3, 5), nn.ReLU(), nn.Linear(5, 10))` |
//! | `((Conv2D<3, 2, 1>, Tanh), Conv2D<3, 2, 1>)` | `nn.Sequential(nn.Sequential(nn.Conv2d(3, 2, 1), nn.Tanh()), nn.Conv2d(3, 2, 1))`
//!
//! To build a neural network, you of course need a device:
//!
//! ```rust
//! # use dfdx::prelude::*;
//! let dev: Cpu = Default::default();
//! type Model = (Linear<3, 5>, ReLU, Linear<5, 10>);
//! let model = dev.build_module::<Model, f32>();
//! ```
//!
//! Note two things:
//! 1. We are using [nn::DeviceBuildExt] to instantiate the model
//! 2. We **need** to pass a dtype (in this case f32) to create the model.
//!
//! You can then pass tensors into the model with [nn::Module::forward()]:
//!
//! ```rust
//! # use dfdx::prelude::*;
//! # let dev: Cpu = Default::default();
//! # type Model = (Linear<3, 5>, ReLU, Linear<5, 10>);
//! # let model = dev.build_module::<Model, f32>();
//! // tensor with runtime batch dimension of 10
//! let x: Tensor<(usize, Const<3>), f32, _> = dev.sample_normal_like(&(10, Const));
//! let y = model.forward(x);
//! ```
//!
//! # Optimizers and Gradients
//!
//! *See [optim] for more information*
//!
//! dfdx supports a number of the standard optimizers:
//!
//! | Optimizer | dfdx | pytorch |
//! | --- | --- | --- |
//! | SGD | [optim::Sgd] | `torch.optim.SGD` |
//! | Adam | [optim::Adam] | `torch.optim.Adam` |
//! | AdamW | [optim::Adam] with [optim::WeightDecay::Decoupled] | `torch.optim.AdamW` |
//! | RMSprop | [optim::RMSprop] | `torch.optim.RMSprop` |
//!
//! You can use optimizers to optimize neural networks (or even tensors!). Here's
//! a simple example of how to do this with [nn::ZeroGrads]:
//! ```rust
//! # use dfdx::{prelude::*, optim::*};
//! # let dev: Cpu = Default::default();
//! type Model = (Linear<3, 5>, ReLU, Linear<5, 10>);
//! let mut model = dev.build_module::<Model, f32>();
//! // 1. allocate gradients for the model
//! let mut grads = model.alloc_grads();
//! // 2. create our optimizer
//! let mut opt = Sgd::new(&model, Default::default());
//! // 3. trace gradients through forward pass
//! let x: Tensor<Rank2<10, 3>, f32, _> = dev.sample_normal();
//! let y = model.forward_mut(x.traced(grads));
//! // 4. compute loss & run backpropagation
//! let loss = y.square().mean();
//! grads = loss.backward();
//! // 5. apply gradients
//! opt.update(&mut model, &grads);
//! ```

#![cfg_attr(all(feature = "no-std", not(feature = "std")), no_std)]
#![allow(incomplete_features)]
@@ -185,19 +113,15 @@

pub mod data;
pub mod dtypes;
pub mod feature_flags;
pub mod losses;
pub mod nn;
pub mod optim;
pub mod nn_traits;
pub mod shapes;
pub mod tensor;
pub mod tensor_ops;

/// Contains subset of all public exports.
pub mod prelude {
pub use crate::losses::*;
pub use crate::nn::builders::*;
pub use crate::optim::prelude::*;
pub use crate::shapes::*;
pub use crate::tensor::*;
pub use crate::tensor_ops::*;
@@ -217,8 +141,8 @@

#[cfg(all(target_arch = "x86_64", target_feature = "sse"))]
{
use std::arch::x86_64::{_MM_FLUSH_ZERO_ON, _MM_SET_FLUSH_ZERO_MODE};

Check warning ×10 (GitHub Actions / cargo-test-nightly) on line 144 in dfdx-core/src/lib.rs: use of deprecated function `std::arch::x86_64::_MM_SET_FLUSH_ZERO_MODE`: see `_mm_setcsr` documentation - use inline assembly instead
unsafe { _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON) }

Check warning ×10 (GitHub Actions / cargo-test-nightly) on line 145 in dfdx-core/src/lib.rs: use of deprecated function `std::arch::x86_64::_MM_SET_FLUSH_ZERO_MODE`: see `_mm_setcsr` documentation - use inline assembly instead
}
}

@@ -236,14 +160,14 @@

#[cfg(all(target_arch = "x86_64", target_feature = "sse"))]
{
use std::arch::x86_64::{_MM_FLUSH_ZERO_OFF, _MM_SET_FLUSH_ZERO_MODE};

Check warning ×10 (GitHub Actions / cargo-test-nightly) on line 163 in dfdx-core/src/lib.rs: use of deprecated function `std::arch::x86_64::_MM_SET_FLUSH_ZERO_MODE`: see `_mm_setcsr` documentation - use inline assembly instead
unsafe { _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_OFF) }

Check warning ×10 (GitHub Actions / cargo-test-nightly) on line 164 in dfdx-core/src/lib.rs: use of deprecated function `std::arch::x86_64::_MM_SET_FLUSH_ZERO_MODE`: see `_mm_setcsr` documentation - use inline assembly instead
}
}

#[cfg(test)]
pub(crate) mod tests {
pub use num_traits::{Float, FromPrimitive, NumCast, Zero};
pub use num_traits::{Float, NumCast, Zero};

#[cfg(not(feature = "cuda"))]
pub type TestDevice = crate::tensor::Cpu;
File renamed without changes.
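All of the cargo-test-nightly annotations above are one repeated lint: `_MM_SET_FLUSH_ZERO_MODE` is deprecated, with inline assembly suggested as the replacement. For reference, a possible replacement (an untested sketch, not part of this PR) flips the MXCSR flush-to-zero bit directly, assuming bit 15 is the FTZ flag, which is what `_MM_FLUSH_ZERO_ON` (0x8000) encodes:

```rust
#[cfg(all(target_arch = "x86_64", target_feature = "sse"))]
fn set_flush_denormals(on: bool) {
    use core::arch::asm;
    let mut csr: u32 = 0;
    unsafe {
        // Store the MXCSR control/status register to memory.
        asm!("stmxcsr [{ptr}]", ptr = in(reg) &mut csr, options(nostack));
    }
    // Bit 15 is flush-to-zero (_MM_FLUSH_ZERO_ON == 0x8000).
    if on {
        csr |= 1 << 15;
    } else {
        csr &= !(1 << 15);
    }
    unsafe {
        // Load the modified value back into MXCSR.
        asm!("ldmxcsr [{ptr}]", ptr = in(reg) &csr, options(nostack));
    }
}
```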