Merge pull request #739 from ArnoStrouwen/docs1
Documenter 1.0 upgrade
ChrisRackauckas authored Oct 6, 2023
2 parents 13f9fe6 + 0646bb6 commit f63468d
Showing 9 changed files with 38 additions and 48 deletions.
4 changes: 4 additions & 0 deletions .github/workflows/CI.yml
@@ -3,9 +3,13 @@ on:
   pull_request:
     branches:
       - master
+    paths-ignore:
+      - 'docs/**'
   push:
     branches:
       - master
+    paths-ignore:
+      - 'docs/**'
 jobs:
   test:
     runs-on: ubuntu-latest
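
A note on the workflow change: `paths-ignore` tells GitHub Actions to skip this test workflow when a push or pull request touches only files under `docs/**`, so documentation-only commits (like most of this PR) no longer trigger the full test suite.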
2 changes: 1 addition & 1 deletion README.md
@@ -37,7 +37,7 @@ the documentation, which contains the unreleased features.
 - Integrated logging suite for handling connections to TensorBoard
 - Handling of (partial) integro-differential equations and various stochastic equations
 - Specialized forms for solving `ODEProblem`s with neural networks
-- Compatability with [Flux.jl](https://docs.sciml.ai/Flux.jl/stable/) and [Lux.jl](https://docs.sciml.ai/Lux/stable/)
+- Compatability with [Flux.jl](https://fluxml.ai/) and [Lux.jl](https://lux.csail.mit.edu/)
   for all of the GPU-powered machine learning layers available from those libraries.
 - Compatability with [NeuralOperators.jl](https://docs.sciml.ai/NeuralOperators/stable/) for
   mixing DeepONets and other neural operators (Fourier Neural Operators, Graph Neural Operators,
2 changes: 1 addition & 1 deletion docs/Project.toml
@@ -20,7 +20,7 @@ SpecialFunctions = "276daf66-3868-5448-9aa4-cd146d93841b"

 [compat]
 DiffEqBase = "6.106"
-Documenter = "0.27"
+Documenter = "1"
 DomainSets = "0.6"
 Flux = "0.13, 0.14"
 Integrals = "3.3"
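
Worth knowing when reading the `[compat]` change: a bare `"1"` is a caret specifier, equivalent to `[1.0.0, 2.0.0)`, so the docs environment now accepts any Documenter 1.x release. A throwaway sketch of setting the same bound through the Pkg API (temporary environment, purely for illustration):

```julia
# `Documenter = "1"` in [compat] is a caret bound: [1.0.0, 2.0.0).
using Pkg

Pkg.activate(; temp = true)      # disposable environment for the demo
Pkg.add("Documenter")            # resolves to the latest 1.x release
Pkg.compat("Documenter", "1")    # writes the same bound as the diff above
Pkg.status("Documenter")         # confirm the resolved version
```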
15 changes: 3 additions & 12 deletions docs/make.jl
@@ -10,19 +10,10 @@ include("pages.jl")

 makedocs(sitename = "NeuralPDE.jl",
     authors = "#",
-    clean = true,
-    doctest = false,
     modules = [NeuralPDE],
-    strict = [
-        :doctest,
-        :linkcheck,
-        :parse_error,
-        :example_block,
-        # Other available options are
-        # :autodocs_block, :cross_references, :docs_block, :eval_block, :example_block, :footnote, :meta_block, :missing_docs, :setup_block
-    ],
-    format = Documenter.HTML(analytics = "UA-90474609-3",
-        assets = ["assets/favicon.ico"],
+    clean = true, doctest = false, linkcheck = true,
+    warnonly = [:missing_docs, :example_block],
+    format = Documenter.HTML(assets = ["assets/favicon.ico"],
         canonical = "https://docs.sciml.ai/NeuralPDE/stable/"),
     pages = pages)
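
The substantive change in `make.jl` is Documenter 1.0's inverted error model: 0.27 passed unless a check was opted into `strict`, while 1.0 fails the build on any check unless it is opted out via `warnonly`. A minimal sketch of the mapping, with keyword values taken from the diff above (assumes a standard `docs/src` layout to actually build):

```julia
using Documenter

# Documenter 0.27: lenient by default, opt *in* to strict checks.
# makedocs(sitename = "NeuralPDE.jl",
#     strict = [:doctest, :linkcheck, :parse_error, :example_block])

# Documenter 1.0: strict by default, opt *out* per check.
makedocs(sitename = "NeuralPDE.jl",
    warnonly = [:missing_docs, :example_block])
```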
2 changes: 1 addition & 1 deletion docs/src/examples/Lotka_Volterra_BPINNs.md
@@ -46,7 +46,7 @@ tspan = (0.0, 6.0)
 prob = ODEProblem(lotka_volterra, u0, tspan, p)

 ```
-With the [`saveat` argument](https://docs.sciml.ai/latest/basics/common_solver_opts/) we can specify that the solution is stored only at `saveat` time units(default saveat=1 / 50.0).
+With the [`saveat` argument](https://docs.sciml.ai/DiffEqDocs/stable/basics/common_solver_opts/) we can specify that the solution is stored only at `saveat` time units(default saveat=1 / 50.0).

 ```julia
 # Plot solution got by Standard DifferentialEquations.jl ODE solver
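
Since this edit only repoints a stale link, here is a quick illustration of the `saveat` option it documents (a generic OrdinaryDiffEq.jl sketch, not code from the tutorial):

```julia
# `saveat` stores the solution only at the requested times.
using OrdinaryDiffEq

f(u, p, t) = 1.01 * u                      # simple scalar growth ODE
prob = ODEProblem(f, 0.5, (0.0, 1.0))
sol = solve(prob, Tsit5(); saveat = 0.25)  # t = 0.0, 0.25, 0.5, 0.75, 1.0

@show sol.t                                # five stored time points
```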
10 changes: 7 additions & 3 deletions docs/src/examples/linear_parabolic.md
@@ -81,10 +81,14 @@ sym_prob = symbolic_discretize(pdesystem, discretization)
 pde_inner_loss_functions = sym_prob.loss_functions.pde_loss_functions
 bcs_inner_loss_functions = sym_prob.loss_functions.bc_loss_functions
+global iteration = 0
 callback = function (p, l)
-    println("loss: ", l)
-    println("pde_losses: ", map(l_ -> l_(p), pde_inner_loss_functions))
-    println("bcs_losses: ", map(l_ -> l_(p), bcs_inner_loss_functions))
+    if iteration % 10 == 0
+        println("loss: ", l)
+        println("pde_losses: ", map(l_ -> l_(p), pde_inner_loss_functions))
+        println("bcs_losses: ", map(l_ -> l_(p), bcs_inner_loss_functions))
+    end
+    global iteration += 1
     return false
 end
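
The same throttling pattern is applied to the `nonlinear_elliptic.md` callback below. As a self-contained sketch of the idea (the loss closures here are stand-ins for the ones `symbolic_discretize` returns):

```julia
# Throttled logging callback: print every 10th optimizer step.
loss_terms = [p -> sum(abs2, p), p -> sum(abs, p)]  # stand-in loss closures

global iteration = 0
callback = function (p, l)
    if iteration % 10 == 0
        println("loss: ", l)
        println("term losses: ", map(f -> f(p), loss_terms))
    end
    global iteration += 1
    return false    # returning true would halt the optimization
end

# Simulate a few optimizer steps:
for _ in 1:25
    p = rand(3)
    callback(p, sum(abs2, p))
end
```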
12 changes: 8 additions & 4 deletions docs/src/examples/nonlinear_elliptic.md
@@ -95,11 +95,15 @@ pde_inner_loss_functions = sym_prob.loss_functions.pde_loss_functions
 bcs_inner_loss_functions = sym_prob.loss_functions.bc_loss_functions[1:6]
 aprox_derivative_loss_functions = sym_prob.loss_functions.bc_loss_functions[7:end]
+global iteration = 0
 callback = function (p, l)
-    println("loss: ", l)
-    println("pde_losses: ", map(l_ -> l_(p), pde_inner_loss_functions))
-    println("bcs_losses: ", map(l_ -> l_(p), bcs_inner_loss_functions))
-    println("der_losses: ", map(l_ -> l_(p), aprox_derivative_loss_functions))
+    if iteration % 10 == 0
+        println("loss: ", l)
+        println("pde_losses: ", map(l_ -> l_(p), pde_inner_loss_functions))
+        println("bcs_losses: ", map(l_ -> l_(p), bcs_inner_loss_functions))
+        println("der_losses: ", map(l_ -> l_(p), aprox_derivative_loss_functions))
+    end
+    global iteration += 1
     return false
 end
37 changes: 12 additions & 25 deletions docs/src/index.md
@@ -15,7 +15,7 @@ networks which both approximate physical laws and real data simultaniously.
 - Integrated logging suite for handling connections to TensorBoard.
 - Handling of (partial) integro-differential equations and various stochastic equations.
 - Specialized forms for solving `ODEProblem`s with neural networks.
-- Compatibility with [Flux.jl](https://docs.sciml.ai/Flux.jl/stable/) and [Lux.jl](https://docs.sciml.ai/Lux/stable/).
+- Compatibility with [Flux.jl](https://fluxml.ai/) and [Lux.jl](https://lux.csail.mit.edu/).
   for all the GPU-powered machine learning layers available from those libraries.
 - Compatibility with [NeuralOperators.jl](https://docs.sciml.ai/NeuralOperators/stable/) for
   mixing DeepONets and other neural operators (Fourier Neural Operators, Graph Neural Operators,
@@ -132,32 +132,19 @@ Pkg.status(; mode = PKGMODE_MANIFEST) # hide
 </details>
 ```

-```@raw html
-You can also download the
-<a href="
-```
-
 ```@eval
 using TOML
+using Markdown
 version = TOML.parse(read("../../Project.toml", String))["version"]
 name = TOML.parse(read("../../Project.toml", String))["name"]
-link = "https://github.com/SciML/" * name * ".jl/tree/gh-pages/v" * version *
-       "/assets/Manifest.toml"
-```
-
-```@raw html
-">manifest</a> file and the
-<a href="
-```
-
-```@eval
-using TOML
-version = TOML.parse(read("../../Project.toml", String))["version"]
-name = TOML.parse(read("../../Project.toml", String))["name"]
-link = "https://github.com/SciML/" * name * ".jl/tree/gh-pages/v" * version *
-       "/assets/Project.toml"
-```
-
-```@raw html
-">project</a> file.
+link_manifest = "https://github.com/SciML/" * name * ".jl/tree/gh-pages/v" * version *
+                "/assets/Manifest.toml"
+link_project = "https://github.com/SciML/" * name * ".jl/tree/gh-pages/v" * version *
+               "/assets/Project.toml"
+Markdown.parse("""You can also download the
+[manifest]($link_manifest)
+file and the
+[project]($link_project)
+file.
+""")
 ```
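
The motivation for this rewrite: the old page spliced an `@eval`-computed URL between dangling `@raw html` fragments, a trick that stopped working under Documenter 1.0's stricter rendering; the new version computes both links in one `@eval` block and returns rendered Markdown directly. A standalone sketch of that pattern (run next to any `Project.toml`; the real docs read it from `../../`):

```julia
# Build download links from Project.toml metadata and return Markdown.
using TOML, Markdown

project = TOML.parse(read("Project.toml", String))  # "../../Project.toml" in the docs
name, version = project["name"], project["version"]

base = "https://github.com/SciML/" * name * ".jl/tree/gh-pages/v" * version
Markdown.parse("""
You can also download the [manifest]($base/assets/Manifest.toml) file
and the [project]($base/assets/Project.toml) file.
""")
```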
2 changes: 1 addition & 1 deletion src/pinn_types.jl
@@ -66,7 +66,7 @@ methodology.
   should only be used to more directly impose functional information in the training problem,
   for example imposing the boundary condition by the test function formulation.
 * `adaptive_loss`: the choice for the adaptive loss function. See the
-  [adaptive loss page](@id adaptive_loss) for more details. Defaults to no adaptivity.
+  [adaptive loss page](@ref adaptive_loss) for more details. Defaults to no adaptivity.
 * `additional_loss`: a function `additional_loss(phi, θ, p_)` where `phi` are the neural
   network trial solutions, `θ` are the weights of the neural network(s), and `p_` are the
   hyperparameters of the `OptimizationProblem`. If `param_estim = true`, then `θ` additionally
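
This one-character docstring fix matters because Documenter's cross-reference syntax is directional: `@id` defines a link target and `@ref` points at one, so a link written with `@id` never resolves (and unresolved references become build errors under 1.0's strict defaults). A sketch of the two halves of a working cross-reference (names hypothetical):

```julia
# In the page that *defines* the anchor (docs/src/..., Markdown):
#
#     # [Adaptive Loss Functions](@id adaptive_loss)
#
# In a docstring (or another page) that *links* to it:
"""
See the [adaptive loss page](@ref adaptive_loss) for more details.
"""
function adaptive_loss_example end
```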
