
Multi-layer Perceptron classifier #493

Open · wants to merge 2 commits into base: ml
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -18,6 +18,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 - `load_geojson`
 - `load_ml_model`
 - `load_url`
+- `ml_fit_class_mlp`
 - `ml_fit_class_random_forest`
 - `ml_fit_regr_random_forest`
 - `ml_predict`
159 changes: 159 additions & 0 deletions proposals/ml_fit_class_mlp.json
@@ -0,0 +1,159 @@
{
  "id": "ml_fit_class_mlp",
  "summary": "Train a Multilayer Perceptron classification model",
  "description": "Fit a Multilayer Perceptron (MLP) classification model to training data. The MLP is a class of feedforward artificial neural networks (ANN) that consists of at least three layers of nodes: an input layer, a hidden layer, and an output layer. The MLP utilizes a supervised learning technique called backpropagation for training.",
  "categories": [
    "machine learning",
    "deep learning"
  ],
  "experimental": true,
  "parameters": [

Review comment: Could be helpful to somehow divide the training data into train/test samples to get training statistics independently from the training data, e.g.:

{
  "name": "train_test_split",
  "description": "defines the ratio by which data is split into training and test samples",
  "schema": {
    "type": "number",
    "minimum": 0.0,
    "maximum": 1.0,
    "default": 0.8
  }
}

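For context, a minimal sketch of the suggested behaviour, assuming a scikit-learn back-end purely for illustration (the proposal does not prescribe any framework, and the data below is synthetic):

import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the predictors/target cubes: 500 samples, 6 bands.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 6))
y = rng.integers(0, 3, size=500)

# train_test_split = 0.8 would mean: 80% training data, 20% held out
# to compute test statistics independently of the training data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.8, random_state=42
)

model = MLPClassifier(hidden_layer_sizes=(512, 512, 512), max_iter=100)
model.fit(X_train, y_train)
print("held-out test accuracy:", accuracy_score(y_test, model.predict(X_test)))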
    {
      "name": "predictors",
      "description": "The predictors for the MLP classification model as a vector data cube. These are the independent variables that the MLP algorithm analyses to learn patterns and relationships within the data.",
      "schema": [
        {
          "type": "object",
          "subtype": "datacube",
          "dimensions": [
            {
              "type": "geometry"
            },
            {
              "type": "bands"
            }
          ]
        },
        {
          "type": "object",
          "subtype": "datacube",
          "dimensions": [
            {
              "type": "geometry"
            },
            {
              "type": "other"
            }
          ]
        }
      ]
    },
    {
      "name": "target",
      "description": "The dependent variable for MLP classification. These are the labelled data, aligned with the predictor values via a shared geometry dimension. This ensures a clear connection between predictor rows and labels.",
      "schema": {
        "type": "object",
        "subtype": "datacube",
        "dimensions": [
          {
            "type": "geometry"
          }
        ]
      }
    },
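Illustrative note (not part of the proposal): the shared geometry dimension effectively acts as a join key between predictors and target, similar in spirit to the following sketch (band names and geometry ids are made up):

import pandas as pd

# Predictors: one row per geometry, one column per band.
predictors = pd.DataFrame(
    {"B02": [0.12, 0.33, 0.27], "B03": [0.40, 0.18, 0.22]},
    index=pd.Index(["geom_1", "geom_2", "geom_3"], name="geometry"),
)

# Target: one label per geometry.
target = pd.Series(
    ["forest", "water", "urban"],
    index=pd.Index(["geom_1", "geom_2", "geom_3"], name="geometry"),
    name="label",
)

# Rows are matched on the shared geometry index, not on positional order.
print(predictors.join(target))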
    {
      "name": "hidden_layers",

Review comment: We define the shape of the hidden layers here, but where do we define the shape of the input and output layers? Suggestion: rename this parameter to layers so that it also covers input and output.

"description": "The number and size of hidden layers in the MLP.",
"schema": {
"type": "array",
"items": {
"type": "integer",
"minimum": 1
},
"default": [
512,
512,
512
]
}
},
{
"name": "dropout_rates",
"description": "Dropout rates for the hidden layers. Each value corresponds to the dropout rate for a specific layer.",
"schema": {
"type": "array",
"items": {
"type": "number",
"minimum": 0,
"maximum": 1
},
"default": [
0.4,
0.3,
0.2
]
}
},
{
"name": "epochs",
"description": "The number of epochs for training the model.",
"schema": {
"type": "integer",
"minimum": 1,
"default": 100
}
},
{
"name": "batch_size",
"description": "Size of minibatches for stochastic optimizers.",
"schema": {
"type": "integer",
"minimum": 1,
"default": 64
}
},
{
"name": "activation_function",
"description": "Activation function for the hidden layers.",
"schema": {
"type": "string",
"enum": [
"relu",
"tanh",
"sigmoid"
],
"default": "relu"
}
},
Review comment on lines +106 to +117: This might need some minor corrections. With the current schema we cannot specify a different activation function per layer. A choice might be more appropriate:

  • a single string: activation function applied to all layers
  • an array of strings: one string specifying the activation function for each layer

It might also be helpful to include softmax here; it is commonly used on the output layer and helpful for uncertainty quantification.

This could be the schema then:

{
  "oneOf": [
    {
      "type": "string",
      "enum": ["relu", "tanh", "sigmoid", "softmax"]
    },
    {
      "type": "array",
      "items": {
        "type": "string",
        "enum": ["relu", "tanh", "sigmoid", "softmax"]
      }
    }
  ],
  "default": "relu"
}

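For illustration, a minimal sketch of how a back-end might translate hidden_layers, dropout_rates and activation_function into an actual network; PyTorch and the helper name build_mlp are assumptions, not part of the proposal. The input and output widths would come from the predictors' bands dimension and the number of target classes, which relates to the layers question above:

import torch.nn as nn

def build_mlp(n_bands, n_classes,
              hidden_layers=(512, 512, 512),
              dropout_rates=(0.4, 0.3, 0.2),
              activation_function="relu"):
    activations = {"relu": nn.ReLU, "tanh": nn.Tanh, "sigmoid": nn.Sigmoid}
    layers = []
    width = n_bands  # input layer width follows the predictors' bands dimension
    for size, dropout in zip(hidden_layers, dropout_rates):
        layers += [
            nn.Linear(width, size),
            activations[activation_function](),
            nn.Dropout(dropout),
        ]
        width = size
    layers.append(nn.Linear(width, n_classes))  # output layer: one node per class
    return nn.Sequential(*layers)

print(build_mlp(n_bands=6, n_classes=3))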
    {
      "name": "optimizer",
      "description": "The gradient descent algorithm for weight optimization.",
      "schema": {
        "type": "string",
        "enum": [
          "adam",
          "sgd",
          "lbfgs"
        ],
        "default": "adam"
      }
    },
    {
      "name": "random_state",
      "description": "Sets the seed of the algorithm's internal random number generator for initializing weights, biases, and data splitting in the 'sgd' or 'adam' optimizers. Use an integer for consistent results across function calls.",
      "optional": true,
      "default": null,
      "schema": {
        "type": [
          "integer",
          "null"
        ]
      }
    }
  ],
  "returns": {
    "description": "A model object that can be saved with ``save_ml_model()`` and restored with ``load_ml_model()``.",
    "schema": {
      "type": "object",
      "subtype": "ml-model"
    }
  },
  "links": [
    {
      "href": "https://arxiv.org/abs/1611.06455",
      "title": "Z. Wang, W. Yan, and T. Oates (2017), Time Series Classification from Scratch with Deep Neural Networks: A Strong Baseline",
      "type": "text/html",
      "rel": "about"
    }
  ]
}
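A hypothetical usage sketch of the process-graph node a client might generate for this proposed process; the node id and the upstream "from_node" references are made up for illustration:

import json

process_graph = {
    "fit_mlp": {
        "process_id": "ml_fit_class_mlp",
        "arguments": {
            "predictors": {"from_node": "load_predictors"},
            "target": {"from_node": "load_target"},
            "hidden_layers": [512, 512, 512],
            "dropout_rates": [0.4, 0.3, 0.2],
            "epochs": 100,
            "batch_size": 64,
            "activation_function": "relu",
            "optimizer": "adam",
            "random_state": 42,
        },
        "result": True,
    }
}
print(json.dumps(process_graph, indent=2))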
9 changes: 9 additions & 0 deletions tests/.words
@@ -47,3 +47,12 @@ Hyndman
 date1
 date2
 favor
+Wang
+Yan
+Oates
+adam
+sgd
+minibatches
+Perceptron
+feedforward
+backpropagation