SurrealVectors edited this page Jul 2, 2021 · 2 revisions

The neural net implemented in this project can take on a wide variety of forms. The core traits which bind this classification of neural nets are:

  1. As the long-term average input of a neuron increases, its output tends to decrease.
  2. As the instantaneous input of a neuron increases, its output tends to increase.
  3. As synaptic connections are formed and broken, the outputs of all neurons tend to decrease.

The first two traits define the activation function. In practice this can be achieved by returning the comparison of two estimated average functions, with the second more locally weighted than the first. The second can simply be the direct input itself. Any comparison works, though each has advantages and disadvantages. Example comparisons include '>', '-', and '/'.
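As a concrete illustration of the two-average comparison described above, here is a minimal sketch. All names, the decay rate, and the default subtraction comparison are illustrative assumptions, not taken from the project: each neuron keeps a slowly updated running average of its input, and the activation compares the direct (locally weighted) input against that slow average.

```python
class Neuron:
    """Sketch of a neuron whose activation compares a local input
    estimate against a slowly adapting average (assumed names/rates)."""

    def __init__(self, slow_rate=0.01):
        self.slow_rate = slow_rate  # how quickly the long-term average adapts
        self.average = 0.0          # slowly weighted estimate of the input

    def activate(self, x, compare=lambda local, avg: local - avg):
        # Trait 2: a larger instantaneous input yields a larger output.
        out = compare(x, self.average)
        # Trait 1: the rising average pulls future outputs down.
        self.average += self.slow_rate * (x - self.average)
        return out


n = Neuron()
print(n.activate(1.0))  # first call: the average is still 0, so output is 1.0
print(n.activate(1.0))  # same input, but the average has risen, so output is lower
```

The `compare` parameter reflects the point above that any comparison works: passing `lambda local, avg: local > avg` gives a binary threshold, while subtraction (the default here) gives a graded output.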

The third trait defines the behavior of the neural net as a whole. There is no single core implementation for this rule; it generally relies on many systems working together. Examples are discussed elsewhere in the wiki.
