BlueNeurons.ch
First: Build the Neural Network Architecture
Choose Dataset:
Input:
Activation: Linear | ReLU | Sigmoid | Tanh | Softmax
Bias: True | False
Hidden {0}:
Activation: Linear | ReLU | Sigmoid | Tanh | Softmax
Bias: True | False
Add Hidden Layer
Output:
Activation: Linear | ReLU | Sigmoid | Tanh | Softmax
Bias: True | False
Continue to Training
Visualization:
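The activation options offered for each layer correspond to the standard functions; the following is a minimal NumPy sketch of them (my own illustration, not the site's implementation):

import numpy as np

def linear(x):
    return x                            # identity: passes the weighted sum through

def relu(x):
    return np.maximum(0.0, x)           # clips negative inputs to zero

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))     # squashes values into (0, 1)

def tanh(x):
    return np.tanh(x)                   # squashes values into (-1, 1)

def softmax(x):
    e = np.exp(x - np.max(x))           # subtract the max for numerical stability
    return e / e.sum()                  # normalizes outputs to a probability distribution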
Second: Train the Weights
Edit Architecture
Optimizer: SGD | SGD Momentum | Adam | RMSprop
Loss: Mean Squared Error | Softmax Cross Entropy
Optimizer Settings:
Maximum Iterations:
Minimum Error:
Reset Network
Train
Continue without Training
Visualization:
Last: Evaluate or Make Predictions with the Neural Network
Back to Training
Choose Dataset:
Fill Inputs manually
Clear
Compute
Output:
Visualization:
Dataset Info
Name
Description
Input Neurons
Output Neurons
Number of Data Items
Problem Type
Has Drawing Matrix
Has a Predefined Architecture
Has Predefined Training Settings
Input Example
Optimizer Settings Info
Learning rate
The learning rate defines how much of the calculated weight change is applied to the current weight. If the learning rate is too high, the minimum of the loss function will be overshot. If the learning rate is too low, training will take an extremely long time or get stuck.
Usually a learning rate between 0 and 1 is selected. For the adaptive learning rate optimizers, or if a momentum has been specified, a rather small learning rate between 0.01 and 0.2 should be chosen.
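To illustrate (a toy sketch of my own, not BlueNeurons code): on the loss f(w) = w², a rate of 0.1 converges smoothly, while a rate above 1.0 overshoots the minimum and diverges:

w = 5.0
learning_rate = 0.1                    # try 1.1 to watch the minimum being overshot
for _ in range(50):
    grad = 2.0 * w                     # gradient of f(w) = w^2
    w = w - learning_rate * grad       # only a fraction of the change is applied
print(w)                               # approaches the minimum at w = 0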
Momentum
Momentum adds a fraction of the past weight change to the current weight change. It helps to accelerate learning and improves accuracy, as it biases the update towards the recent direction of the gradient. The momentum can initially be set to 0.9.
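As a sketch of the update rule (variable names are mine), a velocity term carries a fraction of the previous weight change into the next one:

w, velocity = 5.0, 0.0
momentum, lr = 0.9, 0.05
for _ in range(100):
    grad = 2.0 * w                               # gradient of f(w) = w^2
    velocity = momentum * velocity - lr * grad   # reuse 90% of the past weight change
    w = w + velocity                             # update biased towards the recent direction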
Loss Info
General
A loss function measures how well a neural network performs. The goal of training a neural network is to minimize the loss function.
Mean Squared Error
The mean squared error is a popular loss function for regression problems. It measures the quality of an estimator by calculating the average squared distance between the predicted values ỹ and the actual values y.
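In formula form, MSE = (1/n) · Σ (ỹᵢ − yᵢ)²; a one-function sketch:

import numpy as np

def mean_squared_error(y_pred, y_true):
    # average of the squared distances between predicted and actual points
    return np.mean((y_pred - y_true) ** 2)

mean_squared_error(np.array([0.9, 2.1]), np.array([1.0, 2.0]))   # -> 0.01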
Softmax Cross Entropy
Softmax Cross Entropy, also known as categorical cross entropy, is used for multi-class classification problems.
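A minimal sketch of the combined computation for a single sample (raw network outputs and a one-hot target; the names are mine): the softmax turns the outputs into class probabilities, and the cross entropy penalizes a low probability for the true class.

import numpy as np

def softmax_cross_entropy(logits, target_one_hot):
    z = logits - np.max(logits)                  # shift for numerical stability
    log_probs = z - np.log(np.sum(np.exp(z)))    # log of the softmax probabilities
    return -np.sum(target_one_hot * log_probs)   # negative log-likelihood of the true class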
Optimizer Info
General
An optimizer is used to minimize the loss function.
Stochastic Gradient Descent (SGD)
Stochastic gradient descent (SGD) is a popular optimization method in machine learning. At each iteration, a data element is selected at random, and the gradient over the entire dataset is estimated from this single element. The advantage of this approach is that it is not necessary to go through the entire dataset to calculate the gradient. However, the method needs more iterations to converge, since this estimation makes the training very noisy.
Parameter: Learning rate (required)
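A sketch of the idea (the dataset X, targets y, and grad_fn are placeholders of my choosing, not the site's code):

import numpy as np

def sgd(w, X, y, grad_fn, learning_rate=0.1, iterations=1000):
    for _ in range(iterations):
        i = np.random.randint(len(X))                     # select one data element at random
        w = w - learning_rate * grad_fn(w, X[i], y[i])    # noisy estimate of the full gradient
    return w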
Stochastic Gradient Descent with Momentum (SGDM)
Momentum adds a fraction of the past weight change to the current weight change and therefore helps SGD find the right direction towards a minimum. This technique can yield a better estimate of the actual gradient than classic SGD without momentum.
Parameters: Learning rate (required) and momentum (required)
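The same loop as the SGD sketch above, extended with a velocity term:

import numpy as np

def sgd_momentum(w, X, y, grad_fn, learning_rate=0.05, momentum=0.9, iterations=1000):
    v = np.zeros_like(w)
    for _ in range(iterations):
        i = np.random.randint(len(X))                                # random data element, as in SGD
        v = momentum * v - learning_rate * grad_fn(w, X[i], y[i])    # keep a fraction of the past change
        w = w + v                                                    # smoothed, direction-aware step
    return w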
Adam and RMSprop
Adam and RMSprop are adaptive learning rate optimizers, meaning they automatically adjust the learning rate during training. With SGD and SGDM, the learning rate defined at the beginning remains constant throughout. However, it has been shown that appropriately adjusting the learning rate during training has a positive impact on the result: close to the minimum a small learning rate is desirable, while further away a larger learning rate accelerates training. Another advantage of adaptive learning rate optimizers is that the trial-and-error search for optimal parameters can be avoided.
Adam: Initial learning rate (optional)
RMSprop: Initial learning rate (required), momentum (optional)
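For reference, a sketch of the standard Adam update for one parameter vector (the standard formulation, not necessarily the site's exact implementation):

import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad                  # running average of gradients
    v = b2 * v + (1 - b2) * grad ** 2             # running average of squared gradients
    m_hat = m / (1 - b1 ** t)                     # bias correction for early steps (t starts at 1)
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)   # per-parameter adaptive step size
    return w, m, v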