<table>
  <tr><td>Property group</td><td>Property name</td><td>Description</td><td>Requirement</td><td>Allowed values</td></tr>
  <tr><td>Model File</td><td>Save Model File</td><td>Decide whether to save the trained model to a file.</td><td>Required</td><td>Yes, No</td></tr>
  <tr><td>Model File</td><td>Model File Path</td><td>Set the path where the model file is saved.</td><td>Conditionally Required</td><td/></tr>
  <tr><td>Selection options</td><td>Run Count</td><td>Set the maximum number of times to train on the dataset.</td><td>Required</td><td>natural number</td></tr>
  <tr><td>Selection options</td><td>Final Target Error</td><td>Set the error threshold below which training stops.</td><td>Required</td><td>0 &lt; real number</td></tr>
  <tr><td>Selection options</td><td>Loss Function</td><td>Select the type of loss function. If Loss Function is Cross Entropy, the SoftMax activation function is automatically applied to the output layer.</td><td>Required</td><td>Sum of Squared Error, Cross Entropy</td></tr>
  <tr><td>Selection options</td><td>Output Layer Activation Function</td><td>Select the activation function of the output layer when Loss Function is Sum of Squared Error.</td><td>Conditionally Required</td><td>Linear, Tanh, Log Sigmoid, ReLU, Leaky ReLU, ELU</td></tr>
  <tr><td>Selection options</td><td>Optimization Technique</td><td>Set the weight-update method.</td><td>Required</td><td>Gradient Descent with Momentum, ADAM, Adagrad, RMSProp</td></tr>
  <tr><td>Selection options</td><td>Batch Size</td><td>Set the number of samples used per weight update in mini-batch gradient descent.</td><td>Required</td><td>0 &lt; integer ≤ k (k: number of data samples)</td></tr>
  <tr><td>Selection options</td><td>Learning Rate</td><td>Set the step size of weight updates during optimization.</td><td>Required</td><td>0 &lt; real number ≤ 1</td></tr>
  <tr><td>Selection options</td><td>Momentum</td><td>Set how strongly each weight update retains the previous update direction when Optimization Technique is Gradient Descent with Momentum.</td><td>Conditionally Required</td><td>0 ≤ real number &lt; 1</td></tr>
  <tr><td>Selection options</td><td>Gamma</td><td>Set the decay rate of the moving average of squared gradients when Optimization Technique is RMSProp.</td><td>Conditionally Required</td><td>0 ≤ real number &lt; 1</td></tr>
  <tr><td>Selection options</td><td>Beta1</td><td>Set the exponential decay rate for the first-moment (gradient mean) estimate when Optimization Technique is ADAM.</td><td>Conditionally Required</td><td>0 ≤ real number &lt; 1</td></tr>
  <tr><td>Selection options</td><td>Beta2</td><td>Set the exponential decay rate for the second-moment (squared gradient) estimate when Optimization Technique is ADAM.</td><td>Conditionally Required</td><td>0 ≤ real number &lt; 1</td></tr>
  <tr><td>Selection options - Hidden Layer Configuration</td><td>Add Hidden Layer</td><td>Add a hidden layer to the model.</td><td>Button</td><td/></tr>
  <tr><td>Selection options - Hidden Layer Configuration</td><td>Remove Last Hidden Layer</td><td>Remove the last hidden layer from the model.</td><td>Button</td><td/></tr>
  <tr><td>Selection options - Hidden Layer #</td><td>Number of Nodes</td><td>Set the number of perceptrons (nodes) in the hidden layer.</td><td>Required</td><td>natural number</td></tr>
  <tr><td>Selection options - Hidden Layer #</td><td>Activation Function</td><td>Select the activation function of the hidden layer.</td><td>Required</td><td>Linear, Tanh, Log Sigmoid, ReLU, Leaky ReLU, ELU</td></tr>
</table>
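The value ranges above can be expressed as a small validation sketch. This is hypothetical Python for illustration only; the class and field names are not part of any product API, and only a subset of the properties is shown:

```python
from dataclasses import dataclass, field
from typing import List

# Allowed activation functions, per the table above.
ACTIVATIONS = {"Linear", "Tanh", "Log Sigmoid", "ReLU", "Leaky ReLU", "ELU"}

@dataclass
class HiddenLayer:
    num_nodes: int        # Number of Nodes: natural number
    activation: str       # Activation Function: one of ACTIVATIONS

    def __post_init__(self):
        if self.num_nodes < 1:
            raise ValueError("Number of Nodes must be a natural number")
        if self.activation not in ACTIVATIONS:
            raise ValueError(f"Activation Function must be one of {ACTIVATIONS}")

@dataclass
class TrainingConfig:
    run_count: int            # Run Count: natural number
    final_target_error: float # Final Target Error: 0 < real number
    loss_function: str        # "Sum of Squared Error" or "Cross Entropy"
    optimization: str         # e.g. "ADAM", "RMSProp"
    batch_size: int           # Batch Size: 0 < integer <= num_samples
    learning_rate: float      # Learning Rate: 0 < real number <= 1
    num_samples: int          # k: number of data samples
    momentum: float = 0.9     # Momentum: 0 <= real number < 1
    hidden_layers: List[HiddenLayer] = field(default_factory=list)

    def __post_init__(self):
        if self.run_count < 1:
            raise ValueError("Run Count must be a natural number")
        if not self.final_target_error > 0:
            raise ValueError("Final Target Error must be > 0")
        if not 0 < self.batch_size <= self.num_samples:
            raise ValueError("Batch Size must satisfy 0 < integer <= k")
        if not 0 < self.learning_rate <= 1:
            raise ValueError("Learning Rate must satisfy 0 < value <= 1")
        if not 0 <= self.momentum < 1:
            raise ValueError("Momentum must satisfy 0 <= value < 1")
```

For example, `TrainingConfig(run_count=100, final_target_error=0.01, loss_function="Cross Entropy", optimization="ADAM", batch_size=32, learning_rate=0.001, num_samples=1000)` constructs a valid configuration, while a Learning Rate outside (0, 1] raises a `ValueError`.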