adrianbarwicki/ml-perceptron

ml-perceptron

Training and fitting models with Rosenblatt's perceptron rule

General idea

[Figure: general concept of the perceptron]

The preceding figure illustrates how the perceptron receives the input features of a sample and combines them with its weights to compute the net input. The net input is then passed to the activation function (here, the unit step function), which produces a binary output of 0 or 1: the predicted class label of the sample. During the learning phase, this prediction is compared against the true label to calculate the error and update the weights.

p.train([
    [ 1, 1, 0 ],
    [ 1, 1, 0 ],
    [ 0, 1, 0 ]
], [
    1, 1, 0
]);

p.predict([ 1, 1, 1 ]);
// 1

p.predict([ 0, 0, 0 ]);
// 0
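Under the hood, Rosenblatt's rule adjusts each weight by `eta * (target - prediction) * input` after every sample, and the bias by `eta * (target - prediction)`. The loop below is a minimal self-contained sketch of that rule, not this library's implementation; the function and parameter names are illustrative:

```javascript
// Sketch of Rosenblatt's perceptron rule (illustrative, not this library's API).
// Each misclassified sample nudges the weights toward the correct side of the
// decision boundary; correctly classified samples leave them unchanged.
function trainPerceptron(samples, labels, eta = 1, epochs = 10) {
  let weights = new Array(samples[0].length).fill(0);
  let bias = 0;

  // Unit step activation: net input >= 0 maps to 1, otherwise to 0.
  const predict = sample =>
    sample.reduce((net, x, i) => net + x * weights[i], bias) >= 0 ? 1 : 0;

  for (let epoch = 0; epoch < epochs; epoch++) {
    samples.forEach((sample, i) => {
      const update = eta * (labels[i] - predict(sample));
      weights = weights.map((w, j) => w + update * sample[j]);
      bias += update;
    });
  }
  return { predict, weights, bias };
}

// Same training set as the library example above.
const model = trainPerceptron(
  [[1, 1, 0], [1, 1, 0], [0, 1, 0]],
  [1, 1, 0]
);
model.predict([1, 1, 1]); // 1
model.predict([0, 0, 0]); // 0
```

With a learning rate of 1 and integer inputs, all arithmetic stays exact, and on this linearly separable training set the loop converges after a few epochs.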

You can also set a custom activation function, for example the sigmoid function:

p.setActivationFn(value => {
    // Sigmoid: maps the net input to a value in (0, 1).
    const exp = Math.exp(-value); // e^(-value)
    return 1 / (1 + exp);
});
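Note that with a sigmoid activation the output is a continuous value strictly between 0 and 1 rather than a hard 0/1 label, so to recover a class label you can threshold it at 0.5. A small standalone illustration (using `Math.exp` for e):

```javascript
// Standard sigmoid: 1 / (1 + e^(-value)), strictly between 0 and 1.
const sigmoid = value => 1 / (1 + Math.exp(-value));

sigmoid(0);        // 0.5: a net input of zero sits exactly on the decision boundary
sigmoid(5) > 0.5;  // true  -> class 1 after thresholding
sigmoid(-5) > 0.5; // false -> class 0 after thresholding
```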
