
Let's Learn Artificial Neural Networks - Programming - Nairaland


Let Learn Artificial Neutral Network by gbolly1151(m): 3:09am On May 20, 2020
A few weeks ago I embarked on an ANN (artificial neural network) journey, and it has been fun and exciting to learn. I would like to open this thread so people can drop whatever will be helpful for beginners as we go on this ANN journey together.

1 Like

Re: Let Learn Artificial Neutral Network by gbolly1151(m): 12:36pm On May 20, 2020
The backbone of deep learning comes from the simple linear equation below:


Y = a + bx

What the ANN is looking for is a and b.

A typical question to throw at the network is: if Y = 1 when x = 0, find a and b.

How an ANN finds a and b is as follows:
1. First pick random values for a and b.
2. Calculate the predicted Y from x using the generated a and b (a + bx).
3. If the output is not equal to Y, update a and b and try again.

Normally we could do this by brute force, but that consumes a lot of time and resources before finding the real a and b.

The way an ANN minimizes the error is by updating a and b from the error obtained, using gradient descent.

Error = Y - YP (YP = the output obtained from the randomly picked a and b)

This error is used to update a and b by finding the rate at which a and b contributed to the error, i.e. dE/da and dE/db.

Our new a and b for the next trial will be

a = a - dE/da
b = b - dE/db

(we subtract because we want to move a and b in the direction that reduces the error)

To control the size of each update we add a step factor called the learning rate, n.

So the new equations are

a = a - n * dE/da
b = b - n * dE/db

We keep updating until we find the a and b that satisfy the equation, i.e. until the error is small enough.
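To make the update rule concrete, here is a minimal gradient-descent sketch in plain Python (not from the original post); the data points, learning rate and iteration count are made-up values for illustration.

# Minimal gradient-descent sketch for fitting Y = a + b*x.
# The data, learning rate and number of iterations are illustrative assumptions.

# training data: points that satisfy Y = 2 + 3*x
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0, 5.0, 8.0, 11.0]

a, b = 0.0, 0.0      # start from arbitrary values
n = 0.01             # learning rate

for step in range(5000):
    # dE/da and dE/db for E = 1/2 * sum((Y - YP)^2), with YP = a + b*x
    dE_da = sum((a + b * x) - y for x, y in zip(xs, ys))
    dE_db = sum(((a + b * x) - y) * x for x, y in zip(xs, ys))
    a -= n * dE_da   # move against the gradient to reduce the error
    b -= n * dE_db

print(a, b)          # should end up close to a = 2, b = 3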
Re: Let Learn Artificial Neutral Network by Nobody: 6:11pm On May 20, 2020
Thanks for this thread.

I'm currently learning machine learning, and I know that as the days go by I will get to ANNs. I hope to learn from this thread when the time comes.

1 Like

Re: Let Learn Artificial Neutral Network by iCode2: 7:37pm On May 20, 2020
This looks like Physics. cheesy

1 Like

Re: Let Learn Artificial Neutral Network by gbolly1151(m): 7:39pm On May 20, 2020
iCode2:
This looks like Physics. cheesy
Hmm...it is possible to see that equation in physics but it is math [linear equation]
Re: Let Learn Artificial Neutral Network by iCode2: 7:47pm On May 20, 2020
gbolly1151:

Hmm...it is possible to see that equation in physics but it is math [linear equation]
Hmmm nice one

1 Like

Re: Let Learn Artificial Neutral Network by scarplanet(m): 9:54pm On May 20, 2020
gbolly1151:
Nice one Tag wink wink

1 Like

Re: Let Learn Artificial Neutral Network by gbolly1151(m): 10:08pm On May 20, 2020
scarplanet:


Nice one Tag wink wink
Hmm...praise?
Re: Let Learn Artificial Neutral Network by gbolly1151(m): 10:40pm On May 20, 2020
What is a perceptron?
A perceptron is the simplest network. It accepts an input (x) or inputs (x1 ... xn), multiplies each input by a weight (b) to get (bx), adds a bias (a) to it, then passes the sum (a + bx) through an activation function such as sigmoid to produce the predicted output (YP).

By default the activation is
1 if (a + bx) >= 0 and
0 if (a + bx) < 0

Here is the flow

Input(x) >>>> sum(a + bx) >>>> activate >>>> output


def perceptron(x, weight=0.4, bias=0.1):  # only one input for now
    b, a = weight, bias
    z = a + b * x
    return 1 if z >= 0 else 0

YP = perceptron(1)


The activation function is what helps to produce a predicted output (YP) that is closer to Y.

The sigmoid function, sigmoid(z) = 1/(1 + e^-z), is one such activation function.

In the real world there is more than one input (x), e.g.
input1 = [0, 1, 0, 1], output = 1
input2 = [1, 1, 0, 1], output = 0

In this case
Y = a + b1*x1 + b2*x2 + b3*x3 + b4*x4 (a small sketch of this sum follows below)
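Here is a quick sketch of that multi-input sum in plain Python; the bias and weight values are made-up assumptions, not values from the post.

# weighted sum for a 4-input perceptron; a and the b values are illustrative
a = 0.1
b = [0.4, -0.2, 0.3, 0.5]
x = [0, 1, 0, 1]

z = a + sum(bi * xi for bi, xi in zip(b, x))
YP = 1 if z >= 0 else 0   # default threshold activation
print(z, YP)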

1 Like

Re: Let Learn Artificial Neutral Network by faithfull18(f): 10:46pm On May 20, 2020
Hmmn. How does one learn all these maths embarassed

1 Like

Re: Let Learn Artificial Neutral Network by gbolly1151(m): 2:54am On May 21, 2020
faithfull18:
Hmmn. How does one learn all these maths embarassed
Pick up your O-level or A-level textbooks; it is all there, linear algebra.
Re: Let Learn Artificial Neutral Network by scarplanet(m): 1:45pm On May 21, 2020
gbolly1151:

Hmm...praise?


Negative. Try again grin grin
Re: Let Learn Artificial Neutral Network by gbolly1151(m): 5:08pm On May 21, 2020
scarplanet:


Negative. Try again grin grin
Hmm....ikena boss
Re: Let Learn Artificial Neutral Network by gbolly1151(m): 12:05pm On May 22, 2020
What is activation?
Here is the flow from input to output:

(Input(x) >>>> sum(a + bx) >>>> activate >>>> output)

Every perceptron has an activation. By default the activation is
1 if (a + bx) >= 0 and
0 if (a + bx) < 0

The activation is what transforms the result of (a + bx); it takes a + bx as its input inside the perceptron. This is needed to reach our desired output Y more quickly.

So let z = a + bx

We have various types of activation; some are:
1. relu: the rectified linear unit, returns f(z) = max(0, z).
2. logistic: the logistic sigmoid function, returns f(z) = 1 / (1 + exp(-z)).
3. identity: no-op activation, useful to implement a linear bottleneck, returns f(z) = z.
4. tanh: the hyperbolic tangent function, returns f(z) = tanh(z).


Let's update our code:


from math import exp as e, tanh as t
from random import randint

def sigmoid(z):
    return 1 / (1 + e(-z))

def relu(z):
    return max(0, z)

def tanh(z):
    return t(z)

def perceptron(input_set: list, activation=None):
    # random bias a and one random weight per input, each between 0.2 and 0.8
    a = round(randint(1, 4) / 5, 3)
    b = [round(randint(1, 4) / 5, 3) for i in range(len(input_set))]
    bx = 0
    for weight, x in zip(b, input_set):
        bx += weight * x
    z = a + bx
    if activation is None:
        return 1 if z >= 0 else 0
    return activation(z)

# Let's test our output
output = perceptron([1, 0], activation=tanh)
print(output)  # prints a value between 0 and 1 (depends on the random weights)
Re: Let Learn Artificial Neutral Network by gbolly1151(m): 10:31am On May 26, 2020
WHAT IS FEEDFORWARD?

When an input moves from the input layer, through the hidden layers, and then to the output layer, that is feedforward.

Input >>> hidden layer >>> output (feedforward operation)
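To make the flow concrete, here is a minimal sketch (not from the post) of a feedforward pass with one hidden layer of two neurons; all weights and biases are made-up illustrative values.

from math import exp

def sigmoid(z):
    return 1 / (1 + exp(-z))

# made-up weights/biases for a 2-input -> 2-hidden -> 1-output network
hidden_w = [[0.4, -0.3], [0.2, 0.7]]   # one weight list per hidden neuron
hidden_b = [0.1, -0.2]
out_w = [0.6, -0.5]
out_b = 0.05

def feedforward(x):
    # input layer -> hidden layer
    hidden = []
    for w, b in zip(hidden_w, hidden_b):
        z = b + sum(wi * xi for wi, xi in zip(w, x))
        hidden.append(sigmoid(z))
    # hidden layer -> output layer
    z_out = out_b + sum(wi * hi for wi, hi in zip(out_w, hidden))
    return sigmoid(z_out)

print(feedforward([1, 0]))   # a single predicted output between 0 and 1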
Re: Let Learn Artificial Neutral Network by 404Dev: 10:36am On May 26, 2020
There is a new platform that aims to teach you how to code and become a better developer with premium courses and videos, verified job listings on the job board, digital assets to speed up your work flow, a discussion forum to engage with others and ask questions and also a blog with articles published daily.

This platform will be officially launched by Friday. All ongoing courses will be drip-launched, which means new episodes of all ongoing courses will be published every day until each course is completed.

Expect courses on html and css, vuejs, react, node, php and laravel with other exciting frameworks.

Courses will be practical and project based (not all these todo app kind of projects but real life projects that would eventually be hosted for public use). There will also be introductory courses on all languages and frameworks for beginners to grasp the fundamentals.
Some courses would be free whereas others would require a pro subscription.
You can also download all episodes for offline viewing if you desire.

The job board will have verified and vetted job listings including full-time, remote, internship and even freelance gigs. You don't need to be scouring the internet for job openings, as we have done all that and will be posting verified job listings for free on a daily basis. You can also post your own job openings, all for free.

The community forum is free for all. You can engage with other developers, ask questions, and join conversations and discussions about trending issues in tech and web development. A free account is needed to participate in the forum.

The assets library is a collection of assets that you could use in your projects ranging from templates to starter kits, complete scripts etc. Most of the assets are free while others would require a pro subscription.

A free account gets you instant access to the job board, the discussion forums, all free courses, videos and assets.

A pro account gets you access to everything.
Pro accounts would cost just NGN 950 a month.

If interested, have any questions or wish to be updated you can join this WhatsApp group

https://chat.whatsapp . com/HZgqd0bHXETGGtYqO2OTdl

[copy and paste it in your browser and remove the spaces]
Re: Let Learn Artificial Neutral Network by gbolly1151(m): 2:28pm On May 29, 2020
What is back propagation?
To understand back propagation you need a little calculus. It is the way of obtaining each weight's contribution to the error of the network in order to update it.
Recall that

Input >>>> Z = weight * input >>>> activation(Z) >>>>> predicted output

If the predicted output is equal to the real output then we do not need to update the weights; otherwise we update.


Using the mean squared error (MSE) to measure the error: E = 1/2 * sum((real output - predicted output)^2), i.e.
E = 1/2 sum(Y - PY)^2

To calculate how much a weight has contributed to the error we find

dE/dw

e.g.
input = x, output = Y
Z = a + bx (b = weight, a = bias)
PY = sigmoid(Z)

E = 1/2 sum(Y - PY)^2

Let U = Y - PY
dU/dPY = -1
dU/dY = 1

dE/dU = sum(Y - PY)

PY = sigmoid(Z)
dPY/dZ = PY(1 - PY) (derivative of the sigmoid activation)

Z = a + bx
dZ/db = x
dZ/da = 1

dE/dPY = dE/dU * dU/dPY = sum(Y - PY) * (-1) = sum(PY - Y)

dE/db = dE/dPY * dPY/dZ * dZ/db = sum((PY - Y) * PY(1 - PY) * x)
dE/da = dE/dPY * dPY/dZ * dZ/da = sum((PY - Y) * PY(1 - PY))

b = b - lr * dE/db
a = a - lr * dE/da

Note that updating the weights by lr * dE/db and lr * dE/da in this way is what is known as gradient descent.

The learning rate (lr) helps to reduce the size of each dE/db step, so that we do not overshoot the real weight (b) that satisfies the equation.
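Here is a minimal sketch of these update rules in plain Python, for a single sigmoid neuron; the training data, learning rate and epoch count are illustrative assumptions, not values from the post.

from math import exp

def sigmoid(z):
    return 1 / (1 + exp(-z))

# toy single-input data (illustrative): target Y is 1 when x is large
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 0.0, 1.0, 1.0]

a, b = 0.0, 0.0   # bias and weight
lr = 0.5          # learning rate (assumed value)

for epoch in range(2000):
    dE_da = dE_db = 0.0
    for x, y in zip(xs, ys):
        py = sigmoid(a + b * x)           # forward pass
        delta = (py - y) * py * (1 - py)  # dE/dZ = (PY - Y) * PY(1 - PY)
        dE_da += delta                    # dZ/da = 1
        dE_db += delta * x                # dZ/db = x
    a -= lr * dE_da                       # move against the gradient
    b -= lr * dE_db

print(a, b, [round(sigmoid(a + b * x), 2) for x in xs])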
Re: Let Learn Artificial Neutral Network by gbolly1151(m): 2:35pm On May 29, 2020
gbolly1151:

This was for a simple linear equation; it gets more complex when we have
y = a + b1x1 + b2x2

Unless it is for research purposes you don't need to do all of this by hand; you can proceed to use Keras to start building your models, since you now have an overview of the concepts. In the next update I will post how to build a model using Keras.
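While waiting for that update, here is a minimal sketch of what a small Keras model could look like; the toy data, layer sizes, optimizer and epoch count are assumptions for illustration, not the author's code.

import numpy as np
from tensorflow import keras

# toy data for a two-input problem (illustrative values)
X = np.array([[0, 1], [1, 1], [1, 0], [0, 0]], dtype=float)
y = np.array([1, 0, 0, 1], dtype=float)

model = keras.Sequential([
    keras.layers.Dense(4, activation="sigmoid", input_shape=(2,)),  # hidden layer
    keras.layers.Dense(1, activation="sigmoid"),                    # output layer
])
model.compile(optimizer="sgd", loss="mse")
model.fit(X, y, epochs=500, verbose=0)

print(model.predict(X))   # predictions should move toward the targets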
Re: Let Learn Artificial Neutral Network by Runningwater(m): 6:10am On Jun 19, 2020
Blast!!! Nice thread posts. Machine learning intrigued me. I could implement some codes in Matlab, simpler problems anyway. Kudos to you keep it burning.

1 Like

Re: Let Learn Artificial Neutral Network by gbolly1151(m): 8:01am On Jun 19, 2020
Runningwater:
Blast!!! Nice thread posts. Machine learning intrigued me. I could implement some codes in Matlab, simpler problems anyway. Kudos to you keep it burning.
Hmm... thanks bro, but it might take some time before I resume this thread.

1 Like

Re: Let Learn Artificial Neutral Network by Runningwater(m): 2:30pm On Jun 19, 2020
gbolly1151:

Hmm...thank bro but might take time to resume this thread
What courses, tutorials, or ebooks are you using to learn about the artificial neural networks (ANN)? So I can learn more too.
Thanks in advance
Re: Let Learn Artificial Neutral Network by gbolly1151(m): 3:48pm On Jun 19, 2020
Runningwater:

What courses, tutorials, or ebooks are you using to learn about the artificial neural networks (ANN)? So I can learn more too.
Thanks in advance
Pdf

Neural Networks and Deep Learning by Michael Nielsen

1 Like

Re: Let Learn Artificial Neutral Network by Dwise19990(m): 8:31am On Jun 20, 2020
gbolly1151:
Now I see where our mathematics is applied
Re: Let Learn Artificial Neutral Network by Runningwater(m): 10:56am On Jun 20, 2020
gbolly1151:

Pdf

Neural Networks and Deep Learning by Michael Nielsen
Thanks....
Re: Let Learn Artificial Neutral Network by gbolly1151(m): 10:59am On Jun 20, 2020
Dwise19990:

Now I see where our mathematics is applied
Yes... in Nigerian schools they just hard-code math into our brains without showing its full application.
Re: Let Learn Artificial Neutral Network by gbolly1151(m): 11:01am On Jun 20, 2020
Dwise19990:

Now I see where our mathematics is applied
If you learn algorithm analysis you will appreciate math more, because it brings math closer to the real world.
