Nairaland Forum / 4kings's Profile / 4kings's Posts
Programming / Re: Artificial Intelligence And Machine Learning Group by 4kings: 12:17pm On Apr 21, 2018 |
AnonyNymous:Just MIT OpenCourseWare, bro. You know, what I studied for my BSc was totally unrelated: Accounting. That's why I had to clarify that I don't have a broad knowledge of the field; I've just learnt a lot compared to most others I meet around, even in Bioinformatics and Computer Science (in Nigeria though), which I find very unfortunate. 3 Likes 1 Share |
Phones / Re: New Headset Enables You To Talk Without Using Your Mouth by 4kings: 11:43am On Apr 21, 2018 |
Yeah, I've seen this before with Emotiv. Though it's constrained for now to a set of trained examples with a particular test set (people): you have to train it on what you think and the corresponding actions/words, so it can learn and generate a model for later use. So it can't just read people's minds without being trained, for now. Here's a short video demonstrating it: https://www.youtube.com/watch?v=bposG6XHXvU 2 Likes 1 Share |
Programming / Re: Artificial Intelligence And Machine Learning Group by 4kings: 11:18am On Apr 21, 2018 |
Darivie04:Cool. Just checked the contents of the book; that's a good start. You could also check out the courses on Stanford Online, which are free with good materials: https://online.stanford.edu/courses And no, I don't have a broad knowledge of AI yet, at least not to the level of those with the academic background. 2 Likes |
Programming / Re: Artificial Intelligence And Machine Learning Group by 4kings: 10:41am On Apr 21, 2018 |
preciousnobel:I suggest you focus more on getting an online GPU, while also getting a good enough laptop. Depending on what you do with machine learning as you progress, you might need to process very large data; for example, datasets for medical imaging scans, speech synthesis, and so on can be over 60 GB. Ordinary games already crash most laptops, but here you would need to process and visualize the data to identify anomalies or a likely approach, then carry out feature engineering tests on the data if need be, then run algorithms on the feature vectors, cross-validate against other algorithms or do parameter optimization, and so on... That is too much work for a normal laptop, unless you can afford the likes of the Acer Predator or the later Alienware models. So my suggestion is: get a GPU cloud computing service that you can work from on a normal system, then use your normal system to test on small data or the algorithms you are developing. |
Religion / Re: I've Never Seen A Breed Of Christians As Hateful As Those On Nairaland!! by 4kings: 2:30am On Apr 19, 2018 |
Just finished reading every post on this thread. LMAO. That Butterflyleo is a tough case. That's Naijadeyhia for you. Lol. Anyway, I don't have much to add; during the Naijadeyhia/Butterflylion fraud saga, even KingEbukasBlog and Felixomor didn't bother questioning him, not to talk of Analice107 sef. Anything to support their religious argument, I guess. Though like you said, some are cool, like lordnicklaus, kilo4sure and a few others, though they're no longer active in this section. 3 Likes |
Properties / Re: Any lawyer in the house? What are the laws guarding tenancy agreement by 4kings: 3:58am On Apr 17, 2018 |
Wirinet Delishpot Dalaman any idea? 1 Like |
Religion / Re: Pleiadians by 4kings: 3:20am On Apr 17, 2018 |
Chubhie:Hmm. Okay, would study it and come back. 1 Like |
Education / Re: Motilewa Bolanle Becomes Covenant University's Youngest Phd Holder Ever Produced by 4kings: 3:19am On Apr 17, 2018 |
CodeTemplar:There are privately funded ones na... My point is that there are folks out there who could have started their own small establishment with the skills that go with their certificates, but you see them claiming there are no jobs. Those certificates can mock all the noise they make about not having jobs. Hmmm, these days some certificates are not really relevant, and some skills are not taught adequately in the university. |
Education / Re: Motilewa Bolanle Becomes Covenant University's Youngest Phd Holder Ever Produced by 4kings: 3:04am On Apr 17, 2018 |
CodeTemplar:Subsidized education, in the sense of 90% of payments going to salaries and 10% to educational expenses? Over the years, Nigeria (or the Ministry of Education) has not sponsored or shown interest in funding research. I think a little motivation for research would go a long way in planning for and developing students in STEM. |
Religion / Re: Pleiadians by 4kings: 2:52am On Apr 17, 2018 |
Chubhie:Hey Chubhie, it's been a while since I've seen your posts. Can you educate those of us who have little knowledge about this? Please! |
Programming / Re: Rchain Community by 4kings: 2:43am On Apr 17, 2018 |
ValentineMary:Hmm, observing... Do you use it? |
Science/Technology / Re: $1 Million Up For Grabs At African Entrepreneurship Award 2018 by 4kings: 2:40am On Apr 17, 2018 |
Interesting! 1 Like |
Programming / Re: Da Fuk Is Wrong With This Board? Useless Threads Everywhere!! by 4kings: 2:02am On Apr 17, 2018 |
dhtml181:Hmmm, I didn't notice this thread. DHTML has resurrected on NL? Hmm, observing... Seun, at least help us add a function for writing math notations if you can. Thanks. Not sure I can moderate... contact Seunthomas, mechtronics or dhtml, or just organize a poll or something. |
Programming / Re: Artificial Intelligence And Machine Learning Group by 4kings: 6:57pm On Apr 16, 2018 |
Samcent:Didn't think about that. Though it's not compulsory to use Python; that's why I'm explaining the theoretical part of every machine learning algorithm, so anyone can use any language to implement it. However, a good recommendation is the documentation for NumPy and Matplotlib, and maybe my next tutorial will be a short, relevant overview of NumPy and Matplotlib. |
Fashion / Re: Photos From The First Saudi Arabia Fashion Week Where Men Were Not Allowed In by 4kings: 4:58pm On Apr 16, 2018 |
AnonyNymous:Well said. |
Programming / Re: Artificial Intelligence And Machine Learning Group by 4kings: 2:23pm On Apr 16, 2018 |
Okay guys, here I go with the next tutorial. Finally, right? Would have done this yesterday, but I got distracted watching Eye Candy. SMH

In the previous tutorial we looked at how a simple linear equation can solve a regression problem. However, our dataset was evenly distributed on purpose, for demonstration, and we only dealt with one independent variable, or feature. So today we will explore two datasets to see the functions available for adjusting error to improve prediction accuracy (that's actually what machine learning is about: how well an algorithm predicts, or generalises, while minimising error), and we will also look at multiple linear regression for handling more than one independent feature.

But before we go on, it is important to understand matrices, as we are going to use them in our calculations. Matrices help solve complex simultaneous equations with multiple variables faster and more easily. You can go to the mathsisfun website for a simple and quick intro to matrices here: https://www.mathsisfun.com/algebra/matrix-introduction.html, the matrix inverse here: https://www.mathsisfun.com/algebra/matrix-inverse.html, and finally solving simultaneous equations with matrices here: https://www.mathsisfun.com/algebra/systems-linear-equations-matrices.html. That's all the matrix material we will need for now. It's really easy to understand, especially with the elegant explanations on the mathsisfun website, but if you don't understand any concept you can always ask.

Let's explore an unevenly distributed dataset. This dataset is not as evenly distributed as the previous one, but you can still observe the pattern: the price increases by 100 after every two intervals. I prepared it this way to demonstrate the example in the next tutorial in a simple manner. So from the pattern in our dataset we can predict that at hour 16 the price would be 800, and at hour 20 the price would be 1000, right? Now let's scatter plot our dataset:
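(The dataset listing and scatter-plot code in this post were images that no longer load, so here is a hedged reconstruction in Python. The exact hours/prices values are an assumption on my part, but they reproduce every output quoted later in the tutorial, so they should be close to the original.)

```python
# Reconstructed dataset (assumption: the original was an image). Price rises
# by 100 after every two hours, and hours run from 1 to 15; this matches all
# of the predicted values quoted later in the post.
import matplotlib
matplotlib.use("Agg")  # render off-screen; interactively you would call plt.show()
import matplotlib.pyplot as plt

hours = list(range(1, 16))                      # 1, 2, ..., 15
prices = [100 * ((h + 1) // 2) for h in hours]  # 100, 100, 200, 200, ..., 700, 700, 800

plt.scatter(hours, prices)
plt.xlabel("hour")
plt.ylabel("price")
plt.savefig("scatter.png")
print(prices)
```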
Now, before we use our linear equation function to fit a line and make predictions, let me make an important change.

-- Our previous dataset was evenly distributed, so when calculating the gradient (m) we could use any two points in y and the corresponding points in x.
-- By that logic, if we take the average value of y and the average value of x, we arrive at the same m, since the data is evenly distributed (try it yourself).
-- So m = average of Y / average of X.
-- This is what we will use for a normal dataset, because it is not evenly distributed and we need the averages: two randomly picked points may deviate a lot from the true slope.
-- So our code changes by one line:

#Code:

Hope that's clear... Now let's use the function to fit a line to our scatter plot. By the way, the function y = mx + b is what fits a line to our data and enables us to make predictions. It fits a line by calculating the points from y = 0 up to y = n. Get it?

#Code:

[img]https://image.ibb.co/cNV7bn/regression_line.png[/img]

So you can see what I mean by fitting a line: if you had the graph and kept drawing the line, you could extend it to the points we want to predict, 16 and 20, and see where they meet on y. But we will predict them with code:

#Code:
>>> [853.3333333333334, 1066.6666666666667]

The normal linear equation predicted that at points 16 and 20 the results would be 853.3 and 1066.6 respectively. However, the predictions we expected are 800 and 1000. So what does linear regression provide to adjust for this error and get closer to accuracy? Therein lies the heart of what most machine learning algorithms do: adjusting the error, or cost, function for better accuracy. For this problem there are two solutions: a statistical method called Ordinary Least Squares, or a calculus-based method called gradient descent.
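(A minimal sketch of the "average slope" function described above. The original code was an image, so the function name and the dataset are reconstructions inferred from the outputs printed in the post.)

```python
# Hypothetical reconstruction of linear_equation: slope m = mean(y) / mean(x),
# intercept b = 0, so the prediction is simply m * point.
hours = list(range(1, 16))                      # dataset reconstructed from the post
prices = [100 * ((h + 1) // 2) for h in hours]

def linear_equation(x, y, point):
    m = (sum(y) / len(y)) / (sum(x) / len(x))   # average of Y / average of X
    return m * point

print([linear_equation(hours, prices, p) for p in [16, 20]])
# ~ [853.33, 1066.67], matching the output quoted in the post
```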
For this tutorial we will use the statistical method, Ordinary Least Squares, and when we look at other machine learning algorithms we will switch to gradient descent. The main practical difference between the two methods is speed: gradient descent scales better on large problems.

Least Squares Method: With the linear equation function we can get the predicted value from y = 1 to y = n, and the true (correct) value of y at each point. The difference between them is the error. Example, using the linear equation function to predict y at 1:

#Code: print(linear_equation(hours, prices, 1))
>>> 53.333333333333336

It gives us 53.33, whereas the value of y at 1 is 100. The difference, 100 - 53.333 = 46.666666666666664, is the error. Some of the differences will be negative. Example:

#Code: print(linear_equation(hours, prices, 2))
>>> 106.66666666666667

Here the correct value is 100, making the difference -6.66, a negative value. To deal with negative values we square them: (-2)^2 gives a positive value of 4. So the goal of least squares is to take the total sum of the squared errors, then adjust the value of m (the slope), and consequently b, in the equation y = mx + b until we reach the LEAST TOTAL SUM OF SQUARED ERRORS. Get it? This is actually the goal of gradient descent algorithms too: finding the point on the slope, or gradient m, that minimises the prediction error.

You might be thinking of writing code that randomises the value of m, calculates b, and computes the sum of squared errors until it finds the lowest total. That would be a very slow method, especially with a large dataset: m could be a decimal like 0.1, so stepping through every number and every decimal is far too much CPU work, and m could even be a negative value like -0.25, so you would have to start as low as possible and might still never land on the right value after all that processing.
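(The error-squaring idea above can be sketched like this. The helper name predict_naive and the dataset are my reconstructions, not the original image code.)

```python
# Sum of squared errors for the naive "average" line. Squaring stops the
# negative and positive errors from cancelling each other out.
hours = list(range(1, 16))                      # dataset reconstructed from the post
prices = [100 * ((h + 1) // 2) for h in hours]

def predict_naive(x):
    m = (sum(prices) / len(prices)) / (sum(hours) / len(hours))
    return m * x

print(100 - predict_naive(1))   # error at x=1, ~ 46.67 as quoted above
print(100 - predict_naive(2))   # error at x=2, ~ -6.67, a negative value

sse = sum((y_true - predict_naive(x)) ** 2 for x, y_true in zip(hours, prices))
print(sse)  # the total that the least-squares fit will try to minimise
```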
In linear regression, and statistics in general, there is a concept called Pearson's correlation. With its formula we can find the m that best describes the closeness, or relationship, between the two variables in a dataset, and this is the shortcut for finding the gradient m. (When we get to other machine learning algorithms, m will be called the weights, and finding m will be called adjusting the weights; they mean the same thing, just approached differently. Just in case I forget to add that.) This is the formula:

[img]https://image.ibb.co/dHrr2S/pearson_formula.png[/img]

I will post a short video that proves this formula later. Then, rearranging y = mx + b, the intercept is b = y_mean - (m * x_mean). Remember I talked about using the average of y and x for a non-evenly distributed dataset. With the formulas for m and b, it is easy to write the code for our least squares regression:

#Code:

To make predictions for our dataset at points 16 and 20 with the least squares regression formula:

predictions = [16, 20]
result = [linear_regression(hours, prices, i) for i in predictions]
print(result)
>>> [826.6666666666667, 1026.6666666666667]

So you can see that least squares regression has improved the accuracy and is closer to the correct values of 800 and 1000, as opposed to the normal linear equation formula, which gave [853.3333333333334, 1066.6666666666667].

This is linear regression using least squares, and as we look at other algorithms you will see how their accuracies compare, though the amount of data also determines how well an algorithm performs; that's a topic for another day. Before we proceed, let's look at one last concept: MULTIPLE LINEAR REGRESSION. So far we have been working with one independent variable, or feature, for prediction, but whenever a dataset has more than one feature defining a label, which is usually the case, we use multiple linear regression instead. Multiple linear regression is easy to grasp.
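(The least-squares fit walked through above can be sketched as follows. This is a reconstruction: the original linear_regression code was an image, and the dataset is inferred from the post's printed outputs.)

```python
# Least-squares slope and intercept:
#   m = sum((x - x_mean) * (y - y_mean)) / sum((x - x_mean)^2)
#   b = y_mean - m * x_mean
hours = list(range(1, 16))
prices = [100 * ((h + 1) // 2) for h in hours]

def linear_regression(x, y, point):
    x_mean = sum(x) / len(x)
    y_mean = sum(y) / len(y)
    m = (sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y))
         / sum((xi - x_mean) ** 2 for xi in x))
    b = y_mean - m * x_mean
    return m * point + b

print([linear_regression(hours, prices, p) for p in [16, 20]])
# ~ [826.67, 1026.67], closer to the expected 800 and 1000
```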
It is just an extension of the y = mx + b formula, that's all. In y = mx + b, x is the feature and y is the label. But when we have more than one feature, say 3 features, the formula changes to:

Y = m1x1 + m2x2 + m3x3 + b
or
Y = b + m1x1 + m2x2 + m3x3

where each x (x1, x2, x3) is a different axis, and each m (m1, m2, m3) is the corresponding slope. And that's it.

****

But it would be difficult to calculate this by hand (every m and x), especially when there are many features. To do it easily, we need the help of MATRICES. In the intro of this post I gave links for learning matrices, especially how to solve linear equations with them: https://www.mathsisfun.com/algebra/systems-linear-equations-matrices.html. Check that tutorial if you have not met matrices before; mathsisfun.com is a popular website that explains maths concepts simply, so it should be easy to follow along, and again, if you have any issues while learning you can always ask.

Once you understand that, the multiple linear regression formula can be written in the matrix form AX = B, where A is the feature matrix transposed (one row per sample, with an extra column of ones for b), X holds the coefficients (the m's and b), and B is y. We will use the Python library NumPy, a library for matrix calculations modelled after the popular MATLAB (matrix laboratory) software for complex mathematical calculations in matrix form. See, MATRICES are important in machine learning; makes you wonder about The Matrix trilogy and AI domination now. Lol

So if we had three features, [2,4,6,8,10], [3,6,9,12,15] and [4,8,12,16,20], with labels [1,2,3,4,5], we can use NumPy and our matrix understanding to get the coefficients for prediction:

#Code:
# These are the coefficients. In y = mx + b terms, the coefficient vector holds the m values and b; again, if you don't understand matrices, use the links given.
# Therefore, if we want to predict the value of y where the 3 X features are [12, 18 and 24],
# we use dot multiplication with our coefficients to get the value.
# You can see it predicted it correctly; that was a simple demonstration of matrix calculations with NumPy. So let's build our final linear regression code using the matrix solution.

#Code:

# So we can solve the equation above as:
import numpy as np
y = np.array([1,2,3,4,5])
X = np.array([[2,4,6,8,10], [3,6,9,12,15], [4,8,12,16,20]])
prediction = LinearRegression(xd, yd, predict)
print(prediction)
>>> 6.0

1 Like |
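(Since the final code block above did not survive the scrape, and xd, yd and predict are undefined in the surviving fragment, here is a hedged NumPy sketch of the matrix solution. The function name and the use of np.linalg.lstsq are my assumptions, not necessarily what the original post used.)

```python
import numpy as np

def linear_regression_matrix(X, y, new_point):
    # A: one row per sample (the feature matrix transposed), plus a column
    # of ones so the solver can also find the intercept b.
    A = np.column_stack([np.asarray(X, dtype=float).T, np.ones(len(y))])
    # lstsq solves A @ w = y in the least-squares sense; unlike the plain
    # normal equations it also copes with collinear features (as here,
    # where every feature is a multiple of the first).
    w, *_ = np.linalg.lstsq(A, np.asarray(y, dtype=float), rcond=None)
    return float(np.append(np.asarray(new_point, dtype=float), 1.0) @ w)

y = [1, 2, 3, 4, 5]
X = [[2, 4, 6, 8, 10],    # feature 1
     [3, 6, 9, 12, 15],   # feature 2
     [4, 8, 12, 16, 20]]  # feature 3

print(round(linear_regression_matrix(X, y, [12, 18, 24]), 1))  # -> 6.0
```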
Religion / Re: Proof Of God. Scientifically Derived. by 4kings: 12:05pm On Apr 16, 2018 |
Butterflyleo:@bolded. Not to drag that out, but assuming you are right, which God do the laws of physics prove right? |
Programming / Re: Python Programmers With Machine Learning Come In by 4kings: 11:48am On Apr 16, 2018 |
realmundi:Bro, that's IMPOSSIBLE. If it were a virtual league, we could devise a workaround. But predicting real-life scores is not possible; the most the algorithm would do is bias toward stronger teams based on previous performance and player fitness, which does not determine the result of a match most of the time. But hey, there might be a feature I'm not thinking about. Can't help you out here... 1 Like |
Programming / Re: Python Programmers With Machine Learning Come In by 4kings: 11:05am On Apr 16, 2018 |
realmundi:Virtual league football scores or real life football scores? If it's the latter then i can't help you... |
Education / Re: Motilewa Bolanle Becomes Covenant University's Youngest Phd Holder Ever Produced by 4kings: 10:38am On Apr 16, 2018 |
mikolo80:Not everyone in Nigeria is under the umbrella of the bad economy. Some use their skills to profit, both financially and in research contributions, while others sit still and wait for the government to help "someday". 2 Likes 1 Share |
Education / Re: Motilewa Bolanle Becomes Covenant University's Youngest Phd Holder Ever Produced by 4kings: 10:36am On Apr 16, 2018 |
AnonyNymous:CU is a funny place... |
Education / Re: Top 10 Iconic University Libraries In Africa by 4kings: 1:53pm On Apr 15, 2018 |
What are the criteria for "top 10", considering not all of these are ranked by THE? |
Science/Technology / Re: Japanese Engineer Builds Giant Robot To Realize ‘gundam’ Dream by 4kings: 1:42pm On Apr 15, 2018 |
Cool. 930 US dollars an hour. Nice business. |
Education / Re: Motilewa Bolanle Becomes Covenant University's Youngest Phd Holder Ever Produced by 4kings: 1:39pm On Apr 15, 2018 |
She was a lecturer in the Business Admin department. Guess she would continue, or maybe it's a PhD requirement. Didn't know she was that young; I would be checking my tie in front of her. Mtcheeeeeeew. She is cute though. Cc: HigherEd 1 Like |
Religion / Re: The Non-Christian Chatbox ( sticky ) by 4kings: 1:27pm On Apr 15, 2018 |
OtemAtum:@bolded What do you mean by "manmade" and "human-made"? |
Programming / Re: Python Programming by 4kings: 1:24pm On Apr 15, 2018 |
efficiencie:Been busy... Cool solution. |
Programming / Re: Python Programmers With Machine Learning Come In by 4kings: 1:19pm On Apr 15, 2018 |
realmundi:What's it about? |
Programming / Re: MATLAB Programming Assignment Help - 15k by 4kings: 1:10pm On Apr 15, 2018 |
Obiwannn:Replied already. |
Programming / Re: Friendzone by 4kings: 1:10pm On Apr 15, 2018 |
Jenifa123:Just seeing this. Would do that soon... |
Religion / Re: Power Contest: Yahweh(god Of Goodmuyis) Vs Otem (Atum's Boy): Otem Won Again!!! by 4kings: 1:09pm On Apr 15, 2018 |
OtemAtum: Missed this... |
Programming / Re: Artificial Intelligence And Machine Learning Group by 4kings: 7:03pm On Apr 10, 2018 |
4kings: Hey guys, with the look of things today, I won't be able to write and complete the next tutorial. Can someone do this? I wanted to explain error functions using ordinary least squares (with a little matrix explanation for the multivariable part) before moving to logistic regression, using another error-minimisation method: gradient descent. But I won't be able to cover this today; maybe someone can help, or I'll do it later when I'm free. And my images still aren't showing. Cheers. |