All Posts
-
Chi-squared Distribution (Stats/Distribution) 2020. 1. 31. 15:21
1. Overview The chi-square distribution (also chi-squared or χ²-distribution) with k degrees of freedom is the distribution of a sum of the squares of k independent standard normal random variables. The chi-square distribution is a special case of the gamma distribution and is one of the most widely used probability distributions in inferential statistics, notably in hypothesis testing and in co..
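As a quick check of that definition (a minimal sketch of my own, assuming NumPy and SciPy are available; not code from the post), summing the squares of k standard normal draws reproduces the chi-square(k) mean and variance:

```python
import numpy as np
from scipy import stats

# Sketch: the sum of squares of k independent standard normal variables
# should follow a chi-square distribution with k degrees of freedom.
k, n_samples = 3, 100_000
rng = np.random.default_rng(0)

z = rng.standard_normal((n_samples, k))   # k independent N(0, 1) draws per sample
chi2_samples = (z ** 2).sum(axis=1)       # sum of squares -> approximately chi-square(k)

# Compare simulated mean/variance with the theoretical values (mean = k, var = 2k)
print(chi2_samples.mean(), chi2_samples.var())          # ~3.0, ~6.0
print(stats.chi2(df=k).mean(), stats.chi2(df=k).var())  # 3.0, 6.0
```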
-
Activation Functions (MLAI/DeepLearning) 2020. 1. 30. 14:32
1. Overview In artificial neural networks, the activation function of a node defines the output of that node given an input or set of inputs. A standard computer chip circuit can be seen as a digital network of activation functions that can be "ON" (1) or "OFF" (0), depending on the input. This is similar to the behavior of the linear perceptron in neural networks. However, only nonlinear activa..
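A minimal sketch of a few such activation functions (my own NumPy illustration, not code from the post), contrasting the hard "ON"/"OFF" step with smooth nonlinear alternatives:

```python
import numpy as np

def step(x):
    # the binary "ON" (1) / "OFF" (0) behaviour of a linear perceptron unit
    return (x > 0).astype(float)

def sigmoid(x):
    # squashes any real input into (0, 1), a smooth version of the step
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # passes positive inputs through unchanged, outputs 0 otherwise
    return np.maximum(0.0, x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(step(x), sigmoid(x), relu(x), sep="\n")
```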
-
Type 1 Error and Type 2 Error (Stats/Inferential) 2020. 1. 30. 14:22
1. Overview In statistical hypothesis testing, a type I error is the rejection of a true null hypothesis (also known as a "false positive" finding or conclusion), while a type II error is the non-rejection of a false null hypothesis (also known as a "false negative" finding or conclusion). 2. Description 2.1 Type 1 Error $\alpha$ It is often associated with false positives or the level of significa..
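A small simulation sketch of the two error rates (my own illustration using SciPy's one-sample t-test; the sample size, effect size, and significance level are assumptions, not values from the post):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, trials = 0.05, 30, 10_000

# Type I error: H0 (mu = 0) is actually true; count how often we wrongly reject it.
rejects_h0_true = sum(
    stats.ttest_1samp(rng.standard_normal(n), 0.0).pvalue < alpha
    for _ in range(trials)
)
print("estimated Type I error:", rejects_h0_true / trials)   # close to alpha = 0.05

# Type II error: H0 is false (true mu = 0.5); count how often we fail to reject it.
fails_h0_false = sum(
    stats.ttest_1samp(rng.normal(0.5, 1.0, n), 0.0).pvalue >= alpha
    for _ in range(trials)
)
print("estimated Type II error:", fails_h0_false / trials)
```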
-
Canonical Correlation Analysis (MLAI/DimensionalityReduction) 2020. 1. 25. 17:57
1. Overview Canonical Correlation Analysis (CCA) works well as a prediction model because it explains the data dependency between input and output well, so it can minimize the prediction error. CCA finds pairs of basis vectors that maximize the correlation between two variables x and y in a subspace. When we perform the regression in the reduced space, the fitting errors are minimized because the two variables are h..
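A minimal sketch of this idea with scikit-learn's CCA on synthetic data (the data-generating process and parameters are my own assumptions for illustration):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
latent = rng.standard_normal((200, 1))                  # shared hidden signal
X = np.hstack([latent + 0.3 * rng.standard_normal((200, 1)) for _ in range(4)])
Y = np.hstack([latent + 0.3 * rng.standard_normal((200, 1)) for _ in range(3)])

# Find the pair of basis vectors (one for X, one for Y) whose projections
# are maximally correlated, then inspect that correlation.
cca = CCA(n_components=1)
X_c, Y_c = cca.fit_transform(X, Y)
print(np.corrcoef(X_c[:, 0], Y_c[:, 0])[0, 1])          # close to 1 for this data
```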
-
SVD, matrix inverse, and pseudoinverse (Math/Linear algebra) 2020. 1. 25. 10:36
1. Overview 2. Description 2.1 Inverse Full-rank square matrix A Now I'm going to invert A, which is fine; we assume for the moment that A is an invertible matrix, so it's square and full rank. And of course, whatever operation you perform on one side of the equation must be repeated on the other side of the equation, so we apply the inverse to the right-hand side as well. Now we know that each of..
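A short NumPy illustration of that step (my own sketch with random matrices, not code from the post): apply the inverse to both sides of Ax = b when A is square and full rank, and fall back to the SVD-based pseudoinverse when it is not:

```python
import numpy as np

rng = np.random.default_rng(0)

# Square, full-rank case: x = A^{-1} b solves Ax = b exactly.
A = rng.standard_normal((4, 4))
b = rng.standard_normal(4)
x = np.linalg.inv(A) @ b
print(np.allclose(A @ x, b))             # True

# Tall (non-square) case: no true inverse exists, so use the Moore-Penrose
# pseudoinverse, which NumPy computes via the SVD; it gives the least-squares solution.
M = rng.standard_normal((6, 3))
y = rng.standard_normal(6)
x_ls = np.linalg.pinv(M) @ y
print(np.allclose(x_ls, np.linalg.lstsq(M, y, rcond=None)[0]))   # True
```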
-
Least-squares for model fitting (Math/Linear algebra) 2020. 1. 24. 14:20
1. Overview There are uncountable dynamics and processes and individuals with uncountable and mind-bogglingly complex interactions. So how can we possibly make sense of all of this complexity? The answer is we can't :( So instead we generate simple models and fit them to data using linear least-squares modeling, and that is the goal of this section of the course. The idea of building models ..
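A minimal least-squares fitting sketch in NumPy (the model y = 2 + 0.7x and the noise level are assumptions chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 + 0.7 * x + rng.normal(0, 1.0, x.size)     # noisy "data" from a known line

# Design matrix [1, x]; least squares minimizes ||X beta - y||^2.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)                                        # roughly [2.0, 0.7]

# Equivalent normal-equations form: beta = (X^T X)^{-1} X^T y
beta_ne = np.linalg.solve(X.T @ X, X.T @ y)
print(np.allclose(beta, beta_ne))                  # True
```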
-
Linear Algebra Features (Math/Linear algebra) 2020. 1. 23. 18:03
1. Overview Summarizes terminologies and features. 2. Description 2.1 Matrix multiplications It doesn't matter whether you're multiplying $A^{T}A$ or $AA^{T}$; both products are not only square matrices but symmetric matrices. 2.1.1 Characteristic equation The characteristic equation is the equation that is solved to find a matrix's eigenvalues, also called the characteristic polynomial. For a g..
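Both claims are easy to verify numerically; here is a minimal NumPy sketch of my own (the matrix is random and chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))                    # deliberately non-square

# A^T A and A A^T are both square and symmetric, regardless of A's shape.
AtA, AAt = A.T @ A, A @ A.T
print(AtA.shape, AAt.shape)                                # (3, 3) and (4, 4)
print(np.allclose(AtA, AtA.T), np.allclose(AAt, AAt.T))    # True True

# The eigenvalues of AtA are the roots of its characteristic polynomial
# det(AtA - lambda*I) = 0; np.poly returns that polynomial's coefficients.
coeffs = np.poly(AtA)
print(np.allclose(np.sort(np.roots(coeffs)),
                  np.sort(np.linalg.eigvals(AtA))))        # True
```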
-
Difference between PCA and Factor Analysis (MLAI/DimensionalityReduction) 2020. 1. 23. 11:06
1. Overview Both are dimension reduction techniques, but while Principal Component Analysis is used to reduce the number of variables by creating principal components, extracting the essence of the dataset by means of artificially created variables that best describe the variance of the data, Factor Analysis tries to identify unknown latent variables to explain the original data. Often pr..
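A minimal sketch contrasting the two with scikit-learn (synthetic data; the latent-factor structure is my own assumption for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(0)
latent = rng.standard_normal((300, 2))                       # 2 hidden factors
loadings = rng.standard_normal((2, 6))
X = latent @ loadings + 0.5 * rng.standard_normal((300, 6))  # 6 observed variables

# PCA: components are chosen to capture the most variance in X.
pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)   # variance captured by each principal component

# Factor Analysis: models X as latent factors times loadings plus noise.
fa = FactorAnalysis(n_components=2).fit(X)
print(fa.components_.shape)            # estimated factor loadings, shape (2, 6)
```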