Linear Methods of AI
Mathematical backbone transforming data patterns into intelligent predictions.
Definition of Determinant
Determinant Calculation
Laplace Expansion Theorem
Cramer's Rule
Complex Vector Space
Complex Matrix
Eigenvalues, Eigenvectors, and Eigenspaces
Characteristic Polynomial
Eigenvalues of Diagonal and Triangular Matrices
Orthogonal and Unitary Matrices
Symmetric and Hermitian Matrices
Positive Definite Matrix
Scalar Product
Matrix Condition Number
LU Decomposition
Cholesky Decomposition
QR Decomposition
Linear Model
System of Linear Equations
Linear Least-Squares Problem
Normal Equation System
Normal Equation System Solution
Identifiability and Ranking Capability
Regularization
Statistical Analysis
Best Approximation in Function and Polynomial Spaces
Orthogonal Projection
Orthogonal Polynomials
Matrix Similarity
Matrix Diagonalization
Basic Procedure for Diagonalization
Spectral Theorem
Spectral Theorem for Complex Matrices
Spectral Theorem for Real Matrices
Principal Axis Transformation
Principal Component Analysis
Triangularization and Jordan Normal Form
Numerical Calculation of Eigenvalues
Computing Individual Eigenvalues
Computing All Eigenvalues
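As an illustrative aside (not part of the outline itself), the numerical eigenvalue topics above are commonly introduced via power iteration. A minimal NumPy sketch, assuming a matrix with a unique dominant eigenvalue; the example matrix is an arbitrary choice:

```python
import numpy as np

def power_iteration(A, num_iters=500):
    """Approximate the dominant eigenpair of A by repeatedly
    applying A and renormalizing the vector."""
    x = np.ones(A.shape[0])
    for _ in range(num_iters):
        x = A @ x
        x = x / np.linalg.norm(x)
    # The Rayleigh quotient gives the eigenvalue estimate
    return x @ A @ x, x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
lam, v = power_iteration(A)
print(lam)  # close to the largest eigenvalue, (7 + sqrt(5)) / 2
```

Convergence is geometric in the ratio of the two largest eigenvalue magnitudes, which is why the method targets a single eigenvalue rather than the whole spectrum.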
AI Programming
Coding intelligence that teaches machines to think and solve complex problems.
Markdown and Command Line Interfaces
First Steps in Python
Everything is an Object in Python
Numbers
Arithmetic Operators
Number Attributes and Methods
Mathematical Functions
Variables
Comparison and Logic
String Objects
Escape Sequences
Indexing and Slicing
String Methods
Print Function
String Formatting
Containers
Immutable, Mutable, and Identity
Iterables
Control Flow
File Input and Output
Dictionary
Functions
Creating Arrays with NumPy
Attributes and Data Types with NumPy
Indexing and Slicing with NumPy
Array Operations with NumPy
Syntactic Sugar
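The NumPy topics above (array creation, indexing, slicing, and array operations) can be sketched in a few lines; the array values here are arbitrary illustrations:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)   # 3x4 array of the integers 0..11

row = a[1]              # second row -> [4, 5, 6, 7]
col = a[:, 2]           # third column -> [2, 6, 10]
block = a[1:, :2]       # basic slicing returns a view, not a copy
evens = a[a % 2 == 0]   # a boolean mask selects elements
doubled = a * 2         # broadcasting: the scalar applies element-wise
print(evens)
```

Note that `block` shares memory with `a`, so writing into the slice also changes the original array; boolean masking, by contrast, returns a copy.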
Neural Networks
Digital brains mimicking human neurons to recognize patterns and make decisions.
Problems of Supervised Learning
Types of Supervised Learning
Linear Regression
Function Space
Loss Function
Normal Equation
Basic Linear Function
Binary Classifier
Binary Cross-Entropy Loss
Optimization
Multi-Class Classifier
Cross-Entropy Loss
Multi-Layer Perceptron
Partial Derivative
Partial Derivative Example
Learning in Vector Form
Activation Function for Hidden Layer
Optimization Challenge
Gradient Descent Variants
Adaptive Learning Rate
Exponentially Weighted Average
Momentum
Root Mean Squared Propagation
Adaptive Moment Estimation
Overfitting and Underfitting
Regularization
Parameter Penalties
Dropout
Training, Validation, and Testing
Preprocessing and Initialization
Batch Normalization
Hyperparameter Search
Convolution for Time Series
Pooling for Time Series
Convolutional Neural Networks for Time Series
Convolution for Images
Convolution on Volume
Convolution Block
Convolutional Neural Network
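A binary classifier trained with binary cross-entropy loss and gradient descent, as the topics above outline, can be sketched as follows; the AND dataset, learning rate, and iteration count are illustrative choices, not values from the course material:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy, linearly separable dataset: the logical AND function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 0.0, 0.0, 1.0])

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(5000):
    p = sigmoid(X @ w + b)            # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)   # gradient of the BCE loss w.r.t. w
    grad_b = np.mean(p - y)           # ... and w.r.t. b
    w -= lr * grad_w
    b -= lr * grad_b

preds = (sigmoid(X @ w + b) > 0.5).astype(int)
print(preds)
```

The gradient has the same simple form `(p - y)` for both binary cross-entropy with a sigmoid and softmax cross-entropy, which is one reason these pairings are standard.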
Machine Learning
Algorithms that learn from experience to predict future outcomes automatically.
What is Machine Learning?
What is Supervised Learning?
Formal Setting for Supervised Learning
Empirical Risk Minimization
Regularized Risk Minimization
K-Nearest Neighbor Classifier
Maximum A Posteriori Rule
Bayesian Decision Rule
Discriminant Function
Proof of Bayes Optimal Classifier
Univariate Normal Distribution
Probability vs Likelihood
Maximum Likelihood Estimation
Maximum Likelihood Estimation Example
Maximum Likelihood Method
Parameter Fitting for Normal Distribution
Covariance
Eigendecomposition of Covariance Matrix
Gaussian Model
Discriminant Function for Gaussian Model
Mahalanobis Distance
Point Estimation
Unbiased Estimator
Consistency
Mean Squared Error
Bias-Variance Decomposition of Mean Squared Error
Model Selection for Maximum Likelihood Estimation
Kullback-Leibler Divergence
Akaike Information Criterion
Cross Validation
Gaussian Mixture Model
Expectation Maximization Algorithm
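Maximum likelihood estimation for a univariate normal distribution (see "Parameter Fitting for Normal Distribution" above) reduces to the sample mean and the biased 1/n sample variance. A small sketch with synthetic data; the true parameters 2.0 and 1.5 are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=2.0, scale=1.5, size=50_000)  # the "unknown" distribution

# ML estimates for a univariate normal: the sample mean, and the
# 1/n sample variance (biased, unlike the 1/(n-1) estimator)
mu_hat = data.mean()
sigma2_hat = np.mean((data - mu_hat) ** 2)
print(mu_hat, sigma2_hat)  # near 2.0 and 1.5**2 = 2.25
```

The bias of the variance estimate connects directly to the "Unbiased Estimator" and "Bias-Variance Decomposition" topics above.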
Nonlinear Optimization for AI
Advanced mathematics finding optimal solutions in complex AI problem spaces.
Nonlinear Optimization Problem
Acceptance and Optimization
Convergence of Iterative Methods
One-Dimensional Optimality Condition
Two-Dimensional Optimality Condition
Descent Methods
Gradient Method
Convergence Behavior
Conjugate Gradient
Gradient Method vs Conjugate Gradient Method
Newton and Quasi-Newton Methods
Sequential Quadratic Programming Method
Quasi-Newton Sequential Quadratic Programming Method
Nonlinear Least-Squares Problem
Gauss-Newton Method
Local Convergence of Gauss-Newton Method
Why Is Sequential Quadratic Programming Not Suitable for Nonlinear Regression?
Underdetermined Problem and Regularization
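The Gauss-Newton method for a small nonlinear least-squares fit can be sketched as below; the exponential model, the noise-free data, and the near-solution starting point are illustrative assumptions (plain Gauss-Newton is only locally convergent, as the topic list above notes):

```python
import numpy as np

def model(theta, x):
    a, b = theta
    return a * np.exp(b * x)

def jacobian(theta, x):
    a, b = theta
    e = np.exp(b * x)
    # columns: d(model)/da and d(model)/db
    return np.column_stack([e, a * x * e])

x = np.linspace(0.0, 2.0, 20)
y = model(np.array([2.0, 0.5]), x)   # noise-free synthetic data

theta = np.array([1.8, 0.45])        # start near the solution
for _ in range(20):
    r = model(theta, x) - y          # residual vector
    J = jacobian(theta, x)
    # Gauss-Newton step: solve the linearized normal equations
    theta = theta - np.linalg.solve(J.T @ J, J.T @ r)
print(theta)  # approaches [2.0, 0.5]
```

Because the residual vanishes at the solution here, convergence is locally quadratic; with large residuals or a poor starting point, a damped or trust-region variant would be needed.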
Advanced Machine Learning
Cutting-edge techniques pushing the boundaries of artificial intelligence capabilities.
Universal Consistency
Union Bound
Jensen's Inequality
Markov's Inequality
Chebyshev's Inequality
Chernoff's Inequality
Hoeffding's Inequality
Point Estimation
Empirical Risk Minimization
When Empirical Risk Minimization Fails
Generalization Bound
Estimation Error Bound
Growth Function
Vapnik-Chervonenkis Inequality
Vapnik-Chervonenkis Dimension
Sauer-Shelah Lemma
Vapnik-Chervonenkis Bound
Vapnik-Chervonenkis Generalization Bound
Symmetrization Lemma
Condorcet's Jury Theorem
Hard and Soft Voting
Weak and Strong Learner
Boosting Algorithm
AdaBoost Algorithm
Additive Model
Forward Stagewise Additive Modelling
Theoretical Analysis
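The concentration inequalities in the list above can be checked empirically. This sketch compares the observed tail probability of a coin-flip average against Hoeffding's bound; the sample sizes and tolerance are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, eps = 100, 20_000, 0.1

# `trials` independent experiments, each averaging n fair coin flips in {0, 1}
flips = rng.integers(0, 2, size=(trials, n))
deviation = np.abs(flips.mean(axis=1) - 0.5)

empirical = np.mean(deviation >= eps)     # observed tail probability
bound = 2 * np.exp(-2 * n * eps ** 2)     # Hoeffding: P(|mean - p| >= eps) <= 2 exp(-2 n eps^2)
print(empirical, bound)
```

The bound (about 0.27 here) is loose compared to the observed frequency, which is typical: Hoeffding holds for any bounded variables, not just this particular distribution.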
Computer Vision
Teaching machines to see and understand the visual world like humans.
Grayscale Image
Color Image
Histogram Equalization
Edge Detection
Smoothing
Unsharp Masking
Hough Transform
Fourier Transform
Otsu's Method
Region Growing
Watershed Algorithm
Co-Occurrence Matrix
Gabor Filter
Harris Corner Detection
Histogram of Oriented Gradients
Scale-Invariant Feature Transform
Speeded-Up Robust Features
Convolution Layer
Pooling
Deconvolution
Loss Function for Classification
Augmentation
Regularization
AlexNet
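Edge detection via convolution, combining several topics above, can be sketched without any imaging library; the tiny synthetic image and the Sobel kernel shown are illustrative:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid'-mode 2-D correlation with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

# Sobel kernel responding to horizontal intensity changes (vertical edges)
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Synthetic grayscale image: dark left half, bright right half
img = np.zeros((8, 8))
img[:, 4:] = 1.0

edges = np.abs(conv2d_valid(img, sobel_x))
print(edges.max())  # the strongest response sits on the vertical edge
```

Smoothing before differentiation (as in the "Smoothing" and "Unsharp Masking" topics) suppresses the noise that a pure derivative kernel like this one would otherwise amplify.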
Natural Language Processing
Bridging human language and machine understanding for seamless communication.