# Machine Learning Concepts List
This document lists the 200 concepts covered in the *Machine Learning: Algorithms and Applications* course.
## Concepts (1-200)
- Machine Learning
- Supervised Learning
- Unsupervised Learning
- Classification
- Regression
- Training Data
- Test Data
- Validation Data
- Feature
- Label
- Instance
- Feature Vector
- Model
- Algorithm
- Hyperparameter
- K-Nearest Neighbors
- Distance Metric
- Euclidean Distance
- Manhattan Distance
- K Selection
- Decision Boundary
- Voronoi Diagram
- Curse of Dimensionality
- KNN for Classification
- KNN for Regression
- Lazy Learning
- Decision Tree
- Tree Node
- Leaf Node
- Splitting Criterion
- Entropy
- Information Gain
- Gini Impurity
- Pruning
- Overfitting
- Underfitting
- Tree Depth
- Categorical Features
- Continuous Features
- Feature Space Partitioning
- Logistic Regression
- Sigmoid Function
- Log-Loss
- Binary Classification
- Multiclass Classification
- Maximum Likelihood
- One-vs-All
- One-vs-One
- Softmax Function
- Regularization
- L1 Regularization
- L2 Regularization
- Ridge Regression
- Lasso Regression
- Support Vector Machine
- Hyperplane
- Margin
- Support Vectors
- Margin Maximization
- Hard Margin SVM
- Soft Margin SVM
- Slack Variables
- Kernel Trick
- Linear Kernel
- Polynomial Kernel
- Radial Basis Function
- Gaussian Kernel
- Dual Formulation
- Primal Formulation
- K-Means Clustering
- Centroid
- Cluster Assignment
- Cluster Update
- K-Means Initialization
- Random Initialization
- K-Means++ Initialization
- Elbow Method
- Silhouette Score
- Within-Cluster Variance
- Convergence Criteria
- Inertia
- Neural Network
- Artificial Neuron
- Perceptron
- Activation Function
- ReLU
- Tanh
- Sigmoid Activation
- Leaky ReLU
- Weights
- Bias
- Forward Propagation
- Backpropagation
- Gradient Descent
- Stochastic Gradient Descent
- Mini-Batch Gradient Descent
- Learning Rate
- Loss Function
- Mean Squared Error
- Cross-Entropy Loss
- Epoch
- Batch Size
- Vanishing Gradient
- Exploding Gradient
- Weight Initialization
- Xavier Initialization
- He Initialization
- Fully Connected Layer
- Hidden Layer
- Output Layer
- Input Layer
- Network Architecture
- Deep Learning
- Multilayer Perceptron
- Universal Approximation
- Convolutional Neural Network
- Convolution Operation
- Filter
- Kernel Size
- Stride
- Padding
- Valid Padding
- Same Padding
- Feature Map
- Receptive Field
- Pooling Layer
- Max Pooling
- Average Pooling
- Spatial Hierarchies
- Translation Invariance
- Local Connectivity
- Weight Sharing
- CNN Architecture
- LeNet
- AlexNet
- VGG
- ResNet
- Inception
- Transfer Learning
- Pre-Trained Model
- Fine-Tuning
- Feature Extraction
- Domain Adaptation
- ImageNet
- Model Zoo
- Freezing Layers
- Learning Rate Scheduling
- Bias-Variance Tradeoff
- Training Error
- Validation Error
- Test Error
- Generalization
- Cross-Validation
- K-Fold Cross-Validation
- Stratified Sampling
- Holdout Method
- Confusion Matrix
- True Positive
- False Positive
- True Negative
- False Negative
- Accuracy
- Precision
- Recall
- F1 Score
- ROC Curve
- AUC
- Sensitivity
- Specificity
- Data Preprocessing
- Normalization
- Standardization
- Min-Max Scaling
- Z-Score Normalization
- One-Hot Encoding
- Label Encoding
- Feature Engineering
- Feature Selection
- Dimensionality Reduction
- Data Augmentation
- Computational Complexity
- Time Complexity
- Space Complexity
- Scalability
- Batch Processing
- Online Learning
- Optimizer
- Adam Optimizer
- RMSprop
- Momentum
- Nesterov Momentum
- Gradient Clipping
- Dropout
- Early Stopping
- Model Evaluation
- Model Selection
- Hyperparameter Tuning
- Grid Search
- Random Search
- Bayesian Optimization