ML Foundations
Three types of ML (supervised, unsupervised, and reinforcement) and why learning from data beats hand-written rules.
Features, labels, and how ML data is structured as rows (samples) and columns (features) in a DataFrame.
Predict a continuous value with ŷ = wx + b. Derive the MSE loss and compute one gradient descent step from scratch.
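One gradient descent step for ŷ = wx + b with MSE loss can be sketched as follows (the data, initial parameters, and learning rate are illustrative choices, not from any specific lesson):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])   # toy data following y = 2x

w, b, lr = 0.0, 0.0, 0.1
y_hat = w * x + b
# MSE = mean((y_hat - y)^2); its gradients with respect to w and b:
grad_w = 2 * np.mean((y_hat - y) * x)
grad_b = 2 * np.mean(y_hat - y)
# One step along the negative gradient:
w -= lr * grad_w
b -= lr * grad_b
```

Repeating this update drives w toward 2 and b toward 0 on this toy data.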
5 questions after completing the first 7 ML Foundations pages. Check your understanding before continuing.
The optimization algorithm behind every trained ML model: iteratively follow the negative gradient to minimize a loss.
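The core loop is the same for any differentiable loss. A minimal sketch, minimizing the made-up loss f(w) = (w - 3)² whose gradient is 2(w - 3):

```python
# Gradient descent on f(w) = (w - 3)^2; the minimum is at w = 3.
w = 0.0
lr = 0.1
for _ in range(100):
    grad = 2 * (w - 3)   # derivative of the loss at the current w
    w -= lr * grad       # step in the negative gradient direction
# w has now converged very close to 3
```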
Extend linear regression to multiple features using matrix form ŷ = Xw + b and vectorized NumPy operations.
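The matrix form replaces a per-feature loop with one matrix-vector product. A sketch with made-up numbers:

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])       # shape (3 samples, 2 features)
w = np.array([0.5, -1.0])        # one weight per feature
b = 2.0
y_hat = X @ w + b                # shape (3,): one prediction per sample
```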
12 questions covering supervised learning, gradient descent, model evaluation, and sklearn. Pass: 9/12.
Predict categories instead of numbers. Decision boundaries, sigmoid activation, and binary probability outputs.
Binary classifier from scratch: sigmoid + cross-entropy loss + gradient update. The building block of softmax policies.
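One training step of that from-scratch classifier can be sketched like this (data and learning rate are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0.0], [1.0], [2.0], [3.0]])   # one feature
y = np.array([0.0, 0.0, 1.0, 1.0])           # binary labels

w, b, lr = np.zeros(1), 0.0, 0.5

p = sigmoid(X @ w + b)                       # predicted probabilities
loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
# The gradient of cross-entropy through the sigmoid simplifies to (p - y):
grad_w = X.T @ (p - y) / len(y)
grad_b = np.mean(p - y)
w -= lr * grad_w
b -= lr * grad_b
```

The (p - y) form is the same quantity that reappears in the softmax gradient, which is why this is called the building block of softmax policies.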
Train/test split, accuracy, precision, recall, and F1: evaluating classifiers honestly.
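All four metrics fall out of the confusion-matrix counts. A sketch with made-up labels:

```python
# Accuracy, precision, recall, and F1 computed by hand from binary labels.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))

accuracy = (tp + tn) / len(y_true)
precision = tp / (tp + fp)   # of predicted positives, how many are right
recall = tp / (tp + fn)      # of actual positives, how many were found
f1 = 2 * precision * recall / (precision + recall)
```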
K-fold cross-validation, overfitting vs underfitting, and the bias-variance tradeoff.
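The index bookkeeping behind k-fold cross-validation can be sketched in pure Python (the helper name `kfold_indices` is a hypothetical illustration, not a library function):

```python
# Split n sample indices into k folds; each sample is in the validation
# fold exactly once, and trains the model in the other k-1 folds.
def kfold_indices(n, k):
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    splits, start = [], 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        splits.append((train, val))
        start += size
    return splits

splits = kfold_indices(10, 5)   # 5 folds of 2 validation samples each
```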
Classify new points by majority vote among K closest training examples.
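The majority-vote rule fits in a few lines. A sketch with 1-D points and K=3 (data is illustrative):

```python
from collections import Counter

train = [(1.0, "a"), (1.5, "a"), (3.0, "b"), (3.5, "b"), (4.0, "b")]

def knn_predict(x, train, k=3):
    # Take the k training points closest to x, then vote on their labels.
    nearest = sorted(train, key=lambda pt: abs(pt[0] - x))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

pred = knn_predict(3.2, train)   # the 3 nearest points are all "b"
```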
If/else questions on features, entropy, and information gain as splitting criteria.
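Entropy and information gain, the splitting criteria above, can be computed directly (the labels are a toy illustration):

```python
import math
from collections import Counter

def entropy(labels):
    # H = -sum(p_i * log2(p_i)) over class proportions p_i.
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

parent = ["yes", "yes", "no", "no"]
left, right = ["yes", "yes"], ["no", "no"]   # a perfect split

# Information gain = parent entropy minus size-weighted child entropy.
gain = entropy(parent) - (
    len(left) / len(parent) * entropy(left)
    + len(right) / len(parent) * entropy(right)
)
```

A perfectly separating split like this one achieves the maximum gain of 1 bit.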
Unsupervised grouping of data by alternating assignment and centroid-update steps.
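The alternation described above can be sketched in NumPy (the data and centroid initialization are made up; real k-means usually initializes randomly):

```python
import numpy as np

X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
centroids = np.array([[0.0, 0.5], [10.0, 10.5]])   # illustrative init

for _ in range(5):
    # Assignment step: label each point with its nearest centroid.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Update step: move each centroid to the mean of its assigned points.
    centroids = np.array([X[labels == j].mean(axis=0) for j in range(2)])
```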
The full sklearn pipeline: fit, predict, score, and comparing multiple models.
End-to-end ML project combining loading, exploration, preprocessing, training, and evaluation.
15 short drill problems covering supervised learning, gradient descent, evaluation, and sklearn.
Review ML Foundations and see why linear models fail on complex patterns, motivating neural networks.