ML-Cheat-Sheet
Basic Rules of Differentiation
Constant Rule: $\frac{d}{dx}\, c = 0$
Power Rule: $\frac{d}{dx}\, x^n = n x^{n-1}$
Linear Combination: $\frac{d}{dx}\, [a f(x) + b g(x)] = a f'(x) + b g'(x)$
Product Rule: $\frac{d}{dx}\, [f(x) g(x)] = f'(x) g(x) + f(x) g'(x)$
Quotient Rule: $\frac{d}{dx}\, \frac{f(x)}{g(x)} = \frac{f'(x) g(x) - f(x) g'(x)}{g(x)^2}$
Chain Rule: $\frac{d}{dx}\, f(g(x)) = f'(g(x))\, g'(x)$
Exponential: $\frac{d}{dx}\, e^x = e^x$, $\quad \frac{d}{dx}\, a^x = a^x \ln a$
Logarithmic: $\frac{d}{dx}\, \ln x = \frac{1}{x}$, $\quad \frac{d}{dx}\, \log_a x = \frac{1}{x \ln a}$
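The rules above can be sanity-checked numerically. A minimal sketch using a central-difference approximation to verify the product rule for $f(x) = \sin x$, $g(x) = e^x$ (the function choices and step size are illustrative assumptions):

```python
import math

def numeric_deriv(f, x, h=1e-6):
    # Central-difference approximation of f'(x), accurate to O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

# Product rule: d/dx [sin(x) e^x] = cos(x) e^x + sin(x) e^x
x = 1.3
lhs = numeric_deriv(lambda t: math.sin(t) * math.exp(t), x)
rhs = math.cos(x) * math.exp(x) + math.sin(x) * math.exp(x)
print(abs(lhs - rhs) < 1e-5)  # True
```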
Linear Regression
1. Hypothesis
$h_\theta(x) = \theta^T x = \theta_0 + \theta_1 x_1 + \dots + \theta_n x_n$
2. Cost Function
Mean Squared Error (MSE): $J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$
3. Optimization
- Gradient Descent: $\theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$
- Normal Equation: $\theta = (X^T X)^{-1} X^T y$
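Both optimization routes can be sketched in a few lines of NumPy and should agree on noise-free data. This is a toy example; the data shape, learning rate, and iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 100, 2
# Design matrix with a leading bias column of ones
X = np.hstack([np.ones((m, 1)), rng.normal(size=(m, n))])
true_theta = np.array([2.0, -1.0, 0.5])
y = X @ true_theta  # noise-free targets

# Normal equation: theta = (X^T X)^{-1} X^T y (solve, don't invert)
theta_ne = np.linalg.solve(X.T @ X, X.T @ y)

# Batch gradient descent on the MSE cost
theta_gd = np.zeros(n + 1)
alpha = 0.1
for _ in range(2000):
    grad = (X.T @ (X @ theta_gd - y)) / m
    theta_gd -= alpha * grad

print(np.allclose(theta_ne, true_theta))  # True
print(np.allclose(theta_gd, theta_ne, atol=1e-4))  # True
```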
Logistic Regression
1. Hypothesis
$h_\theta(x) = \sigma(\theta^T x) = \frac{1}{1 + e^{-\theta^T x}}$
- Prediction Rule:
  - Predict $y = 1$ if $h_\theta(x) \geq 0.5$, otherwise predict $y = 0$.
2. Cost Function
Log Loss: $J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log h_\theta(x^{(i)}) + (1 - y^{(i)}) \log\left(1 - h_\theta(x^{(i)})\right) \right]$
3. Optimization
- Gradient Descent: $\theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$
4. Sigmoid Properties
- Output: $\sigma(z) \in (0, 1)$
- Derivative: $\sigma'(z) = \sigma(z)\left(1 - \sigma(z)\right)$
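The hypothesis, gradient-descent update, and prediction rule fit together as below. A minimal sketch on separable one-feature toy data (data generation, learning rate, and iteration count are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
m = 200
# Bias column plus one feature; label is 1 exactly when the feature is positive
X = np.hstack([np.ones((m, 1)), rng.normal(size=(m, 1))])
y = (X[:, 1] > 0).astype(float)

# Gradient descent on the log loss (same gradient form as linear regression,
# but with the sigmoid hypothesis)
theta = np.zeros(2)
alpha = 0.5
for _ in range(1000):
    h = sigmoid(X @ theta)
    theta -= alpha * (X.T @ (h - y)) / m

# Prediction rule: y = 1 if h_theta(x) >= 0.5
pred = (sigmoid(X @ theta) >= 0.5).astype(float)
accuracy = (pred == y).mean()
print(accuracy)  # close to 1.0 on this separable data
```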
Ridge Regression
Loss Function
$J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2 + \frac{\lambda}{2m} \sum_{j=1}^{n} \theta_j^2$
Adds an L2 penalty $\frac{\lambda}{2m} \sum_{j} \theta_j^2$ to the MSE loss.
$\lambda$: Regularization parameter. Higher values shrink $\theta$ toward zero.
Optimization
- Closed-form Solution: $\theta = (X^T X + \lambda I)^{-1} X^T y$
- Gradient Descent: $\theta_j := \theta_j - \alpha \left[ \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)} + \frac{\lambda}{m} \theta_j \right]$
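The closed-form solution and the shrinkage effect of $\lambda$ can be demonstrated directly. A sketch on random data (the data shapes and $\lambda$ values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 50, 3
X = rng.normal(size=(m, n))
y = rng.normal(size=m)

def ridge_closed_form(X, y, lam):
    # theta = (X^T X + lambda I)^{-1} X^T y
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

theta_small = ridge_closed_form(X, y, 0.01)
theta_large = ridge_closed_form(X, y, 1000.0)

# Larger lambda shrinks the coefficient vector toward zero
print(np.linalg.norm(theta_large) < np.linalg.norm(theta_small))  # True
```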
Bayesian Classification
Dataset
$D = \{ (x^{(1)}, y^{(1)}), \dots, (x^{(m)}, y^{(m)}) \}$, where each $x^{(i)} \in \mathbb{R}^n$ and each label $y^{(i)} \in \{c_1, \dots, c_K\}$.
Posterior Probability
The probability of class $c_k$ given features $x$, by Bayes' rule: $P(c_k \mid x) = \frac{P(x \mid c_k)\, P(c_k)}{P(x)}$
If features are conditionally independent given the class (the naive Bayes assumption): $P(x \mid c_k) = \prod_{j=1}^{n} P(x_j \mid c_k)$, so the prediction is $\hat{y} = \arg\max_k P(c_k) \prod_{j=1}^{n} P(x_j \mid c_k)$.
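Under the conditional-independence assumption, classification reduces to summing per-feature log-likelihoods plus a log-prior. A minimal Gaussian naive Bayes sketch on two synthetic, well-separated classes (the class means and sample counts are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
# Two Gaussian classes in 2-D, centered at -2 and +2
X0 = rng.normal(loc=-2.0, size=(100, 2))
X1 = rng.normal(loc=+2.0, size=(100, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# Estimate priors P(c_k) and per-feature Gaussian parameters per class
classes = [0, 1]
priors = {c: np.mean(y == c) for c in classes}
means = {c: X[y == c].mean(axis=0) for c in classes}
stds = {c: X[y == c].std(axis=0) for c in classes}

def log_gauss(x, mu, sd):
    # log of the univariate Gaussian density
    return -0.5 * np.log(2 * np.pi * sd**2) - (x - mu) ** 2 / (2 * sd**2)

def predict(x):
    # argmax_k  log P(c_k) + sum_j log P(x_j | c_k)
    scores = {c: np.log(priors[c]) + log_gauss(x, means[c], stds[c]).sum()
              for c in classes}
    return max(scores, key=scores.get)

preds = np.array([predict(x) for x in X])
print((preds == y).mean())  # near 1.0 for these well-separated classes
```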
SVM
Hard SVM
Hyperplane: $w^T x + b = 0$. Optimization: $\min_{w, b} \frac{1}{2} \|w\|^2$ subject to $y^{(i)} (w^T x^{(i)} + b) \geq 1$.
Soft SVM
Hyperplane: $w^T x + b = 0$, with slack variables $\xi_i$: $\min_{w, b, \xi} \frac{1}{2} \|w\|^2 + C \sum_{i} \xi_i$ subject to $y^{(i)} (w^T x^{(i)} + b) \geq 1 - \xi_i$, $\xi_i \geq 0$.
Lagrangian: $L(w, b, \alpha) = \frac{1}{2} \|w\|^2 - \sum_{i} \alpha_i \left[ y^{(i)} (w^T x^{(i)} + b) - 1 \right]$
Weight vector: $w = \sum_{i} \alpha_i y^{(i)} x^{(i)}$
Kernel SVM
Hyperplane (decision function): $f(x) = \operatorname{sign}\left( \sum_{i} \alpha_i y^{(i)} K(x^{(i)}, x) + b \right)$
Linear: $K(x, z) = x^T z$
Polynomial: $K(x, z) = (x^T z + c)^d$
Gaussian (RBF): $K(x, z) = \exp\left( -\frac{\|x - z\|^2}{2\sigma^2} \right)$
Sigmoid: $K(x, z) = \tanh(\kappa\, x^T z + c)$
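The four kernels translate directly into code. A sketch of each as a plain function (the default hyperparameter values $c$, $d$, $\sigma$, $\kappa$ are illustrative assumptions):

```python
import numpy as np

def linear_kernel(x, z):
    return x @ z

def poly_kernel(x, z, c=1.0, d=3):
    return (x @ z + c) ** d

def rbf_kernel(x, z, sigma=1.0):
    return np.exp(-np.sum((x - z) ** 2) / (2 * sigma**2))

def sigmoid_kernel(x, z, kappa=0.1, c=0.0):
    return np.tanh(kappa * (x @ z) + c)

x = np.array([1.0, 2.0])
z = np.array([3.0, 0.5])
print(linear_kernel(x, z))  # 4.0
print(rbf_kernel(x, x))     # 1.0 (a point is maximally similar to itself)
```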
MLE and MAP
MLE
Build the likelihood function from the joint distribution of the observed data: $L(\theta) = \prod_{i=1}^{m} P(x^{(i)} \mid \theta)$; the estimate is $\hat{\theta}_{\text{MLE}} = \arg\max_\theta L(\theta)$ (in practice, maximize $\log L(\theta)$).
MAP
Combine the likelihood with a prior to build the posterior probability: $P(\theta \mid X) \propto P(X \mid \theta)\, P(\theta)$; the estimate is $\hat{\theta}_{\text{MAP}} = \arg\max_\theta P(X \mid \theta)\, P(\theta)$.
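A worked contrast between the two estimators for a Bernoulli coin-flip model (the observed counts and the Beta(2, 2) prior are illustrative assumptions):

```python
# Observations: 7 heads in 10 flips of a coin with unknown bias theta
heads, flips = 7, 10

# MLE: maximize theta^h (1 - theta)^(n - h)  ->  theta_hat = h / n
theta_mle = heads / flips

# MAP with a Beta(a, b) prior (conjugate to the Bernoulli likelihood):
# the posterior is Beta(a + h, b + n - h), and its mode is the MAP estimate
a, b = 2.0, 2.0
theta_map = (heads + a - 1) / (flips + a + b - 2)

print(theta_mle)  # 0.7
print(theta_map)  # 8/12 = 0.666..., pulled toward the prior mean of 0.5
```

With more data the likelihood dominates the prior and the two estimates converge.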