[Study Notes] Hung-yi Lee's Machine Learning
A quick pass through the course.
机器学习任务攻略
Possible problems:
- Overfitting
- Model Bias (the model is not large or expressive enough)
- Optimization Issue (a good solution exists but is not found)
Optimization Issue
Method: Start from shallower networks (or other models), which are easier to optimize.
Overfitting
Small loss on training data, large loss on testing.
Method:
- More training data.
- Data augmentation.
- Give the model some constraints (too many constraints cause model bias: a trade-off):
- Fewer parameters, sharing parameters.
- Fewer features
- Early stopping
- Regularization
- Dropout
Cross Validation
Split the training set into a (smaller) training set and a validation set.
N-fold Cross Validation
Split the data into $N$ folds; in turn take each fold as the validation set and the remaining $N-1$ folds as the training set, and run $N$ times.
This avoids the situation where a single validation set happens to be chosen badly.
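A minimal NumPy sketch of the N-fold procedure described above; the fold count and the `train_and_eval` callback are illustrative assumptions, not anything specified in the notes:

```python
import numpy as np

def n_fold_cross_validation(X, y, train_and_eval, n_folds=3, seed=0):
    """Split the data into n_folds parts; each part serves once as the
    validation set while the remaining folds form the training set."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(len(X))
    folds = np.array_split(indices, n_folds)

    scores = []
    for k in range(n_folds):
        val_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        # train_and_eval is any routine that fits on the training split
        # and returns the validation loss (or accuracy).
        scores.append(train_and_eval(X[train_idx], y[train_idx], X[val_idx], y[val_idx]))
    return float(np.mean(scores))
```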
Mismatch
Training data and testing data have different distribution.
When Gradient is Small
Critical Point (where gradient is small):
- Local Minima
- Saddle Point.
How to tell which one it is?
Taylor series approximation around $\theta'$:
$L(\theta) \approx L(\theta') + (\theta - \theta')^\top g + \frac{1}{2}(\theta - \theta')^\top H (\theta - \theta')$,
where $g$ is the gradient and $H$ is the Hessian matrix.
At a critical point, $g = 0$.
- If $H$ is positive definite (all eigenvalues positive): local minimum;
- If $H$ is negative definite (all eigenvalues negative): local maximum;
- Otherwise: saddle point.
In other words, at a saddle point, if $H$ has an eigenvalue $\lambda < 0$ with eigenvector $u$, updating $\theta$ along $u$ decreases the loss.
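As a small illustration of the eigenvalue test above, here is a NumPy sketch that classifies a critical point from its Hessian; the example matrix at the bottom is made up:

```python
import numpy as np

def classify_critical_point(H, tol=1e-8):
    """Classify a critical point (gradient = 0) from its Hessian H."""
    eigvals, eigvecs = np.linalg.eigh(H)  # H is symmetric
    if np.all(eigvals > tol):
        return "local minimum", None
    if np.all(eigvals < -tol):
        return "local maximum", None
    # Saddle point: the eigenvector of a negative eigenvalue gives an
    # escape direction along which the loss decreases.
    escape_dir = eigvecs[:, np.argmin(eigvals)]
    return "saddle point", escape_dir

H = np.array([[2.0, 0.0], [0.0, -1.0]])  # made-up Hessian
print(classify_critical_point(H))
```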
Saddle Point vs. Local Minima
A local minimum in a low-dimensional space may turn out to be a saddle point when viewed in a higher-dimensional space.
Batch & Momentum
Batch
- A large batch size does not take longer to compute the gradient (this depends on the GPU's parallelism)
- Smaller batch requires longer time for one epoch (longer time for seeing all data once)
A large batch size may cause optimization to fail.
- Smaller batch size has better performance
- “Noisy” update is better for training (wow!)
- Small batch is better on testing data (!). One explanation: minima can be flat or sharp (flat meaning the loss basin is wider and rounder); the testing loss surface may be slightly shifted from the training one, and near a flat minimum this shift changes the loss only a little.
Momentum
Momentum: each update moves along the negative gradient plus a fraction of the previous movement (the accumulated past gradients).
Error Surface is Rugged
Training stuck ≠ small gradient.
Different parameters need different learning rates.
Root Mean Square
$\sigma_i^t = \sqrt{\frac{1}{t+1}\sum_{k=0}^{t}(g_i^k)^2}$; used in Adagrad.
RMSProp
Let $\sigma_i^t = \sqrt{\alpha (\sigma_i^{t-1})^2 + (1-\alpha)(g_i^t)^2}$, where $0 < \alpha < 1$.
Adam: RMSProp + Momentum
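A NumPy sketch of an RMSProp-style update as described above; the hyperparameter values are placeholders:

```python
import numpy as np

def rmsprop_step(theta, grad, sigma_sq, lr=1e-3, alpha=0.9, eps=1e-8):
    """One RMSProp update: sigma_sq is an exponential moving average of
    squared gradients, so each parameter gets lr / sigma as its own
    effective learning rate."""
    sigma_sq = alpha * sigma_sq + (1.0 - alpha) * grad ** 2
    theta = theta - lr * grad / (np.sqrt(sigma_sq) + eps)
    # Adam additionally keeps a moving average of grad itself (momentum)
    # and uses it in place of the raw gradient.
    return theta, sigma_sq
```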
Learning Rate Scheduling
- Learning Rate Decay: the learning rate gets smaller as training progresses.
- Warm Up: the learning rate first increases, then decreases.
Why warm up? One explanation: at the beginning (small $t$), the estimate of $\sigma_i^t$ has large variance (see RAdam).
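A small sketch of a warm-up-then-decay schedule; the warm-up length and the inverse-square-root decay form are illustrative assumptions:

```python
def learning_rate(step, base_lr=1e-3, warmup_steps=1000):
    """Linear warm-up to base_lr, then inverse-square-root decay."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps        # warm up: small -> large
    return base_lr * (warmup_steps / (step + 1)) ** 0.5   # decay: large -> small
```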
Various Improvements
Classification
Represent a class as a one-hot vector: this avoids the spurious ordering implied by encoding classes directly as integers.
Logits: the network outputs $y$ before Softmax; Softmax maps them to values $y'$ in $(0, 1)$ that sum to 1.
Mean Square Error (MSE): $e = \sum_i (\hat{y}_i - y'_i)^2$.
Cross-Entropy: $e = -\sum_i \hat{y}_i \ln y'_i$ (minimizing cross-entropy is equivalent to maximizing likelihood, and it is better suited to classification).
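A NumPy sketch comparing the two losses on a softmax output; the logit values are made up:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 1.0, -1.0])   # made-up pre-softmax network outputs
y_hat = np.array([1.0, 0.0, 0.0])     # one-hot target
y_prime = softmax(logits)

mse = np.sum((y_hat - y_prime) ** 2)
cross_entropy = -np.sum(y_hat * np.log(y_prime))
print(mse, cross_entropy)
```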
Pokemon (?)
Hoeffding’s Inequality
Convolutional Neural Network (CNN)
We need data augmentation (a CNN is not invariant to scaling and rotation).
Why Validation Set but Still Overfit?
Each time, we are effectively choosing, among several candidate models, the one that performs best on the validation set.
But if there are too many candidates, we may still overfit (selecting among them can be viewed as "training" on the validation set).
Why Deep Learning?
How can we make the loss small while also keeping the number of candidate functions in the hypothesis set $\mathcal{H}$ small?
A deep network can achieve this with a relatively small $|\mathcal{H}|$, whereas a shallow network needs a relatively large one ($|\mathcal{H}|$ is related to the number of parameters (?)).
Deep networks outperform shallow ones when the required functions are complex and regular.
Generation
Network as Generator
Input $x$ + a sample $z$ from a simple distribution (whose formulation we know); the network turns them into an output $y$ that follows a complex distribution.
Why Distribution?
Especially useful for tasks that require "creativity", where the same input has several acceptable outputs.
Generative Adversarial Network (GAN)
Unconditional Generation
Example: there is no $x$; the input $z$ is sampled from a normal distribution, and after passing through the generator it should become an anime character image (a complex distribution).
Discriminator
The discriminator is also a function (a neural network): it takes an image as input and outputs a scalar, where a larger value means the image looks more real.
Algorithm
- Initialize Generator and Discriminator
- In each training iteration:
- Fix the generator $G$ and update the discriminator $D$: sample some real objects from the database and generate some with $G$, and train $D$ to tell them apart;
- Fix the discriminator $D$ and update the generator $G$: the generator learns to "fool" the discriminator, i.e. to make its outputs receive higher scores from $D$ (a minimal sketch follows this list).
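A minimal PyTorch sketch of the alternating procedure above, using tiny MLPs and a toy 2-D data distribution; the network sizes, learning rates, and `sample_real` "database" are all assumptions for illustration:

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def sample_real(n):  # toy "database": a shifted Gaussian
    return torch.randn(n, data_dim) + 3.0

for it in range(1000):
    # --- Step 1: fix G, update D (tell real from generated) ---
    real = sample_real(64)
    fake = G(torch.randn(64, latent_dim)).detach()  # detach: G stays fixed
    loss_D = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # --- Step 2: fix D, update G (fool the discriminator) ---
    fake = G(torch.randn(64, latent_dim))
    loss_G = bce(D(fake), torch.ones(64, 1))  # want D to score fakes as real
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```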
Theory Behind GAN
Objective: $G^* = \arg\min_G \mathrm{Div}(P_G, P_{data})$,
where $P_G$ and $P_{data}$ are distributions and $\mathrm{Div}$ is a divergence between them.
How to compute Divergence?
Sampling from $P_G$ and $P_{data}$ is enough; the discriminator plays the role of estimating the divergence.
- Training: $D^* = \arg\max_D V(D, G)$
- Objective function: $V(G, D) = \mathbb{E}_{y \sim P_{data}}[\log D(y)] + \mathbb{E}_{y \sim P_G}[\log(1 - D(y))]$
The maximum value $\max_D V(D, G)$ is related to the JS divergence.
Tips for GAN
JS divergence is not suitable
In most cases, $P_G$ and $P_{data}$ do not overlap.
When two distributions do not overlap, the JS divergence is always $\log 2$.
Intuition: if two distributions do not overlap, a binary classifier can always achieve 100% accuracy.
Wasserstein distance - WGAN
The smallest average distance needed to "move" one distribution onto the other (earth mover's distance).
The constraint that $D$ be 1-Lipschitz means $D$ has to be smooth enough. How to enforce this? See Improved WGAN (gradient penalty), Spectral Normalization (SNGAN)…
GAN is still challenging…
The generator and discriminator need to keep pace with each other; if one stops improving, the other gets stuck too.
GAN for Sequence Generation
Because the decoder takes the max over logits, a small change in the decoder's parameters may not change the output tokens at all, so no useful gradient flows back from the discriminator and training gets stuck.
Then why doesn't max pooling in CNNs suffer from the same problem?
- Because GAN training has to alternate between the generator and the discriminator.
One can try reinforcement learning instead, but that is also very hard.
See ScratchGAN.
Evaluation of Generation
How to evaluate the quality of the generated image?
Feed the generated image into an off-the-shelf image classifier; a concentrated class distribution means higher visual quality.
Diversity:
- Mode Collapse: repeated generations all look alike (low diversity).
- Mode Dropping: only some modes of the real distribution are ever generated.
One way to check diversity: the Inception Score (IS), computed with the Inception network (a network used for classification).
Good quality: the class distribution for a single image is concentrated; high diversity: the class distribution averaged over many images is spread out.
Frechet Inception Distance (FID)
Take the feature vectors right before the Softmax layer (for real and generated images), model each set as a Gaussian, and compute the Fréchet distance between the two Gaussians. Smaller is better. A sketch follows after the list below.
Potential issues:
- Are the features really Gaussian?
- It needs a lot of samples.
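A sketch of the Gaussian Fréchet distance used by FID, given two sets of pre-softmax feature vectors; extracting the features with the Inception network is omitted, and the helper name is hypothetical:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real, feats_fake):
    """Fit a Gaussian to each set of feature vectors (one row per image) and
    compute d^2 = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2})."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(s1 @ s2).real  # drop tiny imaginary parts from numerical error
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2.0 * covmean))
```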
We don’t want memory GAN.
Conditional Generation (CGAN)
Input: a condition $x$ + $z$ sampled from a normal distribution.
A better discriminator takes both $x$ and $y$ and outputs a scalar reflecting whether $y$ is realistic and whether $x$ and $y$ are matched.
The condition $x$ can be text or an image (pix2pix…).
Learning from Unpaired Data
Apply GAN to unpaired (unsupervised) data.
We only have domain $\mathcal{X}$ and domain $\mathcal{Y}$, with no paired data.
Cycle GAN
Only sampling from domain $\mathcal{X}$ and keeping everything else the same as above is not enough: the generator can even ignore its input, making the input and output completely unrelated.
Train two generators, $G_{\mathcal{X}\to\mathcal{Y}}$ and $G_{\mathcal{Y}\to\mathcal{X}}$, and one discriminator $D_{\mathcal{Y}}$.
Require that passing $x$ through $G_{\mathcal{X}\to\mathcal{Y}}$ gives a $y$ that passes discriminator $D_{\mathcal{Y}}$, and that passing that $y$ through $G_{\mathcal{Y}\to\mathcal{X}}$ gives something as close to the original $x$ as possible (cycle consistency).
This way $G_{\mathcal{X}\to\mathcal{Y}}$ can no longer ignore the input $x$.
But could the correspondence the machine learns be different from the one we want?
Empirically it seems to work well.
Of course we can also train two discriminators, $D_{\mathcal{X}}$ and $D_{\mathcal{Y}}$, and run the same process in the reverse direction, going through $G_{\mathcal{Y}\to\mathcal{X}}$ in the middle.
See SELFIE2ANIME
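A PyTorch sketch of the generator-side loss for one direction, combining the adversarial term with cycle consistency; `G_xy`, `G_yx`, and `D_y` are assumed to be any suitable networks (architectures omitted), and the weight `lam` is a placeholder:

```python
import torch
import torch.nn as nn

def cycle_gan_generator_loss(G_xy, G_yx, D_y, x, lam=10.0):
    """X -> Y direction: fool D_y, and reconstruct x after a round trip
    X -> Y -> X (cycle consistency)."""
    bce = nn.BCEWithLogitsLoss()
    fake_y = G_xy(x)
    adv = bce(D_y(fake_y), torch.ones(x.size(0), 1))  # fool the discriminator
    cycle = nn.functional.l1_loss(G_yx(fake_y), x)    # x should come back unchanged
    return adv + lam * cycle
```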
Self-Attention
Sophisticated Input
A set of vectors.
Self-Attention for Speech
With a 10 ms sliding window, there are far too many vectors (the attention cost grows quadratically with sequence length).
Use truncated self-attention: only attend within a limited range around each position.
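A NumPy sketch of truncated self-attention, where each position only attends to neighbors within a fixed window; the window size and projection matrices are arbitrary assumptions:

```python
import numpy as np

def truncated_self_attention(X, Wq, Wk, Wv, window=32):
    """X: (seq_len, d_in). Attention scores outside the window are masked out."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[1]
    scores = Q @ K.T / np.sqrt(d_k)                       # (seq_len, seq_len)
    idx = np.arange(X.shape[0])
    mask = np.abs(idx[:, None] - idx[None, :]) > window   # positions too far apart
    scores[mask] = -1e9                                   # effectively ignored
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)         # row-wise softmax
    return weights @ V
```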
Self-Attention vs. CNN
CNN can be viewed as a simplified special case of self-attention.
- CNN: Good for less data.
- Self-Attention: Good for more data.
Self-Attention vs. RNN
RNN: distant positions are hard to relate (information must be carried through many steps), and computation is not parallel.
Self-Attention for Graph
Consider edge: only consider attention to connected nodes.
This is one type of Graph Neural Network (GNN).
Batch-Normalization
Can we directly change the landscape of the error surface?
Suppose that across the whole dataset, input dimension $x_1$ is always very small while $x_2$ is always very large.
$x_1$ is multiplied by $w_1$ and $x_2$ by $w_2$. Because $x_1$ is small, changing $w_1$ affects the loss only a little; because $x_2$ is large, changing $w_2$ affects the loss a lot. As a result, the gradients of $w_1$ and $w_2$ differ greatly in magnitude, giving an elongated error surface.
Feature Normalization
For inputs $x^1, \dots, x^R$, compute the mean $m_i$ and standard deviation $\sigma_i$ of each dimension $i$ and normalize: $\tilde{x}^r_i = \frac{x^r_i - m_i}{\sigma_i}$.
Each dimension then has mean 0 and variance 1.
Because computing $m_i$ and $\sigma_i$ couples all the examples together, it is as if the entire dataset were fed in at once. Since there is usually too much data for that, in practice the normalization is applied to one batch at a time, hence batch normalization.
Because $m_i$ and $\sigma_i$ are estimated from the batch, the batch size cannot be too small. We can also add learnable parameters $\gamma$ and $\beta$ to adjust the resulting distribution: $\hat{x}^r_i = \gamma_i \tilde{x}^r_i + \beta_i$.
Batch Normalization - Testing
At testing time there may not be a batch, so $m$ and $\sigma$ cannot be computed directly. Instead, keep a moving average of the batch statistics during training, e.g. $\bar{m} \leftarrow p\,\bar{m} + (1-p)\,m^{(t)}$, and use it at test time.
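A NumPy sketch of batch normalization with the moving averages used at test time; the momentum `p` is a placeholder, and in a real framework `gamma`/`beta` would be learned:

```python
import numpy as np

class BatchNorm:
    def __init__(self, dim, p=0.9, eps=1e-5):
        self.gamma, self.beta = np.ones(dim), np.zeros(dim)   # learnable in practice
        self.running_mean, self.running_var = np.zeros(dim), np.ones(dim)
        self.p, self.eps = p, eps

    def forward(self, x, training=True):
        if training:
            m, v = x.mean(axis=0), x.var(axis=0)      # statistics of this batch
            self.running_mean = self.p * self.running_mean + (1 - self.p) * m
            self.running_var = self.p * self.running_var + (1 - self.p) * v
        else:
            m, v = self.running_mean, self.running_var  # moving averages at test time
        x_tilde = (x - m) / np.sqrt(v + self.eps)
        return self.gamma * x_tilde + self.beta
```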
Internal Covariate Shift?
Transformer
Decoder - Non-Autoregressive (NAT)
How to decide the output length for NAT decoder?
- Another predictor for output length;
- Output a very long sequence and truncate it at EOS;
Advantages: parallel decoding and a controllable output length.
NAT is usually worse than AT. (Why? Multi-modality.)
Seq2Seq Tips
Copy Mechanism
Guided Attention
In some tasks, input and output are monotonically aligned.
Monotonic Attention, Location-aware attention.
Beam Search
Greedy decoding: always choose the word with highest probability.
Not possible to check all the paths.
Beam Search: at each step, keep only the top $k$ partial paths.
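A small sketch of beam search over a step function `next_token_log_probs(prefix)`; that function, the `eos` token, and the beam width are assumptions, and greedy decoding is the special case $k = 1$:

```python
def beam_search(next_token_log_probs, eos, max_len=50, k=4):
    """Keep only the k best partial sequences at each step."""
    beams = [([], 0.0)]  # (token list, cumulative log-probability)
    for _ in range(max_len):
        candidates = []
        for tokens, score in beams:
            if tokens and tokens[-1] == eos:          # finished beams are kept as-is
                candidates.append((tokens, score))
                continue
            # assumed interface: returns {token: log_prob} for the next position
            for tok, logp in next_token_log_probs(tokens).items():
                candidates.append((tokens + [tok], score + logp))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
        if all(t and t[-1] == eos for t, _ in beams):
            break
    return beams[0][0]
```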
Optimizing Evaluation Metrics?
How to do the optimization when the metric is not differentiable? Use reinforcement learning (treat the metric as the reward).
Scheduled Sampling
Various Kinds of Attention
Local Attention / Truncated Attention
Only compute attention within a local neighborhood: similar to CNN.
Stride Attention
Attend to positions at a fixed stride (skipping over positions in between).
Global Attention
Add special tokens to the original sequence.
- The special token attends to every token → it collects global information.
- Every token attends to the special token → each token can obtain global information through it.
There is no attention between non-special tokens.
With multi-head attention, different heads can use different attention patterns from above at the same time.
Focus on Critical Parts
Set small attention values directly to 0 (they barely affect the output anyway).
How to quickly estimate the portion with small attention weights?
- Clustering: group similar query and key vectors into the same cluster, and only compute attention within a cluster.
- Learnable patterns: let the network learn which positions need attention (Sinkhorn Sorting Network).
Do we need full attention matrix?
Pick some representative keys (why not reduce the queries instead? that would change the output sequence length).
How to reduce number of keys?
- Compressed Attention
- Linformer (apply different linear transformations to combine the keys into a smaller set)
Attention-Free?
Self-Supervised Learning
BERT
GPT
Predict Next Token
Prompt Tuning.
Self-Supervised Learning for Speech
(A lot of material in between is skipped.)
Domain Adaptation
Domain Shift
The input data of the source domain and the target domain follow different distributions.
If we have some knowledge of the target domain, we can train on the source domain first and then fine-tune with the target data.
Challenge: the target data is scarce, so we must avoid overfitting; and what if the target data has no labels at all?
Basic Idea
Train a feature extractor; for example, when recognizing colored digits, make the feature extractor ignore the colors.
Concretely, feed both source data and target data into the same feature extractor, with the objective that their features end up with similar distributions.
Domain Adversarial Training
While training the feature extractor, also train a domain classifier, and train the two adversarially, much like a GAN.
Why doesn't the feature extractor simply output all zeros (which would fool the domain classifier trivially)?
Because the feature extractor is also followed by a label predictor, so it has to keep features that support that predictor.
The feature extractor's loss can be seen as: (loss from the label predictor side) − (loss from the domain classifier side).
Unknown Label?
What if the target domain contains labels that the source domain does not have?
Domain Generalization
What if we know nothing at all about the target domain?
Use data augmentation to broaden the source domain.
Reinforcement Learning
Intro
After fast-forwarding through a lot of material, we finally arrive at RL!
Self-supervised learning actually still uses labels; they just don't have to be annotated by us.
The Environment gives the Actor an Observation; the Actor outputs an Action to the Environment; the Environment then gives the Actor a Reward and a new Observation.
The function we want to find is the Actor: it maps an observation to an action.
- Step 1: function with unknown parameters
- Step 2: define loss function
- Step 3: optimization
Step 1: Function with Unknown Parameters
Policy Network (Actor)
Input: the observation, represented as a vector or a matrix.
Output: each action corresponds to a neuron in the output layer.
Usually an action is sampled using the output scores as probabilities, rather than taking the argmax directly.
Step 2: Define Loss Function
The whole process from start to end, with all the actions taken, is called an episode.
Each action $a_t$ receives a reward $r_t$; the total reward (return) is $R = \sum_{t=1}^{T} r_t$.
Our goal is therefore to maximize $R$; equivalently, to minimize $-R$ as the loss.
Step 3: Optimization
Actor: given $s_t$, output $a_t$; Env: given $s_t$ and $a_t$, produce $s_{t+1}$.
Reward function: $r_t = r(s_t, a_t)$; the total reward of a trajectory is $R = \sum_t r_t$.
The difficulty is that the Env and the Reward are black boxes with randomness inside.
How to do the optimization here is the main challenge in RL.
Policy Gradient
How to control your Actor
For an observation $s$ and a candidate action $\hat{a}$, use a label $+1$ ($-1$) to mean that this training pair says "do" ("don't do") $\hat{a}$ when seeing $s$.
More generally, give each pair $\{s_n, \hat{a}_n\}$ a value $A_n$ indicating how good that behavior is; the loss becomes $L = \sum_n A_n e_n$, where $e_n$ is the cross-entropy between the actor's output and $\hat{a}_n$.
Version 0 (wrong!)
Directly use the immediate reward $r_n$ as $A_n$.
This is a short-sighted version.
An action affects the subsequent observations and thus subsequent rewards.
- Reward Delay: Actor has to sacrifice immediate reward for future reward.
Version 1
Cumulated reward: use $G_t = \sum_{n=t}^{N} r_n$ as $A_t$. The problem: rewards far in the future are not necessarily due to the earliest actions.
Version 2
Add a discount factor $\gamma < 1$ to Version 1: $G'_t = \sum_{n=t}^{N} \gamma^{n-t} r_n$.
Version 3
Good or bad reward is “relative”.
Subtract a baseline $b$, so that $A_t = G'_t - b$ takes both positive and negative values.
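A NumPy sketch putting Versions 1-3 together: cumulated reward, discounting, and a baseline; here the baseline is simply the mean return, which is one assumed choice of $b$:

```python
import numpy as np

def compute_advantages(rewards, gamma=0.99):
    """A_t = (sum_{n >= t} gamma^(n-t) r_n) - baseline."""
    G = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running   # discounted cumulated reward
        G[t] = running
    return G - G.mean()   # baseline b: the mean return (one simple choice)

print(compute_advantages([0.0, 0.0, 1.0]))
```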
Policy Gradient Theorem
- Initialize the actor network parameters $\theta^0$.
- For each training iteration $i$:
- Use actor $\theta^{i-1}$ to interact with the environment.
- Obtain data $\{s_1, a_1\}, \dots, \{s_N, a_N\}$.
- Compute $A_1, \dots, A_N$.
- Compute the loss $L$ and update $\theta^i \leftarrow \theta^{i-1} - \eta \nabla L$.
Data collection happens inside each iteration: after each parameter update, the data has to be collected all over again.
- On-Policy: The actor to train and the actor for interacting is the same.
- Off-Policy: The actor to train and the actor for interacting is different.
Off-policy methods don't need to re-collect data after every update (how is that possible???)
Off-Policy: Proximal Policy Optimization (PPO)
The actor to train has to know its difference from the actor to interact.
Collecting Training Data: Exploration
The actor needs to have randomness during data collection.
Actor-Critic
Critic: given an actor $\theta$, how good is it when observing $s$ (and taking action $a$)?
Value Function
Value function $V^\theta(s)$: when using actor $\theta$, the discounted cumulated reward expected to be obtained after seeing $s$. It "foresees" the outcome: the expected reward obtained from $s$ onward, under the randomness of $\theta$ (and the environment).
The output values of a critic depend on the actor being evaluated (i.e. on $\theta$).
How to estimate $V^\theta(s)$?
- Monte-Carlo (MC) based approach: the critic watches the actor $\theta$ interact with the environment and uses the observed cumulated rewards. (But some games might never end.)
- Temporal-Difference (TD) approach: use the relation $V^\theta(s_t) \approx r_t + \gamma V^\theta(s_{t+1})$, so a full episode is not required.
MC vs. TD
The two approaches can give different results from the same data.
Version 3.5
In Version 3, use $V^\theta(s_t)$ as the baseline $b$ for $G'_t$ (i.e. take $A_t = G'_t - V^\theta(s_t)$).
But this $G'_t$ comes from a single sampled trajectory, which is still not great (high variance).
Version 4
Replace $G'_t$ with $r_t + V^\theta(s_{t+1})$; that is, $A_t = r_t + V^\theta(s_{t+1}) - V^\theta(s_t)$.
This is the “Advantage Actor-Critic”.
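A tiny sketch of the Version 4 advantage, given a critic `V` (assumed to be any function mapping a state to a value estimate); the optional discount factor is an assumption, since the note above writes the formula without it:

```python
def advantage_v4(V, s_t, r_t, s_next, gamma=1.0):
    """Version 4: A_t = r_t + V(s_{t+1}) - V(s_t), i.e. the reward of taking
    this action relative to the actor's average behavior at s_t.
    (set gamma < 1 to include the discount factor from Version 2)"""
    return r_t + gamma * V(s_next) - V(s_t)
```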
Tip of Actor-Critic
- The parameters of actor and critic can be shared.
Deep Q Network (DQN)
Reward Shaping
Sparse Reward
Sparse reward: there is no explicit reward for most actions (the reward is zero almost everywhere).
The developers define extra rewards to guide the agent → this is reward shaping.
VizDoom
Reward shaping requires a lot of domain knowledge.
Curiosity
Obtaining extra reward when the agent sees something new (but meaningful)
No Reward: Learning from Demonstration
Motivation:
- Even defining a reward can be challenging in some tasks.
- Hand-crafted rewards can lead to uncontrolled behavior.
Imitation Learning
Actor can interact with the environment, but reward function is not available.
We have demonstrations of the expert $\{\hat{\tau}_1, \dots, \hat{\tau}_K\}$; each $\hat{\tau}_k$ is a trajectory of the expert.
"But isn't this just supervised learning?" (That is, behavior cloning.)
Problem:
- The expert only samples limited observations (e.g. when learning to drive, the expert never demonstrates crashing)
- The agent will copy every behavior, even irrelevant actions.
Inverse Reinforcement Learning
Now we have no reward function, only the environment and the expert's demonstrations.
The idea is to infer the reward function from the expert's demonstrations in the environment (ohhhh),
and then use this inferred reward function to find the optimal actor.
- Principle: the teacher is always the best (but this does not mean every action of the teacher should be imitated)
- Basic idea:
- Initialize an actor
- In each iteration:
- The actor interacts with the environment to obtain some trajectories
- Define a reward function, which makes the trajectory of the teacher better than the actor
- The actor learns to maximize the reward based on the new reward function (By RL)
- Output the reward function and the actor learned from the reward function.
GAN and IRL are very similar: the actor plays the role of the generator, and the reward function plays the role of the discriminator.
Robot
Network Compression
Network Pruning
Networks are typically over-parameterized (there are many redundant weights or neurons)
- Importance of a weight
- Importance of a neuron
After pruning, the accuracy will drop.
但我们可以接着 Fine-tuning on training data for recover.
Don’t prune too much at once, or the network will not recover.
Practical Issue:
- Weight pruning: The network architecture becomes irregular (Hard to implement, hard to speedup)
- Neuron pruning: The network architecture is regular (Easy to implement, easy to speedup)
Why pruning instead of directly training a small network? It is widely known that a smaller network is more difficult to train successfully from scratch, while a larger network is easier to optimize; so train a large network first and then prune it.
Knowledge Distillation
Temperature for Softmax: $y_i' = \frac{\exp(y_i / T)}{\sum_j \exp(y_j / T)}$; a larger $T$ makes the teacher's output distribution smoother.
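A NumPy sketch of the temperature-scaled softmax used for the teacher's "soft labels"; the logit values and temperatures are made up:

```python
import numpy as np

def softmax_with_temperature(logits, T=1.0):
    z = np.asarray(logits) / T
    z = z - z.max()                 # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

teacher_logits = [100.0, 10.0, 1.0]                      # made-up teacher outputs
print(softmax_with_temperature(teacher_logits, T=1))     # nearly one-hot
print(softmax_with_temperature(teacher_logits, T=100))   # smoother "soft labels"
```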
Parameter Quantization
- Using less bits to represent a value
- Weight clustering: put weights with similar values into the same cluster and represent each cluster with a single shared value.
- Huffman Coding.
Binary weights: weights restricted to $\pm 1$ (a strong constraint on the model, so it is less prone to overfitting)
Architecture Design
- Depthwise Separable Convolution (underlying principle: low-rank approximation; a sketch follows after this list)
- Depthwise Convolution: the number of filters equals the number of input channels, and each filter looks at only one channel.
- Pointwise Convolution: 1×1 filters (capture the relationships between different channels)
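A PyTorch sketch of a depthwise separable convolution block with an arbitrary choice of channel sizes, plus a parameter-count comparison against a standard convolution:

```python
import torch.nn as nn

def depthwise_separable_conv(in_ch, out_ch, k=3):
    return nn.Sequential(
        # depthwise: one filter per input channel (groups = in_ch)
        nn.Conv2d(in_ch, in_ch, kernel_size=k, padding=k // 2, groups=in_ch),
        # pointwise: 1x1 filters mixing information across channels
        nn.Conv2d(in_ch, out_ch, kernel_size=1),
    )

standard = nn.Conv2d(64, 128, kernel_size=3, padding=1)
separable = depthwise_separable_conv(64, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard), count(separable))  # the separable version is much smaller
```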
Dynamic Computation
The network adjusts the computation it needs.
- Dynamic Depth: attach an extra output layer after each hidden layer (a different one per layer) and train all of them to match the ground truth, so inference can stop early.
- Dynamic Width.
- Computation based on sample difficulty: dynamically adjust the amount of computation according to how hard each sample is.