FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence


Idea


The final loss is the standard supervised loss plus a weighted semi-supervised loss (the paper's proposed term): loss = ℓ_s + λ_u · ℓ_u, where λ_u is a fixed scalar weight.
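
As a concrete reference, here is a minimal PyTorch sketch of this combined loss, assuming a classifier `model`, a labeled batch `(x_l, y_l)`, weakly and strongly augmented views `(u_w, u_s)` of the same unlabeled batch, and the confidence threshold τ from the paper; the function name and default values are illustrative, not an official implementation.

```python
import torch
import torch.nn.functional as F

def fixmatch_loss(model, x_l, y_l, u_w, u_s, tau=0.95, lambda_u=1.0):
    # Supervised term: standard cross-entropy on the labeled batch.
    loss_s = F.cross_entropy(model(x_l), y_l)

    # Pseudo-labels come from the weakly augmented view;
    # no gradient flows through this branch.
    with torch.no_grad():
        probs = torch.softmax(model(u_w), dim=-1)
        max_probs, pseudo_labels = probs.max(dim=-1)
        mask = (max_probs >= tau).float()  # keep only confident predictions

    # Semi-supervised term: cross-entropy between the pseudo-label and the
    # prediction on the strongly augmented view, masked by confidence.
    loss_u = (F.cross_entropy(model(u_s), pseudo_labels,
                              reduction="none") * mask).mean()

    # Final loss: supervised (standard) + weight * semi-supervised (proposed).
    return loss_s + lambda_u * loss_u
```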

Augmentation in FixMatch


FixMatch applies two levels of augmentation to each unlabeled image: a weak augmentation (random flip-and-shift) used to generate the pseudo-label, and a strong augmentation (RandAugment or CTAugment followed by Cutout) on which the model is trained to reproduce that pseudo-label.
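
Below is a torchvision sketch of such a weak/strong pair, assuming CIFAR-10-style 32x32 inputs; `RandAugment` stands in for the paper's RandAugment/CTAugment, `RandomErasing` approximates Cutout, and the exact magnitudes here are illustrative assumptions rather than the paper's tuned values.

```python
import torchvision.transforms as T

# Weak augmentation: flip-and-shift only (pseudo-label branch).
weak = T.Compose([
    T.RandomHorizontalFlip(),
    T.RandomCrop(32, padding=4, padding_mode="reflect"),
    T.ToTensor(),
])

# Strong augmentation: heavy distortion plus a Cutout-like erasure
# (prediction branch).
strong = T.Compose([
    T.RandomHorizontalFlip(),
    T.RandomCrop(32, padding=4, padding_mode="reflect"),
    T.RandAugment(num_ops=2, magnitude=10),
    T.ToTensor(),
    # Roughly a 16x16 square patch on a 32x32 image (assumed size).
    T.RandomErasing(p=1.0, scale=(0.25, 0.25), ratio=(1.0, 1.0)),
])
```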

Additional important factors


  1. Regularization is very important (the paper relies on simple weight decay).
  2. Adam does not work well; use SGD with momentum instead.
  3. Cosine learning rate decay is recommended; the paper sets the learning rate to η · cos(7πk / 16K) at step k out of K total training steps (see the sketch after this list).
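
A minimal sketch of this optimizer and schedule setup, using the hyperparameters reported for CIFAR-10 in the paper (learning rate 0.03, Nesterov momentum 0.9, weight decay 5e-4, 2^20 total steps); the stand-in model is a placeholder assumption.

```python
import math
import torch

# Placeholder for the paper's Wide ResNet backbone (assumption).
model = torch.nn.Linear(3 * 32 * 32, 10)

# Factors 1 and 2: weight decay regularization, SGD + momentum (not Adam).
optimizer = torch.optim.SGD(model.parameters(), lr=0.03,
                            momentum=0.9, nesterov=True,
                            weight_decay=5e-4)

# Factor 3: cosine decay, lr = eta * cos(7*pi*k / (16*K)) at step k of K.
total_steps = 2 ** 20
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer,
    lambda step: math.cos(7 * math.pi * step / (16 * total_steps)),
)
# Call scheduler.step() once per training step, after optimizer.step().
```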