GO DEEPEST
【Tips】
・dlbook_notation
・Formatting Instructions for ICLR 2020 Conference Submissions
・Toward Paper Acceptance at Top Conferences (AI Research Edition)
・Preparation Strategies for Paper Acceptance at Top AI Conferences
・Toward Paper Acceptance at Top Conferences (NLP Research Edition)
・Seven Important Points for Evaluation Experiments in Research
・Matsuo-gumi's Guide to Writing Papers: English Papers
・How PyTorch Transposed Conv1D Works
・Deconvolution and Checkerboard Artifacts
・Flow-based Deep Generative Models
・learn2learn
【Flow-based Model】
・Variational Inference with Normalizing Flows
・NICE: Non-linear Independent Components Estimation
・Density estimation using Real NVP
・Glow: Generative Flow with Invertible 1x1 Convolutions
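The four papers above all build on the change-of-variables formula log p(x) = log p_Z(f(x)) + log|det ∂f/∂x| for an invertible map f. A minimal NumPy sketch with a single invertible affine transform (function name is illustrative, not from any of the papers):

```python
import numpy as np

def affine_flow_logprob(x, scale, shift):
    """Exact log-density of x under an affine flow into a standard normal.

    The flow z = (x - shift) / scale is invertible, so change of variables
    gives log p(x) = log N(z; 0, 1) + log|dz/dx|.
    """
    z = (x - shift) / scale
    log_base = -0.5 * (z ** 2 + np.log(2.0 * np.pi))  # log N(z; 0, 1), per dim
    log_det = -np.log(np.abs(scale))                  # log|dz/dx|, per dim
    return (log_base + log_det).sum(axis=-1)
```

This recovers the density of N(shift, scale²); NICE, Real NVP, and Glow stack many such invertible layers (coupling layers, invertible 1x1 convolutions) chosen so the Jacobian log-determinant stays cheap to compute.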
【GAN-related】
・Large Scale GAN Training for High Fidelity Natural Image Synthesis
・A Style-Based Generator Architecture for Generative Adversarial Networks
・HoloGAN: Unsupervised Learning of 3D Representations from Natural Images
・Few-Shot Adversarial Learning of Realistic Neural Talking Head Models
・SinGAN: Learning a Generative Model from a Single Natural Image
・Towards a Deeper Understanding of Adversarial Losses under a Discriminative Adversarial Network Setting
●Generative Adversarial Networks
・Generative Adversarial Nets
●BiGAN, ALI
・Adversarial Feature Learning
・Adversarially Learned Inference
・Adversarially Learned Inference - GitHub Pages
●VAE-GAN
・Autoencoding beyond pixels using a learned similarity metric
●Adversarial Autoencoder
・Adversarial Autoencoders
●Wasserstein GAN
・Wasserstein GAN
●Gradient Penalty
・Improved Training of Wasserstein GANs
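WGAN-GP penalizes the critic's gradient norm at points interpolated between real and fake samples. A toy NumPy sketch for a linear critic D(x) = w·x, whose input gradient is w everywhere, so no autodiff is needed (all names are illustrative):

```python
import numpy as np

def gradient_penalty(w, x_real, x_fake, rng):
    """Mean of (||grad_x D(x_hat)|| - 1)^2 over random interpolates x_hat."""
    eps = rng.uniform(size=(x_real.shape[0], 1))
    x_hat = eps * x_real + (1.0 - eps) * x_fake  # points between real and fake
    grad = np.tile(w, (x_hat.shape[0], 1))       # grad of a linear critic is w
    norms = np.linalg.norm(grad, axis=1)
    return np.mean((norms - 1.0) ** 2)
```

In a real implementation the gradient at each x_hat comes from autodiff; the penalty drives the critic toward unit gradient norm, a soft version of the 1-Lipschitz constraint.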
●Perceptual Loss
・Perceptual Loss for Real-Time Style Transfer and Super-Resolution
●Hinge Loss
・Hierarchical Implicit Models and Likelihood-Free Variational Inference (Tran, Ranganath & Blei, 2017)
・Geometric GAN (Lim & Ye, 2017)
・Spectral Normalization for Generative Adversarial Networks (Miyato, Kataoka, Koyama & Yoshida, 2018)
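These papers popularized the hinge formulation of the GAN objective: the discriminator is pushed to score real samples above +1 and fakes below -1, while the generator simply raises the critic's score on fakes. A minimal NumPy sketch (function names are illustrative):

```python
import numpy as np

def d_hinge_loss(real_scores, fake_scores):
    # E[max(0, 1 - D(x))] + E[max(0, 1 + D(G(z)))]
    return (np.maximum(0.0, 1.0 - real_scores).mean()
            + np.maximum(0.0, 1.0 + fake_scores).mean())

def g_hinge_loss(fake_scores):
    # -E[D(G(z))]
    return -fake_scores.mean()
```

Once the margins are satisfied (real scores above +1, fake scores below -1) the discriminator loss is exactly zero, which keeps its gradients from growing unboundedly.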
●Feature Matching Loss
・High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs
●Instance Normalization
・Instance Normalization: The Missing Ingredient for Fast Stylization
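Instance normalization standardizes each feature map per sample and per channel, over the spatial dimensions only, which is what makes it effective for stylization. A minimal NumPy sketch (without the learned affine parameters):

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Normalize an (N, C, H, W) tensor over H and W, separately per (n, c)."""
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)
```

Unlike batch normalization, the statistics never mix information across samples, so the result is identical at train and test time and independent of batch size.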
●Spectral Normalization
・Spectral Normalization for Generative Adversarial Networks
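Spectral normalization divides a weight matrix by its largest singular value, estimated cheaply with power iteration, so each layer becomes approximately 1-Lipschitz. A minimal NumPy sketch (function name is illustrative):

```python
import numpy as np

def spectral_normalize(W, n_iter=50, seed=0):
    """Return W / sigma_max(W), with sigma_max estimated by power iteration."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # estimate of the largest singular value
    return W / sigma
```

In practice a single power iteration per training step suffices, since the weights (and hence the leading singular vectors) change slowly between steps.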
●Adaptive Instance Normalization (AdaIN)
・Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization
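AdaIN reuses the instance-norm statistics for style transfer: it whitens the content features, then re-colors them with the style features' per-channel mean and standard deviation. A minimal NumPy sketch:

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Align (N, C, H, W) content feature statistics to those of style."""
    c_mean = content.mean(axis=(2, 3), keepdims=True)
    c_std = content.std(axis=(2, 3), keepdims=True) + eps
    s_mean = style.mean(axis=(2, 3), keepdims=True)
    s_std = style.std(axis=(2, 3), keepdims=True)
    return s_std * (content - c_mean) / c_std + s_mean
```

Because the "style" here is just a pair of statistics per channel, the same operation lets StyleGAN inject a learned style vector at every layer of its generator.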
●Self-Attention
・Self-Attention Generative Adversarial Networks
●Projection Discriminator
・cGANs with Projection Discriminator
●Inception Score
・Improved Techniques for Training GANs
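The Inception Score from this paper is exp(E_x KL(p(y|x) || p(y))): it is high when each sample is classified confidently (a sharp p(y|x)) and the samples cover many classes (a broad marginal p(y)). A minimal NumPy sketch over precomputed classifier probabilities:

```python
import numpy as np

def inception_score(p_yx):
    """p_yx: (N, K) class probabilities from a classifier (Inception-v3 in
    the original); returns exp of the mean KL(p(y|x) || p(y))."""
    p_y = p_yx.mean(axis=0, keepdims=True)                    # marginal p(y)
    kl = np.sum(p_yx * (np.log(p_yx) - np.log(p_y)), axis=1)  # per-sample KL
    return float(np.exp(kl.mean()))
```

The score ranges from 1 (every sample gets the same class distribution) up to K (confident and perfectly diverse predictions over K classes).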
●Fréchet Inception Distance (FID)
・GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium
●Structural Similarity (SSIM)
・Image Quality Assessment: From Error Visibility to Structural Similarity
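SSIM compares two images through luminance, contrast, and structure terms; in the simplified single-window form these collapse into one formula over the means, variances, and covariance. A minimal NumPy sketch using global statistics (the paper applies this in a sliding window and averages), with the standard constants c1 = (0.01·L)² and c2 = (0.03·L)²:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Simplified single-window SSIM between two equally sized images."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

SSIM is 1 only for identical images and decreases as structure diverges, which tracks perceived quality better than pixelwise MSE.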
●Perception-Distortion Tradeoff
・The Perception-Distortion Tradeoff
●Cosine Similarity (CSIM)
・ArcFace: Additive Angular Margin Loss for Deep Face Recognition
【Meta Learning】
・Meta-Learning: Learning to Learn Fast - Lil'Log
・MetaGAN: An Adversarial Approach to Few-Shot Learning
●MAML
・Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
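MAML's meta-objective differentiates through the inner-loop adaptation. A toy, fully analytic sketch: tasks are scalar targets t, the per-task loss is L(θ) = (θ - t)², one inner SGD step adapts θ per task, and the outer gradient picks up a (1 - 2α) factor from differentiating through that step (all names illustrative):

```python
def maml_step(theta, tasks, alpha=0.1, beta=0.05):
    """One meta-update of a scalar parameter; per-task loss is (theta - t)^2."""
    meta_grad = 0.0
    for t in tasks:
        grad_inner = 2.0 * (theta - t)        # inner-loop gradient dL/dtheta
        theta_i = theta - alpha * grad_inner  # task-adapted parameter
        # d L(theta_i) / d theta, differentiating through the inner step:
        meta_grad += 2.0 * (theta_i - t) * (1.0 - 2.0 * alpha)
    return theta - beta * meta_grad / len(tasks)
```

First-order variants such as FOMAML and Reptile drop that second-order correction, i.e. they treat dθ_i/dθ as 1 instead of (1 - 2α) here.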
●Reptile (FOMAML)
・On First-Order Meta-Learning Algorithms
●Implicit MAML
・Meta-Learning with Implicit Gradients
・Modular Meta-Learning with Shrinkage
●CAVIA
・Fast Context Adaptation via Meta-Learning
●TAML
・Task-Agnostic Meta-Learning for Few-shot Learning
【The Lottery Ticket Hypothesis】
・The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
・What's Hidden in a Randomly Weighted Neural Network?
・Training BatchNorm and Only BatchNorm: On the Expressive Power of Random Features in CNNs
Published:
Last updated: 2020/11/09