GO DEEPEST
【Tips】
・dlbook_notation
・Formatting Instructions for ICLR2020 Conference Submissions
・Toward Paper Acceptance at Top Conferences (AI research edition)
・Test Preparation for Paper Acceptance at Top AI Conferences
・Toward Paper Acceptance at Top Conferences (NLP research edition)
・Seven Important Points for Evaluation Experiments in Research
・Matsuo-gumi's How to Write a Paper: English Papers
・Papers with code
・Hugging Face
・Distill
・labml.ai Deep Learning Paper Implementations
・PyTorch Image Models
・How PyTorch Transposed Conv1D Works
・Deconvolution and Checkerboard Artifacts
・Flow-based Deep Generative Models
・learn2learn
・Stop Computing Matrix Inverses Carelessly (see the sketch after this list)
・A Roundup of Engineer Training Materials from Well-Known Companies
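A recurring point from the matrix-inverse article above: explicitly forming an inverse is slower and less numerically stable than solving the linear system directly. A minimal sketch of the difference (the sizes and the well-conditioned test matrix are arbitrary stand-ins):

```python
import torch

torch.manual_seed(0)
A = torch.randn(500, 500) + 500 * torch.eye(500)  # well-conditioned stand-in matrix
b = torch.randn(500, 1)

# Discouraged: form the inverse, then multiply.
x_inv = torch.linalg.inv(A) @ b

# Preferred: solve Ax = b directly (faster and numerically safer).
x_solve = torch.linalg.solve(A, b)

print(torch.allclose(x_inv, x_solve, atol=1e-4))  # same answer, different stability
```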
【General】
・Real-ESRGAN-GUI
・TorchOpt
【Transformer-related】
・Transformer Language Model Mathematical Definition
・Building a Better Transformer
・The Front Line of Transformers: Beyond Convolutional Neural Networks
・【Meta-survey】From Transformers to Foundation Models
・Transformer Meta-survey
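Common to everything in this section: the core of a Transformer block is scaled dot-product attention, softmax(QKᵀ/√d)V. A minimal single-head sketch (shapes and names are illustrative, not taken from any particular resource above):

```python
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: (batch, seq_len, d)
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)  # (batch, seq_len, seq_len)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(2, 10, 64)
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([2, 10, 64])
```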
【Flow-based Model】
・Introduction to Normalizing Flows, Part 1: Variational Inference
・Introduction to Normalizing Flows, Part 2: Planar Flow
・Introduction to Normalizing Flows, Part 3: Bijective Coupling
・Introduction to Normalizing Flows, Part 4: Glow
・Introduction to Normalizing Flows, Part 5: Autoregressive Flow
・Introduction to Normalizing Flows, Part 6: Residual Flow
・Introduction to Normalizing Flows, Part 7: Neural ODE and FFJORD
・Variational Inference with Normalizing Flows
・NICE: Non-linear Independent Components Estimation
・Density estimation using Real NVP
・Glow: Generative Flow with Invertible 1x1 Convolutions
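The affine coupling layer from Real NVP (linked above) is the workhorse behind most of these flows: half the dimensions pass through unchanged and parameterize an invertible affine map of the other half, so both the inverse and the Jacobian log-determinant are cheap. A minimal sketch (the network width and the tanh stabilization are implementation choices, not from the paper):

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.half = dim // 2
        # Small net that maps x1 to per-dimension log-scale s and shift t for x2.
        self.net = nn.Sequential(
            nn.Linear(self.half, 128), nn.ReLU(),
            nn.Linear(128, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                   # keep scales in a stable range
        y2 = x2 * torch.exp(s) + t          # y1 = x1 (identity)
        log_det = s.sum(dim=-1)             # log|det J| = sum of log-scales
        return torch.cat([x1, y2], dim=-1), log_det

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(y1).chunk(2, dim=-1)
        s = torch.tanh(s)
        x2 = (y2 - t) * torch.exp(-s)
        return torch.cat([y1, x2], dim=-1)

layer = AffineCoupling(dim=4)
x = torch.randn(8, 4)
y, log_det = layer(x)
print(torch.allclose(layer.inverse(y), x, atol=1e-5))  # True: exactly invertible
```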
【Diffusion Model】
・Awesome Diffusion Models
・Denoising Diffusion Probabilistic Models (DDPM)
・Understanding Diffusion Models: A Unified Perspective
・Generative Modeling by Estimating Gradients of the Data Distribution
・Inject Noise to Remove Noise: A Deep Dive into Score-Based Generative Modeling Techniques
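Shared background for the DDPM links above: the forward (noising) process has the closed form q(x_t | x_0) = N(√ᾱ_t x_0, (1−ᾱ_t)I), which is what the simple noise-prediction loss trains against. A minimal sketch (the linear beta schedule and T = 1000 follow the DDPM paper; the data tensors are stand-ins):

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # linear schedule from the DDPM paper
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # \bar{alpha}_t

def q_sample(x0, t, noise):
    """Sample x_t ~ q(x_t | x_0) in closed form."""
    a_bar = alphas_bar[t].view(-1, 1, 1, 1)
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

x0 = torch.randn(4, 3, 32, 32)                  # stand-in for image batch
t = torch.randint(0, T, (4,))
noise = torch.randn_like(x0)
xt = q_sample(x0, t, noise)
# Training would minimize ||noise - model(xt, t)||^2 (the simple DDPM loss).
print(xt.shape)
```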
【GAN-related】
・Large Scale GAN Training for High Fidelity Natural Image Synthesis
・A Style-Based Generator Architecture for Generative Adversarial Networks
・HoloGAN: Unsupervised Learning of 3D Representations from Natural Images
・Few-Shot Adversarial Learning of Realistic Neural Talking Head Models
・SinGAN: Learning a Generative Model from a Single Natural Image
・Towards a Deeper Understanding of Adversarial Losses under a Discriminative Adversarial Network Setting
●Generative Adversarial Networks
・Generative Adversarial Nets
●BiGAN, ALI
・Adversarial Feature Learning
・Adversarially Learned Inference
・Adversarially Learned Inference (project page)
●VAE-GAN
・Autoencoding beyond pixels using a learned similarity metric
●Adversarial Autoencoder
・Adversarial Autoencoders
●Wasserstein GAN
・Wasserstein GAN
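The WGAN objective replaces the log loss with a difference of critic means, with the critic kept roughly 1-Lipschitz (by weight clipping in the original paper). A minimal sketch of both losses; the critic and data tensors are stand-ins:

```python
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(16, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
real, fake = torch.randn(8, 16), torch.randn(8, 16)  # stand-ins for real/generated data

# Critic maximizes E[D(real)] - E[D(fake)], so we minimize the negation.
loss_critic = critic(fake).mean() - critic(real).mean()

# Generator maximizes E[D(fake)], so it minimizes the negation.
loss_gen = -critic(fake).mean()

# Original WGAN enforces the Lipschitz constraint by clipping critic weights.
with torch.no_grad():
    for p in critic.parameters():
        p.clamp_(-0.01, 0.01)
print(loss_critic.item(), loss_gen.item())
```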
●Gradient Penalty
・Improved Training of Wasserstein GANs
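WGAN-GP replaces weight clipping with a penalty that pushes the critic's gradient norm toward 1 at points interpolated between real and fake samples. A minimal sketch (λ = 10 follows the paper; the critic and data are stand-ins):

```python
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(16, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
real, fake = torch.randn(8, 16), torch.randn(8, 16)

# Random interpolation between real and fake samples.
eps = torch.rand(8, 1)
x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)

# Penalize deviation of the critic's gradient norm from 1.
grad = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)[0]
gp = ((grad.norm(2, dim=1) - 1) ** 2).mean()

loss_critic = critic(fake).mean() - critic(real).mean() + 10.0 * gp
print(loss_critic.item())
```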
●Perceptual Loss
・Perceptual Losses for Real-Time Style Transfer and Super-Resolution
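Perceptual loss compares images in the feature space of a fixed pretrained network (VGG-16 in the paper) instead of pixel space. A minimal sketch assuming a recent torchvision (downloads VGG-16 weights on first run; the cut at relu2_2 is one common choice):

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

# Frozen feature extractor: VGG-16 up to relu2_2 (feature index 9).
vgg = vgg16(weights=VGG16_Weights.DEFAULT).features[:9].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def perceptual_loss(pred, target):
    # MSE between feature maps, not pixels.
    return F.mse_loss(vgg(pred), vgg(target))

pred, target = torch.rand(2, 3, 128, 128), torch.rand(2, 3, 128, 128)
print(perceptual_loss(pred, target).item())
```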
●Hinge Loss
・Hierarchical Implicit Models and Likelihood-Free Variational Inference (Tran, Ranganath, Blei, 2017)
・Geometric GAN (Lim & Ye, 2017)
・Spectral Normalization for Generative Adversarial Networks (Miyato, Kataoka, Koyama, Yoshida, 2018)
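The hinge formulation introduced in the papers above (and later used in SAGAN/BigGAN): the discriminator minimizes max(0, 1−D(x)) + max(0, 1+D(G(z))), and the generator minimizes −D(G(z)). A minimal sketch with stand-in logits:

```python
import torch
import torch.nn.functional as F

d_real = torch.randn(8)  # discriminator logits on real samples (stand-ins)
d_fake = torch.randn(8)  # discriminator logits on generated samples

# Discriminator hinge loss.
loss_d = F.relu(1.0 - d_real).mean() + F.relu(1.0 + d_fake).mean()

# Generator hinge loss.
loss_g = -d_fake.mean()
print(loss_d.item(), loss_g.item())
```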
●Feature Matching Loss
・High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs
●Instance Normalization
・Instance Normalization: The Missing Ingredient for Fast Stylization
●Spectral Normalization
・Spectral Normalization for Generative Adversarial Networks
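PyTorch ships spectral normalization as a built-in parametrization, so applying it to a discriminator is a one-line wrap per layer; each weight is divided by an estimate of its largest singular value (power iteration). A minimal sketch:

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm

# Stand-in discriminator with spectrally normalized convolutions.
disc = nn.Sequential(
    spectral_norm(nn.Conv2d(3, 64, 4, stride=2, padding=1)), nn.LeakyReLU(0.2),
    spectral_norm(nn.Conv2d(64, 1, 4)),
)
print(disc(torch.randn(1, 3, 8, 8)).shape)  # torch.Size([1, 1, 1, 1])
```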
●Adaptive Instance Normalization (AdaIN)
・Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization
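AdaIN has no learned parameters: it normalizes the content features per channel, then imposes the style features' per-channel mean and standard deviation. A minimal sketch:

```python
import torch

def adain(content, style, eps=1e-5):
    # content, style: (batch, channels, H, W)
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True)
    # Normalize content statistics, then impose style statistics.
    return s_std * (content - c_mean) / c_std + s_mean

content, style = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
print(adain(content, style).shape)
```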
●Self-Attention
・Self-Attention Generative Adversarial Networks
●Projection Discriminator
・cGANs with Projection Discriminator
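In the projection discriminator, the class label enters D as an inner product with the feature vector rather than by concatenation: D(x, y) = ψ(φ(x)) + e_y · φ(x). A minimal sketch (the feature extractor φ and all sizes are stand-ins):

```python
import torch
import torch.nn as nn

class ProjectionDiscriminator(nn.Module):
    def __init__(self, feat_dim=128, num_classes=10):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(16, feat_dim), nn.ReLU())  # stand-in extractor
        self.psi = nn.Linear(feat_dim, 1)                 # unconditional output head
        self.embed = nn.Embedding(num_classes, feat_dim)  # class embedding e_y

    def forward(self, x, y):
        h = self.phi(x)
        # Unconditional score + projection of the class embedding onto the features.
        return self.psi(h).squeeze(-1) + (self.embed(y) * h).sum(dim=-1)

d = ProjectionDiscriminator()
print(d(torch.randn(8, 16), torch.randint(0, 10, (8,))).shape)
```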
●Inception Score
・Improved Techniques for Training GANs
●Fréchet Inception Distance (FID)
・GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium
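FID fits a Gaussian to Inception activations of real and generated samples and takes the Fréchet distance between the two: ||μ_r − μ_g||² + Tr(Σ_r + Σ_g − 2(Σ_rΣ_g)^{1/2}). A minimal sketch of the formula itself given precomputed activations (real evaluations use Inception-v3 pool features over many thousands of samples):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(act_real, act_gen):
    mu_r, mu_g = act_real.mean(0), act_gen.mean(0)
    sigma_r = np.cov(act_real, rowvar=False)
    sigma_g = np.cov(act_gen, rowvar=False)
    covmean = sqrtm(sigma_r @ sigma_g)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    return np.sum((mu_r - mu_g) ** 2) + np.trace(sigma_r + sigma_g - 2 * covmean)

rng = np.random.default_rng(0)
act_real = rng.normal(size=(256, 64))          # stand-ins for Inception activations
act_gen = rng.normal(0.1, 1.0, size=(256, 64))
print(fid(act_real, act_gen))
```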
●Structural Similarity (SSIM)
・Image Quality Assessment: From Error Visibility to Structural Similarity
●Perception-Distortion Tradeoff
・The Perception-Distortion Tradeoff
●Cosine Similarity (CSIM)
・ArcFace: Additive Angular Margin Loss for Deep Face Recognition
【Meta Learning】
・Meta-Learning: Learning to Learn Fast - Lil'Log
・What Is Few-shot Learning?【Generalizing from a few examples: A survey on few-shot learning】
・MetaGAN: An Adversarial Approach to Few-Shot Learning
●MAML
・Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
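The heart of MAML: adapt on a task's support set with one or more gradient steps, then differentiate the query loss through that adaptation. A minimal second-order sketch for toy regression, assuming PyTorch 2.x for torch.func.functional_call (the model, task distribution, and step sizes are all stand-ins; learn2learn, linked under Tips, wraps this pattern):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(1, 1)  # stand-in model
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr = 0.01

def sample_task():
    """Stand-in task: y = a * x with a random slope; support and query sets."""
    a = torch.randn(1)
    x_s, x_q = torch.randn(10, 1), torch.randn(10, 1)
    return (x_s, a * x_s), (x_q, a * x_q)

for step in range(100):
    meta_opt.zero_grad()
    for _ in range(4):  # meta-batch of tasks
        (x_s, y_s), (x_q, y_q) = sample_task()
        params = dict(model.named_parameters())
        # Inner step: adapt on the support set, keeping the graph (create_graph=True).
        loss_s = F.mse_loss(torch.func.functional_call(model, params, x_s), y_s)
        grads = torch.autograd.grad(loss_s, params.values(), create_graph=True)
        adapted = {n: p - inner_lr * g for (n, p), g in zip(params.items(), grads)}
        # Outer loss: query loss under the adapted parameters; backprop reaches
        # the original parameters through the inner update.
        loss_q = F.mse_loss(torch.func.functional_call(model, adapted, x_q), y_q)
        loss_q.backward()
    meta_opt.step()
```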
●Reptile (FOMAML)
・On First-Order Meta-Learning Algorithms
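Reptile drops the second-order term entirely: run plain SGD on a task starting from the current weights, then nudge the meta-parameters toward the adapted weights, θ ← θ + ε(θ̃ − θ). A minimal sketch with the same kind of stand-in task:

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(1, 1)  # stand-in model
meta_step_size = 0.1

for step in range(100):
    a = torch.randn(1)   # stand-in task: y = a * x
    x = torch.randn(10, 1)
    y = a * x

    # Inner loop: ordinary SGD on a copy of the model (no second-order terms).
    task_model = copy.deepcopy(model)
    inner_opt = torch.optim.SGD(task_model.parameters(), lr=0.01)
    for _ in range(5):
        inner_opt.zero_grad()
        F.mse_loss(task_model(x), y).backward()
        inner_opt.step()

    # Meta-update: move the meta-parameters toward the adapted weights.
    with torch.no_grad():
        for p, p_task in zip(model.parameters(), task_model.parameters()):
            p += meta_step_size * (p_task - p)
```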
●Implicit MAML
・Meta-Learning with Implicit Gradients
・Modular Meta-Learning with Shrinkage
●CAVIA
・Fast Context Adaptation via Meta-Learning
●TAML
・Task-Agnostic Meta-Learning for Few-shot Learning
【The Lottery Ticket Hypothesis】
・The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
・What's Hidden in a Randomly Weighted Neural Network?
・Training BatchNorm and Only BatchNorm: On the Expressive Power of Random Features in CNNs
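The procedure behind the first paper is iterative magnitude pruning with rewinding: train, prune the smallest-magnitude weights, reset the survivors to their initial values, retrain, repeat. A minimal one-round sketch (training loops omitted; the 20% rate and the single linear layer are stand-ins):

```python
import copy
import torch
import torch.nn as nn

model = nn.Linear(100, 10)                      # stand-in network
init_state = copy.deepcopy(model.state_dict())  # save the original init theta_0

# ... train `model` to convergence here ...

# Prune the 20% smallest-magnitude weights (one IMP round).
w = model.weight.detach().abs()
threshold = torch.quantile(w.flatten(), 0.2)
mask = (w > threshold).float()

# Rewind surviving weights to their initial values; pruned weights stay zero.
with torch.no_grad():
    model.load_state_dict(init_state)
    model.weight *= mask

# Retrain the masked network (re-applying `mask` after every optimizer step),
# then repeat prune -> rewind for further rounds.
print(f"sparsity: {1 - mask.mean().item():.2f}")
```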
Published:
Last updated: 2022/09/27