{"id":22088,"date":"2020-01-16T23:45:37","date_gmt":"2020-01-16T14:45:37","guid":{"rendered":"https:\/\/blog.yokokanno.com\/?page_id=22088"},"modified":"2022-09-27T10:39:57","modified_gmt":"2022-09-27T01:39:57","slug":"go-deepest","status":"publish","type":"page","link":"https:\/\/blog.yokokanno.com\/?page_id=22088","title":{"rendered":"GO DEEPEST"},"content":{"rendered":"<p>\u3010Tips\u3011<br \/>\n\u30fb<a href=\"https:\/\/github.com\/goodfeli\/dlbook_notation\/\" target=\"_blank\" rel=\"noopener noreferrer\">dlbook_notation<\/a><br \/>\n\u30fb<a href=\"https:\/\/github.com\/ICLR\/Master-Template\/blob\/master\/iclr2020\/iclr2020_conference.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">Formatting Instructions for ICLR2020 Conference Submissions<\/a><br \/>\n\u30fb<a href=\"https:\/\/www.slideshare.net\/JunSuzuki21\/20191123jsaiinvitedtalk-205359389\" target=\"_blank\" rel=\"noopener noreferrer\">Toward Paper Acceptance at Top Conferences (AI Research Edition)<\/a><br \/>\n\u30fb<a href=\"https:\/\/www.ai-gakkai.or.jp\/jsai2020\/wp-content\/uploads\/sites\/10\/2020\/06\/jsai2020_tutorial_suzuki_ver2.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">Preparation Strategies for Paper Acceptance at Top AI Conferences<\/a><br \/>\n\u30fb<a href=\"https:\/\/www.slideshare.net\/JunSuzuki21\/2019-0826-yansinvitedtalk\" target=\"_blank\" rel=\"noopener noreferrer\">Toward Paper Acceptance at Top Conferences (NLP Research Edition)<\/a><br \/>\n\u30fb<a href=\"https:\/\/qiita.com\/yu4u\/items\/606e6e5225ad9b603269\" target=\"_blank\" rel=\"noopener noreferrer\">Seven Important Points for Evaluation Experiments in Research<\/a><br 
\/>\n\u30fb<a href=\"http:\/\/ymatsuo.com\/japanese\/ronbun_eng.html\" target=\"_blank\" rel=\"noopener noreferrer\">Matsuo-gumi&#8217;s Guide to Writing Papers: English Papers<\/a><\/p>\n<p>\u30fb<a href=\"https:\/\/paperswithcode.com\/\" target=\"_blank\" rel=\"noopener\">Papers with code<\/a><br \/>\n\u30fb<a href=\"https:\/\/huggingface.co\/\" target=\"_blank\" rel=\"noopener\">Hugging Face<\/a><br \/>\n\u30fb<a href=\"https:\/\/distill.pub\/\" target=\"_blank\" rel=\"noopener\">Distill<\/a><br \/>\n\u30fb<a href=\"https:\/\/github.com\/labmlai\/annotated_deep_learning_paper_implementations\" target=\"_blank\" rel=\"noopener\">labml.ai Deep Learning Paper Implementations<\/a><br \/>\n\u30fb<a href=\"https:\/\/github.com\/rwightman\/pytorch-image-models\" target=\"_blank\" rel=\"noopener\">PyTorch Image Models<\/a><\/p>\n<p>\u30fb<a href=\"https:\/\/medium.com\/@santi.pdp\/how-pytorch-transposed-convs1d-work-a7adac63c4a5\" target=\"_blank\" rel=\"noopener noreferrer\">How PyTorch Transposed Convs1D Work<\/a><br \/>\n\u30fb<a href=\"https:\/\/distill.pub\/2016\/deconv-checkerboard\/\" target=\"_blank\" rel=\"noopener noreferrer\">Deconvolution and Checkerboard Artifacts<\/a><br \/>\n\u30fb<a href=\"https:\/\/lilianweng.github.io\/lil-log\/2018\/10\/13\/flow-based-deep-generative-models.html\" target=\"_blank\" rel=\"noopener noreferrer\">Flow-based Deep Generative Models<\/a><br \/>\n\u30fb<a href=\"http:\/\/learn2learn.net\/\" target=\"_blank\" rel=\"noopener noreferrer\">learn2learn<\/a><\/p>\n<p>\u30fb<a href=\"https:\/\/opqrstuvcut.github.io\/blog\/posts\/%E5%AE%89%E6%98%93%E3%81%AB%E9%80%86%E8%A1%8C%E5%88%97%E3%82%92%E6%95%B0%E5%80%A4%E8%A8%88%E7%AE%97%E3%81%99%E3%82%8B%E3%81%AE%E3%81%AF%E3%82%84%E3%82%81%E3%82%88%E3%81%86\/\" target=\"_blank\" rel=\"noopener\">Stop Carelessly Computing Matrix Inverses Numerically<\/a><br \/>\n\u30fb<a 
href=\"https:\/\/qiita.com\/KNR109\/items\/f3268b311e11d5b821c0\" target=\"_blank\" rel=\"noopener\">A Roundup of Training Materials for Engineers from Well-Known Companies<\/a><\/p>\n<p>\u3010General\u3011<br \/>\n\u30fb<a href=\"https:\/\/github.com\/tsukumijima\/Real-ESRGAN-GUI\" target=\"_blank\" rel=\"noopener\">Real-ESRGAN-GUI<\/a><br \/>\n\u30fb<a href=\"https:\/\/github.com\/metaopt\/torchopt\" target=\"_blank\" rel=\"noopener\">TorchOpt<\/a><\/p>\n<p>\u3010Transformer-related\u3011<br \/>\n\u30fb<a href=\"https:\/\/www.apronus.com\/math\/transformer-language-model-definition\" target=\"_blank\" rel=\"noopener\">Transformer Language Model Mathematical Definition<\/a><br \/>\n\u30fb<a href=\"https:\/\/speakerdeck.com\/butsugiri\/yoriliang-itransformerwotukuru\" target=\"_blank\" rel=\"noopener\">Building a Better Transformer<\/a><br \/>\n\u30fb<a href=\"https:\/\/speakerdeck.com\/yushiku\/20220608_ssii_transformer\" target=\"_blank\" rel=\"noopener\">The Forefront of Transformers: Beyond Convolutional Neural Networks<\/a><br \/>\n\u30fb<a href=\"https:\/\/www.slideshare.net\/cvpaperchallenge\/transformer-from-transformer-to-foundation-models\" target=\"_blank\" rel=\"noopener\">\u3010Meta-survey\u3011From Transformer to Foundation Models<\/a><br \/>\n\u30fb<a href=\"https:\/\/www.slideshare.net\/cvpaperchallenge\/transformer-247407256\" target=\"_blank\" rel=\"noopener\">Transformer Meta-survey<\/a><\/p>\n<p>\u3010Flow-based Model\u3011<br \/>\n\u30fb<a href=\"https:\/\/tatsy.github.io\/blog\/posts\/2020\/2020-12-30-normalizing_flow%E5%85%A5%E9%96%80_%E7%AC%AC1%E5%9B%9E\/\" target=\"_blank\" rel=\"noopener\">Introduction to Normalizing Flows, Part 1: Variational Inference<\/a><br \/>\n\u30fb<a 
href=\"https:\/\/tatsy.github.io\/blog\/posts\/2021\/2021-01-03-normalizing_flow%E5%85%A5%E9%96%80_%E7%AC%AC2%E5%9B%9E\/\" target=\"_blank\" rel=\"noopener\">Introduction to Normalizing Flows, Part 2: Planar Flow<\/a><br \/>\n\u30fb<a href=\"https:\/\/tatsy.github.io\/blog\/posts\/2021\/2021-01-05-normalizing_flow%E5%85%A5%E9%96%80_%E7%AC%AC3%E5%9B%9E\/\" target=\"_blank\" rel=\"noopener\">Introduction to Normalizing Flows, Part 3: Bijective Coupling<\/a><br \/>\n\u30fb<a href=\"https:\/\/tatsy.github.io\/blog\/posts\/2021\/2021-01-06-normalizing_flow%E5%85%A5%E9%96%80_%E7%AC%AC4%E5%9B%9E\/\" target=\"_blank\" rel=\"noopener\">Introduction to Normalizing Flows, Part 4: Glow<\/a><br \/>\n\u30fb<a href=\"https:\/\/tatsy.github.io\/blog\/posts\/2021\/2021-01-06-normalizing_flow%E5%85%A5%E9%96%80_%E7%AC%AC5%E5%9B%9E\/\" target=\"_blank\" rel=\"noopener\">Introduction to Normalizing Flows, Part 5: Autoregressive Flow<\/a><br \/>\n\u30fb<a href=\"https:\/\/tatsy.github.io\/blog\/posts\/2021\/2021-01-10-normalizing_flow%E5%85%A5%E9%96%80_%E7%AC%AC6%E5%9B%9E\/\" target=\"_blank\" rel=\"noopener\">Introduction to Normalizing Flows, Part 6: Residual Flow<\/a><br \/>\n\u30fb<a href=\"https:\/\/tatsy.github.io\/blog\/posts\/2021\/2021-01-11-normalizing_flow%E5%85%A5%E9%96%80_%E7%AC%AC7%E5%9B%9E\/\" target=\"_blank\" rel=\"noopener\">Introduction to Normalizing Flows, Part 7: Neural ODE and FFJORD<\/a><\/p>\n<p>\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1505.05770\" target=\"_blank\" rel=\"noopener noreferrer\">Variational Inference with Normalizing Flows<\/a><br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1410.8516\" target=\"_blank\" rel=\"noopener noreferrer\">NICE: Non-linear Independent Components Estimation<\/a><br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1605.08803\" target=\"_blank\" rel=\"noopener noreferrer\">Density estimation using Real NVP<\/a><br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1807.03039\" target=\"_blank\" rel=\"noopener 
noreferrer\">Glow: Generative Flow with Invertible 1&#215;1 Convolutions<\/a><\/p>\n<p>\u3010Diffusion Model\u3011<br \/>\n\u30fb<a href=\"https:\/\/github.com\/heejkoo\/Awesome-Diffusion-Models\" target=\"_blank\" rel=\"noopener\">Awesome Diffusion Models<\/a><br \/>\n\u30fb<a href=\"https:\/\/nn.labml.ai\/diffusion\/ddpm\/index.html\" target=\"_blank\" rel=\"noopener\">Denoising Diffusion Probabilistic Models (DDPM)<\/a><br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/2208.11970\" target=\"_blank\" rel=\"noopener\">Understanding Diffusion Models: A Unified Perspective<\/a><br \/>\n\u30fb<a href=\"https:\/\/yang-song.net\/blog\/2021\/score\/\" target=\"_blank\" rel=\"noopener\">Generative Modeling by Estimating Gradients of the Data Distribution<\/a><br \/>\n\u30fb<a href=\"https:\/\/wandb.ai\/ucalyptus\/ScoreGM\/reports\/Score-Based-Generative-Modeling-Techniques--Vmlldzo1OTE2NDg\" target=\"_blank\" rel=\"noopener\">Inject Noise to Remove Noise: A Deep Dive into Score-Based Generative Modeling Techniques<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>\u3010GAN-related\u3011<br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1809.11096\" target=\"_blank\" rel=\"noopener noreferrer\">Large Scale GAN Training for High Fidelity Natural Image Synthesis<\/a><br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1812.04948\" target=\"_blank\" rel=\"noopener noreferrer\">A Style-Based Generator Architecture for Generative Adversarial Networks<\/a><br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1904.01326\" target=\"_blank\" rel=\"noopener noreferrer\">HoloGAN: Unsupervised Learning of 3D Representations from Natural Images<\/a><br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1905.08233\" target=\"_blank\" rel=\"noopener noreferrer\">Few-Shot Adversarial Learning of Realistic Neural Talking Head Models<\/a><br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1905.01164\" target=\"_blank\" rel=\"noopener noreferrer\">SinGAN: Learning a Generative Model from a Single Natural 
Image<\/a><br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1901.08753\" target=\"_blank\" rel=\"noopener noreferrer\">Towards a Deeper Understanding of Adversarial Losses under a Discriminative Adversarial Network Setting<\/a><\/p>\n<p>\u25cfGenerative Adversarial Networks<br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1406.2661\" target=\"_blank\" rel=\"noopener noreferrer\">Generative Adversarial Nets<\/a><br \/>\n\u25cfBiGAN, ALI<br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1605.09782\" target=\"_blank\" rel=\"noopener noreferrer\">Adversarial Feature Learning<\/a><br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1606.00704\" target=\"_blank\" rel=\"noopener noreferrer\">Adversarially Learned Inference<\/a><br \/>\n\u30fb<a href=\"https:\/\/ishmaelbelghazi.github.io\/ALI\/\" target=\"_blank\" rel=\"noopener noreferrer\">Adversarially Learned Inference &#8211; GitHub Pages<\/a><br \/>\n\u25cfVAE-GAN<br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1512.09300\" target=\"_blank\" rel=\"noopener noreferrer\">Autoencoding beyond pixels using a learned similarity metric<\/a><br \/>\n\u25cfAdversarial Autoencoder<br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1511.05644\" target=\"_blank\" rel=\"noopener noreferrer\">Adversarial Autoencoders<\/a><br \/>\n\u25cfWasserstein GAN<br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1701.07875\" target=\"_blank\" rel=\"noopener noreferrer\">Wasserstein GAN<\/a><br \/>\n\u25cfGradient Penalty<br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1704.00028\" target=\"_blank\" rel=\"noopener noreferrer\">Improved Training of Wasserstein GANs<\/a><br \/>\n\u25cfPerceptual Loss<br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1603.08155\" target=\"_blank\" rel=\"noopener noreferrer\">Perceptual Losses for Real-Time Style Transfer and Super-Resolution<\/a><br \/>\n\u25cfHinge Loss<br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1702.08896\" target=\"_blank\" rel=\"noopener noreferrer\">Hierarchical Implicit Models and 
Likelihood-Free Variational Inference<\/a> Tran, Ranganath, Blei, 2017<br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1705.02894\" target=\"_blank\" rel=\"noopener noreferrer\">Geometric GAN<\/a> Lim &amp; Ye, 2017<br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1802.05957\" target=\"_blank\" rel=\"noopener noreferrer\">Spectral Normalization for Generative Adversarial Networks<\/a> Miyato, Kataoka, Koyama, Yoshida, 2018<br \/>\n\u25cfFeature Matching Loss<br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1711.11585\" target=\"_blank\" rel=\"noopener noreferrer\">High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs<\/a><br \/>\n\u25cfInstance Normalization<br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1607.08022\" target=\"_blank\" rel=\"noopener noreferrer\">Instance Normalization: The Missing Ingredient for Fast Stylization<\/a><br \/>\n\u25cfSpectral Normalization<br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1802.05957\" target=\"_blank\" rel=\"noopener noreferrer\">Spectral Normalization for Generative Adversarial Networks<\/a><br \/>\n\u25cfAdaptive Instance Normalization (AdaIN)<br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1703.06868\" target=\"_blank\" rel=\"noopener noreferrer\">Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization<\/a><br \/>\n\u25cfSelf-Attention<br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1805.08318\" target=\"_blank\" rel=\"noopener noreferrer\">Self-Attention Generative Adversarial Networks<\/a><br \/>\n\u25cfProjection Discriminator<br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1802.05637\" target=\"_blank\" rel=\"noopener noreferrer\">cGANs with Projection Discriminator<\/a><br \/>\n\u25cfInception Score<br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1606.03498\" target=\"_blank\" rel=\"noopener noreferrer\">Improved Techniques for Training GANs<\/a><br \/>\n\u25cfFr\u00e9chet Inception Distance (FID)<br \/>\n\u30fb<a 
href=\"https:\/\/arxiv.org\/abs\/1706.08500\" target=\"_blank\" rel=\"noopener noreferrer\">GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium<\/a><br \/>\n\u25cfStructural Similarity (SSIM)<br \/>\n\u30fb<a href=\"https:\/\/www.cns.nyu.edu\/pub\/lcv\/wang03-preprint.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">Image Quality Assessment: From Error Visibility to Structural Similarity<\/a><br \/>\n\u25cfPerception-Distortion Tradeoff<br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1711.06077\" target=\"_blank\" rel=\"noopener noreferrer\">The Perception-Distortion Tradeoff<\/a><br \/>\n\u25cfCosine Similarity (CSIM)<br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1801.07698\" target=\"_blank\" rel=\"noopener noreferrer\">ArcFace: Additive Angular Margin Loss for Deep Face Recognition<\/a><\/p>\n<p>\u3010Meta Learning\u3011<br \/>\n<a href=\"https:\/\/lilianweng.github.io\/lil-log\/2018\/11\/30\/meta-learning.html\" target=\"_blank\" rel=\"noopener noreferrer\">Meta-Learning: Learning to Learn Fast &#8211; Lil&#8217;Log<\/a><br \/>\n<a href=\"https:\/\/qiita.com\/ell\/items\/9e9de65521c8b935d28f\" target=\"_blank\" rel=\"noopener\">What is Few-shot Learning? \u3010Generalizing from a few examples: A survey on few-shot learning\u3011<\/a><br \/>\n\u30fb<a href=\"https:\/\/www.cse.ust.hk\/~yqsong\/papers\/2018-NIPS-MetaGAN-long.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">MetaGAN: An Adversarial Approach to Few-Shot Learning<\/a><br \/>\n\u25cfMAML<br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1703.03400\" target=\"_blank\" rel=\"noopener noreferrer\">Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks<\/a><br \/>\n\u25cfReptile (FOMAML)<br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1803.02999\" target=\"_blank\" rel=\"noopener noreferrer\">On First-Order Meta-Learning Algorithms<\/a><br \/>\n\u25cfImplicit MAML<br \/>\n\u30fb<a 
href=\"https:\/\/arxiv.org\/abs\/1909.04630\" target=\"_blank\" rel=\"noopener noreferrer\">Meta-Learning with Implicit Gradients<\/a><br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1909.05557\" target=\"_blank\" rel=\"noopener noreferrer\">Modular Meta-Learning with Shrinkage<\/a><br \/>\n\u25cfCAVIA<br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1810.03642\" target=\"_blank\" rel=\"noopener noreferrer\">Fast Context Adaptation via Meta-Learning<\/a><br \/>\n\u25cfTAML<br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1805.07722\" target=\"_blank\" rel=\"noopener noreferrer\">Task-Agnostic Meta-Learning for Few-shot Learning<\/a><\/p>\n<p>\u3010The Lottery Ticket Hypothesis\u3011<br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1803.03635\" target=\"_blank\" rel=\"noopener noreferrer\">The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks<\/a><br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/1911.13299\" target=\"_blank\" rel=\"noopener noreferrer\">What&#8217;s Hidden in a Randomly Weighted Neural Network?<\/a><br \/>\n\u30fb<a href=\"https:\/\/arxiv.org\/abs\/2003.00152\" target=\"_blank\" rel=\"noopener noreferrer\">Training BatchNorm and Only BatchNorm: On the Expressive Power of Random Features in CNNs<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>\u3010Tips\u3011 \u30fbdlbook_notation \u30fbFormatting Inst &#8230; 
<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-22088","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/blog.yokokanno.com\/index.php?rest_route=\/wp\/v2\/pages\/22088","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.yokokanno.com\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/blog.yokokanno.com\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/blog.yokokanno.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.yokokanno.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=22088"}],"version-history":[{"count":37,"href":"https:\/\/blog.yokokanno.com\/index.php?rest_route=\/wp\/v2\/pages\/22088\/revisions"}],"predecessor-version":[{"id":22669,"href":"https:\/\/blog.yokokanno.com\/index.php?rest_route=\/wp\/v2\/pages\/22088\/revisions\/22669"}],"wp:attachment":[{"href":"https:\/\/blog.yokokanno.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=22088"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}