ResNet

Abstract

  • Deeper neural networks are harder to train. The paper presents a residual learning framework that eases the training of networks that are substantially deeper than those used previously

  • The paper reformulates the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions: the stacked layers fit F(x) = H(x) − x rather than fitting the desired mapping H(x) directly (see the sketch after this list)

  • The paper provides evidence that these residual networks are easier to optimize and can gain accuracy from considerably increased depth

  • On the ImageNet dataset, the paper evaluates a residual network with 152 layers, 8 times deeper than VGG nets (2014) but with lower computational complexity (fewer FLOPs)

  • An analysis on the CIFAR-10 dataset is also presented, with networks of 100 and 1000 layers

  • The paper argues that the depth of representations is of central importance for many visual recognition tasks; solely due to the extremely deep representations, it obtains a 28% relative improvement on the COCO object detection dataset

  • Based on these deep residual nets, the authors won 1st place on the ILSVRC & COCO 2015 tasks of ImageNet detection, ImageNet localisation, COCO detection, and COCO segmentation
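A minimal sketch of the residual idea, assuming PyTorch (not the authors' reference code; the class name BasicBlock and the layer sizes are illustrative). It follows the paper's formulation y = F(x, {W_i}) + x: the stacked layers learn the residual F(x) = H(x) − x with reference to the input x, and an identity shortcut adds x back before the final activation.

import torch
import torch.nn as nn


class BasicBlock(nn.Module):
    """Two 3x3 conv layers plus an identity shortcut (illustrative sketch)."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Stacked layers learn the residual F(x) = H(x) - x ...
        residual = self.bn2(self.conv2(torch.relu(self.bn1(self.conv1(x)))))
        # ... and the identity shortcut restores y = F(x) + x.
        return torch.relu(residual + x)


# Usage: the block maps a feature map to one of the same shape,
# so the identity shortcut adds no extra parameters.
if __name__ == "__main__":
    block = BasicBlock(64)
    out = block(torch.randn(1, 64, 56, 56))
    print(out.shape)  # torch.Size([1, 64, 56, 56])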

(Example Image) Network architectures for ImageNet

Left: the VGG-19 model, as a reference
Middle: a plain network with 34 parameter layers
Right: a residual network with 34 parameter layers


@misc{he2015deep,
  title={Deep Residual Learning for Image Recognition},
  author={Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
  year={2015},
  eprint={1512.03385},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}


References

  • https://arxiv.org/abs/1512.03385