Deep learning and computer vision resources (continually growing)

LSTM:  http://colah.github.io/posts/2015-08-Understanding-LSTMs/ 
A curated list of PyTorch deep learning projects (Git source roundup): https://blog.csdn.net/u012969412/article/details/77479269?utm_source=blogxgwz0
Assorted PyTorch projects: https://www.ritchieng.com/the-incredible-pytorch/
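
As a companion to the Understanding LSTMs walkthrough linked above, here is a minimal single-step LSTM cell in NumPy; the gate layout and sizes are illustrative only, not taken from any of the linked posts:

```python
import numpy as np

def lstm_cell(x, h_prev, c_prev, W, U, b):
    """One LSTM step: gates computed from input x and previous hidden state h_prev."""
    z = W @ x + U @ h_prev + b           # stacked pre-activations for all four gates
    H = h_prev.shape[0]
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    i = sigmoid(z[0:H])                  # input gate: how much new information to admit
    f = sigmoid(z[H:2*H])                # forget gate: how much old cell state to keep
    o = sigmoid(z[2*H:3*H])              # output gate: how much cell state to expose
    g = np.tanh(z[3*H:4*H])              # candidate cell update
    c = f * c_prev + i * g               # new cell state
    h = o * np.tanh(c)                   # new hidden state
    return h, c

rng = np.random.default_rng(0)
D, H = 4, 3                              # arbitrary input and hidden sizes
W = rng.standard_normal((4 * H, D)) * 0.1
U = rng.standard_normal((4 * H, H)) * 0.1
b = np.zeros(4 * H)
h, c = lstm_cell(rng.standard_normal(D), np.zeros(H), np.zeros(H), W, U, b)
```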

Deep Learning Overview (introductory tutorial): http://www.cnblogs.com/liuyihai/p/8321299.html
A deeper understanding and application of deep learning (essential knowledge for AI practitioners): https://www.cnblogs.com/liuyihai/p/8449058.html
Understand deep learning in one day (Hung-yi Lee's tutorial): http://www.cnblogs.com/liuyihai/p/8448977.html

Artificial Intelligence (gitbook): https://leonardoaraujosantos.gitbooks.io/artificial-inteligence/

The Unreasonable Effectiveness of Recurrent Neural Networks http://karpathy.github.io/2015/05/21/rnn-effectiveness/ 

Awesome PyTorch list (a large collection of PyTorch resources): https://github.com/bharathgs/Awesome-pytorch-list

CVPR 2018 image caption generation paper guide (including workshops): https://blog.csdn.net/m0_37052320/article/details/80947049

Timeline of top computer vision conferences in 2019: https://blog.csdn.net/hitzijiyingcai/article/details/81709755

Image captioning paper collection: https://github.com/tangzhenyu/Image_Captioning_DL

The Annotated Transformer: http://nlp.seas.harvard.edu/2018/04/03/attention.html
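
The Annotated Transformer implements each piece of the paper in PyTorch; one self-contained piece, the sinusoidal positional encoding, can be sketched in NumPy as follows (dimensions here are arbitrary):

```python
import numpy as np

def positional_encoding(max_len, d_model):
    """Sinusoidal positional encodings as in "Attention Is All You Need"."""
    pos = np.arange(max_len)[:, None]                   # (max_len, 1)
    i = np.arange(d_model // 2)[None, :]                # (1, d_model/2)
    angles = pos / np.power(10000.0, 2 * i / d_model)   # one frequency per dim pair
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)                        # even dimensions: sine
    pe[:, 1::2] = np.cos(angles)                        # odd dimensions: cosine
    return pe

pe = positional_encoding(50, 16)
```

Each position gets a unique pattern of sinusoids at geometrically spaced frequencies, so relative offsets correspond to fixed linear transformations of the encoding.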

A field guide to image captioning: a roundup of deep learning methods for generating image descriptions: https://blog.csdn.net/hanss2/article/details/80732318

Notes on Andrew Ng's DeepLearning.ai Deep Learning course: https://blog.csdn.net/Koala_Tree/article/details/79913655

Cheatsheets for Stanford's CS 230 Deep Learning: https://github.com/afshinea/stanford-cs-230-deep-learning

Cheatsheets for Stanford's CS 229 Machine Learning: https://github.com/afshinea/stanford-cs-229-machine-learning

An animated breakdown of attention (illustrated with machine translation): https://towardsdatascience.com/attn-illustrated-attention-5ec4ad276ee3
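
The attention animation linked above centers on one computation: a weighted sum of values, with weights given by query-key similarity. A minimal scaled dot-product version in NumPy (shapes chosen arbitrarily for illustration):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # similarity of each query to each key
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.standard_normal((2, 4))   # 2 queries
K = rng.standard_normal((3, 4))   # 3 keys
V = rng.standard_normal((3, 5))   # 3 values
out, w = scaled_dot_product_attention(Q, K, V)
```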

Tensor differentiation and computation graphs: https://mp.weixin.qq.com/s/HQpaAg00j-teybSQJAb-ZQ
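
The post above builds on the same idea every autograd engine uses: record local derivatives on a computation graph, then accumulate them backwards with the chain rule. A toy scalar version, illustrative only and handling just `+` and `*` on a simple chain:

```python
class Node:
    """A scalar node in a computation graph that records how to backpropagate."""
    def __init__(self, value, parents=(), grad_fns=()):
        self.value = value
        self.parents = parents      # nodes this one was computed from
        self.grad_fns = grad_fns    # local derivative w.r.t. each parent
        self.grad = 0.0

    def __mul__(self, other):
        return Node(self.value * other.value, (self, other),
                    (lambda g: g * other.value, lambda g: g * self.value))

    def __add__(self, other):
        return Node(self.value + other.value, (self, other),
                    (lambda g: g, lambda g: g))

    def backward(self):
        """Walk the graph from the output, accumulating gradients into parents."""
        self.grad = 1.0
        stack = [self]
        while stack:
            node = stack.pop()
            for parent, fn in zip(node.parents, node.grad_fns):
                parent.grad += fn(node.grad)
                stack.append(parent)

x, w, b = Node(3.0), Node(2.0), Node(1.0)
y = x * w + b      # forward pass builds the graph: y = 3*2 + 1
y.backward()       # dy/dx = w, dy/dw = x, dy/db = 1
```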

 

TorchGAN: https://github.com/torchgan/torchgan

TorchText: https://github.com/pytorch/text

FastNLP: https://fastnlp.readthedocs.io/en/latest/

NLP-tutorial: https://github.com/graykode/nlp-tutorial

 

PyTorch Cookbook (a collection of common code snippets): https://zhuanlan.zhihu.com/p/59205847

PyTorch LSTM: https://towardsdatascience.com/taming-lstms-variable-sized-mini-batches-and-why-pytorch-is-good-for-your-health-61d35642972e

PyTorch Basics: https://medium.com/@aakashns/pytorch-basics-tensors-and-gradients-eb2f6e8a6eee

Text Summarization using Deep Learning: https://towardsdatascience.com/text-summarization-using-deep-learning-6e379ed2e89c

Convolutional Neural Networks from the ground up: https://towardsdatascience.com/convolutional-neural-networks-from-the-ground-up-c67bb41454e1
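
The CNN article above works through the convolution operation itself; at its core it is a sliding dot product between a kernel and image patches. A naive sketch (not an efficient implementation):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid cross-correlation (what deep learning frameworks call convolution)."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # each output pixel is the kernel dotted with one image patch
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

result = conv2d(np.ones((3, 3)), np.ones((2, 2)))  # every 2x2 patch of ones sums to 4
```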

Attention in RNNs: https://medium.com/datadriveninvestor/attention-in-rnns-321fbcd64f05

Introduction to Computer Vision: https://medium.com/overture-ai/part-1-introduction-to-computer-vision-9a02a393d86d

Evolution of Natural Language Generation: https://medium.com/sfu-big-data/evolution-of-natural-language-generation-c5d7295d6517

The Real Reason behind all the Craze for Deep Learning: https://towardsdatascience.com/decoding-deep-learning-a-big-lie-or-the-next-big-thing-b924298f26d4

Everything About Python — Beginner To Advance: https://medium.com/fintechexplained/everything-about-python-from-beginner-to-advance-level-227d52ef32d2

Python Decorators: https://pouannes.github.io/blog/decorators/
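
The decorators post above covers the core pattern: a function that wraps another function. A minimal example (the `timed` decorator here is illustrative, not from the post):

```python
import functools
import time

def timed(func):
    """Decorator that records how long the wrapped function took."""
    @functools.wraps(func)      # preserve the wrapped function's name and docstring
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        wrapper.last_elapsed = time.perf_counter() - start
        return result
    return wrapper

@timed
def square(x):
    """Return x squared."""
    return x * x

value = square(7)
```

Without `functools.wraps`, `square.__name__` would report `"wrapper"`, which is why the post (like most decorator tutorials) stresses it.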

10 Python Pandas tricks that make your work more efficient: https://towardsdatascience.com/10-python-pandas-tricks-that-make-your-work-more-efficient-2e8e483808ba

The Gumbel-Softmax trick and the Gumbel distribution: https://www.cnblogs.com/initial-h/p/9468974.html
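
The post above explains the trick; a minimal NumPy sketch of drawing one relaxed sample (temperature and class probabilities are arbitrary):

```python
import numpy as np

def gumbel_softmax_sample(logits, tau, rng):
    """Draw a differentiable 'soft' one-hot sample from a categorical distribution.

    Adding Gumbel noise g = -log(-log(u)), u ~ Uniform(0, 1), to the logits and
    taking argmax is the Gumbel-max trick; replacing the argmax with a
    temperature-tau softmax makes the sample differentiable in the logits.
    """
    u = rng.uniform(size=logits.shape)
    g = -np.log(-np.log(u))          # Gumbel(0, 1) noise
    z = (logits + g) / tau
    z -= z.max()                     # numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
sample = gumbel_softmax_sample(np.log(np.array([0.2, 0.5, 0.3])), tau=0.5, rng=rng)
```

Lower temperatures push the sample toward a hard one-hot vector; higher temperatures push it toward uniform.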

Deep Learning Techniques Applied to Natural Language Processing: https://nlpoverview.com

Open Questions about Generative Adversarial Networks: https://distill.pub/2019/gan-open-problems/#advx

Reinforcement learning and sequence generation: https://freeman.one/2019/04/13/reinforced-generation/

Estimators, Loss Functions, Optimizers — Core of ML Algorithms: https://towardsdatascience.com/estimators-loss-functions-optimizers-core-of-ml-algorithms-d603f6b0161a

One LEGO at a Time: Explaining the Math of how Neural Networks Learn with Implementation from Scratch: https://medium.com/towards-artificial-intelligence/one-lego-at-a-time-explaining-the-math-of-how-neural-networks-learn-with-implementation-from-scratch-39144a1cf80

Advanced Topics in Deep Convolutional Neural Networks: https://towardsdatascience.com/advanced-topics-in-deep-convolutional-neural-networks-71ef1190522d

Understanding Tensor Processing Units: https://medium.com/sciforce/understanding-tensor-processing-units-10ff41f50e78

Must-Read Papers on GANs: https://towardsdatascience.com/must-read-papers-on-gans-b665bbae3317

How Neural Networks Are Learning to Write: https://towardsdatascience.com/how-neural-networks-are-learning-to-write-d631b249b499

Deep Learning Illustrated: Building Natural Language Processing Models: https://blog.dominodatalab.com/deep-learning-illustrated-building-natural-language-processing-models/

NLP's ImageNet moment has arrived: https://thegradient.pub/nlp-imagenet/

Handy tools for academic paper writing: https://blog.csdn.net/qq_33373858/article/details/88385690

https://www.qichacha.com/postnews_552e6fa019bcae0d9a629a6e4758e8e9.html

http://www.percent.cn/Case.html

https://data.aliyun.com/product/product_index

http://www.apusic.com/bigdata/analysics

https://www.leiphone.com/news/201706/oSbBAbYPoDrdzsIY.html

https://www.xdatainsight.com/portal/html/anli.html?type=1

https://tech.antfin.com/articles/98

Tianyun Big Data (Beagle Data): http://www.beagledata.com/?page_id=3918

Baidu data science platform: https://cloud.baidu.com/product/jarvis.html

http://www.esensoft.com/products/wonderdm.html

Guoyun Big Data Magic Mirror (Moojnn): http://www.moojnn.com/

 

https://www.jiqizhixin.com/articles/2019-01-07-8

https://zihaocode.com/2019/03/10/gnn-review-2/

 

Merge sort (Python data structures and algorithms tutorial, advanced sorting chapter): https://python-data-structures-and-algorithms.readthedocs.io/zh/latest/13_%E9%AB%98%E7%BA%A7%E6%8E%92%E5%BA%8F%E7%AE%97%E6%B3%95/merge_sort/
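
The tutorial linked above follows the standard divide-and-merge recursion, which fits in a few lines of Python:

```python
def merge_sort(items):
    """Sort a list by recursively splitting it and merging the sorted halves."""
    if len(items) <= 1:
        return items                          # a 0- or 1-element list is sorted
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # take the smaller head each step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]      # one side may have leftovers

result = merge_sort([5, 2, 4, 7, 1, 3, 2, 6])
```

Using `<=` when heads are equal keeps the sort stable; the splitting gives the O(n log n) depth and the merge the linear work per level.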

https://www.cnblogs.com/fatsheep9146/p/5079177.html

Streamlined Dense Video Captioning

Memory-Attended Recurrent Network for Video Captioning

Object-Aware Aggregation With Bidirectional Temporal Graph for Video Captioning

Spatio-Temporal Dynamics and Semantic Attribute Enriched Visual Encoding for Video Captioning

Chen et al. [CVPR 2017] propose an AMC [] learning method for image search. The framework is a jointly learned hierarchy of intra- and inter-attention networks: conditioned on the query's intent, intra-attention networks attend to informative parts within each modality, while a multi-modal inter-attention network promotes the most query-relevant modalities.

Gao et al. [CVPR 2019] present a dynamic fusion method with intra- and inter-modality attention flow for visual question answering.

Lee et al. [ECCV 2018] use a stacked cross-attention network to learn all possible alignments between image regions and words and to capture the fine-grained interplay between image and text.

Building on [IJCAI 2019], Hu et al. [] further design a relation-wise dual attention network to capture latent relations and infer visual-semantic alignments.

Huang et al. [CVPR 2018] train a multi-regional multi-label CNN to predict

 

Li et al. [ICCV 2019] use GCNs for region-relationship reasoning and a GRU for global semantic reasoning to build connections between salient image regions.

Gu et al. [CVPR 2018] incorporate image-to-text and text-to-image generative models into cross-modal feature-embedding learning.

Original post: https://www.cnblogs.com/czhwust/p/resource.html