Geoffrey E. Hinton

https://www.cs.toronto.edu/~hinton/


I am an Engineering Fellow at Google where I manage Brain Team Toronto, which is a new part of the Google Brain Team and is located at Google's Toronto office at 111 Richmond Street. Brain Team Toronto does basic research on ways to improve neural network learning techniques. I also do pro bono work as the Chief Scientific Adviser of the new Vector Institute. I am also an Emeritus Professor at the University of Toronto. 

Department of Computer Science
University of Toronto
6 King's College Rd.
Toronto, Ontario

email: geoffrey [dot] hinton [at] gmail [dot] com
voice: send email
fax: scan and send email

Information for prospective students:
I advise interns at Brain Team Toronto.
I also advise some of the residents in the Google Brain Residents Program.
I will not be taking any more visiting students, summer students, or visitors at the University of Toronto. I will not be the sole advisor of any new graduate students, but I may co-advise a few graduate students with Prof. Roger Grosse or soon-to-be Prof. Jimmy Ba.

News 
Results of the 2012 competition to recognize 1000 different types of object
How George Dahl won the competition to predict the activity of potential drugs
How Vlad Mnih won the competition to predict job salaries from job advertisements
How Laurens van der Maaten won the competition to visualize a dataset of potential drugs

Using big data to make people vote against their own interests 
A possible motive for making people vote against their own interests 

Basic papers on deep learning

Hinton, G. E., Osindero, S. and Teh, Y. (2006)
A fast learning algorithm for deep belief nets.
Neural Computation, 18, pp 1527-1554. [pdf]
Movies of the neural network generating and recognizing digits 

Hinton, G. E. and Salakhutdinov, R. R. (2006)
Reducing the dimensionality of data with neural networks.
Science, Vol. 313. no. 5786, pp. 504 - 507, 28 July 2006.
[ full paper ] [ supporting online material (pdf) ] [ Matlab code ]

 LeCun, Y., Bengio, Y. and Hinton, G. E. (2015)
Deep Learning
Nature, Vol. 521, pp 436-444. [pdf]

Papers on deep learning without much math

Hinton, G. E. (2007)
To recognize shapes, first learn to generate images
In P. Cisek, T. Drew and J. Kalaska (Eds.)
Computational Neuroscience: Theoretical Insights into Brain Function. Elsevier. [pdf of final draft]

Hinton, G. E. (2007)
Learning Multiple Layers of Representation.
Trends in Cognitive Sciences, Vol. 11, pp 428-434. [pdf]

Hinton, G. E. (2014)
Where do features come from?
Cognitive Science, Vol. 38(6), pp 1078-1101. [pdf]

A practical guide to training restricted Boltzmann machines
[pdf]
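The guide above covers the details, but its core recipe is one step of contrastive divergence (CD-1). The following is a minimal NumPy sketch of a single CD-1 update for a binary RBM, with toy sizes and a made-up training case chosen purely for illustration; real settings (layer sizes, learning rate, mini-batches, momentum, weight decay) should follow the guide.

```python
import numpy as np

rng = np.random.default_rng(0)

n_vis, n_hid, lr = 6, 4, 0.1           # toy sizes and learning rate (illustrative)
W = rng.normal(0.0, 0.01, (n_vis, n_hid))  # small random initial weights
b_v = np.zeros(n_vis)                  # visible biases
b_h = np.zeros(n_hid)                  # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0):
    """One contrastive-divergence (CD-1) step for a binary RBM."""
    global W, b_v, b_h
    # Positive phase: hidden probabilities and a binary sample, given the data.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(n_hid) < p_h0).astype(float)
    # Negative phase: one reconstruction step back down and up.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    v1 = (rng.random(n_vis) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + b_h)
    # Update: difference between data statistics and reconstruction statistics.
    W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
    b_v += lr * (v0 - v1)
    b_h += lr * (p_h0 - p_h1)

v = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 0.0])  # one toy binary training case
cd1_update(v)
```

Using the hidden probabilities (rather than binary samples) in the update, as done for `p_h0` and `p_h1` here, is one of the variance-reduction choices the guide discusses.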

Recent Papers

 Shazeer, N., Mirhoseini, A., Maziarz, K., Davis, A., Le, Q., Hinton, G., & Dean, J. (2017)
Outrageously large neural networks: The sparsely-gated mixture-of-experts layer
arXiv preprint arXiv:1701.06538 [pdf]

 Ba, J. L., Hinton, G. E., Mnih, V., Leibo, J. Z. and Ionescu, C. (2016)
Using Fast Weights to Attend to the Recent Past
NIPS-2016, arXiv preprint arXiv:1610.06258v2 [pdf]

 Ba, J. L., Kiros, J. R. and Hinton, G. E. (2016)
Layer normalization
Deep Learning Symposium, NIPS-2016, arXiv preprint arXiv:1607.06450 [pdf]

Eslami, S. M. A., Heess, N., Weber, T., Tassa, Y., Szepesvari, D., Kavukcuoglu, K. and Hinton, G. E. (2016)
Attend, Infer, Repeat: Fast Scene Understanding with Generative Models
NIPS-2016, arXiv preprint arXiv:1603.08575v3 [pdf]

LeCun, Y., Bengio, Y. and Hinton, G. E. (2015)
Deep Learning
Nature, Vol. 521, pp 436-444. [pdf]

Hinton, G. E., Vinyals, O., and Dean, J. (2015)
Distilling the knowledge in a neural network
arXiv preprint arXiv:1503.02531 [pdf]

Vinyals, O., Kaiser, L., Koo, T., Petrov, S., Sutskever, I., & Hinton, G. E. (2014)
Grammar as a foreign language.
arXiv preprint arXiv:1412.7449 [pdf]

Hinton, G. E. (2014)
Where do features come from?
Cognitive Science, Vol. 38(6), pp 1078-1101. [pdf]

Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. and Salakhutdinov, R. (2014)
Dropout: A simple way to prevent neural networks from overfitting
The Journal of Machine Learning Research, 15(1), pp 1929-1958. [pdf]

Srivastava, N., Salakhutdinov, R. R. and Hinton, G. E. (2013)
Modeling Documents with a Deep Boltzmann Machine
arXiv preprint arXiv:1309.6865 [pdf]

Graves, A., Mohamed, A. and Hinton, G. E. (2013)
Speech Recognition with Deep Recurrent Neural Networks
In IEEE International Conference on Acoustic Speech and Signal Processing (ICASSP 2013) Vancouver, 2013. [pdf]

Joseph Turian's map of 2500 English words produced by using t-SNE on the word feature vectors learned by Collobert & Weston, ICML 2008    

Doing analogies by using vector algebra on word embeddings    
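The analogy trick linked above amounts to simple vector arithmetic: to solve "man is to king as woman is to ?", compute vec(king) - vec(man) + vec(woman) and find the nearest word vector. A minimal sketch with hand-made toy vectors (real embeddings would come from a trained model, such as the Collobert & Weston features mentioned above):

```python
import numpy as np

# Toy 3-d embeddings, invented for illustration only.
emb = {
    "king":  np.array([0.8, 0.9, 0.1]),
    "queen": np.array([0.8, 0.1, 0.9]),
    "man":   np.array([0.3, 0.9, 0.1]),
    "woman": np.array([0.3, 0.1, 0.9]),
}

def analogy(a, b, c):
    """Return the word whose vector is closest (by cosine similarity)
    to vec(b) - vec(a) + vec(c), excluding the three query words."""
    target = emb[b] - emb[a] + emb[c]
    best, best_sim = None, -np.inf
    for word, vec in emb.items():
        if word in (a, b, c):
            continue
        sim = vec @ target / (np.linalg.norm(vec) * np.linalg.norm(target))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

print(analogy("man", "king", "woman"))  # -> queen
```

With these toy vectors the offset king - man + woman lands exactly on queen; with learned embeddings the match is approximate, which is why the nearest-neighbour search is needed.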

Original source: https://www.cnblogs.com/rsapaper/p/7766905.html