Deep Learning for AI by Yoshua Bengio, Monday, April 16 at 11:30 a.m.

Monday, April 16, 11:30 – 12:30
Université de Montréal,
Pavillon André-Aisenstadt,
Room 1360

Opening keynote lecture
Deep Learning for AI

Yoshua Bengio (Université de Montréal) 

There has recently been impressive progress with brain-inspired statistical learning algorithms based on the idea of learning multiple levels of representation, also known as neural networks or deep learning. They shine in artificial intelligence tasks involving the perception and generation of sensory data such as images or sounds, and to some extent in understanding and generating natural language. We have proposed new generative models whose training frameworks differ greatly from the traditional maximum likelihood framework and instead borrow from game theory. Theoretical understanding of the success of deep learning is a work in progress, but it relies on representation aspects as well as optimization aspects, which interact. At its heart is the ability of these learning mechanisms to capitalize on the compositional nature of the underlying data distributions: some functions can be represented exponentially more efficiently with deep distributed networks than with approaches, such as standard non-parametric methods, that lack both depth and distributed representations. On the optimization side, we now have evidence that local minima (due to the highly non-convex nature of the training objective) may not be as much of a problem as was thought a few years ago, and that training with variants of stochastic gradient descent actually helps to quickly find solutions that generalize well. Finally, interesting new questions and answers are arising in learning theory for deep networks: why even very large networks do not necessarily overfit, and how the representation-forming structure of these networks may give rise to better error bounds that do not strictly depend on the i.i.d. data hypothesis.
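For readers curious about the training procedure the abstract alludes to, here is a minimal illustrative sketch (not taken from the talk) of stochastic gradient descent on a tiny two-layer network with a non-convex objective; the data, architecture, and hyperparameters are all invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) plus noise.
X = rng.uniform(-3.0, 3.0, size=(256, 1))
y = np.sin(X) + 0.1 * rng.normal(size=X.shape)

# A small two-layer network: a depth-2, distributed (vector) representation.
W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

lr, batch = 0.05, 32
for step in range(2000):
    idx = rng.integers(0, len(X), size=batch)   # random minibatch: the "stochastic" part
    xb, yb = X[idx], y[idx]
    h = np.tanh(xb @ W1 + b1)                   # hidden (distributed) representation
    pred = h @ W2 + b2
    err = pred - yb                             # gradient of squared error w.r.t. pred (up to 2x)
    # Backpropagate through both layers.
    gW2 = h.T @ err / batch
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)          # tanh derivative
    gW1 = xb.T @ dh / batch
    gb1 = dh.mean(axis=0)
    # SGD parameter update.
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= lr * g

Each update uses only a small random minibatch, which is the noisy, stochastic ingredient the abstract credits with quickly finding solutions that generalize well on a highly non-convex objective.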

March 14 – Lecture by Simon Singh: Homer’s Last Theorem — From Fermat to The Simpsons

Join Simon Singh, one of the world’s most popular science and maths writers, on a whistle-stop tour through the bestselling books he has written over the last two decades. Fermat’s Last Theorem looks at one of the biggest mathematical puzzles of the millennium; The Code Book shares the secrets of cryptology; and his latest book, The Simpsons and Their Mathematical Secrets, explores the mathematics hidden in the world’s most popular TV show. Simon will also discuss his other books: Big Bang, which traces the history of cosmology, and Trick or Treatment?, which asks some hard questions about alternative medicine.
Date: March 14, 2018
Time: 7:30 p.m.
Location: Université de Montréal, Pavillon Jean-Coutu, Room S1-151

March 14 is Pi Day!
Pi-themed activities starting at 7 p.m.

The lecture will be followed by a wine reception.


March 12 – 16, 2018 – Nirenberg Lectures by Eugenia Malinnikova

Nirenberg Lectures organizers: Pengfei Guan (McGill), Dima Jakobson (McGill), Iosif Polterovich (Montréal), Alina Stancu (Concordia)

The CRM Nirenberg Lectures in Geometric Analysis have been held annually since 2014. The series is named in honour of Louis Nirenberg, one of the most eminent geometric analysts of our time. The 2018 lectures will be given by Eugenia Malinnikova, professor at the Norwegian University of Science and Technology in Trondheim. Malinnikova’s contributions include groundbreaking joint work with A. Logunov on the nodal geometry of Laplace eigenfunctions, which led to the proof of two major conjectures in the field due to Shing-Tung Yau and Nikolai Nadirashvili. Eugenia Malinnikova’s scientific achievements earned her a Clay Research Award in 2017 and an invitation to speak at the 2018 ICM in Rio de Janeiro.
