This is the recording of a talk given on the 2nd of February 2021, on my work and that of my colleagues at SynSense. It was presented as part of a series of weekly talks organised by the tinyML Foundation.
A recording of a non-technical talk I gave to a popular audience as part of the 'Edinburgh School of AI' Meetup, about the differences and similarities between the brain and deep artificial nets.
My Towards Data Science post that provides a lay explanation of our paper "Optimizing the energy consumption of spiking neural networks for neuromorphic applications", now published in Frontiers.
With Python, scikit-learn, and handwritten digits. This is a notebook that I made for a high-school student who visited our lab. It is a simple introduction to dimensionality reduction, and should not be taken as a claim that I'm an expert in unsupervised learning.
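The notebook itself isn't reproduced here, but the core idea can be sketched in a few lines: scikit-learn's PCA projects the 64-dimensional pixel vectors of the built-in handwritten-digits dataset down to two dimensions. (This is a minimal illustration of the technique, not the actual notebook.)

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

# Load the 8x8 handwritten digits: 1797 samples, 64 pixel features each
digits = load_digits()
X = digits.data

# Project every digit image onto its first two principal components
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)

print(X_2d.shape)  # (1797, 2)
```

Plotting `X_2d` coloured by `digits.target` already shows the ten digit classes forming rough clusters, which is the point the notebook builds towards.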
Wilson and Cowan, in the seventies, developed a dynamical-systems approach to the study of the large-scale behaviour of neuronal populations. Their approach ignores the behaviour of single neurons and instead works at the level of population firing rates (roughly, the number of neurons that fire per unit time) for two subpopulations: the excitatory neurons and the inhibitory neurons.
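The two-population model described above can be sketched in a few lines of Python. This is a minimal Euler integration of the standard Wilson-Cowan rate equations, with illustrative coupling weights and inputs chosen for demonstration (they are not taken from any specific paper):

```python
import numpy as np

def sigmoid(x):
    """Sigmoidal activation mapping net input to a firing rate in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def simulate_wilson_cowan(T=200.0, dt=0.1,
                          w_ee=16.0, w_ei=12.0, w_ie=15.0, w_ii=3.0,
                          p_e=1.25, p_i=0.0, tau_e=1.0, tau_i=1.0):
    """Euler-integrate the Wilson-Cowan equations for one excitatory (E)
    and one inhibitory (I) population; parameter values are illustrative."""
    n = int(T / dt)
    E = np.zeros(n)
    I = np.zeros(n)
    E[0], I[0] = 0.1, 0.05
    for t in range(n - 1):
        # Each population relaxes towards a sigmoid of its net input:
        # recurrent excitation, inhibition, and external drive p.
        dE = (-E[t] + sigmoid(w_ee * E[t] - w_ei * I[t] + p_e)) / tau_e
        dI = (-I[t] + sigmoid(w_ie * E[t] - w_ii * I[t] + p_i)) / tau_i
        E[t + 1] = E[t] + dt * dE
        I[t + 1] = I[t] + dt * dI
    return E, I

E, I = simulate_wilson_cowan()
```

With these couplings the excitatory and inhibitory rates settle into oscillations, the kind of collective behaviour that is invisible at the single-neuron level but natural in the population description.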
Today, I went to the 'Deep Learning and the Brain' workshop. This post includes some excellent observations I heard there on analogies between deep artificial networks and the cortex. I have also included yesterday's contribution by Yoshua Bengio.
Random samples from days three and four.
Today: sensory processing, neural dynamics, Wei Ji Ma's personal life, memory, reinforcement learning.
Today: excitement, a food reception, announcements, two wonderful opening talks, and a poster session full of sleepy Europeans.