February 23, 2017

The Cosyne Chronicles, vol. 1

I have some notes on a few highlights from the first day at Cosyne 2017, selected in a completely random way, and I thought I'd share them with you. Having a look at the #cosyne17 tweets also gives plenty of information, if you're interested in following what is happening. What follows is my personal take on what I learned, which is about 1% of what I could have learned.

The big news is that next year the conference will move to Denver, and in two years it will, for the first time, be held in Europe, in Lisbon. I have a personal love story with Lisbon, so I'm really excited about this.

There were two great opening talks. The first, by Surya Ganguli (Stanford), covered a lot of ideas about the interplay between data availability and models, dimensionality, and the potential use of deep learning in neuroscience. Among the concrete results, they tried using a CNN to reproduce the retina's behaviour, in the same way people have done with linear-nonlinear models and generalised linear models. CNNs seem to perform better than both of the above, and aren't as data-hungry as one might think. The ideal CNN has three layers, like the retina itself. They also recorded from bipolar cells (experimentally challenging) and showed that the activation of the BCs resembles the activation of the hidden layer of their CNN, even though the network never had access to BC data. I'm personally fascinated by the connections between deep learning and sensory neuroscience, so it was great to hear this. There will be more about it in tomorrow's post.
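Out of curiosity, here is a minimal sketch of what a three-layer CNN mapping stimulus movies to ganglion-cell firing rates could look like. This is my own toy version (in PyTorch), not the speakers' actual architecture; all layer sizes, kernel sizes and parameter values are made up for illustration.

```python
# A toy three-layer CNN predicting retinal ganglion cell firing rates from a
# stimulus movie. Architecture and numbers are illustrative, not from the talk.
import torch
import torch.nn as nn

class RetinaCNN(nn.Module):
    def __init__(self, n_frames=40, n_cells=10):
        super().__init__()
        self.net = nn.Sequential(
            # layer 1: spatiotemporal filters (stimulus history as input channels)
            nn.Conv2d(n_frames, 8, kernel_size=15), nn.ReLU(),
            # layer 2: a second bank of spatial filters
            nn.Conv2d(8, 16, kernel_size=9), nn.ReLU(),
            nn.Flatten(),
            # layer 3: dense readout, one unit per recorded ganglion cell;
            # Softplus keeps the predicted firing rates non-negative
            nn.LazyLinear(n_cells), nn.Softplus(),
        )

    def forward(self, x):  # x: (batch, n_frames, height, width)
        return self.net(x)

model = RetinaCNN()
rates = model(torch.rand(2, 40, 50, 50))  # two stimulus clips of 40 frames each
print(rates.shape)  # -> torch.Size([2, 10]), predicted rates for 10 cells
```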

The second talk was by Greg Gage of Backyard Brains (a startup). He was an absolutely excellent speaker, keeping the audience fascinated and entertained. He showed great results in creating low-cost, portable experimental devices for neuroscience, initially designed for popularisation and teaching. He had a device for single-cell recording that costs less than 100 dollars to make, and he tried it on stage, with a member of the audience (a theoretician) sacrificing a cockroach and sticking electrodes into it. He could record electromyographic signals with an Arduino, and then used those to have one volunteer control the arm of another. Yes, literally a person moving another person's hand without any intervention by the latter. He showed how a certain squid can change colour very quickly via a neural signal, and how this can be hacked to make the squid change colours to the bass line of a hip hop song. He showed how to radio-control a cockroach with a smartphone, making it turn left or right. And he made us all laugh.

Finally, a very random selection of ideas from the poster session:

  • One finds that certain V1 cells are selective to gratings. Are they selective only to Cartesian gratings, or also to polar ones, and why? (I don’t know the answer.)
  • “Grid cells may be understood as eigenvectors of place-cell relationships” (help making sense of this is welcome; see the toy sketch after this list).
  • Using Bayesian inference with a Gabor prior to infer V1 receptive fields from very few spikes.
  • Quite a few people are starting to use machine learning (it was inevitable), in particular for decoding.
  • Fovea as an emergent property of visual attention: train a NN by gradient descent, letting it decide the position and size of “retinal ganglion cell” receptive fields. If “the eye” is allowed to move, you end up with a fovea-like region of small, densely packed receptive fields, suited to reading fine detail, and broad RFs elsewhere. This doesn’t happen if “the eye” is allowed to zoom instead of move.
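On the grid-cells-as-eigenvectors point above: I don’t know the poster’s actual construction, but here is a toy numerical experiment of my own that conveys the flavour of the idea. Simulate Gaussian place fields on a 1-D track, compute the position-by-position covariance induced by the place-cell population, and look at its leading eigenvectors: they come out spatially periodic, which is at least suggestive of grid-like structure. Everything here (names, numbers) is mine, for illustration only.

```python
# Toy illustration: eigenvectors of the covariance between positions, as seen
# through a population of place cells, turn out to be spatially periodic patterns.
import numpy as np

n_cells, n_pos = 200, 100
positions = np.linspace(0, 1, n_pos)
centres = np.random.rand(n_cells)   # random place-field centres on a 1-D track
sigma = 0.05                        # place-field width

# place-cell activity matrix: rows = cells, columns = positions on the track
A = np.exp(-(positions[None, :] - centres[:, None]) ** 2 / (2 * sigma ** 2))
A -= A.mean(axis=1, keepdims=True)  # centre each cell's tuning curve

C = A.T @ A / n_cells               # position-by-position covariance
eigvals, eigvecs = np.linalg.eigh(C)  # eigenvalues in ascending order

# the leading eigenvectors look like sinusoids whose spatial frequency grows
# as you move down the spectrum (periodic, hence loosely "grid-like")
for k in range(1, 4):
    v = eigvecs[:, -k]
    crossings = int(np.sum(np.diff(np.sign(v)) != 0))
    print(f"eigenvector {k}: {crossings} zero crossings")
```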

I hope this was interesting for those who couldn’t go to Cosyne, or for Cosyne attendees who perhaps came away with a completely different understanding of these topics.

Next update on Friday night (Saturday morning Europe time).