
Even passive dendrites extend a neuron's computation capacity

A neuron receives its input through passive or active dendrites. Active dendrites can sum excitatory inputs both below and above their arithmetic sum. This feature turns a neuron into a two-layer network capable of universal computation, whereas linearly separable computations previously defined what a single neuron could do. Some neurons, however, lack active dendrites. We therefore ask what happens when dendrites are passive and sum two excitatory inputs only below their arithmetic sum. We enumerate parameter sets, focusing on excitatory inputs, to determine how many computations can be implemented by a neuron model with either active or passive dendrites. We then generalize these numerical results to an arbitrary number of dendrites. First, we show that a single dendrite, whether passive or active, suffices to compute linearly non-separable computations. Second, we prove that a sufficient number of passive dendrites enables a neuron to be universal for positive computations. Third, we show how a neuron can implement these computations using two distinct strategies: (1) a single dendrite suffices to trigger a somatic spike; (2) somatic spiking requires the cooperation of multiple dendrites. Only a neuron with active dendrites can use strategy (1), while a neuron with either passive or active dendrites can use strategy (2). Finally, we employ strategy (2) to implement a linearly non-separable function in a biophysical model with passive dendrites inspired by cerebellar stellate interneurons. We show here that even passive dendrites enable a neuron to extend its computation capacity well beyond what we previously thought.


Even a passive dendrite extends a neuron's computation capacity

A neuron possesses receptive organs called dendrites, which can be active or passive. Active dendrites can sum excitatory inputs both below and above their arithmetic sum. This feature turns a neuron into a two-layer neural network capable of universal computation. Linearly separable computations previously defined what a single neuron could do. Some neurons, however, lack active dendrites. We therefore ask what happens when dendrites are passive and can only sum two excitatory inputs below their arithmetic sum. We enumerate parameter sets, focusing on excitatory inputs, to determine how many computations can be implemented by a neuron model with either an active or a passive dendrite. We then analytically generalize these numerical results to an arbitrary number of dendrites. First, we show that a single dendrite, whether passive or active, suffices to compute linearly non-separable computations. Second, we analytically prove that a sufficient number of passive dendrites enables a neuron to be universal for positive computations. Third, we show how a neuron can implement these computations using two distinct strategies: (1) a single dendrite suffices to trigger a somatic spike; (2) somatic spiking requires the cooperation of multiple dendrites. Only a neuron with active dendrites can use strategy (1), while a neuron with either passive or active dendrites can use strategy (2). Finally, we employ strategy (2) to implement a linearly non-separable function in a biophysical model with passive dendrites inspired by cerebellar stellate interneurons. We show here that even passive dendrites enable a neuron to extend its computation capacity well beyond what we previously thought.
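As a concrete illustration of strategy (2), here is a minimal sketch of such a two-stage model (my own toy code, not the paper's; the saturation level and somatic threshold are illustrative assumptions). Each passive dendrite sums its inputs sublinearly by saturating, and the soma spikes only when both dendrites are driven. The result is the positive, linearly non-separable function (x1 OR x2) AND (x3 OR x4): no single linear threshold unit can compute it, because the two positive patterns (1,0,1,0) and (0,1,0,1) sum to the same vector as the two negative patterns (1,1,0,0) and (0,0,1,1).

```python
from itertools import product

SAT = 1.0    # assumed dendritic saturation level (passive, sublinear summation)
THETA = 2.0  # assumed somatic spike threshold

def passive_dendrite(inputs):
    """Sublinear summation: the output saturates at SAT, below the arithmetic sum."""
    return min(sum(inputs), SAT)

def neuron(x1, x2, x3, x4):
    """Strategy (2): a somatic spike needs the cooperation of both dendrites."""
    d1 = passive_dendrite([x1, x2])  # synapses x1, x2 cluster on dendrite 1
    d2 = passive_dendrite([x3, x4])  # synapses x3, x4 cluster on dendrite 2
    return int(d1 + d2 >= THETA)

def target(x1, x2, x3, x4):
    # (x1 OR x2) AND (x3 OR x4): positive and linearly non-separable.
    return int((x1 or x2) and (x3 or x4))

for x in product([0, 1], repeat=4):
    assert neuron(*x) == target(*x)
print("Two passive (saturating) dendrites compute (x1 OR x2) AND (x3 OR x4).")
```

Note that the saturation is what makes this work: with purely linear dendrites the clustered pattern (1,1,0,0) would also reach the somatic threshold, and the function would collapse to a separable one.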

Rewriting abstracts

Scientific articles follow a common structure. They start with an abstract: a short text, often under 300 words, describing the article's content. Next come the introduction, then the results and methods, and the article ends with a discussion/conclusion. I would like to focus here on abstracts.

The abstract is the beating heart of a paper. Together with the title, it determines whether people will read the article. Yesterday I re-read the abstract of the paper I'm most proud of, and I realized how poorly it was written.

I believe in second chances, even third or fourth ones. Writing is a long and painful exercise, and there is no such thing as a perfect first draft. This is especially true for abstracts: every line, every word deserves to be carefully examined. I am going to rewrite abstracts, first for some of my old papers but also for papers I enjoy reading. The latter is the harder task, because some of them are written by native English speakers who write much better than I do. Yet I find the exercise useful, not only for me but also to promote the papers.

A rewritten abstract offers a new perspective on a paper. I am not talking about a lay abstract or a simplification like an author's summary, but about a proper abstract with all the scientific rigour in it.

Thank you for reading! I hope my rewritten abstracts will please you.

Computing using space in dendrites

After two years of work, my latest paper is finally out (17_05NECO). What I keep realizing is that science can take an awfully long time. New ideas need time to mature and even more time to be accepted. This paper was an uphill battle with editors and peer reviewers. But it is finally out, and that is a good excuse to write a new post.

This paper demonstrates an important point: dendrites can play a crucial role even in the simplest computation. The computation studied here is stimulus selectivity, in other words simply passing on information to signal the presence or absence of a particular input. Even in this case, dendrites make the computation more resilient to synaptic and dendritic failure. The implementation proposed here may seem too complex for such a simple computation: make the preferred inputs the most dispersed rather than the strongest. But it shows that with dendrites you can do more with less: be more resilient with fewer synapses.

Another piece of upcoming work will show that, for the same computation, an implementation employing dendrites uses fewer synapses than a classic one. This becomes particularly interesting once one sees that a saturating dendrite can be implemented using a resistor (which will saturate in all cases).
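Here is a minimal sketch of the dispersion idea (my own toy illustration, not the paper's model; the saturation level, synapse counts, and failure probability are all assumptions). Because each dendrite saturates, a stimulus whose synapses are dispersed across dendrites drives the soma harder than a stimulus whose synapses cluster on one dendrite, and the gap between the two survives random synaptic failure.

```python
import random

SAT = 1.0        # assumed saturation level of each passive dendrite
N_DEND = 4       # assumed number of dendrites
N_SYN = 4        # one unit-weight synapse per active input

def somatic_drive(placement, alive):
    """Sum of saturating dendritic outputs; placement[i] = dendrite of synapse i."""
    per_dend = [0.0] * N_DEND
    for syn, dend in enumerate(placement):
        if alive[syn]:
            per_dend[dend] += 1.0
    return sum(min(d, SAT) for d in per_dend)   # sublinear: each dendrite saturates

dispersed = [0, 1, 2, 3]   # preferred stimulus: synapses spread over all dendrites
clustered = [0, 0, 0, 0]   # non-preferred stimulus: synapses piled on one dendrite

intact = [True] * N_SYN
print(somatic_drive(dispersed, intact))   # 4.0: drives the soma strongly
print(somatic_drive(clustered, intact))   # 1.0: same total weight, weak drive

def mean_drive(placement, p_fail=0.25, trials=10_000):
    """Average drive when every synapse independently fails with probability p_fail."""
    random.seed(0)
    total = 0.0
    for _ in range(trials):
        alive = [random.random() > p_fail for _ in range(N_SYN)]
        total += somatic_drive(placement, alive)
    return total / trials

print(mean_drive(dispersed))  # ~3.0: the preferred response degrades gracefully
print(mean_drive(clustered))  # ~1.0: the selectivity gap persists under failure
```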

A mountain becoming flat underwater.

Some neurons respond preferentially to certain stimuli, like a sound or a picture reminiscent of Jennifer Aniston. Hubel and Wiesel obtained the Nobel Prize, some 50 years ago, for the discovery of stimulus-selective neurons in the cat visual cortex and for the model associated with this discovery. This model shines by its simplicity. Imagine a mountain: each coordinate corresponds to a (visual) stimulus, and the altitude at that coordinate corresponds to how strongly the neuron responds to this stimulus. In more technical terms, the height of a given point equals the depolarization created by this stimulus. Now imagine that this mountain sits in the middle of a sea. The tip of the mountain above the water is the supra-threshold response, i.e. an activity level sufficient to trigger neural activity noticeable by the rest of the brain. This metaphor seems a good way to understand stimulus selectivity in neurons, but like all models it cannot explain everything.
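The metaphor maps onto a one-line model. Here is a toy sketch (my own illustration; the Gaussian tuning curve and the sea levels are arbitrary assumptions) in which the visible response is the depolarization minus the sea level, clipped at zero. It also shows why the observation described next is so counterintuitive.

```python
import numpy as np

stimuli = np.linspace(-3, 3, 7)        # stimulus coordinates
mountain = np.exp(-stimuli**2)         # depolarization per stimulus (toy tuning)

def visible_response(v, sea_level):
    """Supra-threshold response: the part of the mountain above the water."""
    return np.maximum(v - sea_level, 0.0)

print(visible_response(mountain, 0.2))  # three stimuli evoke a visible response
print(visible_response(mountain, 0.6))  # raising the sea level: only one remains
# In this simple model, hyperpolarizing (raising the sea level) makes the
# neuron MORE selective, which is why a tuning curve that flattens as the
# mountain goes underwater is such a surprising observation.
```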

Electrophysiology has recently made astounding progress, and it is now possible to hyperpolarize a neuron in vivo; in our metaphor, it has become feasible to raise the sea level. Scientists used this technique and made an unexpected observation. When they hyperpolarize a neuron (raise the sea level), the mountain, as it goes underwater, becomes flat, meaning that the neuron loses its selectivity and responds equally to all stimuli. Why? This is the topic of one of my current projects. I will talk more about it in another post, where I will try to explain why a neuron might start to respond as strongly to Jennifer Aniston as to any other Hollywood inhabitant.

Indifference (in Science) can sometimes be frustrating.

Motivated by my last post, I have decided to update my blog more regularly.

In this post I am not asking a question; instead, I write about my life in Science and my experience as a young researcher.

I enjoy my life in Science a lot, so I will try not to complain too much (I pledge), but life in science can sometimes be frustrating. I often hear that the life of a young researcher can be difficult because of the constant struggle: he or she always has to fight against existing dogmas or against ideas held by researchers higher up in the hierarchy. I tend to disagree. Fighting against someone is motivating. You are never as courageous as when you have a mighty opponent, even a fierce and overwhelming opponent with far more means than you have. A worthwhile struggle is much less frustrating than passive indifference. Science is just one area touched by indifference, and in a society overflowing with sounds, images, and information, indifference is sometimes the only defense. So I understand why people can be indifferent. Still, I realize more and more that my frustration most often comes from this indifference. Is there a solution? It might be a clunky one, but it is the only one I have found: indifference. I just tell myself that if my work is worth something, then someday, somewhere, somebody will use it. Today I have a decent place to live, someone I love to live with, food on my plate, and people let me do what I love to do: Science. So no complaints, really.

A NEURON + Python tutorial

During the first week of OCNC (Okinawa Computational Neuroscience Course), I had the chance to give a tutorial on NEURON (neuron_tuto). This tutorial contains some self-advertisement ;): it demonstrates that a neuron with two passive dendrites can compute a linearly non-separable function, namely the feature binding problem. I hope it will be useful for your work.
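For readers who want a taste without the full tutorial, here is a minimal NEURON + Python sketch in the same spirit (my own toy example, not the tutorial's code; the morphology and synaptic parameters are illustrative assumptions). It shows the ingredient the tutorial builds on: purely passive dendrites sum clustered inputs sublinearly, so two synapses dispersed over two dendrites depolarize the soma more than the same two synapses clustered on one dendrite.

```python
# A toy sketch, assuming NEURON's Python interface (pip install neuron).
from neuron import h
h.load_file("stdrun.hoc")

def build_cell():
    soma = h.Section(name="soma")
    soma.L = soma.diam = 10                 # illustrative dimensions (um)
    dends = []
    for i in range(2):
        d = h.Section(name=f"dend{i}")
        d.L, d.diam = 100, 0.5              # thin passive dendrites saturate easily
        d.connect(soma(0.5))
        dends.append(d)
    for sec in [soma] + dends:
        sec.insert("pas")                   # passive membrane, no active channels
        for seg in sec:
            seg.pas.e = -65                 # rest at -65 mV
    return soma, dends

def peak_somatic_epsp(dend_indices, weight=0.002):
    """Peak somatic EPSP for one synapse at the middle of each listed dendrite."""
    soma, dends = build_cell()
    stim = h.NetStim()
    stim.number, stim.start = 1, 5
    syns, ncs = [], []                      # keep references alive during the run
    for i in dend_indices:
        syn = h.ExpSyn(dends[i](0.5))
        syn.tau, syn.e = 2, 0
        nc = h.NetCon(stim, syn)
        nc.weight[0], nc.delay = weight, 1
        syns.append(syn)
        ncs.append(nc)
    v = h.Vector().record(soma(0.5)._ref_v)
    h.finitialize(-65)
    h.continuerun(40)
    return v.max() + 65                     # depolarization above rest (mV)

print("clustered:", peak_somatic_epsp([0, 0]))  # two synapses on one dendrite
print("dispersed:", peak_somatic_epsp([0, 1]))  # one synapse on each dendrite
# With purely passive dendrites the clustered pair sums sublinearly, so the
# dispersed placement yields the larger somatic EPSP: the sublinearity that the
# tutorial exploits for the linearly non-separable (feature binding) computation.
```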