Neuromorphic Implementation of Particle Filtering for event-based faster convergence. Final code and paper in progress...
I am considering a case with a 1-D environment. There are s possible locations on the map, and one of these locations contains the robot; k of the locations contain a beacon that provides sensor readings (we already know exactly where the beacons are located). We place n particles randomly on the map with equal weight. Location 0 is the leftmost spot on the 1-D map and location s-1 is the rightmost. It is also important to note that, throughout this architecture, every neuron resets its potential to -1 once it spikes.
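As a concrete reference point, the setup above might be instantiated as follows. All numeric values (s, k, n, and the beacon positions) are illustrative assumptions, not fixed by the design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; the architecture itself is parametric in s, k, n.
s = 10                              # possible locations on the 1-D map
k = 3                               # beacons at known locations
n = 5                               # particles

beacons = np.array([2, 5, 8])       # known beacon locations (assumed)
robot = int(rng.integers(0, s))     # true (hidden) robot location
particles = rng.integers(0, s, n)   # particles placed uniformly at random
weights = np.full(n, 1.0 / n)       # equal initial weights
```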
Memory Unit Explained: The memory unit has k columns and n rows, with each row being a recurrent unit representing where its particle is located. Suppose the first beacon has 2 spots to its left; then every LIF neuron in column 1 has a spiking threshold of (2 + 1 =) 3, and this trend follows for all the columns. For a particular row, the feature vector formed by reading the voltages of all neurons in that row gives the belief of that particle's location. Within each row, the neurons are connected in a feedback manner, with each neuron sending an "excitatory signal" to the next neuron, forming an efficient memory-storage structure. Once a neuron fires an HE spike, its potential drops to -1 and the next neuron's potential becomes 0... So, for example, if a particle is at the starting position, the belief vector would be [0, -1, -1, ..., -1]; if the particle has crossed the first beacon and moved 1 unit to its right, the feature vector would be [-1, 1, -1, ..., -1]. It is important to note that the input to all the neurons in this block is the input spike train, and the "HE" signal is just part of the control path.
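A minimal behavioral sketch of one memory-unit row, assuming each column's threshold is the number of input spikes needed to reach its beacon (the exact thresholding rule in the final circuit may differ):

```python
import numpy as np

def make_row(k):
    """One memory-unit row: voltages of k LIF neurons. The active column
    integrates input spikes; fired or not-yet-active columns rest at -1,
    matching the belief vectors described above."""
    v = np.full(k, -1.0)
    v[0] = 0.0                      # leftmost neuron starts active at 0 V
    return v

def step_row(v, thresholds):
    """Feed one input spike to the row. The active neuron integrates it;
    on reaching its threshold it fires an HE spike, drops to -1, and
    excites the next column, whose voltage then starts at 0."""
    if not (v > -1.0).any():        # every column has fired; nothing to do
        return v
    j = int(np.argmax(v > -1.0))    # index of the active neuron
    v[j] += 1.0
    if v[j] >= thresholds[j]:
        v[j] = -1.0
        if j + 1 < len(v):
            v[j + 1] = 0.0
    return v

# Reproduce the example above: beacon 1 has 2 spots to its left (threshold
# 3); after crossing it and moving 1 more unit, the belief reads [-1, 1, -1].
v = make_row(3)
for _ in range(4):
    v = step_row(v, thresholds=[3, 3, 3])
print(v.tolist())                   # [-1.0, 1.0, -1.0]
```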
Motion (prediction) phase: each neuron in the memory unit receives "u" spikes, which represent the number of units the robot moved forward. The recurrent memory structure can thus efficiently update the beliefs of all the particles in parallel.
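The parallel belief update can be sketched as u input spikes applied to an n-by-k voltage matrix V, one row per particle (a hypothetical helper illustrating the behavior, not the circuit itself):

```python
import numpy as np

def motion_update(V, thresholds, u):
    """Apply u movement spikes to every row of the n x k memory block.
    Each row's active neuron integrates the spike; on reaching its
    threshold it resets to -1 and activates the next column at 0 V,
    so all particle beliefs advance in parallel."""
    V = V.copy()
    for _ in range(u):
        for r in range(V.shape[0]):
            active = np.flatnonzero(V[r] > -1.0)
            if active.size == 0:    # whole row has already fired
                continue
            j = int(active[0])
            V[r, j] += 1.0
            if V[r, j] >= thresholds[j]:
                V[r, j] = -1.0
                if j + 1 < V.shape[1]:
                    V[r, j + 1] = 0.0
    return V

# Two particles, both at the leftmost position; the robot moves u = 4 units.
V0 = np.full((2, 3), -1.0)
V0[:, 0] = 0.0
V1 = motion_update(V0, thresholds=[3, 3, 3], u=4)
print(V1.tolist())                  # [[-1.0, 1.0, -1.0], [-1.0, 1.0, -1.0]]
```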
Particle Layer: This layer contains "n" neurons for the "n" particles, represented as an n-row vector, with each neuron having a threshold of 0, i.e. it spikes as soon as it sees an incoming spike. Each of the "n" neurons has an "inhibition signal" in its control path coming from all k neurons of the corresponding row block of the memory unit.
Update phase: In this phase, "s" spikes are sent as inputs to all three layers: the memory block, the particle layer, and the output neuron. The s spikes do not change the memory block's belief positions, since each row circles back to the same configuration after s spikes, but the active neuron in each particle's row initiates an inhibitory signal to that particle's neuron in the particle layer. So the particle layer keeps generating spikes until the memory block inhibits it based on the closest beacon location. An additional trick is that the output layer is inhibited after "y" spikes, representing the measurement signal.
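Behaviorally, the update phase turns each particle's position into a spike count: its particle-layer neuron (threshold 0) fires on every one of the s input spikes until the memory block's inhibition kicks in at the next beacon. A sketch under that reading, using position arithmetic in place of the actual voltage dynamics (positions and beacon set are illustrative):

```python
import numpy as np

def update_phase_counts(positions, beacons, s):
    """For each particle, count the spikes its particle-layer neuron emits
    before the memory block inhibits it, i.e. before the drifting belief
    reaches the next beacon. After s spikes the belief wraps back to
    where it started, so s is an upper bound on the count."""
    beacon_set = {int(b) for b in beacons}
    counts = np.empty(len(positions), dtype=int)
    for i, p in enumerate(positions):
        c = 0
        for step in range(1, s + 1):
            c += 1
            if (p + step) % s in beacon_set:
                break
        counts[i] = c
    return counts

# Beacons at 2, 5, 8 on a 10-spot map: a particle at 0 is inhibited after
# 2 spikes, one at 6 after 2 spikes, one at 9 after 3 spikes (wrapping).
print(update_phase_counts([0, 6, 9], [2, 5, 8], s=10))   # [2 2 3]
```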
This entire architecture relies on the output layer implementing STDP between the spikes generated by the particle layer (the input to the output layer, representing the updated belief weights) and the outputs generated by the sensor-based inhibition in the output block. This updates the particle weights to reflect the sensor reality... This loop acts as particle filtering.
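One way to read the STDP step in conventional particle-filter terms: each particle's spike count plays the role of a predicted measurement, the sensor-gated count y the actual one, and the spike-timing difference between them drives the weight change. A hedged sketch with an exponential STDP-like kernel (tau is an assumed time constant, not taken from the design):

```python
import numpy as np

def stdp_reweight(counts, y, weights, tau=2.0):
    """Potentiate particles whose spike count lands close to the sensor
    count y (small spike-timing difference), depress the rest, then
    renormalize: the analogue of the particle-filter measurement update."""
    dt = np.abs(np.asarray(counts) - y)       # timing mismatch per particle
    w = np.asarray(weights) * np.exp(-dt / tau)
    return w / w.sum()

# Particles predicting counts 2, 2, 3 against a sensor count of 2: the
# first two are potentiated relative to the third.
w = stdp_reweight([2, 2, 3], y=2, weights=[1 / 3, 1 / 3, 1 / 3])
print(w)
```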