Spiking HM

Created Thursday 30 May 2019

Neuromorphic computation

- Neurons: if you know them, you avoid them
- Algorithms for Spiking Neural Networks
- Three chips

Synapses



Dynamical systems

- A set of differential equations (see the sketch after this list)
- A connectivity matrix
- A mathematical description of the synapses
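
A minimal sketch of how those three pieces fit together, assuming leaky integrate-and-fire neurons, exponential current synapses, and forward-Euler integration; all names and constants are illustrative, not from any specific chip:

```python
import numpy as np

rng = np.random.default_rng(0)

N, dt, T = 100, 1e-3, 1.0           # neurons, time step (s), duration (s)
tau_m, tau_s = 20e-3, 5e-3          # membrane and synaptic time constants
v_th, v_reset = 1.0, 0.0            # spike threshold and reset potential

W = rng.normal(0.0, 0.5, (N, N)) / np.sqrt(N)   # the connectivity matrix
v = np.zeros(N)                     # membrane potentials
i_syn = np.zeros(N)                 # synaptic currents
i_ext = 1.2 * rng.random(N)         # constant external drive

spikes = []
for step in range(int(T / dt)):
    # the set of differential equations, one Euler step:
    # tau_m dv/dt = -v + i_syn + i_ext ;  tau_s di/dt = -i
    v += dt / tau_m * (-v + i_syn + i_ext)
    i_syn += dt / tau_s * (-i_syn)

    fired = v >= v_th               # threshold crossing -> spike
    v[fired] = v_reset
    # the synapse model: currents jump when a presynaptic neuron fires
    i_syn += W @ fired.astype(float)
    spikes.append(np.flatnonzero(fired))
```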



Learning

Simple case, Hebbian: "fire together, wire together".
Hard case: STDP → dendritic plasticity, LTP, synaptic scaling... (a minimal STDP sketch follows the list)
But also:

- Neuronal adaptation (ion channels)
- Pruning
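
A sketch of a pair-based STDP update for a single synapse, using exponentially decaying pre/post traces; parameter values are illustrative and the function name is mine, not a published model:

```python
import numpy as np

a_plus, a_minus = 0.01, 0.012        # learning rates for LTP / LTD
tau_plus, tau_minus = 20e-3, 20e-3   # trace time constants (s)
dt = 1e-3

def stdp_step(w, pre_spike, post_spike, x_pre, x_post):
    """One Euler step of pair-based STDP for a single synapse."""
    # traces remember recent pre/post spikes and decay exponentially
    x_pre += dt / tau_plus * (-x_pre) + pre_spike
    x_post += dt / tau_minus * (-x_post) + post_spike

    w += a_plus * x_pre * post_spike   # post fires after pre -> potentiation
    w -= a_minus * x_post * pre_spike  # pre fires after post -> depression
    return np.clip(w, 0.0, 1.0), x_pre, x_post
```

Hebbian "fire together, wire together" is the symmetric special case; STDP additionally makes the sign of the change depend on spike timing.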



Models

Plasticity



The truth

Algorithms

A few ideas:

- Computation: transfer, processing, and storage of information
- → memory, transfer functions, etc.

Then, if we like, we can talk about cognition and machine learning.




Machine Learning



Reservoir Computing



Backprop

E-prop



Von Neumann bottleneck

- Von Neumann bottleneck: data must shuttle between separate memory and processor
- Memristor → memory + computing in the same device (see the crossbar sketch below)
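
Why this helps: a memristor crossbar stores a weight matrix as conductances and computes a matrix-vector product in place, via Ohm's and Kirchhoff's laws, so the weights never cross the memory-CPU bus. A toy numerical sketch (values made up):

```python
import numpy as np

# conductances in siemens: the stored weight matrix, written once
G = 1e-3 * np.array([[1.0, 0.2, 0.5],
                     [0.3, 0.8, 0.1]])
V = np.array([0.10, 0.20, 0.05])   # input voltages applied to the lines

# output currents: I_i = sum_j G_ij * V_j -- the physics *is* the MVM
I = G @ V
```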
Turing stuff

A Neural Turing Machine (NTM) is a recurrent neural network model published by Alex Graves et al. in 2014.[1] NTMs combine the fuzzy pattern-matching capabilities of neural networks with the algorithmic power of programmable computers. An NTM has a neural network controller coupled to external memory resources, with which it interacts through attentional mechanisms. The memory interactions are differentiable end-to-end, making it possible to optimize them using gradient descent.[2] An NTM with a long short-term memory (LSTM) network controller can infer simple algorithms such as copying, sorting, and associative recall from input and output examples alone.[1]
https://en.wikipedia.org/wiki/Neural_Turing_machine
https://en.wikipedia.org/wiki/Recurrent_neural_network
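
The differentiable read described above boils down to soft attention over memory rows. A minimal numpy sketch of content-based addressing only (a real NTM also has location-based addressing and write heads; the names here are mine):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def content_read(memory, key, beta):
    """Differentiable content-based read, NTM style.

    memory: (N, M) array of N memory slots
    key:    (M,) query vector emitted by the controller
    beta:   key strength, sharpens the attention
    """
    # cosine similarity between the key and every memory row
    sim = memory @ key / (np.linalg.norm(memory, axis=1)
                          * np.linalg.norm(key) + 1e-8)
    w = softmax(beta * sim)     # attention weights over the slots
    return w @ memory, w        # the read vector is a soft mixture of rows

# toy usage: the key is closest to row 1, so the read resembles it
M = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
r, w = content_read(M, key=np.array([0.1, 0.9]), beta=5.0)
```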



Calculations

1 + 11 = 12
23 * 5 = 115
25 * 77 = 1925
2346 - 1352353 = -1350007
19 * 3245325 = 61661175



Test



Quick



Who is it?



In general

So the problem is not computational power: it is simply not well understood what spiking neural networks can compute!
Are they super-Turing? What would that even mean?
Is "animal" cognition super-Turing? Is it something else entirely?



Neuromorphic devices

Moore's law

Dennard scaling, also known as MOSFET scaling (Dennard et al., 1974), states, roughly, that as transistors get smaller their power density stays constant, so that power use stays in proportion with area; both voltage and current scale (downward) with length.
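
A quick numerical check of the constant-power-density claim, assuming ideal constant-field scaling by a factor k per node (numbers are illustrative):

```python
k = 1.4  # each linear dimension shrinks by 1/k at one process node

V, C, f, area = 1.0, 1.0, 1.0, 1.0   # normalized starting values
V2, C2 = V / k, C / k                # voltage and capacitance scale down
f2 = f * k                           # shorter channels switch faster
area2 = area / k**2                  # transistor area shrinks as 1/k^2

P = C * V**2 * f                     # dynamic power per transistor
P2 = C2 * V2**2 * f2                 # scales as 1/k^2, same as the area

print(P / area, P2 / area2)          # both 1.0: power density is constant
```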



Time / Space / Energy

Note that a "human-scale" simulation with 100 trillion synapses (and relatively simple models of neurons and synapses) required 96 Blue Gene/Q racks of the Lawrence Livermore National Lab Sequoia supercomputer, and yet the simulation ran 1,500 times slower than real time.
A hypothetical computer running this simulation in real time would require 12 GW, whereas the human brain consumes a mere 20 W.
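
The 12 GW figure is consistent with a naive linear extrapolation. A quick sanity check, assuming Sequoia draws roughly 8 MW in total (that figure is an assumption, not given above):

```python
sequoia_power_w = 8e6      # ~8 MW for the full 96-rack machine (assumed)
slowdown = 1500            # the simulation ran 1500x slower than real time

realtime_power_w = sequoia_power_w * slowdown   # naive linear scaling
print(realtime_power_w / 1e9)                   # ~12 GW

brain_power_w = 20
print(realtime_power_w / brain_power_w)         # ~6e8: the efficiency gap
```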

Energy consumption of machine learning

Braindrop


Braindrop is the first neuromorphic system designed to be programmed at a high level of abstraction. Previous neuromorphic systems were programmed at the neurosynaptic level and required expert knowledge of the hardware to use. In stark contrast, Braindrop's computations are specified as coupled nonlinear dynamical systems and synthesized to the hardware by an automated procedure. This procedure not only leverages Braindrop's fabric of subthreshold analog circuits as dynamic computational primitives but also compensates for their mismatched and temperature-sensitive responses at the network level. Thus, a clean abstraction is presented to the user. Fabricated in a 28-nm FDSOI process, Braindrop integrates 4096 neurons in 0.65 mm².
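
That programming model looks like this in practice: you write down the dynamical system and let the toolchain synthesize it. A minimal sketch using Nengo (the NEF-based frontend associated with Braindrop), with the stock CPU simulator standing in for the chip backend and an integrator dx/dt = u as the example system:

```python
import nengo

tau = 0.1  # synaptic time constant used to realize dx/dt = u

with nengo.Network() as model:
    stim = nengo.Node(lambda t: 1.0 if t < 0.2 else 0.0)   # brief input pulse
    x = nengo.Ensemble(n_neurons=100, dimensions=1)        # spiking population
    nengo.Connection(stim, x, transform=tau, synapse=tau)  # scaled input
    nengo.Connection(x, x, synapse=tau)                    # recurrence -> integrator
    probe = nengo.Probe(x, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(1.0)   # sim.data[probe] ramps up during the pulse, then holds
```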