sito-hackit-19/talks/spiking-neural/res/index.html
2019-06-06 17:13:32 +02:00


<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta charset="UTF-8" />
<title>Spiking HM</title>
<!-- metadata -->
<meta name="generator" content="Zim 0.70" />
<meta name="version" content="S5 1.1" />
<meta name="presdate" content="" />
<meta name="author" content="" />
<meta name="company" content="" />
<!-- configuration parameters -->
<meta name="defaultView" content="slideshow" />
<meta name="controlVis" content="hidden" />
<!-- style sheet links -->
<link rel="stylesheet" href="./Spiking_HM_files/_resources/ui/default/slides.css" type="text/css" media="projection" id="slideProj" />
<link rel="stylesheet" href="./Spiking_HM_files/_resources/ui/default/outline.css" type="text/css" media="screen" id="outlineStyle" />
<link rel="stylesheet" href="./Spiking_HM_files/_resources/ui/default/print.css" type="text/css" media="print" id="slidePrint" />
<link rel="stylesheet" href="./Spiking_HM_files/_resources/ui/default/opera.css" type="text/css" media="projection" id="operaFix" />
<!-- S5 JS -->
<script src="./Spiking_HM_files/_resources/ui/default/slides.js" type="text/javascript"></script>
</head>
<body>
<div class="layout">
<div id="controls"><!-- DO NOT EDIT --></div>
<div id="currentSlide"><!-- DO NOT EDIT --></div>
<div id="header"></div>
<div id="footer">
<!--
<h1>[location/date of presentation]</h1>
<h2>[slide show title here]</h2>
-->
</div>
</div>
<div class="presentation">
<div class="slide">
<h1>Spiking HM</h1>
<p>
Created Thursday 30 May 2019<br>
<img src="./Spiking_HM_files/pasted_image012.png" width="500">
</p>
</div>
<div class="slide">
<h1>Neuromorphic computation</h1>
<p>
<ul>
<li>Neurons: if you know them, you avoid them</li>
<li>Algorithms for spiking neural networks</li>
<li>Three chips</li>
</ul>
</p>
</div>
<div class="slide">
<h1>Neurons</h1>
<p>
<img src="./Spiking_HM_files/pasted_image007.png" width="700"><br>
<a href="http://jackterwilliger.com/biological-neural-networks-part-i-spiking-neurons/" title="http://jackterwilliger.com/biological-neural-networks-part-i-spiking-neurons/" class="http">http://jackterwilliger.com/biological-neural-networks-part-i-spiking-neurons/</a>
</p>
</div>
<div class="slide">
<h1>Synapses</h1>
<p>
<ul>
<li><img src="./Spiking_HM_files/pasted_image002.png" width="708"></li>
</ul>
</p>
</div>
<div class="slide">
<h1>Dynamical systems</h1>
<p>
<ul>
<li>A set of differential equations</li>
</ul>
<div style='padding-left: 510pt'>
<img src="file:///home/cocco/Documents/neuroscienze/SNHM/equazione.png">
</div>
<ul>
<li>A connectivity matrix</li>
<li>A mathematical description of the synapses</li>
</ul>
</p>
</div>
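<div class="slide">
<h1>Dynamical systems, in code</h1>
<p>
The recipe above (a set of differential equations plus a connectivity matrix) can be sketched as a tiny leaky integrate-and-fire network. This is an illustrative sketch, not code from the talk; all constants (time constant, threshold, weights, drive) are assumptions chosen just to make the two-neuron example fire.
</p>

```python
import numpy as np

def simulate_lif(W, I_ext, T=200, dt=1.0, tau=20.0, v_th=1.0, v_reset=0.0):
    """Euler-integrate leaky integrate-and-fire neurons with delta synapses."""
    N = W.shape[0]
    v = np.zeros(N)
    spikes = np.zeros(N)
    counts = np.zeros(N, dtype=int)
    for _ in range(int(T / dt)):
        # leak toward 0, constant external drive, plus synaptic kicks from
        # the previous step's spikes routed through the connectivity matrix W
        v = v + (-v + I_ext) * (dt / tau) + W @ spikes
        spikes = (v >= v_th).astype(float)
        counts += spikes.astype(int)
        v = np.where(spikes > 0.0, v_reset, v)  # reset the neurons that fired
    return counts

# two neurons: neuron 0 is driven above threshold and excites neuron 1
W = np.array([[0.0, 0.0],
              [1.5, 0.0]])
counts = simulate_lif(W, I_ext=np.array([1.2, 0.0]))
```
</div>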
<div class="slide">
<h1>Learning</h1>
<p>
<img src="./Spiking_HM_files/pasted_image010.png"><br>
Simple case, Hebbian learning: "fire together, wire together".<br>
Hard case: <b>STDP → </b>dendritic plasticity, LTP, synaptic scaling...<br>
But also:<br>
<ul>
<li>Neuronal adaptation (ion channels)</li>
<li>Pruning</li>
</ul>
</p>
</div>
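<div class="slide">
<h1>STDP, in code</h1>
<p>
The "hard case" on the previous slide can be sketched with the standard pair-based STDP window: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise. The amplitudes and time constants below are illustrative assumptions, not values from the talk.
</p>

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.1, a_minus=0.12, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for delta_t = t_post - t_pre (ms)."""
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(delta_t >= 0,
                    a_plus * np.exp(-delta_t / tau_plus),    # pre before post
                    -a_minus * np.exp(delta_t / tau_minus))  # post before pre

dw_pot = stdp_dw(5.0)   # pre fires 5 ms before post: potentiation (positive)
dw_dep = stdp_dw(-5.0)  # post fires 5 ms before pre: depression (negative)
```
</div>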
<div class="slide">
<h1>Models</h1>
<p>
<div style='padding-left: 90pt'>
<img src="file:///home/cocco/Documents/neuroscienze/phd_project/articles/images/models_izikhievich.png">
</div>
</p>
<p>
<ul>
<li><a href="http://jackterwilliger.com/attractor-networks/" title="http://jackterwilliger.com/attractor-networks/" class="http">http://jackterwilliger.com/attractor-networks/</a></li>
</ul>
</p>
</div>
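<div class="slide">
<h1>Izhikevich, in code</h1>
<p>
The model-comparison figure comes from Izhikevich's two-variable neuron, which trades biophysical detail for cheap simulation. Below is a minimal Euler-integrated sketch using the published regular-spiking parameters (a=0.02, b=0.2, c=-65, d=8); the drive current and duration are arbitrary choices of mine.
</p>

```python
def izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0, T=1000, dt=1.0):
    """Izhikevich (2003) neuron: v' = 0.04v^2 + 5v + 140 - u + I, u' = a(bv - u).
    Returns the spike times (ms) under constant drive I."""
    v, u = -65.0, b * -65.0
    spike_times = []
    for t in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                  # spike: reset v, bump the recovery variable
            spike_times.append(t * dt)
            v, u = c, u + d
    return spike_times

spikes = izhikevich()   # regular-spiking parameters, tonic firing
```
</div>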
<div class="slide">
<h1>Plasticity</h1>
<p>
<img src="file:///home/cocco/Documents/neuroscienze/SNHM/plasticity.png">
</p>
</div>
<div class="slide">
<h1>The truth</h1>
<p>
<img src="./Spiking_HM_files/pasted_image.png" height="600"><img src="./Spiking_HM_files/pasted_image014.png" height="600">
</p>
</div>
<div class="slide">
<h1>Algorithms</h1>
<p>
A few ideas
</p>
<p>
<ul>
<li>Computation: transfer, processing, and storage of information.</li>
<li>→ Memory, transfer function, etc.</li>
</ul>
</p>
<p>
Then, if we want, we can talk about <b>cognition and machine learning</b>
</p>
</div>
<div class="slide">
<h1>Machine Learning</h1>
<p>
<img src="file:///home/cocco/Documents/neuroscienze/SNHM/algoritmi.png" width="800">
</p>
</div>
<div class="slide">
<h1>Reservoir Computing</h1>
<p>
<img src="file:///home/cocco/Documents/neuroscienze/SNHM/reservoir.png"><img src="file:///home/cocco/Documents/neuroscienze/SNHM/reservoir_2.png">
</p>
</div>
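<div class="slide">
<h1>Reservoir computing, in code</h1>
<p>
The reservoir idea in one screen: a fixed random recurrent network projects the input into a rich state space, and only a linear readout is trained. This sketch uses rate (tanh) units rather than spiking neurons, and the sizes, spectral radius, and toy task are my own illustrative assumptions.
</p>

```python
import numpy as np

rng = np.random.default_rng(0)

N, T = 100, 500
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo state)
w_in = rng.normal(size=N)                        # fixed random input weights

u = np.sin(np.arange(T + 1) * 0.1)               # toy input signal
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])             # reservoir update: weights never trained
    states[t] = x

target = u[1:T + 1]                              # task: predict the next input value
w_out, *_ = np.linalg.lstsq(states, target, rcond=None)  # train only the readout
mse = np.mean((states @ w_out - target) ** 2)
```
</div>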
<div class="slide">
<h1>Backprop</h1>
<p>
<img src="./Spiking_HM_files/pasted_image001.png">
</p>
</div>
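<div class="slide">
<h1>Backprop, in code</h1>
<p>
As a reminder of what the picture computes: the classic XOR demo, with gradients pushed back through a 2-4-1 sigmoid network by the chain rule. This is the textbook example, not the talk's figure; network size, learning rate, and iteration count are arbitrary choices.
</p>

```python
import numpy as np

rng = np.random.default_rng(1)

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])              # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out0 = forward(X)
initial_loss = np.mean((out0 - y) ** 2)

lr = 1.0
for _ in range(5000):
    h, out = forward(X)
    d_out = (out - y) * out * (1 - out)             # chain rule at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)              # error pushed back one layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

_, out = forward(X)
final_loss = np.mean((out - y) ** 2)
```
</div>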
<div class="slide">
<h1>E-prop</h1>
<p>
<img src="file:///home/cocco/Documents/neuroscienze/SNHM/eprop.png">
</p>
</div>
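<div class="slide">
<h1>E-prop, the shape of it</h1>
<p>
The real e-prop algorithm (Bellec et al.) works on recurrent spiking networks with pseudo-derivatives; this heavily simplified linear toy only illustrates its structure: each synapse keeps a local eligibility trace, and the weight update is that trace times an online learning signal. Everything below (teacher/student setup, sizes, rates) is my own illustrative assumption.
</p>

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, T = 5, 2000
alpha = 0.9                           # leak / trace decay
w = rng.normal(scale=0.1, size=n_in)  # student weights (learned online)
w_star = rng.normal(size=n_in)        # hidden "teacher" weights defining the target

trace = np.zeros(n_in)
v = v_star = 0.0
lr = 0.01
errs = []
for t in range(T):
    x = rng.random(n_in)
    v = alpha * v + w @ x                 # student leaky neuron
    v_star = alpha * v_star + w_star @ x  # teacher producing the target signal
    trace = alpha * trace + x             # per-synapse eligibility trace (local)
    L = v_star - v                        # online learning signal (readout error)
    w += lr * L * trace / n_in            # e-prop-shaped update: signal x trace
    errs.append(L * L)

early, late = np.mean(errs[:100]), np.mean(errs[-100:])
```
</div>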
<div class="slide">
<h1>Von Neumann bottleneck</h1>
<p>
<ul>
<li>Von Neumann bottleneck</li>
<li><a href="" title="MemResistor" class="page">Memristor</a> → memory + computing </li>
</ul>
<img src="file:///home/cocco/Documents/neuroscienze/SNHM/memcomputing.png">
</p>
</div>
<div class="slide">
<h1>Turing stuff</h1>
<p>
A neural Turing machine (NTM) is a recurrent neural network model published by Alex Graves et al. in 2014.[1] NTMs combine the fuzzy pattern-matching capabilities of neural networks with the algorithmic power of programmable computers. An NTM has a neural network controller coupled to external memory resources, which it interacts with through attentional mechanisms. The memory interactions are differentiable end-to-end, making it possible to optimize them using gradient descent.[2] An NTM with a long short-term memory (LSTM) network controller can infer simple algorithms such as copying, sorting, and associative recall from input and output examples alone.[1] <br>
<a href="https://en.wikipedia.org/wiki/Neural_Turing_machine" title="https://en.wikipedia.org/wiki/Neural_Turing_machine" class="https">https://en.wikipedia.org/wiki/Neural_Turing_machine</a><br>
<a href="https://en.wikipedia.org/wiki/Recurrent_neural_network" title="https://en.wikipedia.org/wiki/Recurrent_neural_network" class="https">https://en.wikipedia.org/wiki/Recurrent_neural_network</a>
</p>
</div>
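<div class="slide">
<h1>Differentiable memory, in code</h1>
<p>
The "attentional mechanism" above can be sketched as NTM-style content-based addressing: the controller emits a key, attention weights are a softmax over key/memory-row similarities, and the read value is a differentiable blend of memory rows. This is only one piece of the full addressing scheme (which also has location-based shifts); the memory contents and sharpness parameter below are illustrative.
</p>

```python
import numpy as np

def content_read(memory, key, beta=5.0):
    """Read from memory by cosine similarity with a key, softened by softmax."""
    sim = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    w = np.exp(beta * sim)       # beta sharpens the attention distribution
    w /= w.sum()                 # soft, fully differentiable weighting over rows
    return w @ memory, w

M = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
value, w = content_read(M, key=np.array([0.9, 0.1, 0.0]))  # key closest to row 0
```
</div>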
<div class="slide">
<h1>Arithmetic</h1>
<p>
1+11 =<br>
23*5 = <br>
25*77 =<br>
2346 - 1352353 =<br>
19*3245325=
</p>
</div>
<div class="slide">
<h1>Test</h1>
</div>
<div class="slide">
<h1>Quick</h1>
<p>
<img src="./Spiking_HM_files/pasted_image016.png">
</p>
</div>
<div class="slide">
<h1>Who is it?</h1>
</div>
<div class="slide">
<h1>In general</h1>
<p>
So the problem is not computational power: we do not really know what spiking neural networks can compute!<br>
Are they super-Turing? What would that even mean?<br>
Is "animal" cognition super-Turing? Or something else entirely?<br>
<img src="./Spiking_HM_files/pasted_image011.png">
</p>
</div>
<div class="slide">
<h1>Neuromorphic devices</h1>
<p>
<img src="./Spiking_HM_files/pasted_image013.png">
</p>
</div>
<div class="slide">
<h1>Moore's law</h1>
<p>
Dennard scaling, also known as MOSFET scaling (Dennard et al., 1974), states, roughly, that as transistors get smaller their power density stays constant, so power use stays in proportion to area; both voltage and current scale (downward) with length.<br>
<img src="./Spiking_HM_files/pasted_image005.png" width="500">
</p>
</div>
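<div class="slide">
<h1>Dennard scaling, numerically</h1>
<p>
Making the previous statement concrete: shrink linear dimensions by a factor k, and capacitance and voltage scale down by k while frequency scales up by k, so switching power P = C·V²·f drops as 1/k², exactly like area, and power density stays constant. A quick check under ideal scaling:
</p>

```python
k = 1.4                               # one classic ~0.7x linear shrink per generation
C, V, f, area = 1.0, 1.0, 1.0, 1.0    # normalized starting values

C2, V2, f2, area2 = C / k, V / k, f * k, area / k**2
P = C * V**2 * f                      # dynamic switching power before the shrink
P2 = C2 * V2**2 * f2                  # ... and after: falls as 1/k^2, like area

density_ratio = (P2 / area2) / (P / area)   # stays 1.0 under ideal Dennard scaling
```
</div>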
<div class="slide">
<h1>Time/Space/Energy</h1>
<p>
Note that a “human-scale” simulation with 100 trillion synapses (with relatively simple models of neurons and synapses) <br>
required 96 Blue Gene/Q racks of the Lawrence Livermore National Lab Sequoia supercomputer—and, yet, the simulation ran 1,500 times slower than real-time. <br>
A hypothetical computer to run this simulation in real-time would require 12GW, whereas the human brain consumes merely 20W.<br>
<img src="./Spiking_HM_files/pasted_image006.png" width="600"><br>
Energy consumption of machine learning
</p>
</div>
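<div class="slide">
<h1>The gap, made explicit</h1>
<p>
Just putting the slide's own numbers side by side: a hypothetical real-time "human-scale" simulation at 12 GW versus the brain's roughly 20 W.
</p>

```python
brain_w = 20.0            # rough power budget of the human brain
computer_w = 12e9         # estimated power for a real-time human-scale simulation
slowdown = 1500           # the actual Sequoia run was 1,500x slower than real time

ratio = computer_w / brain_w   # energy-efficiency gap between silicon and brain
```
</div>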
<div class="slide">
<h1>True North</h1>
<p>
<a href="https://www.research.ibm.com/artificial-intelligence/experiments/try-our-tech/" title="https://www.research.ibm.com/artificial-intelligence/experiments/try-our-tech/" class="https">https://www.research.ibm.com/artificial-intelligence/experiments/try-our-tech/</a><br>
<a href="http://www.lescienze.it/news/2014/08/11/news/chip_reti_neurali_cervello_consumi_ridotti_efficienza_truenorth-2245681/" title="http://www.lescienze.it/news/2014/08/11/news/chip_reti_neurali_cervello_consumi_ridotti_efficienza_truenorth-2245681/" class="http">http://www.lescienze.it/news/2014/08/11/news/chip_reti_neurali_cervello_consumi_ridotti_efficienza_truenorth-2245681/</a><br>
<a href="https://en.wikipedia.org/wiki/TrueNorth" title="https://en.wikipedia.org/wiki/TrueNorth" class="https">https://en.wikipedia.org/wiki/TrueNorth</a>
</p>
</div>
<div class="slide">
<h1>Brain Drop</h1>
<p>
Braindrop is the first neuromorphic system designed to be programmed at a high level of abstraction. Previous neuromorphic systems were programmed at the neurosynaptic level and required expert knowledge of the hardware to use. In stark contrast, Braindrop's computations are specified as coupled nonlinear dynamical systems and synthesized to the hardware by an automated procedure. This procedure not only leverages Braindrop's fabric of subthreshold analog circuits as dynamic computational primitives but also compensates for their mismatched and temperature-sensitive responses at the network level. Thus, a clean abstraction is presented to the user. Fabricated in a 28-nm FDSOI process, Braindrop integrates 4096 neurons in 0.65 mm².
</p>
</div>
<div class="slide">
<h1>Loihi</h1>
<p>
<a href="https://en.wikichip.org/wiki/intel/loihi" title="https://en.wikichip.org/wiki/intel/loihi" class="https">https://en.wikichip.org/wiki/intel/loihi</a>
</p>
</div>
<div class="slide">
<h1>Reference</h1>
<p>
<a href="https://web.stanford.edu/group/brainsinsilicon/neuromorphics.html" title="https://web.stanford.edu/group/brainsinsilicon/neuromorphics.html" class="https">https://web.stanford.edu/group/brainsinsilicon/neuromorphics.html</a><br>
<a href="http://www.human-memory.net/" title="http://www.human-memory.net/" class="http">http://www.human-memory.net/</a> | The Human Memory - what it is, how it works and how it can go wrong<br>
<a href="http://www.human-memory.net/processes_storage.html" title="http://www.human-memory.net/processes_storage.html" class="http">http://www.human-memory.net/processes_storage.html</a> | Memory Storage - Memory Processes - The Human Memory<br>
<a href="https://www.semanticscholar.org/paper/Foundations-of-computational-neuroscience-Piccinini-Shagrir/d01b28fb22346bea00b053b8ccbd00ffc202ccf0/figure/0" title="https://www.semanticscholar.org/paper/Foundations-of-computational-neuroscience-Piccinini-Shagrir/d01b28fb22346bea00b053b8ccbd00ffc202ccf0/figure/0" class="https">https://www.semanticscholar.org/paper/Foundations-of-computational-neuroscience-Piccinini-Shagrir/d01b28fb22346bea00b053b8ccbd00ffc202ccf0/figure/0</a> | Figure 1 from Foundations of computational neuroscience - Semantic Scholar<br>
<a href="https://en.wikipedia.org/wiki/Dennard_scaling#Breakdown_of_Dennard_scaling_around_2006" title="https://en.wikipedia.org/wiki/Dennard_scaling#Breakdown_of_Dennard_scaling_around_2006" class="https">https://en.wikipedia.org/wiki/Dennard_scaling#Breakdown_of_Dennard_scaling_around_2006</a> | Dennard scaling - Wikipedia<br>
<a href="http://jackterwilliger.com/biological-neural-network-synapses/" title="http://jackterwilliger.com/biological-neural-network-synapses/" class="http">http://jackterwilliger.com/biological-neural-network-synapses/</a> | Synapses, (A Bit of) Biological Neural Networks Part II<br>
<a href="https://en.wikipedia.org/wiki/Von_Neumann_architecture" title="https://en.wikipedia.org/wiki/Von_Neumann_architecture" class="https">https://en.wikipedia.org/wiki/Von_Neumann_architecture</a> | Von Neumann architecture - Wikipedia<br>
<a href="https://books.google.it/books?hl=it&lr=&id=_eDICgAAQBAJ&oi=fnd&pg=PR5&dq=Bert+Kappen+neuromorphic&ots=vO-WQ-2b91&sig=0qSWaafR2Dv2Dm0QCJAqHyEkKFU#v=onepage&q=Bert Kappen neuromorphic&f=false" title="https://books.google.it/books?hl=it&amp;lr=&amp;id=_eDICgAAQBAJ&amp;oi=fnd&amp;pg=PR5&amp;dq=Bert+Kappen+neuromorphic&amp;ots=vO-WQ-2b91&amp;sig=0qSWaafR2Dv2Dm0QCJAqHyEkKFU#v=onepage&amp;q=Bert Kappen neuromorphic&amp;f=false" class="https">https://books.google.it/books?hl=it&amp;lr=&amp;id=_eDICgAAQBAJ&amp;oi=fnd&amp;pg=PR5&amp;dq=Bert+Kappen+neuromorphic&amp;ots=vO-WQ-2b91&amp;sig=0qSWaafR2Dv2Dm0QCJAqHyEkKFU#v=onepage&amp;q=Bert Kappen neuromorphic&amp;f=false</a> | Modeling Language, Cognition And Action - Proceedings Of The Ninth Neural ... - Google Libri<br>
<a href="https://en.wikichip.org/wiki/intel/loihi" title="https://en.wikichip.org/wiki/intel/loihi" class="https">https://en.wikichip.org/wiki/intel/loihi</a> | Loihi - Intel - WikiChip<br>
<a href="https://en.wikichip.org/wiki/neuromorphic_chip" title="https://en.wikichip.org/wiki/neuromorphic_chip" class="https">https://en.wikichip.org/wiki/neuromorphic_chip</a> | Neuromorphic Chip - WikiChip<br>
<a href="https://en.wikipedia.org/wiki/Moore%27s_law" title="https://en.wikipedia.org/wiki/Moore%27s_law" class="https">https://en.wikipedia.org/wiki/Moore%27s_law</a> | Moore's law - Wikipedia<br>
<a href="https://en.wikipedia.org/wiki/Hebbian_theory" title="https://en.wikipedia.org/wiki/Hebbian_theory" class="https">https://en.wikipedia.org/wiki/Hebbian_theory</a> | Hebbian theory - Wikipedia
</p>
<p>
<a href="http://www.lescienze.it/news/2014/08/11/news/chip_reti_neurali_cervello_consumi_ridotti_efficienza_truenorth-2245681/" title="http://www.lescienze.it/news/2014/08/11/news/chip_reti_neurali_cervello_consumi_ridotti_efficienza_truenorth-2245681/" class="http">http://www.lescienze.it/news/2014/08/11/news/chip_reti_neurali_cervello_consumi_ridotti_efficienza_truenorth-2245681/</a> | TrueNorth, il chip a basso consumo che imita le reti cerebrali - Le Scienze<br>
<a href="https://www.slideshare.net/Funk98/end-of-moores-law-or-a-change-to-something-else" title="https://www.slideshare.net/Funk98/end-of-moores-law-or-a-change-to-something-else" class="https">https://www.slideshare.net/Funk98/end-of-moores-law-or-a-change-to-something-else</a> | End of Moore's Law?<br>
<a href="https://mms.businesswire.com/media/20150617006169/en/473143/5/2272063_Moores_Law_Graphic_Page_14.jpg" title="https://mms.businesswire.com/media/20150617006169/en/473143/5/2272063_Moores_Law_Graphic_Page_14.jpg" class="https">https://mms.businesswire.com/media/20150617006169/en/473143/5/2272063_Moores_Law_Graphic_Page_14.jpg</a> | 2272063_Moores_Law_Graphic_Page_14.jpg (JPEG Image, 2845 × 2134 pixels) - Scaled (44%)<br>
<a href="https://www.techopedia.com/definition/32953/neuromorphic-computing" title="https://www.techopedia.com/definition/32953/neuromorphic-computing" class="https">https://www.techopedia.com/definition/32953/neuromorphic-computing</a> | What is Neuromorphic Computing? - Definition from Techopedia<br>
<a href="http://www.messagetoeagle.com/artificial-intelligence-super-turing-machine-imitates-human-brain/" title="http://www.messagetoeagle.com/artificial-intelligence-super-turing-machine-imitates-human-brain/" class="http">http://www.messagetoeagle.com/artificial-intelligence-super-turing-machine-imitates-human-brain/</a> | Artificial Intelligence: Super-Turing Machine Imitates Human Brain | MessageToEagle.com<br>
<a href="https://www.quora.com/If-the-human-brain-were-a-computer-it-could-perform-38-thousand-trillion-operations-per-second-The-worlds-most-powerful-supercomputer-BlueGene-can-manage-only-002-of-that-But-we-cannot-perform-like-a-supercomputer-Why" title="https://www.quora.com/If-the-human-brain-were-a-computer-it-could-perform-38-thousand-trillion-operations-per-second-The-worlds-most-powerful-supercomputer-BlueGene-can-manage-only-002-of-that-But-we-cannot-perform-like-a-supercomputer-Why" class="https">https://www.quora.com/If-the-human-brain-were-a-computer-it-could-perform-38-thousand-trillion-operations-per-second-The-worlds-most-powerful-supercomputer-BlueGene-can-manage-only-002-of-that-But-we-cannot-perform-like-a-supercomputer-Why</a> | 'If the human brain were a computer, it could perform 38 thousand trillion operations per second. The worlds most powerful supercomputer, BlueGene, can manage only .002% of that.' But, we cannot perform like a supercomputer. Why? - Quora<br>
<a href="https://www.nbcnews.com/sciencemain/human-brain-may-be-even-more-powerful-computer-thought-8C11497831" title="https://www.nbcnews.com/sciencemain/human-brain-may-be-even-more-powerful-computer-thought-8C11497831" class="https">https://www.nbcnews.com/sciencemain/human-brain-may-be-even-more-powerful-computer-thought-8C11497831</a> | Human brain may be even more powerful computer than thought<br>
<a href="https://www.scientificamerican.com/article/computers-vs-brains/?redirect=1" title="https://www.scientificamerican.com/article/computers-vs-brains/?redirect=1" class="https">https://www.scientificamerican.com/article/computers-vs-brains/?redirect=1</a> | Computers versus Brains - Scientific American<br>
<a href="https://www.youtube.com/results?search_query=go+artificial" title="https://www.youtube.com/results?search_query=go+artificial" class="https">https://www.youtube.com/results?search_query=go+artificial</a> | go artificial - YouTube<br>
<a href="https://www.youtube.com/watch?v=g-dKXOlsf98" title="https://www.youtube.com/watch?v=g-dKXOlsf98" class="https">https://www.youtube.com/watch?v=g-dKXOlsf98</a> | The computer that mastered Go - YouTube<br>
<a href="https://www.youtube.com/watch?v=TnUYcTuZJpM" title="https://www.youtube.com/watch?v=TnUYcTuZJpM" class="https">https://www.youtube.com/watch?v=TnUYcTuZJpM</a> | Google's Deep Mind Explained! - Self Learning A.I. - YouTube<br>
<a href="https://cacm.acm.org/magazines/2019/4/235577-neural-algorithms-and-computing-beyond-moores-law/fulltext#R33" title="https://cacm.acm.org/magazines/2019/4/235577-neural-algorithms-and-computing-beyond-moores-law/fulltext#R33" class="https">https://cacm.acm.org/magazines/2019/4/235577-neural-algorithms-and-computing-beyond-moores-law/fulltext#R33</a> | Neural Algorithms and Computing Beyond Moore's Law | April 2019 | Communications of the ACM<br>
<a href="https://medium.com/@thomas.moran23/the-amazing-neuroscience-and-physiology-of-learning-1247d453316b" title="https://medium.com/@thomas.moran23/the-amazing-neuroscience-and-physiology-of-learning-1247d453316b" class="https">https://medium.com/@thomas.moran23/the-amazing-neuroscience-and-physiology-of-learning-1247d453316b</a> | The Amazing Neuroscience and Physiology of Learning
</p>
</div>
</div>
</body>
</html>