SLIDE 1

Spikes and Computation in Sensory Processing

Simon Thorpe

CerCo (Brain and Cognition Research Center) & SpikeNet Technology SARL, Toulouse, France

SLIDE 2
  • 1974-77 Physiology and Psychology (Oxford)
  • 1977-81 Doctorate with Edmund Rolls (Oxford)
  • Recording neurons in orbitofrontal cortex, striatum, parietal cortex, lateral hypothalamus, amygdala, hippocampus, supplementary motor area (!!)
  • 1982-83 Post-doc with Max Cynader (Halifax, Canada)
  • Attempting to test Hebb's hypothesis in kitten visual cortex
  • 1983-93 Paris with Michel Imbert
  • Testing Hebb's hypothesis (with Yves Frégnac & Elie Bienenstock)
  • Dynamics of responses in monkey V1 (with Simona Celebrini & Yves Trotter)
  • 1993-now CerCo, Toulouse
  • Work on ultra-rapid scene processing
  • 1999-now SpikeNet
  • Spike-based image processing
  • Bioinspired vision

Simon Thorpe

SLIDE 3

Overview

Part 1

  • Ultra-rapid visual categorization
  • Biological vs Computer Vision
  • Temporal Constraints
  • Coding with Spikes
  • Convolutional Neural Networks in 1999
  • SpikeNet
  • The current state of the art in Convolutional Neural Networks
  • SuperVision & GoogLeNet

Part 2

  • Biologically inspired learning
  • Spike-Time Dependent Plasticity (STDP)
  • Applications in Vision
  • Applications in Audition
  • Development of Neural Selectivity
  • Memories that can last a lifetime
  • Grandmother Cells
  • Neocortical Dark Matter

SLIDE 4

Behavioural Reaction Times

[figure: target vs distractor reaction times and their difference]

Event-Related Potentials: scene processing in 150 ms

Ultra-Rapid Visual Processing

SLIDE 5
  • Saccades towards faces
  • Latency 100 ms!

Crouzet, Kirchner & Thorpe, 2010

Ultra-Rapid Visual Processing

SLIDE 6

RSVP at 10 fps

Ultra-Rapid Visual Processing

SLIDE 7

Activating Memories

  • “That Dalmatian picture that I saw in Psych 101”
  • How does the brain recognize visual objects and scenes?
  • Can a machine be built that can do the same thing?

Some Questions

SLIDE 8

Biological and Computer Vision

Common Problems

  • Identifying and categorising objects and events in complex, dynamically changing natural scenes
  • As fast as possible
  • As reliably as possible
  • Using the most energy-efficient hardware possible
  • Using the smallest size and weight footprint

Common Solutions?

  • David Marr (1982), “Vision: A Computational Investigation into the Human Representation and Processing of Visual Information”
  • Recent years - is there convergence?

SLIDE 9

Hardware Constraints

Electronics

  • Nvidia GeForce GTX Titan
  • 4.5 TeraFlops!
  • 2688 cores
  • 7.1 billion transistors
  • 288 GBytes/sec memory bus
  • $999

Brain

  • 1 kHz
  • 86 billion processors
  • 1-2 m/s conduction velocity
  • 20 watts

Questions

Are we going to be able to implement brain-style computing with conventional computing?

Response

It depends on whether we can understand how the brain computes. How many teraflops does the brain need? How much memory bandwidth?

SLIDE 10

Classic Neural Computing

  • To simulate the visual system
  • 4 billion neurons
  • 10000 connections each
  • Update at 1 kHz
  • 40 Petaflops
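The 40-petaflop figure follows directly from the numbers on the slide; a quick check, counting one operation per synapse per update:

```python
neurons = 4e9          # visual system only
synapses_per = 1e4     # connections per neuron
update_hz = 1e3        # one update per millisecond
# One operation per synapse per update (a multiply-accumulate
# would arguably count as two, so this is the low estimate):
flops = neurons * synapses_per * update_hz
print(flops / 1e15)    # -> 40.0 petaflops
```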
SLIDE 11

What’s missing?

  • Real brains use spikes
  • Simulation: 16.7 million neurons, 21 billion synapses
  • The brain: 86 billion neurons (16 billion in the cortex), 100 trillion synapses
  • Spikes: firing rates of 0-200 Hz; spontaneous activity 1-2 Hz?

SLIDE 12

Classes of Brain Simulation

  • Connectionist simulators (non-spiking)
  • Backpropagation, ART, Kohonen maps, time-delay networks, …
  • Computational neuroscience simulators
  • Up to 30,000 compartments
  • Lots of channel kinetics, etc.
  • Spike-based simulators
  • Software
  • Hardware

SLIDE 13

The Human Brain Project

  • European Flagship proposal
  • Three simulation approaches
  • Blue Brain: Henry Markram (Lausanne, Switzerland)
  • Analog chips: Karlheinz Meier (Heidelberg, Germany)
  • SpiNNaker: Steve Furber (Manchester, UK)

SLIDE 14

SpiNNaker Project

  • A simulation system for Spiking Neural Networks
  • Prof. Steve Furber, Computer Science, University of Manchester
  • 18 ARM968 cores
  • each with 64 Kbytes of Data and 32 Kbytes of Instructions
  • 128 MBytes of shared memory
  • 48 chips on a board
  • 18 boards in a 19” frame

A billion neurons in real time

SLIDE 15

IBM TrueNorth Chip

  • What can be computed?

[figure: Science cover]

SLIDE 16
  • Jerry Feldman’s 100-step limit
  • High level decisions in about 0.5 seconds
  • Interval between spikes around 5 ms
  • No more than 100 (massively parallel) computational steps
  • Development of connectionist and PDP modelling
  • Surely, more detail can help

Temporal Constraints – Early 1980s

SLIDE 17

Temporal Constraints - 1989

Argument

  • Roughly 10 layers
  • 10 ms per layer
  • Firing rates 0-100 Hz

Face selectivity at 100 ms; food selectivity at 150 ms

Therefore

  • Mainly feedforward (?!)
  • One spike per neuron (?!)

[figure: feedforward hierarchy: Retina → LGN → V1 → V2 → V4 → IT]

SLIDE 18

Temporal Constraints 1988-1989

Inferotemporal cortex: face selectivity at 100 ms

Perrett, Caan & Rolls, 1982

See also: Jerry Feldman & John Tsotsos; Leonard Uhr (1987)

SLIDE 19

How can you code with just one spike per neuron?

  • Feedforward processing
  • Only a few milliseconds per processing step
  • One spike per neuron
  • Very sparse coding
  • Processing without context-based help

Ultra-Rapid Visual Processing

SLIDE 20

Sensory Coding with Spikes

  • Adrian (1920s)
  • First recordings from sensory fibres
  • Hubel & Wiesel (1962)
  • Orientation selectivity in striate cortex

SLIDE 21

Sensory Coding with Spikes

  • Wurtz & Goldberg (1972, 1976)

Raster display; post-stimulus time histogram. Assumption: the firing rate is enough to describe the response.

SLIDE 22

The Classic View

  • Spikes don't really matter
  • Neurons send floating-point numbers
  • The floating-point numbers are transformed into spike trains using a Poisson process
  • God plays dice with spike generation
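Under this classic view, converting an analog value to spikes is just sampling a Poisson process. A minimal sketch (the rate, duration, bin width and function name are illustrative, not from the slides):

```python
import random

def poisson_spike_train(rate_hz, duration_s, dt=0.001, seed=42):
    """Generate spike times by drawing a Bernoulli event per time bin
    with probability rate*dt (the standard Poisson approximation)."""
    rng = random.Random(seed)
    return [round(i * dt, 3)
            for i in range(int(duration_s / dt))
            if rng.random() < rate_hz * dt]

spikes = poisson_spike_train(rate_hz=50, duration_s=1.0)
# Expected count is rate x duration = 50, but the exact number varies
# from trial to trial: that variability is the "noise" of the classic view.
print(len(spikes))
```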

SLIDE 23

Temporal Coding Option

  • Spikes really do matter
  • The temporal patterning of spikes across neurons is critical for computation
  • Synchrony
  • Repeating patterns
  • etc.
  • The apparent noise in spiking is unexplained variation

SLIDE 24

Simon Thorpe's Version

[figure: weak, medium and strong stimuli reaching threshold at progressively shorter latencies]

  • Ordering of spikes is critical
  • The most activated neurons fire first
  • Temporal coding is used even for stimuli that are not temporally structured
  • Computation is theoretically possible even when each neuron emits one spike
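The scheme above can be sketched concretely: activation determines latency, so a downstream unit can read the rank order of the first spikes alone. A hedged toy version (the shrinking rank weight `mod` echoes the rank-order-coding idea; all names and values here are illustrative):

```python
def spike_order(activations):
    """Stronger activation -> shorter latency -> earlier rank."""
    return sorted(range(len(activations)),
                  key=lambda i: -activations[i])

def rank_order_response(order, weights, mod=0.5):
    """Decode with one weight per input, attenuated by arrival rank:
    the first spike counts fully, the next by mod, then mod**2, ..."""
    return sum(weights[i] * mod ** rank for rank, i in enumerate(order))

acts = [0.9, 0.1, 0.5, 0.7]          # analog input intensities
order = spike_order(acts)            # -> [0, 3, 2, 1]
# A unit whose weights mirror the expected arrival order (mod**rank)
# responds maximally to that spike ordering:
tuned = [1.0, 0.125, 0.25, 0.5]
print(rank_order_response(order, tuned))   # -> 1.328125
```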

SLIDE 25

Sensory Coding with Spikes

  • Adrian (1920s)
  • First recordings from sensory fibres

Low intensity vs high intensity: higher maintained firing, higher peak firing, shorter latency!

SLIDE 26

Coding in the Optic Nerve

1,000,000 fibres

SLIDE 27

Intensity

[figure: grid of retinal units]

Coding in the Optic Nerve

SLIDE 28

Intensity

[figure: a mini retina, 32 × 32 pixels]

Coding in the Optic Nerve

SLIDE 29

Coding with Spike Ordering

  • Example: a toy retina
  • Less than 1% of cells need to fire for recognition!
SLIDE 30

Early Studies

  • Face identification directly from the output of oriented filters

SLIDE 31

Early Studies

  • Virtually all the faces correctly identified
  • Very robust to low contrast
  • Very robust to noise

SLIDE 32

SpikeNet Technology

  • Created in 1999
  • Simon Thorpe, Rufin VanRullen & Arnaud Delorme
  • Currently 12 employees
SLIDE 33

SpikeNet - Invariance

  • Luminance
  • Contrast
  • Blurring
  • Noise
SLIDE 34

SpikeNet - Invariance

  • Size
  • Rotation
  • Perspective
SLIDE 35

SpikeNet - Invariance

  • 3D Rotation
  • Identity
  • What is the current state of the art?
SLIDE 36

Biological vs Computer Hardware

Computer

  • Nvidia GeForce GTX Titan
  • 4.5 TeraFlops!
  • 2688 cores
  • 7.1 billion transistors
  • 288 GBytes/sec memory bus
  • 200 watts
  • $999

Brain

  • 86 billion neurons
  • 16 billion in the cortex
  • 4 billion in the visual system
  • 1 kHz
  • 1-2 m/s conduction velocity
  • 20 watts

Is that enough to reproduce human performance?

SLIDE 37

The ImageNet Challenge

  • 10,000,000 training images
  • 10,000+ labels
  • Systems tested on new images, with 1000 possible labels
  • ECCV 2012 Firenze
  • The state of the art was beaten by a “simple” feedforward convolutional neural network trained with back-propagation

SLIDE 38

SuperVision

  • 650,000 “neurons”
  • 60,000,000 parameters
  • 630,000,000 “synapses”

[figure: output layer, convolutional layers and fully connected layers, with layer sizes 253,440 / 186,624 / 64,896 / 64,896 / 43,264 units]

SLIDE 39

Animals

SuperVision

SLIDE 40

And then…

  • Geoff Hinton and his two students launched a start-up (DNNresearch)
  • bought by Google…
  • Yann LeCun, a pioneer of feed-forward convolutional networks since the end of the 1980s
  • Hired by Facebook…

SLIDE 41

SuperVision vs Primate Vision

Convergent Evolution!

SLIDE 42

Coding at higher levels

SLIDE 43

Basic Methodology

  • Computer models
  • 57 parameters
  • 1-3 layers
  • Neural Activity
  • Parallel Multielectrode Recordings
  • V4 and IT
  • Activity (70-170 ms)
  • Human performance
  • Crowd sourced data
  • Amazon Mechanical Turk
  • 104 subjects
SLIDE 44

Comparison of Models

  • HMO - Hierarchical Modular Optimisation
  • 4 layer CNN with 1250 output units
  • Close match between models, humans and neural signals
SLIDE 45

Human vs Machine

SLIDE 46
  • Sorry, Andrej!
  • What about faces?

Feb 2015

Human vs Machine

SLIDE 47

A range of architectures

  • SpikeNet (1999)
  • Identification immediately after V1
  • SuperVision (2012)
  • 7 layers
  • GoogLeNet (2014)
  • 30 layers
  • 12 times fewer parameters

SLIDE 48

The Surprises

  • Human levels of performance are possible despite
  • No feedback
  • No horizontal connections
  • No top-down control
  • No need for context
  • No dendrites
  • No complex channel dynamics
  • Only positive and negative weights
  • No dynamics
  • No memory
  • No oscillations
  • No “binding”
  • No attention!!!
  • No Spikes!!!
SLIDE 49

Question : Is Spike Timing Precise Enough?

Fred Rieke, David Warland, Rob de Ruyter van Steveninck and William Bialek (1999)

Increasing evidence that the timing of spikes carries a great deal of information

SLIDE 50

First Spike Coding : Evidence

Coding motion direction Coding stimulus shape

SLIDE 51

Experimental proof in the visual system!

Salamander retinal ganglion cells: information in latency vs information in the spike count

SLIDE 52

Overview

Part 1

  • Ultra-rapid visual categorization
  • Biological vs Computer Vision
  • Temporal Constraints
  • Coding with Spikes
  • Convolutional Neural Networks in 1999
  • SpikeNet
  • The current state of the art in Convolutional Neural Networks
  • SuperVision & GoogLeNet

Part 2

  • Biologically inspired learning
  • Spike-Time Dependent Plasticity (STDP)
  • Applications in Vision
  • Applications in Audition
  • Development of Neural Selectivity
  • Memories that can last a lifetime
  • Grandmother Cells
  • Neocortical Dark Matter

SLIDE 53

The Learning Problem

  • SuperVision uses Backpropagation
  • Billions of training examples with labeled data
  • 1 ms per image!
  • 100 images in 100 ms
  • Totally unbiological

Spike-Based Processing

SLIDE 54

Song, Miller & Abbott, 2000

  • Synapses that fire before the target neuron get strengthened
  • Synapses that fire after the target neuron get weakened

Spike Time Dependent Plasticity (STDP)

Simplified STDP rule

  • All synapses are depressed after each output-neuron spike
  • Except those that fired just before
  • Natural consequence
  • High synaptic weights concentrate on early-firing inputs

SLIDE 55

12 inputs; initial synaptic weight = 0.25; threshold = 3.0; short time constant. All 12 inputs must fire to reach threshold (12 × 0.25 = 3.0).

SLIDE 56

Learning of a repeating pattern

Threshold 3 Short time constant

Presentation 1

All synapses activated before the output neuron STDP reinforces all synapses

SLIDE 57

Learning of a repeating pattern

Threshold 3 Short time constant

Presentation 2

9 synapses reinforced 3 depressed

SLIDE 58

Learning of a repeating pattern

Threshold 3 Short time constant

Presentation 3

6 synapses reinforced 6 depressed

SLIDE 59

Learning of a repeating pattern

Threshold 3 Short time constant

Presentation 4

3 synapses reinforced 9 depressed

SLIDE 60

Learning of a repeating pattern

Threshold 3 Short time constant

Presentation 5

SLIDE 61

Learning of a repeating pattern

Threshold 3 Short time constant

Presentation 6

3 fully potentiated synapses 9 set to zero

SLIDE 62

Learning of a repeating pattern

Threshold 3; short time constant

In six presentations STDP has found the first three inputs that fire during the repeating pattern
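The walk-through above can be reproduced with a few lines of leak-free integrate-and-fire plus the simplified STDP rule. The step size dw = 0.25 is an illustrative choice, so convergence here takes seven presentations rather than the slides' six, but the end state is the same: the three earliest inputs are fully potentiated and all the others are silenced.

```python
def present(weights, order, threshold=3.0, dw=0.25):
    """One presentation of the repeating pattern: inputs fire in a fixed
    order; the output neuron spikes when summed weights reach threshold.
    Simplified STDP: potentiate synapses that fired at or before the
    output spike, depress all the others."""
    total, fired = 0.0, set()
    for i in order:
        total += weights[i]
        fired.add(i)
        if total >= threshold:          # output neuron spikes here
            break
    return [min(1.0, w + dw) if i in fired else max(0.0, w - dw)
            for i, w in enumerate(weights)]

order = list(range(12))                 # inputs fire in order 0, 1, ..., 11
w = [0.25] * 12                         # 12 x 0.25 = 3.0: all must fire at first
for _ in range(8):
    w = present(w, order)
print(w)   # weights concentrate on the three earliest inputs
```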

SLIDE 63

Multiple Units

Mutual inhibition: a competitive learning mechanism

SLIDE 64

Multiple Units

SLIDE 65

Multiple Units

SLIDE 66

Multiple Units

SLIDE 67

Multiple Units

Neurons can detect patterns embedded in continuous activity

SLIDE 68

Neurons can detect patterns embedded in continuous activity

Multiple Units

Add a second layer to detect combinations of combinations

Can this be scaled up beyond a toy demo?

SLIDE 69

Learning Spike Sequences with STDP

SLIDE 70

  • Initial state
  • During learning
  • After learning

Learning Spike Sequences with STDP

SLIDE 71

Learning Faces with STDP

SLIDE 72

Learning Cars with STDP

Tobi Delbruck’s Spiking Retina STDP Rule

SLIDE 73

12 seconds 30 seconds 90 seconds

  • The system has

learned to recognize cars

  • No supervision!

“STDP” Rule

Tobi Delbruck’s Spiking Retina

Learning Cars with STDP

SLIDE 74

Building a complete visual system

Input: 1920 × 1080

  • Add more neurons
  • Add more layers of processing
  • Increase the input resolution
  • Use more sophisticated

preprocessing

  • Mexican hat convolutions
  • Different types of input signal
SLIDE 75

Auditory Noise Learning in Humans

  • Learning of random noise patterns
  • Roughly 10 repetitions are sufficient!

  • Learning appears to be all-or-none
  • Lasts for weeks at least
SLIDE 76

Auditory Noise Learning with STDP

Olivier Bichler, thesis. Modified STDP rule:

  • A post-synaptic spike depresses all synapses except those activated recently

SLIDE 77

Auditory Noise Learning with STDP

SLIDE 78

Auditory Noise Learning - ERP data

“Perceptual learning of acoustic noise generates memory-evoked potentials”

Thomas Andrillon, Sid Kouider, Trevor Agus & Daniel Pressnitzer (Current Biology, accepted)

Selective brain responses to meaningless noise in just 5 presentations

SLIDE 79

Selectivity Mechanisms

Threshold

SLIDE 80

Controlling Sparsity

  • With temporal coding
  • Easy to control the percentage of neurons that fire
  • 1%, 2%, 5%, 10%
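One way to read this: with latency coding, a global shutoff can simply close after the first k% of spikes have arrived, which fixes the sparsity level exactly. A sketch (names and latencies illustrative):

```python
def first_percent(latencies, percent):
    """Keep only the earliest-firing `percent` of neurons:
    a global cut-off after the k-th spike enforces the sparsity level."""
    k = max(1, int(len(latencies) * percent / 100))
    ranked = sorted(range(len(latencies)), key=lambda i: latencies[i])
    return set(ranked[:k])

lat = [9.0, 1.5, 7.2, 0.8, 3.3, 5.1, 2.9, 6.4, 4.0, 8.8]
print(first_percent(lat, 20))   # the two earliest units: indices 3 and 1
```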
SLIDE 81

Controlling Sparsity

  • With temporal coding
  • Easy to control the percentage of neurons that fire
  • 1%, 2%, 5%, 10%
SLIDE 82

N of M coding

  • Steve Furber
  • With 100 neurons: 2^100 binary patterns (~10^30)
  • What about controlling the proportion of active neurons?

Number of N-of-100 patterns, for N = 1 to 10: 1.00×10^2, 4.95×10^3, 1.62×10^5, 3.92×10^6, 7.53×10^7, 1.19×10^9, 1.60×10^10, 1.86×10^11, 1.90×10^12, 1.73×10^13
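The counts above are just binomial coefficients (choose which N of the M = 100 neurons fire); a quick check with Python's `math.comb`:

```python
from math import comb

# Number of distinct N-of-M codes: which N of the M = 100 neurons
# fire (identity matters, timing ignored).
M = 100
counts = {n: comb(M, n) for n in range(1, 11)}
print(counts[2])    # 4950 codes with exactly 2 of 100 neurons active
print(2**M)         # all binary patterns: about 1.27e30
```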

SLIDE 83

Generating Selectivity

  • Suppose that the neuron has 10 synapses
  • Suppose that the percentage of active inputs is fixed at 10%
  • What is the likelihood of having a given number of synapses active?
  • With a threshold of 4, the neuron would only have a ~1% chance of firing with a random input
  • With a threshold of 5, the probability drops to ~0.1%
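The 1% and 0.1% figures follow from the binomial tail; a quick check (function name illustrative):

```python
from math import comb

def tail(n, p, threshold):
    """P(at least `threshold` of `n` synapses active) when each input
    is independently active with probability `p` (binomial tail)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(threshold, n + 1))

# 10 synapses, 10% of inputs active at random:
print(tail(10, 0.1, 4))   # ~0.013: about the slide's 1% false-alarm rate
print(tail(10, 0.1, 5))   # ~0.0016: roughly the slide's 0.1%
```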

SLIDE 84

Interim Conclusion - Learning

  • Error back-propagation is an unrealistic way to determine the selectivity of neurons in the visual system
  • But simple spike-based learning mechanisms could allow the system to generate the appropriate selectivity in an unsupervised way
  • Can we build a visual system without label-based training??

SLIDE 85

Memory Mechanisms in Man & Machine

10 provocative claims

  • 1. Humans can recognise visual and auditory stimuli that they have not experienced for decades.
  • 2. Recognition after very long delays is possible without ever reactivating the memory trace in the intervening period.
  • 3. These very long-term memories require an initial memorisation phase, during which memory strength increases roughly linearly with the number of presentations.
  • 4. A few tens of presentations can be enough to form a memory that can last a lifetime.
  • 5. Attention-related oscillatory brain activity can help store memories efficiently and rapidly.

SLIDE 86

Memory Mechanisms in Man & Machine

10 provocative claims (continued)

  • 6. Storing such very long-term memories involves the creation of highly selective "Grandmother Cells" that only fire if the original training stimulus is experienced again.
  • 7. The neocortex contains large numbers of totally silent cells ("Neocortical Dark Matter") that constitute the long-term memory store.
  • 8. Grandmother Cells can be produced using simple spiking neural network models with Spike-Time Dependent Plasticity (STDP) and competitive inhibitory lateral connections.
  • 9. This selectivity only requires binary synaptic weights that are either "on" or "off", greatly simplifying the problem of maintaining the memory over long periods.
  • 10. Artificial systems using memristor-like devices can implement the same principles, allowing the development of powerful new processing architectures that could replace conventional computing hardware.

SLIDE 87

STDP and Spiking Neurons

  • “Intelligent” sensory processing with small numbers of neurons connected to spiking sensory devices
  • 2 layers of neurons connected to a spiking retina
  • Unsupervised learning of car counting
  • A single neuron connected to a spiking cochlea
  • Unsupervised learning of auditory noise patterns
  • What would happen with 16 billion neurons and multiple sensory inputs?

16 billion neurons; 2 million optic nerve fibres; 60,000 auditory nerve fibres

SLIDE 88

Grandmother Cells?

  • Jerzy Konorski (1967)
  • “gnostic neurons”
  • Horace Barlow (1972)
  • “cardinal cells”
  • Alberta Gilinsky (1984)
  • “cognons”
SLIDE 89

A Jennifer Aniston Cell!

Grandmother Cells in Man?

  • Halle Berry
  • The Taj Mahal
  • Bill Clinton
  • Saddam Hussein
  • The Simpsons
  • The patient’s brother
  • Members of the research team
  • ...
SLIDE 90

Temporal Lobe Recordings in Humans

“Josh Brolin” “Marilyn Monroe”

SLIDE 91

  • The hit rate (0.54%) is too high
  • Assume 10^9 neurons in the hippocampus
  • Assume 10,000-30,000 identifiable objects (Irv Biederman)
  • Each neuron should respond to 100-150 different objects
  • Number of identifiable objects probably much higher: 10^5-10^6
  • Rafi Malach: “Totem Pole Cells”
  • Alternative hypothesis
  • The hippocampus keeps track of a few thousand objects and events that have been experienced recently

Grandmother Cells in Man?
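A hedged reading of the arithmetic behind this objection, treating the 0.54% hit rate as the chance that a given neuron responds to a given identifiable object:

```python
hit_rate = 0.0054   # fraction of tested (neuron, object) pairs that respond
expected = {n: round(hit_rate * n) for n in (10_000, 30_000)}
print(expected)
# With 10,000-30,000 identifiable objects, each neuron would be expected
# to respond to roughly 54-162 of them: far from the one-object
# selectivity a strict grandmother cell would need.
```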

SLIDE 92

How to make a Grandmother Cell

  • STDP allows neurons to become rapidly selective to repeated stimuli
  • This can happen in a few tens of presentations
  • Oscillatory activity could allow this to happen in a single presentation
  • Controlling the percentage of active inputs guarantees that the neuron will not fire to random inputs
  • Setting a high threshold means that the neuron will never fire unless the stimulus used for training is presented again
  • It could maintain its selectivity for several decades
  • Cortical Dark Matter?
SLIDE 93

Dark Matter in Cortex?

  • Question
  • What is the true distribution of firing rates in the cortex?
  • Are there neurons that never fire for very long periods of time?
  • Problem
  • Nearly all single-unit neurophysiological studies are biased towards neurons that have spontaneous activity

  • The number of neurons recorded seems far lower than would be expected
  • Only 5-15 neurons recorded per descent (out of hundreds)
  • Possible methods
  • Implanted arrays
  • 2-photon optical imaging
  • Patch clamp recording
SLIDE 94
  • Roughly 50% of output neurons have firing rates below 0.1 Hz!
  • Maybe the output neurons are a special class??
  • Dark matter in cortex??

Very low firing rate neurons

SLIDE 95

Invisible Cells?

SLIDE 96

Overview

Part 1

  • Ultra-rapid visual categorization
  • Biological vs Computer Vision
  • Temporal Constraints
  • Coding with Spikes
  • Convolutional Neural Networks in 1999
  • SpikeNet
  • The current state of the art in Convolutional Neural Networks
  • SuperVision & GoogLeNet

Part 2

  • Biologically inspired learning
  • Spike-Time Dependent Plasticity (STDP)
  • Applications in Vision
  • Applications in Audition
  • Development of Neural Selectivity
  • Memories that can last a lifetime
  • Grandmother Cells
  • Neocortical Dark Matter

SLIDE 97

Final Conclusions

  • Temporal Constraints on Vision
  • Feedforward Neural Networks
  • SpikeNet
  • SuperVision & GoogLeNet
  • Importance of Spikes
  • Ultrafast coding with one spike per neuron
  • Very efficient STDP schemes for unsupervised learning
  • Biological Vision has been and will continue to be a source of inspiration for technology