
Brain simulation on the cheap


Published : Feb 9, 2021, 10:09 AM IST

Updated : Feb 16, 2021, 7:53 PM IST

New Scientist, UK: A simulation that runs faster on a commercial graphics card than on some supercomputers could drastically cut the cost of studying how our brains work.


Researchers have long used digital models to better understand our brains in the hope of developing cures for diseases such as Alzheimer's or Parkinson's, but simulating all the neurons and synapses of even the simplest creature can be a struggle for supercomputers.


Before running a simulation of the brain's neurons and the vast number of synaptic connections, the model must be transferred into the computer's working memory, complete with the starting state of every synapse. As the simulation progresses, the computer must keep referring to this set of data to retrieve or update the state of each synaptic connection, which acts as a bottleneck on calculations.
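As a rough illustration of why that lookup becomes a bottleneck, here is a toy version of the conventional approach in Python. The sizes, names and weights are invented for the sketch and are not taken from the study; a full-scale model has vastly more synapses, so the stored table alone can outgrow the memory of a single machine.

```python
# Toy sketch of the conventional approach: the full synapse table is built
# up front and kept in working memory for the entire simulation.
# All sizes and values here are illustrative, not from the macaque model.
import numpy as np

rng = np.random.default_rng(seed=1)

n_neurons = 10_000
synapses_per_neuron = 1_000

# Pre-generate and store every connection: a target index and a weight.
targets = rng.integers(0, n_neurons, size=(n_neurons, synapses_per_neuron))
weights = rng.normal(0.0, 0.1, size=(n_neurons, synapses_per_neuron))

print(f"Stored synapse state: {(targets.nbytes + weights.nbytes) / 1e6:.0f} MB")

def propagate_spikes(spiking_neurons, input_current):
    # Every timestep the simulator reads back into this stored table,
    # which is the memory traffic described above.
    for pre in spiking_neurons:
        np.add.at(input_current, targets[pre], weights[pre])
```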


Commercial graphics cards, known as GPUs, are designed to render 3D scenes by rapidly carrying out many arithmetic calculations in parallel, an ability that also makes them particularly speedy at other tasks, including simulating synaptic connections.


James Knight at the University of Sussex, UK, and his colleagues created a simulation that uses a random number generator as part of the process of creating a synaptic state. Although this random element means the simulation can't refer to the exact starting state of the model each time it needs to create a new connection, the team found it produced results comparable to conventional simulations. It also makes things faster, as the computer only needs to handle data about the synapses that it is currently modeling.
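The article does not give the implementation details, but the idea can be illustrated with a small Python sketch (names and sizes are invented here): seed a random number generator with the firing neuron's index, and the same connections and weights come back every time they are needed, without ever being stored between spikes.

```python
# Toy sketch of the procedural idea: regenerate a neuron's outgoing
# connections from a deterministic per-neuron seed whenever it fires,
# instead of keeping a synapse table in memory. Illustrative only.
import numpy as np

n_neurons = 10_000
synapses_per_neuron = 1_000

def outgoing_synapses(pre_neuron):
    # Seeding with the presynaptic index makes the "random" connections
    # reproducible, so nothing about them needs to be kept between calls.
    rng = np.random.default_rng(seed=pre_neuron)
    targets = rng.integers(0, n_neurons, size=synapses_per_neuron)
    weights = rng.normal(0.0, 0.1, size=synapses_per_neuron)
    return targets, weights

def propagate_spikes(spiking_neurons, input_current):
    # Only the neurons spiking right now have their connections generated,
    # which is what keeps the memory footprint small.
    for pre in spiking_neurons:
        targets, weights = outgoing_synapses(pre)
        np.add.at(input_current, targets, weights)
```

The trade is extra arithmetic in place of memory traffic, which plays to a GPU's strength of carrying out many calculations in parallel.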


The team used an existing model of a macaque monkey's visual cortex, consisting of more than 4 million neurons, as a benchmark. In 2018, 1 second of brain activity inside the model was simulated on an IBM Blue Gene/Q supercomputer in 12 minutes. Using a commercially available graphics card, Knight's team was able to carry out the task in just under 8 minutes (Nature Computational Science, DOI: 10.1038/s43588-020-00022-7).


8 minutes to simulate 1 second of a monkey's visual cortex

A newer JURECA supercomputer has been able to run the same simulation in just 31 seconds, but such machines can cost tens of millions of pounds and require a team of staff to maintain. By contrast, Knight says the Nvidia Titan RTX hardware used in his tests costs just a few thousand pounds.


"This potentially means that researchers whose primary focus isn't dealing with supercomputers could explore things with this model," he says.


But there is a flaw. When we learn, our brains are constantly weakening or strengthening the connections between neurons, an ability known as synaptic plasticity. The GPU simulation can't do this, because it always recreates the connections from scratch, reverting them to the model's original state.


Knight believes a hybrid approach, combining his new technique with a traditional model in which the state of synapses is stored in memory and can be updated, would allow plasticity where it is needed and high speed where it isn't, but the team has yet to try this.
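The article does not say how such a hybrid would be built; purely as a speculative sketch, it could mean storing only the synapses that need to change during learning while regenerating the static majority procedurally.

```python
# Speculative toy of the hybrid idea: plastic synapses live in a small
# stored, updatable table, while static synapses are regenerated from a
# seed as in the procedural scheme. Not taken from the study.
import numpy as np

n_neurons = 10_000

# Stored state only for the synapses that must change during learning.
plastic_targets = {0: np.array([5, 42, 99])}
plastic_weights = {0: np.array([0.10, 0.20, 0.05])}

def static_synapses(pre_neuron, n_static=1_000):
    rng = np.random.default_rng(seed=pre_neuron)
    return (rng.integers(0, n_neurons, size=n_static),
            rng.normal(0.0, 0.1, size=n_static))

def propagate(pre_neuron, input_current):
    # Fast path: static connections recreated on demand, costing no memory.
    targets, weights = static_synapses(pre_neuron)
    np.add.at(input_current, targets, weights)
    # Plastic path: only this stored slice can later be strengthened
    # or weakened, giving the plasticity the procedural scheme lacks.
    if pre_neuron in plastic_targets:
        np.add.at(input_current, plastic_targets[pre_neuron],
                  plastic_weights[pre_neuron])
```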

(c) 2021 New Scientist Ltd.
Distributed by Tribune Content Agency, LLC
