A SIMULATION that runs faster on a commercial graphics card than on some supercomputers could drastically cut the cost of studying how our brains work.
Researchers have long used digital models to better understand our brains in the hope of developing cures for diseases such as Alzheimer's or Parkinson's, but simulating all the neurons and synapses of even the simplest creature can be a struggle for supercomputers.
Before running a simulation of the brain's neurons and the vast number of synaptic connections, the model must be transferred into the computer's working memory, complete with the starting state of every synapse. As the simulation progresses, the computer must keep referring to this set of data to retrieve or update the state of each synaptic connection, which acts as a bottleneck on calculations.
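Some back-of-the-envelope arithmetic shows why. The network sizes below are assumptions chosen for illustration, not figures from the study:

```python
# Illustrative arithmetic for why stored synaptic state becomes a bottleneck.
# All sizes here are assumptions for the example, not figures from the study.
n_neurons = 1_000_000        # assumed network size
synapses_per_neuron = 1_000  # assumed connections made by each neuron
bytes_per_synapse = 4        # one 32-bit value per connection

total_gb = n_neurons * synapses_per_neuron * bytes_per_synapse / 1e9
print(f"Synaptic state alone: {total_gb:.0f} GB")  # prints: 4 GB
```

Every one of those bytes has to sit in working memory for the whole run, and the simulator reads or updates entries on every time step.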
Commercial graphics cards are built around GPUs, processors designed to render 3D scenes by rapidly carrying out many arithmetic calculations in parallel. That ability also makes them particularly speedy at other tasks, including simulating synaptic connections.
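To give a rough sense of that style of computation, here is a small Python sketch in which NumPy vectorization stands in for the thousands of threads a GPU runs at once; the arrays are arbitrary toy data:

```python
import numpy as np

# Toy stand-in for GPU data-parallelism: NumPy applies one arithmetic
# operation across a whole array at once, much as a GPU spreads the same
# instruction over thousands of threads. Sizes and values are arbitrary.
rng = np.random.default_rng(0)
weights = rng.random(1_000_000)   # one weight per synapse
activity = rng.random(1_000_000)  # presynaptic activity

# One element-wise multiply touches every synapse in a single step,
# instead of a scalar loop visiting them one at a time.
contributions = weights * activity
```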
James Knight at the University of Sussex, UK, and his colleagues created a simulation that uses a random number generator to produce each synaptic connection's state on demand. Although this random element means the simulation can't refer back to the model's exact starting state each time it needs to create a new connection, the team found it produced results comparable to conventional simulations. It is also faster, because the computer only has to hold data about the synapses it is modeling at any given moment.
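A minimal sketch of how such a scheme can work, with every name, size, and weight distribution assumed for illustration rather than taken from the team's code: give each neuron its own seeded random stream and regenerate its connections from that stream whenever the neuron fires.

```python
import numpy as np

N_POST = 10_000    # assumed number of downstream neurons
FAN_OUT = 100      # assumed connections made by each neuron
BASE_SEED = 1234   # fixed seed: the "random" wiring is reproducible

def outgoing_synapses(pre_id: int):
    """Recreate neuron pre_id's connections and weights on demand."""
    rng = np.random.default_rng(BASE_SEED + pre_id)  # per-neuron stream
    targets = rng.choice(N_POST, size=FAN_OUT, replace=False)
    weights = rng.normal(loc=0.5, scale=0.1, size=FAN_OUT)
    return targets, weights

# When neuron 7 fires, its synapses are generated, used and discarded;
# nothing about them is kept in memory between spikes.
targets, weights = outgoing_synapses(7)
```

Because each neuron's stream starts from the same seed every time, the regenerated wiring in this sketch comes out identical on every spike, so nothing is lost by never storing it.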