Mic Mann, co-CEO of SingularityU South Africa and co-organiser of the SingularityU South Africa Summit, wrote an opinion article for The Star about how virtual reality is transforming the workplace and the future of the professional. Exponential technologies such as artificial intelligence, augmented and virtual reality, and immersive computing are transforming the way we work and network across all industries.
For the first time, the Classic Business Breakfast and Moneyweb team conducted a radio interview while immersed in the virtual world. Watch the video below as Aaron Frank, principal faculty at Singularity University, and Mic Mann, director at Mic Mann and co-organiser of the SingularityU South Africa Summit, share developments and trends in the VR space.
There will be an abundance of delicious food throughout the two days, crafted by Karen Short of By Word of Mouth to delight all palates.
Syndicated from singularityhub.com (Raya Bidshahri) with permission, edited by Mann Made Media.
We’ve all heard the warnings: automation will disrupt entire industries and put millions of people out of jobs. Up to 45 per cent of existing jobs can already be automated using current technology. However, this may not apply to the education sector. After analysing more than 2 000 work activities for more than 800 occupations, McKinsey & Co. reported that of all the sectors examined, “the technical feasibility of automation is lowest in education.”
There’s no doubt that technology will continue to have a powerful impact on global education, both by improving the learning experience and by increasing global access to education. Massive open online courses (MOOCs), chatbot tutors, and AI-powered lesson plans are a few examples of the digital transformation in global education. But will robots and artificial intelligence ever fully replace teachers?
The first-ever Singularity University South Africa Summit (23-24 August 2017) on the African continent aims to equip attendees with exponential knowledge and an understanding of how to tackle the country’s and continent’s grand challenges, such as the lack of quality education, high unemployment, food insecurity, disaster relief and governance, to name a few, through practical and applicable teachings. Thought leaders and industry specialists from sectors such as education, healthcare, finance and energy will present at this thought-provoking summit.
Sizwe Nxasana is the founder of Sifiso Learning Group, which is involved in edtech and academic publishing, and also founded Future Nation Schools, a chain of affordable private schools in South Africa. He holds education in very high esteem: he has BCom, BCompt (Hons) and CA (SA) qualifications and has been conferred honorary doctorates by the University of Fort Hare, the Durban University of Technology, the University of Johannesburg and Walter Sisulu University. Nxasana is the co-founder and chairman of the National Education Collaboration Trust and was appointed chairman of the National Student Financial Aid Scheme. He also chairs the Ministerial Task Team that is developing a new funding model for students from poor and “missing middle” backgrounds. Nxasana understands the importance of education and its impact on individuals, especially women and children from disadvantaged backgrounds, on the economy, and on prosperity, future economic growth and social stability. All of which makes him more than qualified and experienced to speak about the future of education at the inaugural Singularity University South Africa Summit.
The Most Difficult Sector to Automate
While tasks revolving around education – like administration or facilities maintenance – are open to automation, teaching is not.
Effective education involves more than just the transfer of information from teacher to student. Good teaching requires complex social interactions and adaptation to each student’s learning needs and social and cultural context. An effective teacher is not just responsive to each student’s strengths and weaknesses, but is also empathetic towards their state of mind. Teachers aim to maximise human potential. Vivienne Ming, SU faculty member in cognitive neuroscience, will speak on that very topic at the Summit in South Africa.
Students also rely on teachers for life guidance and career mentorship. Deep and meaningful human interaction is crucial, and very difficult, if not impossible, to automate. Automating teaching would require artificial general intelligence (as opposed to narrow or specific intelligence): an AI that understands natural human language, empathises with emotions, and can plan, strategise and make impactful decisions under unpredictable circumstances. This would be the kind of machine that can do anything a human can do, and it doesn’t exist – yet.
Just because it’s difficult to fully automate teaching doesn’t mean AI experts aren’t trying.
Jill Watson, a teaching assistant at the Georgia Institute of Technology, is an IBM-powered artificial intelligence that’s being implemented in universities around the world. She is able to answer students’ questions with 97 per cent accuracy. Technologies like this also have applications in grading and providing feedback. Some AI algorithms are being refined to perform automatic essay scoring; one project has achieved a 0.945 correlation with human graders. Tools like these could have remarkable impacts on online education and dramatically increase online student retention rates.
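To make the 0.945 figure concrete, here is a minimal sketch of how agreement between an automated essay scorer and human graders is typically quantified: the Pearson correlation between the two sets of scores. The scores below are invented for illustration, not data from the project mentioned above.

```python
import numpy as np

# Hypothetical scores for eight essays, on a 1-5 scale.
human_scores = np.array([4.0, 3.5, 5.0, 2.0, 4.5, 3.0, 4.0, 2.5])
model_scores = np.array([4.2, 3.4, 4.8, 2.3, 4.4, 3.1, 3.8, 2.6])

# np.corrcoef returns the 2x2 correlation matrix; the off-diagonal
# entry is the Pearson r between the two score vectors.
r = np.corrcoef(human_scores, model_scores)[0, 1]
print(f"correlation with human graders: {r:.3f}")
```

A correlation near 1.0 means the model ranks and rates essays almost exactly as human graders do, which is why 0.945 is considered strong agreement.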
Any student with internet access can reach information and free courses (MOOCs) from universities around the world, but not all students can receive customised feedback, because human teaching capacity is limited. Chatbots like Jill Watson give students the opportunity to have their work reviewed and all their questions answered at minimal cost.
AI algorithms also have a significant role to play in the personalisation of education. Data analysis helps improve students’ results by assessing each student’s learning strengths and weaknesses and creating mass-customised programmes. Algorithms can analyse student data and create flexible programmes that adapt based on real-time feedback. According to the McKinsey Global Institute, all of this data could unlock up to $1.2 trillion in global economic value.
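To illustrate the kind of real-time adaptation described here, below is a minimal sketch: a running estimate of a student’s mastery is updated after each answer and used to pick the next step. The update rule and thresholds are illustrative assumptions, not any particular product’s algorithm.

```python
def update_mastery(mastery: float, correct: bool, rate: float = 0.2) -> float:
    """Exponential moving average of recent performance (0..1)."""
    return mastery + rate * ((1.0 if correct else 0.0) - mastery)

def next_difficulty(mastery: float) -> str:
    if mastery > 0.8:
        return "advance to harder material"
    if mastery < 0.4:
        return "review prerequisites"
    return "continue at current level"

mastery = 0.5  # neutral starting estimate
for answer_correct in [True, True, False, True, True]:
    mastery = update_mastery(mastery, answer_correct)
print(next_difficulty(mastery))  # prints "continue at current level"
```

Production systems use far richer models of each learner, but the loop is the same: observe, re-estimate, adapt.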
Beyond Automated Teaching
But technological automation alone won’t even begin to tackle the many issues in our global education system. Outdated curricula, standardised tests and an emphasis on short-term knowledge call for a transformation of how we teach. It’s not sufficient to automate the process; we must be innovative not only with our automation capabilities but also with educational content, strategy and policies. And on the continent there’s an even more pressing issue: many children lack access to, or can’t afford, quality teachers, school facilities and adequate learning materials in the first place. The lack of education is one of the most pressing grand challenges that must be addressed to move Africa forward and to help realise the future of human potential.
To learn more about how disruptive and exponential technologies will transform your business and revolutionise the education sector, book your tickets to the upcoming Singularity University South Africa Summit. The SingularityU South Africa Summit, in collaboration with Standard Bank, global partner Deloitte and strategic partners MTN and SAP, is produced by Mann Made Media and will take place on 23-24 August 2017 at the Kyalami Grand Prix and International Convention Centre.
There is nothing to be gained from blind optimism. But an optimistic mindset can be grounded in rationality and evidence. It may be hard to believe, but we are living in the most exciting time in human history. Despite all of our ongoing global challenges, humanity has never been better off. Not only are we living healthier, happier, and safer lives than ever before, but new technological tools are also opening up a universe of opportunities.
In order to continue to launch moonshot ideas, tackle global challenges, and push humanity forward, it’s important to be intelligently optimistic about the future.
Our Pessimism Bias
When we think about the future of our species, many of us are inherently pessimistic. Our brains are wired to pay more attention to the threats in our personal lives and our world at large.
Many studies have shown we react more strongly to negative stimuli than to positive ones, and that we dedicate more of our brain resources to negative information. Some psychologists have also shown that we tend to give greater weight to negative thoughts when making decisions, and that we tend to remember negative events in our lives more than positive ones.
There is an evolutionary advantage to these tendencies. We often forget that our neural hardware evolved to survive on the African savannah, where staying alive depended on constant vigilance against sources of danger. But that wiring may no longer serve its purpose in our modern world.
The media is partially to blame for adding fuel to the fire. In fact, studies show that bad news outweighs good news by as much as seventeen negative news reports for every one good one. News agencies know very well that we will pay more attention to bad news and hence, “If it bleeds, it leads.”
A team of psychologists from McGill University revealed that people tend to choose to read articles with negative tones and respond much faster to headlines with negative words. You’re not constantly seeing negative headlines because the world is getting worse; you’re seeing them because that’s what audiences react to.
Studies have shown that the public tends to pay most attention to news about war and terrorism and least about science and technology. Consequently, we have trained journalists and news channels to focus on those issues more than on our innovative breakthroughs. What does that say about us as a society?
A Need for Intelligent Optimism
Intelligent optimism is all about being excited about the future in an informed and rational way. This mindset is critical if we are to get everyone excited about the future, both by highlighting the rapid progress we have made and by recognizing the tremendous potential humans have to find solutions to our problems.
Despite ongoing challenges, we have a lot to celebrate about how far we’ve come as a species. As optimists like Peter Diamandis point out, we are living in an era of abundance, and there’s a lot of evidence to prove it.
Let’s be very clear: being intelligently optimistic does not mean we turn our backs on the many global challenges we face today. Our world is far from perfect. The refugee crisis, climate change, wealth inequality, and other global issues are significant and worthy of our attention.
But as physicist and futurist David Deutsch points out, “Problems exist; and problems are soluble with the right knowledge.” Intelligent optimism involves recognizing the many problems we are faced with and acknowledging that we can solve them just as we have overcome many other challenges in the past.
A Critical Mindset for Progress
We can’t let negative headlines and the media shape our perception of ourselves as a species, and the vision we have for the future. As legendary astronomer Carl Sagan said, “For all of our failings, despite our limitations and fallibility, we humans are capable of greatness.”
Hollywood likes to paint disproportionately dystopian visions of the world, and while those are possible futures, we can and must also imagine a future in which humanity lives in abundance, prosperity, and transcendence. We can’t expect current innovators and future generations to make this positive vision a reality if they believe our species is doomed to fail. A positive vision inspires us to continue contributing to human progress and to feel that we can push humanity forward.
It’s absolutely critical that our journalists cover the many challenges, threats, and issues in our world today. But just as we report the significant negative news in the world, we must also continue to highlight humanity’s accomplishments. After all, how can our youth grow up believing they can have a positive impact on the world if the news is suggesting otherwise?
In 47 CE, Scribonius Largus, court physician to the Roman emperor Claudius, described in his Compositiones a method for treating chronic migraines: place torpedo fish on the scalps of patients to ease their pain with electric shocks. Largus was on the right path; our brains run on electrical signals that influence how brain cells communicate with each other and in turn affect cognitive processes such as memory, emotion and attention.
The science of brain stimulation—altering electrical signals in the brain—has, needless to say, changed in the past 2,000 years. Today we have a handful of transcranial direct current stimulation (tDCS) devices that deliver constant, low current to specific regions of the brain through electrodes on the scalp, for users ranging from online video-gamers to professional athletes and people with depression. Yet cognitive neuroscientists are still working to understand just how much we can influence brain signals and improve cognition with these techniques.
Brain stimulation by tDCS is non-invasive and inexpensive. Some scientists think it increases the likelihood that neurons will fire, altering neural connections and potentially improving the cognitive skills associated with specific brain regions. Neural networks associated with attention control can be targeted to improve focus in people with attention deficit hyperactivity disorder (ADHD). Or people who have a hard time remembering shopping lists and phone numbers might like to target brain areas associated with short-term (also known as working) memory in order to enhance this cognitive process. However, the effects of tDCS are inconclusive across a wide body of peer-reviewed studies, particularly after a single session. In fact, some experts question whether enough electrical stimulation from the technique is passing through the scalp into the brain to alter connections between brain cells at all.
Notably, the neuroscientist György Buzsáki at New York University presented research conducted with cadavers, concluding that very little of the current administered through tDCS actually travels into the brain, perhaps under 10 percent. Other researchers report the opposite: recent neuroimaging studies have shown significant increases in neurotransmitter levels and blood flow at the site of tDCS stimulation during a single session. Still, in response to growing concern, many researchers have begun to administer tDCS over a period of days for an additive effect. Studies have shown enhanced treatment effects (yet no increase in side effects) attributable to repeated sessions as opposed to a single session of tDCS.
Even more basic concerns about tDCS research need to be addressed; in particular, tDCS protocols are inconsistent between research labs. For example, one lab might administer tDCS for 20 minutes at the maximum current of 2 mA, while another administers it for 25 minutes at 1 mA, and another still for 15 minutes at 1.5 mA. Combining such studies into a literature review proves time-consuming and confusing. We do not yet know the optimal duration and current levels for tDCS. If, say, 1 mA is too low to initiate neural changes and improve cognitive abilities, then handfuls of papers and years of research could turn out to be quite uninformative.
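One quick way to see why these protocols resist comparison is to convert each to total delivered charge (current multiplied by duration). The sketch below does the arithmetic for the three protocols quoted above; charge is only one of several parameters that would need standardising (electrode size and placement matter too).

```python
# Illustrative comparison of the three protocols mentioned in the text.
protocols = {
    "lab A": {"minutes": 20, "current_mA": 2.0},
    "lab B": {"minutes": 25, "current_mA": 1.0},
    "lab C": {"minutes": 15, "current_mA": 1.5},
}

for lab, p in protocols.items():
    seconds = p["minutes"] * 60
    amperes = p["current_mA"] / 1000
    charge_coulombs = amperes * seconds  # total charge delivered
    print(f"{lab}: {charge_coulombs:.2f} C")
# lab A: 2.40 C, lab B: 1.50 C, lab C: 1.35 C
```

Even on this single crude measure the protocols differ by nearly a factor of two, which helps explain why pooling their results is so fraught.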
Lately, the technology has been combined with cognitive training to achieve long-term improvements. This is a natural progression of the work. tDCS is thought to allow neurons to fire more readily; on top of that, just as exercise works out a muscle, a cognitive training task works out the neurons in the brain regions most heavily involved in that cognitive process. To take advantage of both techniques, shouldn’t we encourage those neurons and brain regions to work even harder during tDCS by engaging the targeted brain areas with a cognitive task? Indeed, studies show that combining cognitive training with tDCS yields heightened performance and longer-lasting improvements.
In a several-year collaboration between the Cognitive Neuroimaging Lab at the University of Michigan and the Working Memory and Plasticity Lab at the University of California at Irvine, we have been investigating working-memory training in conjunction with tDCS. During the training task, participants are asked to hold progressively more information in their working memory while simultaneously undergoing tDCS. Although the results are still limited and somewhat mixed, evidence suggests that the combination of brain stimulation and training is more effective in improving working memory than either technique alone. For the experimental tDCS group, better performance could be measured even a year after our sessions, an improvement not found with placebo controls. And our collaboration has even uncovered a nuance of combined working-memory training and tDCS: participants who began training with a lower baseline working memory improved more than those who began with a higher baseline performance.
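The passage describes a task that asks participants to hold progressively more information in working memory. A common adaptive rule, assumed here since the study’s exact task and staircase are not specified, raises the memory load after correct answers and lowers it after errors:

```python
import random

def run_adaptive_span(n_trials: int = 10, load: int = 3) -> int:
    for _ in range(n_trials):
        # A sequence of digits the participant must hold in mind.
        sequence = [random.randint(0, 9) for _ in range(load)]
        # In a real experiment the participant recalls the sequence;
        # here we simulate one who succeeds 70% of the time.
        recalled_correctly = random.random() < 0.7
        if recalled_correctly:
            load += 1                 # make the next trial harder
        else:
            load = max(2, load - 1)   # ease off after a miss
    return load

print("final memory load:", run_adaptive_span())
```

Pairing an adaptive loop like this with stimulation keeps the targeted circuits working near the edge of their capacity throughout the tDCS session.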
Clearly there is much more work to do to understand tDCS and cognitive training. To create more consistency in the literature, researchers will need to investigate optimal parameters (such as time length and current intensity) for tDCS as a form of cognitive and therapeutic enhancement. A next step is to understand the underlying neural mechanisms of tDCS and cognitive training, which will require a multidisciplinary approach using neuroimaging techniques such as fMRI. This would then make it possible for researchers to visualize (1) activation of brain regions due to tDCS, (2) activation due to tDCS and a cognitive task, and even (3) changes in activation specifically due to combined tDCS and cognitive training over cognitive training alone.
I am cautiously optimistic about the promise of tDCS; cognitive training paired with tDCS specifically could lead to improvements in attention and memory for people of all ages and make some huge changes in society. Maybe we could help to stave off cognitive decline in older adults or enhance cognitive skills, such as focus, in people such as airline pilots or soldiers, who need it the most. Still, I am happy to report that we have at least moved on from torpedo fish.
This article was originally published at Aeon and has been republished under Creative Commons.
For some die-hard tech evangelists, using neural interfaces to merge with AI is the inevitable next step in humankind’s evolution. But a group of 27 neuroscientists, ethicists, and machine learning experts have highlighted the myriad ethical pitfalls that could be waiting.
To be clear, it’s not just futurologists banking on the convergence of these emerging technologies. The Morningside Group estimates that private spending on neurotechnology is in the region of $100 million a year and growing fast, while in the US alone public funding since 2013 has passed the $500 million mark.
The group is made up of representatives from international brain research projects, tech companies like Google and neural interface startup Kernel, and academics from the US, Canada, Europe, Israel, China, Japan, and Australia. They met in May to discuss the ethics of neurotechnologies and AI, and have now published their conclusions in the journal Nature.
While the authors concede it’s likely to be years or even decades before neural interfaces are used outside of limited medical contexts, they say we are headed towards a future where we can decode and manipulate people’s mental processes, communicate telepathically, and technologically augment human mental and physical capabilities.
“Such advances could revolutionize the treatment of many conditions…and transform human experience for the better,” they write. “But the technology could also exacerbate social inequalities and offer corporations, hackers, governments, or anyone else new ways to exploit and manipulate people. And it could profoundly alter some core human characteristics: private mental life, individual agency, and an understanding of individuals as entities bound by their bodies.”
The researchers identify four key areas of concern: privacy and consent, agency and identity, augmentation, and bias. The first and last topics are already mainstays of warnings about the dangers of unregulated and unconscientious use of machine learning, and the problems and solutions the authors highlight are well-worn.
On privacy, the concerns are much the same as those raised about the reams of personal data corporations and governments are already hoovering up. The added sensitivity of neural data makes suggestions such as an automatic opt-out from data sharing, and bans on individuals selling their own neural data, more feasible.
But other suggestions to use technological approaches to better protect data like “differential privacy,” “federated learning,” and blockchain are equally applicable to non-neural data. Similarly, the ability of machine learning algorithms to pick up bias inherent in training data is already a well-documented problem, and one with ramifications that go beyond just neurotechnology.
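For readers unfamiliar with differential privacy, here is a minimal sketch of one standard construction, the Laplace mechanism: calibrated noise is added so that any single person’s record barely changes a published statistic. The epsilon value and data are illustrative, not from any neurotechnology deployment.

```python
import numpy as np

def private_count(values, epsilon: float = 0.5) -> float:
    """Release a count with differential privacy via Laplace noise."""
    true_count = float(len(values))
    # The sensitivity of a count is 1: adding or removing one
    # person changes the result by at most 1.
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

neural_records = ["subject_%d" % i for i in range(42)]
print(private_count(neural_records))  # ~42, plus privacy-preserving noise
```

Smaller epsilon means more noise and stronger privacy; the same trade-off applies whether the underlying records are shopping histories or brain recordings, which is the authors’ point.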
When it comes to identity, agency, and augmentation, though, the authors show how the convergence of AI and neurotechnology could result in entirely novel challenges that could test our assumptions about the nature of the self, personal responsibility, and what ties humans together as a species.
They ask the reader to imagine if machine learning algorithms combined with neural interfaces allowed a form of ‘auto-complete’ function that could fill the gap between intention and action, or if you could telepathically control devices at great distance or in collaboration with other minds. These are all realistic possibilities that could blur our understanding of who we are and what actions we can attribute as our own.
The authors suggest adding “neurorights” that protect identity and agency to international treaties like the Universal Declaration of Human Rights, or possibly the creation of a new international convention on the technology. This isn’t an entirely new idea; in May, I reported on a proposal for four new human rights to protect people against neural implants being used to monitor their thoughts or interfere with or hijack their mental processes.
But these rights were designed primarily to protect against coercive exploitation of neurotechnology or the data it produces. The concerns around identity and agency are more philosophical, and it’s less clear that new rights would be an effective way to deal with them. While the examples the authors highlight could be forced upon someone, they sound more like something a person would willingly adopt, potentially waiving rights in return for enhanced capabilities.
The authors suggest these rights could enshrine a requirement to educate people about the possible cognitive and emotional side effects of neurotechnologies rather than the purely medical impacts. That’s a sensible suggestion, but ultimately people may have to make up their own minds about what they are willing to give up in return for new abilities.
This leads to the authors’ final area of concern—augmentation. As neurotechnology makes it possible for people to enhance their mental, physical, and sensory capacities, it is likely to raise concerns about equitable access, pressure to keep up, and the potential for discrimination against those who don’t. There’s also the danger that military applications could lead to an arms race.
The authors suggest that guidelines should be drawn up at both the national and international levels to set limits on augmentation, in a similar way to those being drawn up to control gene editing in humans, but they admit that “any lines drawn will inevitably be blurry.” That’s because it’s hard to predict the impact these technologies will have, and building international consensus will be difficult because some cultures lend more weight to things like privacy and individuality than others do.
The temptation could be to simply ban the technology altogether, but the researchers warn that this could simply push it underground. In the end, they conclude that it may come down to the developers of the technology to ensure it does more good than harm. Individual engineers can’t be expected to shoulder this burden alone, though.
“History indicates that profit hunting will often trump social responsibility in the corporate world,” the authors write. “And even if, at an individual level, most technologists set out to benefit humanity, they can come up against complex ethical dilemmas for which they aren’t prepared.”
For this reason, they say, industry and academia need to devise a code of conduct similar to the Hippocratic Oath doctors are required to take, and rigorous ethical training needs to become standard when joining a company or laboratory.
Artificial intelligence has received its fair share of hype recently. However, it’s hype that’s well-founded: IDC predicts worldwide spending on AI and cognitive computing will reach a whopping $46 billion (with a “b”) by 2020, and all the tech giants are jumping on board faster than you can say “ROI.” But what is AI, exactly?
According to Hilary Mason, AI today is being misused as a sort of catch-all term to basically describe “any system that uses data to do anything.” But it’s so much more than that. A truly artificially intelligent system is one that learns on its own, one that’s capable of crunching copious amounts of data in order to create associations and intelligently mimic actual human behavior.
It’s what powers the technology anticipating our next online purchase (Amazon), the virtual assistant that deciphers our voice commands with incredible accuracy (Siri), and even the hipster-friendly recommendation engine that helps you discover new music before your friends do (Pandora). But AI is moving past these consumer-pleasing “nice-to-haves” and getting down to serious business: saving our butts.
Much in the same way robotics entered manufacturing, AI is making its mark in healthcare by automating mundane, repetitive tasks. This is especially true in the case of detecting cancer. By leveraging the power of deep learning, algorithms can now be trained to distinguish between sets of pixels in an image that represent cancer and sets that don’t, not unlike how Facebook’s image-recognition software tags pictures of our friends without us having to type in their names first. Such software can then scour millions of medical images (MRIs, CT scans, etc.) in a single day to detect anomalies at a scale humans just aren’t capable of. That’s huge.
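As a rough sketch of the kind of classifier described here, the PyTorch snippet below maps an image patch to a probability of “cancer” versus “not cancer.” The framework, shapes and layers are illustrative choices of ours; real diagnostic systems are far larger and trained on vast labelled datasets.

```python
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    """Tiny CNN: grayscale 64x64 patch -> probability of anomaly."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
        )
        self.head = nn.Linear(32 * 16 * 16, 1)    # single "cancer" logit

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.head(h))        # probability in (0, 1)

scan_patch = torch.randn(1, 1, 64, 64)            # one stand-in image patch
print(PatchClassifier()(scan_patch))              # e.g. tensor([[0.49]])
```

Trained on millions of labelled patches, a model of this general shape can be slid across a whole scan, flagging regions for a radiologist to review.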
As if that wasn’t enough, these algorithms are constantly learning and evolving, getting better at making these associations with each new data set that gets fed to them. Radiology, dermatology, and pathology will experience a giant upheaval as tech giants and startups alike jump in to bring these deep learning algorithms to a hospital near you.
In fact, some already are: the FDA recently gave their seal of approval for an AI-powered medical imaging platform that helps doctors analyze and diagnose heart anomalies. This is the first time the FDA has approved a machine learning application for use in a clinical setting.
But how efficient is AI compared to humans, really? Well, aside from the obvious fact that software programs don’t get bored or distracted or have to check Facebook every twenty minutes, AI is exponentially better than us at analyzing data.
Take, for example, IBM’s Watson. Watson analyzed genomic data from both tumor cells and healthy cells and was ultimately able to glean actionable insights in a mere 10 minutes. Compare that to the 160 hours it would have taken a human to analyze that same data. Diagnoses aside, AI is also being leveraged in pharmaceuticals to aid in the very time-consuming grunt work of discovering new drugs, and all the big players are getting involved.
But AI is far from being just a behind-the-scenes player. Gartner recently predicted that by 2025, 50 percent of the population will rely on AI-powered “virtual personal health assistants” for their routine primary care needs. What this means is that consumer-facing voice and chat-operated “assistants” (think Siri or Cortana) would, in effect, serve as a central hub of interaction for all our connected health devices and the algorithms crunching all our real-time biometric data. These assistants would keep us apprised of our current state of well-being, acting as a sort of digital facilitator for our personal health objectives and an always-on health alert system that would notify us when we actually need to see a physician.
Slowly, and thanks to the tsunami of data and advancements in self-learning algorithms, healthcare is transitioning from a reactive model to more of a preventative model—and it’s completely upending the way care is delivered. Whether Elon Musk’s dystopian outlook on AI holds any weight or not is yet to be determined. But one thing’s certain: for the time being, artificial intelligence is saving our lives.
The renowned physicist Dr. Richard Feynman once said: “What I cannot create, I do not understand. Know how to solve every problem that has been solved.”
An increasingly influential subfield of neuroscience has taken Feynman’s words to heart. To theoretical neuroscientists, the key to understanding how intelligence works is to recreate it inside a computer. Neuron by neuron, these whizzes hope to reconstruct the neural processes that lead to a thought, a memory, or a feeling.
With a digital brain in place, scientists can test out current theories of cognition or explore the parameters that lead to a malfunctioning mind. As philosopher Dr. Nick Bostrom at the University of Oxford argues, simulating the human mind is perhaps one of the most promising (if laborious) ways to recreate—and surpass—human-level ingenuity.
There’s just one problem: our computers can’t handle the massively parallel nature of our brains. Squished within a three-pound organ are over 100 billion interconnected neurons and trillions of synapses.
Even the most powerful supercomputers today balk at that scale: so far, machines such as the K computer at the Advanced Institute for Computational Science in Kobe, Japan can tackle at most ten percent of neurons and their synapses in the cortex.
This shortfall is partly due to software. As computational hardware inevitably gets faster, algorithms increasingly become the linchpin of whole-brain simulation.
This month, an international team completely revamped the structure of a popular simulation algorithm, developing a powerful piece of technology that dramatically slashes computing time and memory use.
The new algorithm is compatible with a range of computing hardware, from laptops to supercomputers. When future exascale supercomputers hit the scene—projected to be 10 to 100 times more powerful than today’s top performers—the algorithm can immediately run on those computing beasts.
“With the new technology we can exploit the increased parallelism of modern microprocessors a lot better than previously, which will become even more important in exascale computers,” said study author Jakob Jordan of the Jülich Research Centre in Germany. The team published the work in Frontiers in Neuroinformatics.
“It’s a decisive step towards creating the technology to achieve simulations of brain-scale networks,” the authors said.
The Trouble With Scale
Current supercomputers are composed of hundreds of thousands of subdomains called nodes. Each node has multiple processing centers that can support a handful of virtual neurons and their connections.
A main issue in brain simulation is how to effectively represent millions of neurons and their connections inside these processing centers to cut time and power.
One of the most popular simulation algorithms today is the Memory-Usage Model. Before scientists simulate changes in their neuronal network, they need to first create all the neurons and their connections within the virtual brain using the algorithm.
Here’s the rub: for any neuronal pair, the model stores all information about connectivity in each node that houses the receiving neuron—the postsynaptic neuron.
In other words, the presynaptic neuron, which sends out electrical impulses, is shouting into the void; the algorithm has to figure out where a particular message came from by solely looking at the receiver neuron and data stored within its node.
It sounds like a strange setup, but the model allows all the nodes to construct their particular portion of the neural network in parallel. This dramatically cuts down boot-up time, which is partly why the algorithm is so popular.
But as you probably guessed, it comes with severe problems in scaling. The sender node broadcasts its message to all receiver neuron nodes. This means that each receiver node needs to sort through every single message in the network—even ones meant for neurons housed in other nodes.
That means a huge portion of messages get thrown away in each node, because the addressee neuron isn’t present in that particular node. Imagine overworked post office staff skimming an entire country’s worth of mail to find the few that belong to their jurisdiction. Crazy inefficient, but that’s pretty much what goes on in the Memory-Usage Model.
The problem becomes worse as the simulated neuronal network grows. Each node needs to dedicate memory storage space to an “address book” listing all its neural inhabitants and their connections. At the scale of billions of neurons, the “address book” becomes a huge memory hog.
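A toy sketch makes the waste concrete. Under the broadcast scheme, every node receives every spike and must discard the ones addressed to neurons it does not host. The node counts and neuron assignments below are invented for illustration, not NEST’s actual data structures.

```python
NUM_NODES = 4
# Which node hosts which neurons (a real simulator distributes millions).
hosts = {node: set(range(node * 100, (node + 1) * 100))
         for node in range(NUM_NODES)}

spikes = [(7, 305), (12, 8), (250, 399)]  # (sender neuron, target neuron)

for node in range(NUM_NODES):
    # Every node scans the FULL spike list...
    relevant = [s for s in spikes if s[1] in hosts[node]]
    # ...and throws most of it away.
    print(f"node {node}: scanned {len(spikes)} spikes, kept {len(relevant)}")
```

With three spikes and four nodes the waste looks trivial; with billions of spikes and hundreds of thousands of nodes, it dominates the runtime and the memory budget.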
Size Versus Source
The team hacked the problem by essentially adding a zip code to the algorithm.
Here’s how it works. The receiver nodes contain two blocks of information. The first is a database that stores data about all the sender neurons that connect to the nodes. Because synapses come in several sizes and types that differ in their memory consumption, this database further sorts its information based on the type of synapses formed by neurons in the node.
This setup already dramatically differs from its predecessor, in which connectivity data is sorted by the incoming neuronal source, not synapse type. Because of this, the node no longer has to maintain its “address book.”
“The size of the data structure is therefore independent of the total number of neurons in the network,” the authors explained.
The second chunk stores data about the actual connections between the receiver node and its senders. Similar to the first chunk, it organizes data by the type of synapse. Within each type of synapse, it then separates data by the source (the sender neuron).
In this way, the algorithm is far more specific than its predecessor: rather than storing all connection data in each node, the receiver nodes only store data relevant to the virtual neurons housed within.
The team also gave each sender neuron a target address book. During transmission the data is broken up into chunks, with each chunk containing a zip code of sorts directing it to the correct receiving nodes.
Rather than a computer-wide message blast, here the data is confined to the receiver neurons that they’re supposed to go to.
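Continuing the toy model from above, here is a sketch of the “zip code” fix: each sender neuron keeps a small target address book mapping it to the nodes that host its receivers, so spikes travel only where they are needed. The data layout is again invented for illustration; the real algorithm also sorts connection data by synapse type.

```python
hosts = {node: set(range(node * 100, (node + 1) * 100)) for node in range(4)}

def node_of(neuron: int) -> int:
    return neuron // 100  # which node hosts this neuron

# Built once at network construction: sender -> nodes hosting its targets.
address_book = {7: {node_of(305)}, 12: {node_of(8)}, 250: {node_of(399)}}

inbox = {node: [] for node in hosts}
for sender, target in [(7, 305), (12, 8), (250, 399)]:
    for node in address_book[sender]:   # deliver only to relevant nodes
        inbox[node].append((sender, target))

for node, msgs in inbox.items():
    print(f"node {node} received {len(msgs)} spikes")
```

No node ever sees a spike it cannot use, and because each receiver node stores only its own connections, per-node memory no longer grows with the total size of the network.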
Speedy and Smart
The modifications panned out.
In a series of tests, the new algorithm performed much better than its predecessors in terms of scalability and speed. On the supercomputer JUQUEEN in Germany, the algorithm ran 55 percent faster than previous models on a random neural network, mainly thanks to its streamlined data transfer scheme.
At a network size of half a billion neurons, for example, simulating one second of biological events took about five minutes of JUQUEEN runtime using the new algorithm. Its predecessor clocked in at six times that.
This really “brings investigations of fundamental aspects of brain function, like plasticity and learning unfolding over minutes…within our reach,” said study author Dr. Markus Diesmann at the Jülich Research Centre.
As expected, several scalability tests revealed that the new algorithm is far more proficient at handling large networks, reducing the time it takes to process tens of thousands of data transfers by roughly threefold.
“The novel technology profits from sending only the relevant spikes to each process,” the authors concluded. Because computer memory is now uncoupled from the size of the network, the algorithm is poised to tackle brain-wide simulations, the authors said.
The team notes that, while revolutionary, the approach leaves much work to be done. For one, mapping the structure of actual neuronal networks onto the topology of computer nodes should further streamline data transfer. For another, brain simulation software needs to save its progress regularly, so that a computer crash doesn’t force the simulation to start over.
“Now the focus lies on accelerating simulations in the presence of various forms of network plasticity,” the authors concluded. With that solved, the digital human brain may finally be within reach.