There will be an abundance of delicious food throughout the two days, crafted by Karen Short of By Word of Mouth to delight all palates.
22 May 2018, Johannesburg: “Globally, cashless mobile payments are expected to grow five-fold and reach $30 trillion by 2022, according to ARK Invest. This illustrates huge potential for businesses across the continent that are catering to largely unbanked consumers with high smartphone adoption,” said Mic Mann, co-organiser of SingularityU South Africa Summit 2018 and SingularityU Johannesburg chapter leader.
In a world where exponential technologies are becoming more ubiquitous, it’s crucial to understand how to incorporate them into our businesses to remain relevant and grow, as well as how they are also being used to solve some of the world’s grand challenges.
In the run-up to the highly anticipated SingularityU South Africa Summit 2018, the South African SingularityU chapter is hosting a number of non-profit events aimed at strengthening the community. The first in the ‘Future of’ series of chapter events, which addressed the Future of Finance, took place in Johannesburg and Cape Town on 16 and 17 May, respectively. The focus was on the rise of cashless and frictionless payment systems, the blockchain and digital currencies, and the regulation thereof.
Speakers at the SingularityU Johannesburg Chapter event included Antoinette Hoffman – Head of Digital Payment Wallets at Standard Bank Group (the headline sponsor for the SingularityU South Africa Summit); Arif Ismail – Head of Fintech at the South African Reserve Bank; and Nasreen Saunders – Consultant at the Blockchain Academy. At the SingularityU Cape Town Chapter event, Arif Ismail and Nasreen Saunders were joined by Brad Magrath – co-founder of Zoona – and Rupert Sully – Head of Sales and Business Development at SnapScan.
Cashless, frictionless and deviceless payments
While payment solutions have evolved over time, there are still some challenges that need to be addressed to make the world cashless. Antoinette Hoffman explained: “We have learned that it is not the technology that is the challenge, but rather the payment ritual. Cash is king – across the African continent, more so than in South Africa. The challenge is to get the customer to choose the cashless option at the point of sale, which is not always easy. The convenience of cash always being accessible and available, with ATMs on every corner, gives that warm feeling of security that nothing can go wrong. Everyone accepts cash. There is no point of sale that will reject cash because of the perceived notion that fraud doesn’t happen with cash. You have the money in your hand.”
Exponential technologies are removing cash from the equation slowly but surely. More consumers and vendors are making and receiving payments using payment innovations that leverage smartphones, such as SnapScan, Zapper, Zoona and YoCo, among numerous others. They modify front-end processes to improve the customer and merchant experience, without disrupting the underlying payments infrastructure.
Integrated billing solutions allow customers and merchants to transact on integrated mobile shopping platforms with ease and security. Hoffman added that “The future of payments is about reducing the use of cash. Data-driven customer engagement platforms and customer behaviour change at the point of interaction – this will be key in driving the success of reducing cash in the future”. She highlighted the characteristics of successful cashless payment innovations as follows:
- Interoperability – the solution needs to be developed in collaboration with what the customer wants, for example, one app that addresses all payment solutions.
- Simplicity – the solution needs to be easy to use and navigate to drive extensive usage.
- Value-added services – the solution must provide an incentive that will encourage usage, for example a lower cost-option for transactions.
Blockchain and cryptocurrency
Blockchain allows for more direct payment and bookkeeping solutions that eliminate the need for middle-man mediators. Nasreen Saunders explained how BitPesa, which was founded by Elizabeth Rossiello in Nairobi in 2013, is eliminating the high costs associated with cross-border payments across the continent. “Today BitPesa’s largest customer segment is remittance companies, which have reported that BitPesa halves their cost of doing business,” she said.
The BitPesa model complements the banking industry by addressing underserved clients, upselling to existing clients, and adding to intra-bank efficiency. There is a compliance check at every stage of each of their fund-flows, which allows the model to mitigate the risks normally associated with digital currency transactions.
Saunders noted the rise in blockchain applications beyond the financial space, including its use in the publication of medical and public records, to register land rights, execute legal contracts, for voting, and to certify supply chains, among others.
Through the crowdfunding platform Usizo Project, Emaweni Primary School in Soweto had Bitcoin-funded smart-energy meters from Bankymoon installed. The meters allow anyone to make Bitcoin payments directly to the meter to fund the energy and water needs of the school. “This revolutionary approach to foreign aid removes the need for donors to make contributions via an organisation, which adds costs, and it distributes funds transparently. Donors can now directly fund the causes they believe in, and it decreases the probability of fraud, as well as squandered funds,” said Saunders.
Brothers Brett and Brad Magrath founded fintech platform Zoona to cater to Africa’s primarily cash-based economies, high unemployment rate and limited access to formal financial services. “As a solution to overcome this, we developed a financial application that enables grassroots entrepreneurs to become agents, who not only provide key financial services to their communities, but also earn an income for themselves at the same time,” said Brad Magrath. “Creating value lies in solving the customers’ problems, and companies that do this will win.”
Rupert Sully noted, “The change has been in the ways that we are addressing challenges differently, and this is where innovative solutions come into play. At SnapScan we are experimenting with person-to-person transfers, which is fascinating in that it is about people paying one another, instead of a person paying a business. We are investigating whether payments can go into an e-wallet or how one can transact. We are looking at how we can make this easy and convertible.”
Regulation in support of disruption and innovation
The rise in adoption of blockchain technology and cryptocurrencies is demanding new regulations as transactions become more agile, and their reporting more dynamic and automated.
“Regulators now need to balance their rules and principles to support, rather than hinder, innovation. Global collaboration and co-operation with standard-setting bodies will be key and regulators will shift to build staff capacity through deep knowledge on exponential technologies,” said Arif Ismail. “Highly effective regulators will work to create innovation facilitators such as hubs and sandboxes to keep close to emerging developments and foster shared learning.”
Ismail noted seven phenomena and their implications that financial regulators should consider when drawing up national and global regulations. These include: artificial intelligence and autonomous technologies; biotechnology and nanotechnology; cloud and quantum computing; distributed data, ledgers and big data; the current energy landscape; fintech; and GAFA (an acronym for the world’s largest tech companies: Google, Apple, Facebook and Amazon) and e-platforms.
“Through continued conversations and sharing of information between different disciplines and industries, we open up the possibility for innovative solutions to answer the African continent’s challenges,” said Mann in closing.
About the SingularityU South Africa Summit
In its second year, the two-day SingularityU South Africa Summit – hosted in collaboration with Standard Bank, global partner Deloitte and strategic partners HP, Liberty, SAP and MTN – continues its quest to accelerate South Africa’s culture of innovation. The summit will bring together some of the world’s most forward-thinking individuals and the continent’s most curious minds to help #futureproofAfrica.
Delegates at the 2018 SingularityU South Africa Summit can expect compelling presentations and discussions on exponential technologies that can be used to create positive change and foster economic growth on the continent. Popular Singularity University speakers Ramez Naam and David Roberts are returning and will be joined on stage by:
- Aubrey de Grey – biomedical gerontologist, mathematician and longevity expert
- Jason Dunn – co-founder of Made in Space
- Tiffany Vora – Singularity University Principal Faculty in Medicine and Digital Biology
- Jody Medich – Director of Design for SU Labs, speaking about Augmented, Virtual and Extended Reality
- Stacey Ferreira – entrepreneur and bestselling author of 2 Billion Under 20
Save the date for the mind-expanding, innovation-stimulating, SingularityU South Africa Summit 2018, taking place at the Kyalami Grand Prix Circuit in Johannesburg, South Africa, on the 15th and 16th of October 2018. Join the SingularityU Chapter events here: https://singularityuglobal.org/chapters. To learn more about the SingularityU South Africa Summit visit: https://singularityusouthafricasummit.org/. For a media pass, contact Yolanda Zondo: Yolanda.Zondo@edelman.com or +27(0)11-504-4000.
About Singularity University:
Singularity University is a global community that uses exponential technologies to tackle the world’s biggest challenges. Our learning and innovation platform empowers individuals and organisations with the mindset, skillset and network to build break-through solutions that leverage emerging technologies like artificial intelligence, robotics, and digital biology. With our community of entrepreneurs, corporations, development organisations, governments, investors, and academic institutions, we have the necessary ingredients to create a more abundant future for all.
April 2018, Johannesburg: Save the date for the mind-expanding, paradigm-shifting learning experience that is the SingularityU South Africa Summit 2018. It will take place at the Kyalami International Conference Centre in Johannesburg, South Africa, on 15-16 October 2018.
Imagine a world where you are able to reverse the aging process, to repair your body to the way it was in early adulthood. Consider how a tiny corner of South Africa could harvest enough solar energy to power the whole country without burning harmful fossil fuels. Until a few years ago, these types of technology-based concepts seemed a distant possibility, a theory rather than reality, and a costly one at that.
Today, all that has changed with the rise of exponential technologies like robotics, artificial intelligence, biotechnology, nanotechnology, and quantum computing. These disruptive technologies can make that happen, at a faster pace than ever before and at a cost exponentially less than it was 10 years ago. Today, huge strides are being made as different disciplines and technologies converge to answer the questions “How can we positively impact a billion people?” and “How can we make it happen using exponential technology?”
Now in its second year, the two-day SingularityU South Africa Summit – in collaboration with Standard Bank, global partner Deloitte, and strategic partners HP, Liberty, MTN, and SAP – continues its mission to accelerate South Africa’s culture of innovation and disruption to address the world’s greatest challenges. This year we are excited to announce that Power 98.7 will be our official radio media partner.
The SingularityU South Africa Summit is bringing together some of the world’s most forward-thinking individuals, and our society’s most curious minds, to help people think about, and find solutions that could #futureproofAfrica, because the African continent has so much potential, and the future is African.
Rob Nail, Associate Founder and CEO of Singularity University, explains: “The challenge we face is not simply one of understanding how new technological advances might impact our society. We also need to learn to think differently and have an exponential mindset about what is now possible as a result of the 10x impact of technology. This conference is for anyone, young and old, leaders in education, business, science, and technology, essentially anyone who wants to shape the future of our world.”
Mic Mann, Director of Mann Made, and co-organiser of SingularityU South Africa Summit 2018, promises an incredible experience, and a line-up of local and international speakers that will be as compelling as the inaugural summit last year. 2018 will see two summit tracks, one for SU alumni who attended in 2017 and another for those joining us on our journey towards disruption for the first time. SU alumni will get the chance to update their knowledge base and learn about the latest developments from around the continent and the world.
“We believe that the SingularityU South Africa Summit is the perfect platform on which to build a hub for African futurists, technologists, executives, entrepreneurs and innovators to come together, exchange ideas and resolve some of the challenges facing the continent. Thanks to the impressive line-up of speakers and participants last year, we were able to communicate and develop ideas for the future of Africa. We are eager to continue engaging those seeking solutions through technology, at our second annual summit,” says Mic Mann.
“As a universal financial services organisation with digital ambitions, Standard Bank is increasingly leaning on exponentially accelerating technology to innovate, create efficiencies and deliver solutions that make a difference in the lives of customers. We are very proud to be involved in the second SingularityU South Africa Summit as we believe it is a fabulous opportunity to drive new thinking and modes of doing business that can help future-proof Africa. Access to insights, networks and technology is just as important as access to finance and we believe that exponential technologies are helping us find real solutions to some of the major challenges on this continent we call home. As Africans we need to think big about how we can leverage technology to drive innovation. The time to think differently is now – and it’s up to all of us,” says Bellinda Carreira, Executive Head: Interactive Marketing, Standard Bank Group.
Delegates at the 2018 summit can once again expect compelling presentations and discussions on emerging, exponential technologies and innovations that can be used to create positive change and foster economic growth on the continent. They will also learn about the far-reaching impact of the SingularityU South Africa Summit 2017. Popular Singularity University speakers, Ramez Naam and David Roberts are returning, and will be joined on stage by longevity expert Aubrey de Grey, alongside numerous African and South African speakers.
The 2018 SingularityU South Africa Summit will also offer a display of advanced technologies from African and global innovators that will enable delegates and their organisations to keep pace with advances in science and technology, including in the areas of Biotechnology, FinTech, Robotics, AI, Augmented and Virtual Reality and Nanotechnology.
To learn more about the SingularityU South Africa Summit and to buy your ticket, visit: https://singularityusouthafricasummit.org/
Notes to the Editor:
About Singularity University:
Singularity University (SU) is a global learning and innovation community using exponential technologies to tackle the world’s biggest challenges and build an abundant future for all. SU’s collaborative platform empowers individuals and organisations across the globe to learn, connect, and innovate breakthrough solutions using accelerating technologies like artificial intelligence, robotics, and digital biology. A certified benefit corporation headquartered at NASA Research Park in Silicon Valley, SU was founded in 2008 by renowned innovators Ray Kurzweil and Peter H. Diamandis with programme funding from leading organisations including Google, Deloitte, and UNICEF. To learn more, visit SU.org, join us on Facebook, follow us on Twitter @SingularityU, and download the SingularityU Hub mobile app.
About SingularityU International Summits:
SingularityU International Summits are hosted by SU partners across the globe to help local leaders understand how to apply exponential technologies to create positive change and economic growth in their regions. The two-day conferences are a catalyst to accelerate a local culture of innovation, a platform to work on impact initiatives and an opportunity to convene members of the community for networking and discussions around exponential technologies.
For press registrations and further information, please contact:
Edelman South Africa
Syndicated from singularityhub.com (Raya Bidshahri) with permission, edited by Mann Made Media.
We’ve all heard the warnings: automation will disrupt entire industries and put millions of people out of jobs. Up to 45 per cent of existing jobs can already be automated using current technology. However, this may not apply to the education sector. After analysing more than 2 000 work activities for more than 800 occupations, McKinsey & Co. reported that of all the sectors examined, “the technical feasibility of automation is lowest in education.”
There’s no doubt that technology will continue to have a powerful impact on global education, both by improving the learning experience and by increasing global access to education. Massive open online courses (MOOCs), chatbot tutors, and AI-powered lesson plans are a few examples of the digital transformation in global education. But will robots and artificial intelligence ever fully replace teachers?
The first-ever Singularity University South Africa Summit (23-24 August 2017) on the African continent aims to equip attendees with the exponential knowledge and understanding to tackle the country’s and continent’s grand challenges – such as the lack of quality education, high unemployment rates, food security, disaster relief and governance, to name a few – through practical and applicable teachings. Thought leaders and industry specialists from various sectors, such as education, healthcare, finance, and energy, will present at this thought-provoking summit.
Sizwe Nxasana is the founder of Sifiso Learning Group, which is involved in edtech and academic publishing, and also founded Future Nation Schools – a chain of affordable private schools in South Africa. He holds education in very high esteem, holds BCom, BCompt (Hons) and CA (SA) qualifications, and has been conferred with honorary doctorates by the University of Fort Hare, the Durban University of Technology, the University of Johannesburg and Walter Sisulu University. Nxasana is the co-founder and chairman of the National Education Collaboration Trust and was appointed chairman of the National Student Financial Aid Scheme. He is also chairman of the Ministerial Task Team that’s developing a new funding model for students who come from poor and “missing middle” backgrounds. Nxasana understands the importance of education and its impact on individuals – especially women and children from disadvantaged backgrounds – and on the economy, and how it positively affects prosperity, future economic growth and social stability. All of which makes him more than qualified and experienced to speak about the future of education at the inaugural Singularity University South Africa Summit.
The Most Difficult Sector to Automate
While tasks revolving around education – like administration or facilities maintenance – are open to automation, teaching is not.
Effective education involves more than just the transfer of information from teacher to student. Good teaching requires complex social interactions and adaptation to each student’s learning needs and their cultural-social context. An effective teacher is not just responsive to each student’s strengths and weaknesses, but is also empathetic towards their state of mind. Teachers aim to maximise human potential. Vivienne Ming, SU Faculty member in Cognitive Neuroscience, will be speaking on that very topic at the Summit in South Africa.
Students also rely on teachers for life guidance and career mentorship. Deep and meaningful human interaction is crucial, and very difficult, if not impossible, to automate. Automating teaching would require artificial general intelligence (as opposed to narrow or specific intelligence). It would require an AI that understands natural human language, can be empathetic towards emotions, plan, strategise and make impactful decisions under unpredictable circumstances. This would be the kind of machine that can do anything a human can do, and it doesn’t exist – yet.
Just because it’s difficult to fully automate teaching, doesn’t mean AI experts aren’t trying.
Jill Watson, an artificial-intelligence teaching assistant built at the Georgia Institute of Technology on IBM’s Watson platform, is being implemented in universities around the world. She is able to answer students’ questions with 97 per cent accuracy. Technologies like this also have applications in grading and providing feedback. Some AI algorithms are being refined to perform automatic essay scoring. One project has achieved a 0.945 correlation with human graders. This will have remarkable impacts on online education and will dramatically increase online student retention rates.
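To make that 0.945 figure concrete: it is a Pearson correlation coefficient, which measures how closely the automated scores track the human ones (1.0 would be perfect agreement in ranking and relative spacing). A minimal sketch, using made-up scores rather than data from the project mentioned above:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical essay scores: human graders vs. an automated scorer.
human = [68, 72, 55, 90, 81, 60, 77]
ai = [70, 71, 58, 88, 83, 62, 75]
print(round(pearson_r(human, ai), 3))  # close to 1.0: the AI tracks the humans
```

A correlation near 0.945 therefore means the automated scorer rises and falls with the human graders almost in lockstep, which is what makes it viable for feedback at MOOC scale.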
Any student with an internet connection can access information and free courses (MOOCs) from universities around the world, but not all students can receive customised feedback, due to limited manpower. Chatbots like Jill Watson give students the opportunity to have their work reviewed and all their questions answered at minimal cost.
AI algorithms also have a significant role to play in the personalisation of education. Data analysis helps improve students’ results by assessing each student’s learning strengths and weaknesses and creating mass-customised programmes. Algorithms can analyse student data and create flexible programmes that adapt based on real-time feedback. According to the McKinsey Global Institute, all of this data could unlock up to $1.2 trillion in global economic value.
Beyond Automated Teaching
But technological automation alone won’t even begin to tackle the many issues in our global education system. Outdated curricula, standardised tests, and an emphasis on short-term knowledge call for a transformation of how we teach. It’s not sufficient to automate the process. We must be innovative not only with our automation capabilities, but also with educational content, strategy and policies. And on the continent there’s an even more pressing issue: many children lack access to, or can’t afford, quality teachers, school facilities and adequate learning materials in the first place. The lack of education is one of the most critical grand challenges that must be addressed to move Africa forward and help realise the future of human potential.
To learn more about how disruptive and exponential technologies will transform your business, and revolutionise the education sector, book your tickets to the upcoming Singularity University South Africa Summit. The SingularityU South Africa Summit – in collaboration with Standard Bank, global partner Deloitte and strategic partners MTN and SAP – is produced by Mann Made Media and will take place on 23-24 August 2017 at the Kyalami Grand Prix and International Convention Centre.
There is nothing to be gained from blind optimism. But an optimistic mindset can be grounded in rationality and evidence. It may be hard to believe, but we are living in the most exciting time in human history. Despite all of our ongoing global challenges, humanity has never been better off. Not only are we living healthier, happier, and safer lives than ever before, but new technological tools are also opening up a universe of opportunities.
In order to continue to launch moonshot ideas, tackle global challenges, and push humanity forward, it’s important to be intelligently optimistic about the future.
Our Pessimism Bias
When we think about the future of our species, many of us are inherently pessimistic. Our brains are wired to pay more attention to the threats in our personal lives and our world at large.
Many studies have shown we react more strongly to negative stimuli than positive stimuli, and that we dedicate more of our brain resources to negative information. Some psychologists have also shown that we tend to give greater weight to negative thoughts when making decisions, and that we tend to remember negative events in our lives more than positive ones.
There is an evolutionary advantage to these tendencies. We often forget that our neural hardware developed for survival on the African savannah, where staying alive depended on being aware of constant sources of danger. But it may no longer serve its purpose in our modern world.
The media is partially to blame for adding fuel to the fire. In fact, studies show that bad news outweighs good news by as much as seventeen negative news reports for every one good one. News agencies know very well that we will pay more attention to bad news and hence, “If it bleeds, it leads.”
Another team of psychologists, from McGill University, revealed that people tend to choose to read articles with negative tones and respond much faster to headlines with negative words. You’re not constantly seeing negative headlines because the world is getting worse; you’re constantly seeing negative headlines because that’s what audiences react to.
Studies have shown that the public tends to pay most attention to news about war and terrorism and least about science and technology. Consequently, we have trained journalists and news channels to focus on those issues more than on our innovative breakthroughs. What does that say about us as a society?
A Need for Intelligent Optimism
Intelligent optimism is all about being excited about the future in an informed and rational way. The mindset is critical if we are to get everyone excited about the future by highlighting the rapid progress we have made and recognizing the tremendous potential humans have to find solutions to our problems.
Despite ongoing challenges, we have a lot to celebrate about how far we’ve come as a species. As optimists like Peter Diamandis point out, we are living in an era of abundance, and there’s a lot of evidence to prove it.
Let’s be very clear: being intelligently optimistic does not mean we turn our backs to the many global challenges we are faced with today. Our world is far from perfect. The refugee crisis, climate change, wealth inequality, and other global issues are significant and worthy of our attention.
But as physicist and futurist David Deutsch points out, “Problems exist; and problems are soluble with the right knowledge.” Intelligent optimism involves recognizing the many problems we are faced with and acknowledging that we can solve them just as we have overcome many other challenges in the past.
A Critical Mindset for Progress
We can’t let negative headlines and the media shape our perception of ourselves as a species, and the vision we have for the future. As legendary astronomer Carl Sagan said, “For all of our failings, despite our limitations and fallibility, we humans are capable of greatness.”
Hollywood likes to paint disproportionately dystopian visions of the world, and while those are possible futures, we can and must also imagine a future of humanity where we live in abundance, prosperity, and transcendence. We can’t expect current innovators and future generations to make this positive vision a reality if they believe our species is doomed to fail. A positive vision inspires us to continue to contribute to human progress and to feel that we can push humanity forward.
It’s absolutely critical that our journalists cover the many challenges, threats, and issues in our world today. But just as we report the significant negative news in the world, we must also continue to highlight humanity’s accomplishments. After all, how can our youth grow up believing they can have a positive impact on the world if the news is suggesting otherwise?
In 47 CE, Scribonius Largus, court physician to the Roman emperor Claudius, described in his Compositiones a method for treating chronic migraines: place torpedo fish on the scalps of patients to ease their pain with electric shocks. Largus was on the right path: electrical signals in our brains influence how brain cells communicate with one another and in turn affect cognitive processes such as memory, emotion and attention.
The science of brain stimulation—altering electrical signals in the brain—has, needless to say, changed in the past 2,000 years. Today we have a handful of transcranial direct current stimulation (tDCS) devices that deliver constant, low current to specific regions of the brain through electrodes on the scalp, for users ranging from online video-gamers to professional athletes and people with depression. Yet cognitive neuroscientists are still working to understand just how much we can influence brain signals and improve cognition with these techniques.
Brain stimulation by tDCS is non-invasive and inexpensive. Some scientists think it increases the likelihood that neurons will fire, altering neural connections and potentially improving the cognitive skills associated with specific brain regions. Neural networks associated with attention control can be targeted to improve focus in people with attention deficit hyperactivity disorder (ADHD). Or people who have a hard time remembering shopping lists and phone numbers might like to target brain areas associated with short-term (also known as working) memory in order to enhance this cognitive process. However, the effects of tDCS are inconclusive across a wide body of peer-reviewed studies, particularly after a single session. In fact, some experts question whether enough electrical stimulation from the technique is passing through the scalp into the brain to alter connections between brain cells at all.
Notably, the neuroscientist György Buzsáki at New York University presented research conducted with cadavers, concluding that very little of the current administered through tDCS actually travels into the brain, perhaps under 10 percent. Other researchers report the opposite. Recent neuroimaging studies have shown significant increases in neurotransmitter levels and blood flow at the site of tDCS stimulation during a single session. Still, in response to growing concern, many researchers have begun to administer tDCS over a period of days for an additive effect. Studies have shown enhanced treatment effects (yet no increase in side effects) attributable to repeated sessions as opposed to a single session of tDCS.
Even more basic concerns about tDCS research need to be addressed; in particular, tDCS protocols are inconsistent between research labs. For example, one lab might administer tDCS for 20 minutes at the maximum current of 2 mA, while another lab might administer tDCS for 25 minutes at 1 mA, and another still might administer for 15 minutes at 1.5 mA. Combining each of these studies into a literature review proves time-consuming and confusing. We do not know yet what the optimal duration and current levels are for tDCS. Let’s say 1 mA is too low to initiate neural changes and improve cognitive abilities. Then handfuls of papers and years of research could turn out to be quite uninformative.
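One way to see why such protocols resist direct comparison is to compute the total charge each one delivers (current multiplied by time), since the three example regimens above differ on both dimensions at once. A quick back-of-the-envelope sketch, using the hypothetical lab protocols from the text:

```python
# Total charge delivered = current (mA) x duration (minutes),
# for the three example protocols mentioned in the text.
protocols = {
    "Lab A": {"current_mA": 2.0, "minutes": 20},
    "Lab B": {"current_mA": 1.0, "minutes": 25},
    "Lab C": {"current_mA": 1.5, "minutes": 15},
}

charges = {lab: p["current_mA"] * p["minutes"] for lab, p in protocols.items()}

for lab, charge_mA_min in charges.items():
    # Convert to coulombs: 1 mA·min = 0.001 A x 60 s = 0.06 C
    print(f"{lab}: {charge_mA_min} mA·min = {charge_mA_min * 0.06:.2f} C")
```

The three labs deliver 40, 25 and 22.5 mA·min respectively, so a literature review pooling their results is not comparing like with like, even before considering electrode placement or task differences.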
Lately, the technology has been combined with cognitive training to achieve long-term improvements. This is a natural progression of the work. It is thought that tDCS allows neurons to fire more readily. On top of that, just as exercise works out a muscle, a cognitive training task works out the neurons in the brain regions heavily involved in that cognitive process. To take advantage of both techniques, shouldn't we encourage those neurons and brain regions to work even harder during tDCS by engaging the targeted brain areas with a cognitive task? In fact, studies confirm this theory, showing that combining cognitive training with tDCS yields heightened performance and longer-lasting improvements.
In a several-year collaboration between the Cognitive Neuroimaging Lab at the University of Michigan and the Working Memory and Plasticity Lab at the University of California at Irvine, we have been investigating working-memory training in conjunction with tDCS. During the training task, participants are asked to hold progressively more information in their working memory while simultaneously undergoing tDCS. Although the results are still limited and somewhat mixed, evidence suggests that the combination of brain stimulation and training is more effective in improving working memory than either technique alone. For the experimental tDCS group, better performance could be measured even a year after our sessions, an improvement not found with placebo controls. And our collaboration has even uncovered a nuance of combined working-memory training and tDCS: participants who began training with a lower baseline working memory improved more than those who began with a higher baseline performance.
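The article doesn't describe the training task in detail, but the adaptive design it mentions (holding progressively more information in working memory) can be sketched as a simple loop. Everything here is illustrative; the function names and scoring scheme are invented, not the labs' actual protocol.

```python
import random

def run_adaptive_span(trials, respond, start_load=2):
    """Toy adaptive working-memory task: the memory load (sequence length)
    rises after a correct recall and falls after an error, mirroring the
    'progressively more information' design described above.
    `respond(sequence)` stands in for the participant's recalled sequence."""
    load = start_load
    history = []
    for _ in range(trials):
        sequence = [random.randint(0, 9) for _ in range(load)]
        correct = respond(sequence) == sequence
        history.append((load, correct))
        load = load + 1 if correct else max(1, load - 1)
    return history

# A perfect "participant" sees the load climb on every trial:
perfect = run_adaptive_span(3, lambda seq: list(seq))
```

The adaptive staircase keeps the task near the edge of the participant's ability, which is the point of pairing it with stimulation.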
Clearly there is much more work to do to understand tDCS and cognitive training. To create more consistency in the literature, researchers will need to investigate optimal parameters (such as time length and current intensity) for tDCS as a form of cognitive and therapeutic enhancement. A next step is to understand the underlying neural mechanisms of tDCS and cognitive training, which will require a multidisciplinary approach using neuroimaging techniques such as fMRI. This would then make it possible for researchers to visualize (1) activation of brain regions due to tDCS, (2) activation due to tDCS and a cognitive task, and even (3) changes in activation specifically due to combined tDCS and cognitive training over cognitive training alone.
I am cautiously optimistic about the promise of tDCS; cognitive training paired with tDCS specifically could lead to improvements in attention and memory for people of all ages and make some huge changes in society. Maybe we could help to stave off cognitive decline in older adults or enhance cognitive skills, such as focus, in people such as airline pilots or soldiers, who need it the most. Still, I am happy to report that we have at least moved on from torpedo fish.
This article was originally published at Aeon and has been republished under Creative Commons.
For some die-hard tech evangelists, using neural interfaces to merge with AI is the inevitable next step in humankind’s evolution. But a group of 27 neuroscientists, ethicists, and machine learning experts have highlighted the myriad ethical pitfalls that could be waiting.
To be clear, it’s not just futurologists banking on the convergence of these emerging technologies. The Morningside Group estimates that private spending on neurotechnology is in the region of $100 million a year and growing fast, while in the US alone public funding since 2013 has passed the $500 million mark.
The group is made up of representatives from international brain research projects, tech companies like Google and neural interface startup Kernel, and academics from the US, Canada, Europe, Israel, China, Japan, and Australia. They met in May to discuss the ethics of neurotechnologies and AI, and have now published their conclusions in the journal Nature.
While the authors concede it’s likely to be years or even decades before neural interfaces are used outside of limited medical contexts, they say we are headed towards a future where we can decode and manipulate people’s mental processes, communicate telepathically, and technologically augment human mental and physical capabilities.
“Such advances could revolutionize the treatment of many conditions…and transform human experience for the better,” they write. “But the technology could also exacerbate social inequalities and offer corporations, hackers, governments, or anyone else new ways to exploit and manipulate people. And it could profoundly alter some core human characteristics: private mental life, individual agency, and an understanding of individuals as entities bound by their bodies.”
The researchers identify four key areas of concern: privacy and consent, agency and identity, augmentation, and bias. The first and last topics are already mainstays of warnings about the dangers of unregulated and unconscientious use of machine learning, and the problems and solutions the authors highlight are well-worn.
On privacy, the concerns are much the same as those raised about the reams of personal data corporations and governments are already hoovering up. The added sensitivity of neural data makes the suggestion of an automatic opt-out for sharing of neural data and bans on individuals selling their data more feasible.
But other suggestions to use technological approaches to better protect data like “differential privacy,” “federated learning,” and blockchain are equally applicable to non-neural data. Similarly, the ability of machine learning algorithms to pick up bias inherent in training data is already a well-documented problem, and one with ramifications that go beyond just neurotechnology.
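To make the "differential privacy" idea mentioned above concrete: an aggregate query over sensitive records (neural or otherwise) gets calibrated random noise added before release, so the presence or absence of any one individual is masked. This is a minimal sketch; the epsilon value and record format are illustrative only.

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling of a Laplace(0, scale) random variable.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon=0.5):
    """Differentially private count: a counting query changes by at most 1
    when one record is added or removed (sensitivity 1), so Laplace noise
    with scale 1/epsilon hides any single individual's contribution."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller epsilon means more noise and stronger privacy; the trade-off between accuracy and privacy is the whole design question.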
When it comes to identity, agency, and augmentation, though, the authors show how the convergence of AI and neurotechnology could result in entirely novel challenges that could test our assumptions about the nature of the self, personal responsibility, and what ties humans together as a species.
They ask the reader to imagine if machine learning algorithms combined with neural interfaces allowed a form of ‘auto-complete’ function that could fill the gap between intention and action, or if you could telepathically control devices at great distance or in collaboration with other minds. These are all realistic possibilities that could blur our understanding of who we are and what actions we can attribute as our own.
The authors suggest adding “neurorights” that protect identity and agency to international treaties like the Universal Declaration of Human Rights, or possibly the creation of a new international convention on the technology. This isn’t an entirely new idea; in May, I reported on a proposal for four new human rights to protect people against neural implants being used to monitor their thoughts or interfere with or hijack their mental processes.
But these rights were designed primarily to protect against coercive exploitation of neurotechnology or the data it produces. The concerns around identity and agency are more philosophical, and it’s less clear that new rights would be an effective way to deal with them. While the examples the authors highlight could be forced upon someone, they sound more like something a person would willingly adopt, potentially waiving rights in return for enhanced capabilities.
The authors suggest these rights could enshrine a requirement to educate people about the possible cognitive and emotional side effects of neurotechnologies rather than the purely medical impacts. That’s a sensible suggestion, but ultimately people may have to make up their own minds about what they are willing to give up in return for new abilities.
This leads to the authors’ final area of concern—augmentation. As neurotechnology makes it possible for people to enhance their mental, physical, and sensory capacities, it is likely to raise concerns about equitable access, pressure to keep up, and the potential for discrimination against those who don’t. There’s also the danger that military applications could lead to an arms race.
The authors suggest that guidelines should be drawn up at both the national and international levels to set limits on augmentation, in a similar way to those being drawn up to control gene editing in humans, but they admit that “any lines drawn will inevitably be blurry.” That’s because it’s hard to predict the impact these technologies will have, and building international consensus will be difficult because some cultures lend more weight than others to values like privacy and individuality.
The temptation could be to simply ban the technology altogether, but the researchers warn that this could simply push it underground. In the end, they conclude that it may come down to the developers of the technology to ensure it does more good than harm. Individual engineers can’t be expected to shoulder this burden alone, though.
“History indicates that profit hunting will often trump social responsibility in the corporate world,” the authors write. “And even if, at an individual level, most technologists set out to benefit humanity, they can come up against complex ethical dilemmas for which they aren’t prepared.”
For this reason, they say, industry and academia need to devise a code of conduct similar to the Hippocratic Oath doctors are required to take, and rigorous ethical training needs to become standard when joining a company or laboratory.
Artificial intelligence has received its fair share of hype recently. However, it’s hype that’s well-founded: IDC predicts worldwide spending on AI and cognitive computing will reach a whopping $46 billion (with a “b”) by 2020, and all the tech giants are jumping on board faster than you can say “ROI.” But what is AI, exactly?
According to Hilary Mason, AI today is being misused as a sort of catch-all term to basically describe “any system that uses data to do anything.” But it’s so much more than that. A truly artificially intelligent system is one that learns on its own, one that’s capable of crunching copious amounts of data in order to create associations and intelligently mimic actual human behavior.
It’s what powers the technology anticipating our next online purchase (Amazon), or the virtual assistant that deciphers our voice commands with incredible accuracy (Siri), or even the hipster-friendly recommendation engine that helps you discover new music before your friends do (Pandora). But AI is moving past these consumer-pleasing “nice-to-haves” and getting down to serious business: saving our butts.
Much in the same way robotics entered manufacturing, AI is making its mark in healthcare by automating mundane, repetitive tasks. This is especially true in the case of detecting cancer. By leveraging the power of deep learning, algorithms can now be trained to distinguish between sets of pixels in an image that represent cancer and sets that don’t—not unlike how Facebook’s image recognition software tags pictures of our friends without us having to type in their names first. This software can then scour millions of medical images (MRIs, CT scans, etc.) in a single day to detect anomalies on a scope that humans just aren’t capable of. That’s huge.
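The deep convolutional networks used in practice are far beyond a blog snippet, but the underlying idea of learning to separate labeled pixel sets can be illustrated with a toy classifier on synthetic patch intensities. Everything here is invented for illustration; real systems train deep networks on actual scans, not one-feature logistic regression.

```python
import math

def mean_intensity(patch):
    return sum(patch) / len(patch)

def train(patches, labels, epochs=200, lr=0.5):
    # One-feature logistic regression: a toy stand-in for deep learning.
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for patch, y in zip(patches, labels):
            x = mean_intensity(patch)
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w += lr * (y - p) * x   # gradient ascent on the log-likelihood
            b += lr * (y - p)
    return w, b

def predict(model, patch):
    w, b = model
    return 1 if w * mean_intensity(patch) + b > 0 else 0

# Synthetic patches: "suspicious" regions are brighter on average.
suspicious = [[0.9, 0.8, 0.95], [0.85, 0.9, 0.8]]
normal = [[0.1, 0.2, 0.15], [0.05, 0.1, 0.2]]
model = train(suspicious + normal, [1, 1, 0, 0])
```

The "constantly learning" property described next is the same loop run again whenever new labeled data arrives.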
As if that wasn’t enough, these algorithms are constantly learning and evolving, getting better at making these associations with each new data set that gets fed to them. Radiology, dermatology, and pathology will experience a giant upheaval as tech giants and startups alike jump in to bring these deep learning algorithms to a hospital near you.
In fact, some already are: the FDA recently gave their seal of approval for an AI-powered medical imaging platform that helps doctors analyze and diagnose heart anomalies. This is the first time the FDA has approved a machine learning application for use in a clinical setting.
But how efficient is AI compared to humans, really? Well, aside from the obvious fact that software programs don’t get bored or distracted or have to check Facebook every twenty minutes, AI is exponentially better than us at analyzing data.
Take, for example, IBM’s Watson. Watson analyzed genomic data from both tumor cells and healthy cells and was ultimately able to glean actionable insights in a mere 10 minutes. Compare that to the 160 hours it would have taken a human to analyze that same data. Diagnoses aside, AI is also being leveraged in pharmaceuticals to aid in the very time-consuming grunt work of discovering new drugs, and all the big players are getting involved.
But AI is far from being just a behind-the-scenes player. Gartner recently predicted that by 2025, 50 percent of the population will rely on AI-powered “virtual personal health assistants” for their routine primary care needs. What this means is that consumer-facing voice and chat-operated “assistants” (think Siri or Cortana) would, in effect, serve as a central hub of interaction for all our connected health devices and the algorithms crunching all our real-time biometric data. These assistants would keep us apprised of our current state of well-being, acting as a sort of digital facilitator for our personal health objectives and an always-on health alert system that would notify us when we actually need to see a physician.
Slowly, and thanks to the tsunami of data and advancements in self-learning algorithms, healthcare is transitioning from a reactive model to more of a preventative model—and it’s completely upending the way care is delivered. Whether Elon Musk’s dystopian outlook on AI holds any weight or not is yet to be determined. But one thing’s certain: for the time being, artificial intelligence is saving our lives.
The renowned physicist Dr. Richard Feynman once said: “What I cannot create, I do not understand. Know how to solve every problem that has been solved.”
An increasingly influential subfield of neuroscience has taken Feynman’s words to heart. To theoretical neuroscientists, the key to understanding how intelligence works is to recreate it inside a computer. Neuron by neuron, these whizzes hope to reconstruct the neural processes that lead to a thought, a memory, or a feeling.
With a digital brain in place, scientists can test out current theories of cognition or explore the parameters that lead to a malfunctioning mind. As philosopher Dr. Nick Bostrom at the University of Oxford argues, simulating the human mind is perhaps one of the most promising (if laborious) ways to recreate—and surpass—human-level ingenuity.
There’s just one problem: our computers can’t handle the massively parallel nature of our brains. Squished within a three-pound organ are over 100 billion interconnected neurons and trillions of synapses.
Even the most powerful supercomputers today balk at that scale: so far, machines such as the K computer at the Advanced Institute for Computational Science in Kobe, Japan can tackle at most ten percent of neurons and their synapses in the cortex.
This shortfall is partly due to software. As computational hardware inevitably gets faster, algorithms increasingly become the linchpin of whole-brain simulation.
This month, an international team completely revamped the structure of a popular simulation algorithm, developing a powerful piece of technology that dramatically slashes computing time and memory use.
The new algorithm is compatible with a range of computing hardware, from laptops to supercomputers. When future exascale supercomputers hit the scene—projected to be 10 to 100 times more powerful than today’s top performers—the algorithm can immediately run on those computing beasts.
“With the new technology we can exploit the increased parallelism of modern microprocessors a lot better than previously, which will become even more important in exascale computers,” said study author Jakob Jordan of the Jülich Research Centre in Germany, who published the work in Frontiers in Neuroinformatics.
“It’s a decisive step towards creating the technology to achieve simulations of brain-scale networks,” the authors said.
The Trouble With Scale
Current supercomputers are composed of hundreds of thousands of subdomains called nodes. Each node has multiple processing centers that can support a handful of virtual neurons and their connections.
A main issue in brain simulation is how to effectively represent millions of neurons and their connections inside these processing centers to cut time and power.
One of the most popular simulation algorithms today is the Memory-Usage Model. Before scientists simulate changes in their neuronal network, they need to first create all the neurons and their connections within the virtual brain using the algorithm.
Here’s the rub: for any neuronal pair, the model stores all information about connectivity in each node that houses the receiving neuron—the postsynaptic neuron.
In other words, the presynaptic neuron, which sends out electrical impulses, is shouting into the void; the algorithm has to figure out where a particular message came from by solely looking at the receiver neuron and data stored within its node.
It sounds like a strange setup, but the model allows all the nodes to construct their particular portion of the neural network in parallel. This dramatically cuts down boot-up time, which is partly why the algorithm is so popular.
But as you probably guessed, it comes with severe problems in scaling. The sender node broadcasts its message to all receiver neuron nodes. This means that each receiver node needs to sort through every single message in the network—even ones meant for neurons housed in other nodes.
That means a huge portion of messages get thrown away in each node, because the addressee neuron isn’t present in that particular node. Imagine overworked post office staff skimming an entire country’s worth of mail to find the few that belong to their jurisdiction. Crazy inefficient, but that’s pretty much what goes on in the Memory-Usage Model.
The problem becomes worse as the size of the simulated neuronal network grows. Each node needs to dedicate memory storage space to an “address book” listing all its neural inhabitants and their connections. At the scale of billions of neurons, the “address book” becomes a huge memory hog.
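The mail-sorting analogy can be sketched in a few lines. This is a hypothetical toy, not the simulator's actual code: every node scans every spike in the network and keeps only those addressed to neurons it hosts, so discarded messages dominate as the network grows.

```python
def broadcast_deliver(spikes, nodes):
    """spikes: list of (sender, receiver) neuron pairs.
    nodes: dict mapping node id -> set of neurons hosted on that node.
    Under the broadcast scheme, every node scans every spike."""
    delivered = discarded = 0
    for hosted in nodes.values():
        for sender, receiver in spikes:
            if receiver in hosted:
                delivered += 1
            else:
                discarded += 1  # wasted work: message meant for another node
    return delivered, discarded

# With 4 nodes, each spike is examined 4 times but delivered only once:
nodes = {0: {0, 1}, 1: {2, 3}, 2: {4, 5}, 3: {6, 7}}
counts = broadcast_deliver([(0, 2), (4, 7), (6, 1)], nodes)
```

Three spikes trigger twelve scans, nine of them wasted; with hundreds of thousands of nodes, the waste is overwhelming.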
Size Versus Source
The team hacked the problem by essentially adding a zip code to the algorithm.
Here’s how it works. The receiver nodes contain two blocks of information. The first is a database that stores data about all the sender neurons that connect to the nodes. Because synapses come in several sizes and types that differ in their memory consumption, this database further sorts its information based on the type of synapses formed by neurons in the node.
This setup already dramatically differs from its predecessor, in which connectivity data is sorted by the incoming neuronal source, not synapse type. Because of this, the node no longer has to maintain its “address book.”
“The size of the data structure is therefore independent of the total number of neurons in the network,” the authors explained.
The second chunk stores data about the actual connections between the receiver node and its senders. Similar to the first chunk, it organizes data by the type of synapse. Within each type of synapse, it then separates data by the source (the sender neuron).
In this way, the algorithm is far more specific than its predecessor: rather than storing all connection data in each node, the receiver nodes only store data relevant to the virtual neurons housed within.
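The receiver-side layout described above can be pictured as a nested mapping, grouped by synapse type first and then by sender. The field names and values here are illustrative, not the simulator's real data structures.

```python
# Illustrative receiver-node layout: connection data grouped by synapse
# type first, then by sender neuron. A node stores only synapses onto
# neurons it hosts, so its memory no longer scales with the whole network.
node_connections = {
    "excitatory": {
        7:  [{"target": 101, "weight": 0.4, "delay_ms": 1.5}],   # sender 7
        12: [{"target": 103, "weight": 0.2, "delay_ms": 1.0}],
    },
    "inhibitory": {
        3:  [{"target": 101, "weight": -0.6, "delay_ms": 2.0}],
    },
}

def local_synapses(connections, synapse_type, sender):
    # Look up this node's synapses of a given type from a given sender.
    return connections.get(synapse_type, {}).get(sender, [])
```

Grouping by synapse type first matters because different synapse types differ in memory footprint, as the article notes.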
The team also gave each sender neuron a target address book. During transmission the data is broken up into chunks, with each chunk containing a zip code of sorts directing it to the correct receiving nodes.
Rather than a computer-wide message blast, the data is confined to the receiver nodes it is actually meant for.
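Put together, the sender-side "address books" turn the broadcast into targeted routing. This sketch uses invented names to illustrate the scheme, not the simulator's real API.

```python
from collections import defaultdict

def build_routes(connections, neuron_to_node):
    """connections: list of (sender, receiver) neuron pairs.
    Returns each sender's target 'address book': the set of nodes that
    host at least one of its receiver neurons."""
    routes = defaultdict(set)
    for sender, receiver in connections:
        routes[sender].add(neuron_to_node[receiver])
    return routes

def routed_deliver(spiking_senders, routes):
    # Each spike travels only to the nodes in its sender's address book.
    inbox = defaultdict(list)
    for sender in spiking_senders:
        for node in routes[sender]:
            inbox[node].append(sender)
    return dict(inbox)

# Neuron 0 connects only to neurons on node 1, neuron 1 only to node 0:
routes = build_routes([(0, 2), (0, 3), (1, 0)], {0: 0, 1: 0, 2: 1, 3: 1})
mail = routed_deliver([0, 1], routes)
```

No node ever sees a spike it has no use for, which is exactly the "sending only the relevant spikes" gain the authors report.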
Speedy and Smart
The modifications panned out.
In a series of tests, the new algorithm performed much better than its predecessors in terms of scalability and speed. On the supercomputer JUQUEEN in Germany, the algorithm ran 55 percent faster than previous models on a random neural network, mainly thanks to its streamlined data transfer scheme.
At a network size of half a billion neurons, for example, simulating one second of biological events took about five minutes of JUQUEEN runtime using the new algorithm. Its predecessor clocked in at six times that.
This really “brings investigations of fundamental aspects of brain function, like plasticity and learning unfolding over minutes…within our reach,” said study author Dr. Markus Diesmann at the Jülich Research Centre.
As expected, several scalability tests revealed that the new algorithm is far more proficient at handling large networks, reducing the time it takes to process tens of thousands of data transfers by roughly threefold.
“The novel technology profits from sending only the relevant spikes to each process,” the authors concluded. Because computer memory is now uncoupled from the size of the network, the algorithm is poised to tackle brain-wide simulations, the authors said.
Revolutionary as the technique is, the team notes that a lot more work remains to be done. For one, mapping the structure of actual neuronal networks onto the topology of computer nodes should further streamline data transfer. For another, brain simulation software needs to regularly save its progress so that, in case of a computer crash, the simulation doesn’t have to start over.
“Now the focus lies on accelerating simulations in the presence of various forms of network plasticity,” the authors concluded. With that solved, the digital human brain may finally be within reach.
Gene editing is in the news a lot these days, but what is it exactly? Gene editing is the process of making precise and permanent changes to living things at the level of DNA, or more specifically, to the four molecular building blocks of DNA.
Though there are multiple ways to perform targeted gene editing, the most commonly discussed method these days is CRISPR/Cas9.
Watch this episode of Tech-x-planations to learn how CRISPR/Cas9 works.