Ask any neuroscientist to draw you a neuron, and it’ll probably look something like a star with two tails: one stubby with extensive tree-like branches, the other willowy, lengthy and dotted with spindly spikes.
While a decent abstraction, this cartoonish image hides the uncomfortable truth that scientists still don’t know much about what many neurons actually look like, not to mention the extent of their connections.
But without untangling the jumbled mess of neural wires that zigzag across the brain, scientists are stumped when trying to answer one of the most fundamental mysteries of the brain: how individual neuronal threads carry and assemble information, forming the basis of our thoughts, memories, consciousness, and self.
What if there were a way to virtually trace and explore the brain’s serpentine fibers, much like the way Google Maps allows us to navigate the concrete tangles of our cities’ highways?
Thanks to an interdisciplinary team at Janelia Research Campus, we’re on our way. Meet MouseLight, the most extensive map of the mouse brain ever attempted. The ongoing project has an ambitious goal: reconstructing thousands—if not more—of the mouse’s 70 million neurons into a 3D map. (You can play with it here!)
With map in hand, neuroscientists around the world can begin to answer how neural circuits are organized in the brain, and how information flows from one neuron to another across brain regions and hemispheres.
The first release, presented Monday at the Society for Neuroscience Annual Conference in Washington, DC, contains information about the shapes and sizes of 300 neurons.
And that’s just the beginning.
MouseLight is hardly the first rodent brain atlasing project.
The Mouse Brain Connectivity Atlas at the Allen Institute for Brain Science in Seattle tracks neuron activity across small circuits in an effort to trace a mouse’s connectome—a complete atlas of how the firing of one neuron links to the next.
MICrONS (Machine Intelligence from Cortical Networks), the $100 million government-funded “moonshot,” hopes to distill brain computation into algorithms for more powerful artificial intelligence. Its first step? Brain mapping.
What makes MouseLight stand out is its scope and level of detail.
MICrONS, for example, is focused on dissecting a cubic millimeter of the mouse visual processing center. In contrast, MouseLight involves tracing individual neurons across the entire brain.
And while connectomics outlines the major connections between brain regions, the bird’s-eye view entirely misses the intricacies of each individual neuron. This is where MouseLight steps in.
Slice and Dice
At only a fraction of the width of a human hair, neuron projections are hard to capture in their native state. Tug or squeeze the brain too hard, and the long, delicate branches distort or even shred into bits.
In fact, previous attempts at trying to reconstruct neurons at this level of detail topped out at just a dozen, stymied by technological hiccups and sky-high costs.
A few years ago, the MouseLight team set out to automate the entire process, with a few time-saving tweaks. Here’s how it works.
After injecting a mouse with a virus that causes a handful of neurons to produce a green-glowing protein, the team treated the brain with a sugar alcohol solution. This step “clears” the brain, rendering the beige-colored organ translucent, making it easier for light to penetrate and boosting the signal-to-background noise ratio. The brain is then glued onto a small pedestal, ready for imaging.
Building upon an established method called “two-photon microscopy,” the team then tweaked several parameters to reduce imaging time from days (or weeks) down to a fraction of that. Endearingly known as “2P” by the experts, this type of laser microscope zaps the tissue with just enough photons to light up a single plane without damaging the tissue—sharper plane, better focus, crisper image.
After taking an image, the setup activates its vibrating razor and shaves off the imaged section of the brain—a wispy slice about 200 micrometers thick. The process is repeated until the whole brain is imaged.
The resulting images strikingly highlight every nook and cranny of a neuronal branch, popping out against a pitch-black background. But pretty pictures come at a hefty data cost: imaging a single brain generates a whopping 20 terabytes of data—roughly the storage space of 4,000 DVDs, or 10,000 hours of movies.
Stitching individual images back into 3D is an image-processing nightmare. The MouseLight team used a combination of computational power and human prowess to complete this final step.
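To get a feel for the computational side, consider tile registration, one classic building block of stitching. The sketch below is purely illustrative (it is not the MouseLight team’s actual pipeline): it uses phase correlation, a standard FFT-based trick, to recover the offset between two overlapping image tiles.

```python
# Illustrative only -- not the MouseLight team's actual pipeline.
# Phase correlation is a standard FFT-based way to find the translation
# that best aligns two overlapping image tiles.
import numpy as np

def phase_correlation_offset(tile_a: np.ndarray, tile_b: np.ndarray):
    """Estimate the (row, col) shift that maps tile_b onto tile_a."""
    fa = np.fft.fft2(tile_a)
    fb = np.fft.fft2(tile_b)
    # Normalized cross-power spectrum; its inverse FFT peaks at the shift.
    cross_power = fa * np.conj(fb)
    cross_power /= np.abs(cross_power) + 1e-12  # guard against divide-by-zero
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Indices past the midpoint correspond to negative shifts.
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, correlation.shape))

# Toy check: shift a random "tile" and recover the offset.
rng = np.random.default_rng(0)
a = rng.random((128, 128))
b = np.roll(a, shift=(-5, 3), axis=(0, 1))
print(phase_correlation_offset(a, b))  # -> (5, -3)
```

Scaling that simple idea to terabytes of overlapping 3D tiles, complete with optical distortions, is what makes whole-brain stitching such a nightmare.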
The reconstructed images are handed off to a mighty team of seven trained neuron trackers. With the help of tracing algorithms developed in-house and a keen eye, each member can track roughly a neuron a day—significantly faster than the week or so previously needed per neuron.
A Numbers Game
Even with just 300 fully reconstructed neurons, MouseLight has already revealed new secrets of the brain.
While it’s widely accepted that axons, the neurons’ outgoing projections, can span the entire length of the brain, these extra-long connections were considered relatively rare. (In fact, one previously discovered “giant neuron” was thought to link to consciousness because of its expansive connections.)
MouseLight blows that theory out of the water.
The data clearly shows that “giant neurons” are far more common than previously thought. For example, four neurons normally associated with taste had wiry branches that stretched all the way into brain areas that control movement and process touch.
“We knew that different regions of the brain talked to each other, but seeing it in 3D is different,” says Dr. Eve Marder at Brandeis University.
“The results are so stunning because they give you a really clear view of how the whole brain is connected.”
With a tried-and-true system in place, the team is now aiming to add 700 neurons to their collection within a year.
But appearance is only part of the story.
We can’t tell everything about a person simply by how they look. Neurons are the same: scientists can only infer so much about a neuron’s function by looking at its shape and position. The team also hopes to profile the gene expression patterns of each neuron, which could provide more hints about their roles in the brain.
MouseLight essentially dissects the neural infrastructure that allows information traffic to flow through the brain. These anatomical highways are just the foundation. Just as in Google Maps, roads form only the critical first layer of the map. Street view, traffic information, and other add-ons come later for a complete look at cities in flux.
The same will happen for understanding our ever-changing brain.
Technology has the potential to solve some of our most intractable healthcare problems. In fact, it’s already doing so, with inventions getting us closer to a medical Tricorder, progress toward 3D-printed organs, and AIs that can do point-of-care diagnosis.
No doubt these applications of cutting-edge tech will continue to push the needle on progress in medicine, diagnosis, and treatment. But what if some of the healthcare hacks we need most aren’t high-tech at all?
According to Dr. Darshak Sanghavi, this is exactly the case. In a talk at Singularity University’s Exponential Medicine last week, Sanghavi told the audience, “We often think in extremely complex ways, but I think a lot of the improvements in health at scale can be done in an analog way.”
Sanghavi is the chief medical officer and senior vice president of translation at OptumLabs, and was previously director of preventive and population health at the Center for Medicare and Medicaid Innovation, where he oversaw the development of large pilot programs aimed at improving healthcare costs and quality.
“How can we improve health at scale, not for only a small number of people, but for entire populations?” Sanghavi asked. With programs that benefit a small group of people, he explained, what tends to happen is that the average health of a population improves, but the disparities across the group worsen.
“My mantra became, ‘The denominator is everybody,’” he said. He shared details of some low-tech but crucial fixes he believes could vastly benefit the US healthcare system.
1. Regulatory Hacking
Healthcare regulations are ultimately what drive many aspects of patient care, for better or worse. Worse because the mind-boggling complexity of regulations (exhibit A: the Affordable Care Act is reportedly about 20,000 pages long) can make it hard for people to get the care they need at a cost they can afford, but better because, as Sanghavi explained, tweaking these regulations in the right way can result in across-the-board improvements in a given population’s health.
An adjustment to Medicare hospitalization rules makes for a relevant example. The code was updated to state that if people who left the hospital were re-admitted within 30 days, that hospital had to pay a penalty. The result was hospitals taking more care to ensure patients were released not only in good health, but also with a solid understanding of what they had to do to take care of themselves going forward. “Here, arguably the writing of a few lines of regulatory code resulted in a remarkable decrease in 30-day re-admissions, and the savings of several billion dollars,” Sanghavi said.
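The “regulatory code” framing translates almost literally into software. Here is a deliberately toy rendering of the rule described above; the function name and the penalty structure are hypothetical, and the real CMS program uses risk-adjusted excess-readmission ratios rather than a simple per-case flag.

```python
# A toy rendering of the rule described above, not the actual CMS
# formula (the real program uses risk-adjusted excess-readmission
# ratios). All names here are hypothetical.
from datetime import date

READMISSION_WINDOW_DAYS = 30

def triggers_penalty(discharged: date, readmitted: date) -> bool:
    """The 'few lines of regulatory code': a readmission within 30
    days of discharge costs the discharging hospital a penalty."""
    return (readmitted - discharged).days <= READMISSION_WINDOW_DAYS

print(triggers_penalty(date(2017, 1, 1), date(2017, 1, 20)))  # True
print(triggers_penalty(date(2017, 1, 1), date(2017, 3, 1)))   # False
```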
2. Long-Term Focus
It’s easy to focus on healthcare hacks that have immediate, visible results—but what about fixes whose benefits take years to manifest? How can we motivate hospitals, regulators, and doctors to take action when they know they won’t see changes anytime soon?
“I call this the reality TV problem,” Sanghavi said. “Reality shows don’t really care about who’s the most talented recording artist—they care about getting the most viewers. That is exactly how we think about health care.”
Sanghavi’s team wanted to address this problem for heart attacks. They found they could reliably determine someone’s 10-year risk of having a heart attack based on a simple risk profile. Rather than monitoring patients’ cholesterol, blood pressure, weight, and other individual factors, the team took the average 10-year risk across entire provider panels, then made providers responsible for controlling those populations.
“Every percentage point you lower that risk, by hook or by crook, you get some people to stop smoking, you get some people on cholesterol medication. It’s patient-centered decision-making, and the provider then makes money. This is the world’s first predictive analytic model, at scale, that’s actually being paid for at scale,” he said.
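A minimal sketch of that panel-level idea might look like the following. Every number and the payment rate are invented, and in practice the individual risk scores would come from an established clinical model; the point is simply that providers are scored and paid on the panel average, not on individual patients.

```python
# Hypothetical sketch: score and pay providers on their panel's average
# 10-year risk, not on individual patients. Risk scores would come from
# an established model in practice; here they are just inputs, and the
# payment rate is invented.

def panel_average_risk(patient_risks: list[float]) -> float:
    """Mean 10-year heart-attack risk (in percent) across a panel."""
    return sum(patient_risks) / len(patient_risks)

def provider_payment(baseline: float, current: float,
                     dollars_per_point: float = 10_000.0) -> float:
    """Pay for every percentage point the panel average drops."""
    return max(0.0, baseline - current) * dollars_per_point

before = panel_average_risk([12.0, 8.5, 20.0, 15.5])  # 14.0%
after = panel_average_risk([10.0, 8.0, 17.0, 13.0])   # 12.0%
print(provider_payment(before, after))  # 2 points -> 20000.0
```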
3. Aligned Incentives
If hospitals are held accountable for the health of the communities they’re based in, those hospitals need to have the right incentives to follow through. “Hospitals have to spend money on community benefit, but linking that benefit to a meaningful population health metric can catalyze significant improvements,” Sanghavi said.
He used smoking cessation as an example. His team designed a program where hospitals were given a score (determined by the Centers for Disease Control and Prevention) based on the smoking rate in the counties where they’re located, then given monetary incentives to improve their score. Improving their score, in turn, resulted in better health for their communities, which meant fewer patients to treat for smoking-related health problems.
4. Social Determinants of Health
Social determinants of health include factors like housing, income, family, and food security. The answer to getting people to pay attention to these factors at scale, and creating aligned incentives, Sanghavi said, is “Very simple. We just have to measure it to start with, and measure it universally.”
His team was behind a $157 million pilot program called Accountable Health Communities that went live this year. The program requires that all Medicare and Medicaid beneficiaries be screened for various social determinants of health. With all that data being collected, analysts can pinpoint local trends, then target funds to address the underlying problem, whether it’s job training, drug use, or nutritional education. “You’re then free to invest the dollars where they’re needed…this is how we can improve health at scale, with very simple changes in the incentive structures that are created,” he said.
5. ‘Securitizing’ Public Health
Sanghavi’s final point tied back to his discussion of aligning incentives. As misguided as it may seem, the reality is that financial incentives can make a huge difference in healthcare outcomes, from both a patient and a provider perspective.
Sanghavi’s team did an experiment in which they created outcome benchmarks for three major health problems that exist across geographically diverse areas: smoking, adolescent pregnancy, and binge drinking. The team proposed measuring the baseline of these issues then creating what they called a social impact bond. If communities were able to lower their frequency of these conditions by a given percent within a stated period of time, they’d get paid for it.
“What that did was essentially say, ‘you have a buyer for this outcome if you can achieve it,’” Sanghavi said. “And you can try to get there in any way you like.” The program is currently in CMS clearance.
AI and Robots Not Required
Using robots to perform surgery and artificial intelligence to diagnose disease will undoubtedly benefit doctors and patients around the US and the world. But Sanghavi’s talk made it clear that our healthcare system needs much more than this, and that improving population health on a large scale is really a low-tech project—one involving more regulatory and financial innovation than technological innovation.
“The things that get measured are the things that get changed,” he said. “If we choose the right outcomes to predict long-term benefit, and we pay for those outcomes, that’s the way to make progress.”
Arthur C. Clarke, a British science fiction writer, is well known for once writing, “Any sufficiently advanced technology is indistinguishable from magic.”
Consumer virtual reality is going through a rough patch as high expectations and hype have deflated somewhat, but when VR does work, it can feel a bit like magic.
At Singularity University’s Exponential Medicine Summit this week, the audience learned about fascinating virtual reality applications within a mix of medical contexts.
Here’s a look at two we found particularly interesting.
Surgical Training in Virtual Reality
Shafi Ahmed, co-founder of Virtual Medics and Medical Realities, spoke again this year at Exponential Medicine. Last year we wrote about Ahmed’s efforts to solve the huge global shortage of trained surgeons:
“According to the Lancet commission on global surgery, the surgical workforce would have to double to meet the needs of basic surgical care for the developing world by 2030. Dr. Ahmed imagines being able to train thousands of surgeons simultaneously in virtual reality.”
With this in mind, Ahmed made a splash back in 2014 when he reached 14,000 surgeons across 100 different countries by using Google Glass to stream a surgical training session. In 2016, Ahmed took this a step further by live-streaming a cancer surgery in virtual reality that was shot in 360-degree video while he removed a colon tumor from a patient.
Ahmed’s philosophy is clear. He says, “Forget one-to-one. My idea is one to many. I want to share knowledge with the masses.” To achieve this, his company Medical Realities is building the world’s first interactive VR training module for surgeons. After these successes, Ahmed began searching for other low-cost, high-tech platforms to leverage for surgical training. He landed on social media.
Last year, Ahmed used Snapchat glasses to record an operation in ten-second clips that were uploaded to his Snapchat story. It was a huge success, receiving two million views and 100,000 YouTube downloads. Ahmed said, “It’s incredible reach, and it’s free. That’s the kind of world we live in.” Ahmed also streamed Twitter’s first live operation.
Now, Ahmed is working with virtual reality company Thrive to push the boundaries of remote collaboration in virtual reality. The platform enables doctors to remotely log into a shared virtual office to discuss patient cases. Ahmed showed an example of four doctors from four different locations who logged into a virtual office together to discuss a patient’s case in real time. Inside the virtual office the doctors were even able to access and review patients’ medical files.
Virtual Reality for Therapeutics
Brennan Spiegel, a pioneer of VR in healthcare at Cedars-Sinai, has witnessed firsthand the positive impact of using virtual reality with patients for therapeutic treatment. At Cedars-Sinai, Spiegel leads a team that studies how technologies like smartphone apps, VR, wearable biosensors, and social media can improve health outcomes.
Some of the findings have been incredible.
Spiegel told the story of a young adult suffering from severe Crohn’s disease, which had forced him to spend 100 days of the past year in the hospital. The most healing environment he could think of, however, was his grandmother’s living room. Spiegel’s team was able to place a Samsung 360 camera in the grandmother’s living room, then give the patient a VR headset to virtually transport him there. The experience nearly brought him to tears and is a perfect example of how VR can make patients in hospital treatment more comfortable.
Spiegel’s team also had success using VR to help men with high blood pressure. Inside of the VR program, users are transported into a kitchen and educated on which types of food contain sodium. The program then brings users inside a human body, where they can see the targeted impact of the sodium intake.
Spiegel’s dream is to see a VR pharmacy where the right treatment experience is mapped to the right patient.
Virtual and augmented reality are creating novel methods in health care for treatment, training, and doctor collaboration. These are just a few examples of practical uses showing VR’s potential applied to medicine. In many ways, however, this is only the beginning of what’s to come as VR and AR mature.
Technology doesn’t always need to feel like magic, but when it can for a struggling patient or doctor seeking access to training, that’s an extraordinary thing for health care.
To Bob Hariri, the body is a machine. Hariri is a surgeon, entrepreneur, and biomedical scientist. But perhaps it’s his time flying jets that most strongly lends itself to such thinking.
“I’ve been flying longer than anything I’ve done in my life,” Hariri said in an interview with Peter Diamandis this week at Singularity University’s Exponential Medicine in San Diego.
“You know why aviation is so incredibly safe? Because the machine, the airplane—and a human being is a machine—that airplane undergoes a continuous process of renovation and repair.”
Pilots take care of their planes by replacing parts before they wear out, Hariri said. Medicine, on the other hand, is reactive. Nothing gets replaced until it’s already broken. Hariri got into regenerative medicine and founded Celgene Cellular Therapeutics to help remedy this situation.
Regenerative medicine targets the body’s most basic parts: its trillions of specialized cells. Though medicine is reactive, the body itself is a bit like a jet, one that makes new parts to replace old ones on the fly. The vast majority of cells in the body are no more than two years old, Hariri said. (Although, the time it takes new cells to replace old cells varies by type.)
Our regenerative capability is greatest before we’re born. In an onstage interview, Hariri explained he first glimpsed this as a young surgeon. It’s long been possible to identify certain defects in a developing fetus and surgically repair them. The surgery itself requires serious incisions, but after further development in the womb, the baby is born with no scar.
“Is this a way to unlock a human being’s inner salamander?” he wondered.
The human body’s natural regenerative powers derive from its reservoir of stem cells. These cells can differentiate into any other cell type in the body, and they can also divide and replicate themselves to keep the reservoir stocked. Throughout life, the body taps into its stock of stem cells to repair, replace, and renovate—basically, to keep the system fit for flight.
Over the years, however, the regenerative reservoir is depleted. The big question: Can we therapeutically administer stem cells to supplement the body’s natural regenerative tendencies? “The old excitement and optimism around ‘can a stem cell a day keep the doctor away’ is something that drove me into the field,” Hariri said.
So, how near are we to more proactively maintaining our bodies like one of his jets?
There’s explanatory power in analogy, but pose it to scientists specializing in nearly any field, and you’ll get a “Yes, but…” In biology and medicine, the story is especially complex. As a scientist in a still-developing field, Hariri likely sympathizes with this.
Stem cell medicine has been evangelized for a long time. We tend to see the endgame of a new discovery at the beginning. This is a useful catalyst for more research, but it takes years of further problem-solving between catalyst and common clinical use.
“I call this the fog of cellular medicine,” Hariri said. “There was an absence of a deep understanding of the underlying biology of stem cells. More importantly, there was absence of understanding what cell was actually going to be the one we derive into a product.”
Going from discovery to product in medicine faces a number of hurdles. Questions of source quality, scalability, cost, and regulatory standards are challenges still being solved. Stem cell therapies have also had their fair share of criticism, depending on the source of the cells. Embryonic stem cells, for instance, kicked up dust earlier this century.
But after 20 years, Hariri said, the field has come a long way.
Today, there are a number of stem cell sources including somatic stem cells found in tissues throughout the body. Scientists have even learned to make stem cells by genetically reprogramming differentiated adult cells, such as skin or blood cells.
Hariri’s favored source lines up with that initial inspiration, the regenerative potential present in the womb. Placental stem cells, Hariri said, are great because they’re of the highest quality. Over the years stem cells in the body accumulate defects, but placental stem cells are still uncorrupted, a fresh set of instructions to reboot the system. Further, such cells are fairly easy to obtain, since the placenta is typically discarded after birth.
There also appears to be regulatory progress. “Right now, most of us recognize that there are beginning to be approvals using cellular medicine to treat cancer, to treat a variety of autoimmune diseases, degenerative diseases,” Hariri said.
And unsurprisingly, there’s plenty of demand too—although, perhaps because of the field’s long promise, this demand is sometimes outpacing science. Stem cell clinics are opening worldwide, often in less-regulated environs, and this can be problematic, even dangerous.
Three women suffered “severe, permanent eye damage” after receiving stem cell injections at a Florida clinic. The FDA acknowledges the potential of stem cell therapy but warns that patients hoping for cures that don’t yet exist are vulnerable to unscrupulous providers.
“It’s time for the industry to be corralled and controlled in order to make sure that the products are of the highest quality and the clinical application is at the highest standards,” Hariri said.
Still, Hariri is optimistic and looking to the future. His dream is to extend cell therapies beyond the most severe diseases to more common uses, such as treating diabetes and concussion or repairing cartilage. Even further out, he sees it as a way to prolong good health much further into old age—to keep our bodily machine in good nick.
It isn’t about “living forever,” which strikes Hariri’s engineering brain as a rather vague notion. It’s about living a full life with the extra years modern medicine has given us.
“[If we can do it] in orthopedics, it’ll maintain a level of mobility that we’ve never seen before. If we can do it in the brain; if we can do it in the heart; if we can do it in other organ systems, I think that’s the key to living to 100 and being as active at 95 as you were at 45.”
The last few decades show progress can feel incremental as knowledge and ability catch up to dreams. But perhaps, in the years ahead, his vision will be within our grasp. Hariri said he’s launching a new company next month aimed at cellular therapy 2.0. The effort will target the more common diseases that make life less worth living as we get older.
“Our objective is we want you to die young as old as possible,” he said.
Recently, I interviewed my friend Ray Kurzweil at the Googleplex for a 90-minute webinar on disruptive and dangerous ideas, a prelude to my fireside chat with Ray at Abundance 360 this January.
It’s my pleasure to share with you three compelling ideas that came from our conversation.
1. The nation-state will soon be irrelevant.
Historically, we humans don’t like change. We like waking up in the morning and knowing that the world is the same as the night before.
That’s one reason why government institutions exist: to stabilize society.
But how will this change in 20 or 30 years? What role will stabilizing institutions play in a world of continuous, accelerating change?
“Institutions stick around, but they change their role in our lives,” Ray explained. “They already have. The nation-state is not as profound as it was. Religion used to direct every aspect of your life, minute to minute. It’s still important in some ways, but it’s much less important, much less pervasive. [It] plays a much smaller role in most people’s lives than it did, and the same is true for governments.”
Ray continues: “We are fantastically interconnected already. Nation-states are not islands anymore. So we’re already much more of a global community. The generation growing up today really feels like world citizens much more than ever before, because they’re talking to people all over the world, and it’s not a novelty.”
I’ve previously shared my belief that national borders have become extremely porous, with ideas, people, capital, and technology rapidly flowing between nations. In decades past, your cultural identity was tied to your birthplace. In the decades ahead, your identity will be more a function of many other external factors. If you love space, you’ll be connected with fellow space-cadets around the globe more than you’ll be tied to someone born next door.
2. We’ll hit longevity escape velocity before we realize we’ve hit it.
Ray and I share a passion for extending the healthy human lifespan.
I frequently discuss Ray’s concept of “longevity escape velocity”—the point at which, for every year that you’re alive, science is able to extend your life for more than a year.
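The definition lends itself to a toy calculation. The sketch below is not a forecast, and every number in it is invented; it just shows why the threshold matters: once each year of research returns more than one year of remaining life expectancy, that expectancy stops shrinking.

```python
# A toy illustration of the definition, not a prediction; every number
# is made up. If each year lived returns `gain_per_year` years of
# remaining life expectancy, expectancy shrinks when gain < 1 and
# grows once gain > 1 -- that crossover is "escape velocity."

def remaining_expectancy(initial_years: float, gain_per_year: float,
                         horizon: int) -> list[float]:
    remaining = initial_years
    trajectory = []
    for _ in range(horizon):
        remaining += gain_per_year - 1  # live one year, gain some back
        trajectory.append(round(remaining, 1))
    return trajectory

print(remaining_expectancy(30, 0.5, 5))  # [29.5, 29.0, 28.5, 28.0, 27.5]
print(remaining_expectancy(30, 1.2, 5))  # [30.2, 30.4, 30.6, 30.8, 31.0]
```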
Scientists are continually extending the human lifespan, helping us cure heart disease, cancer, and eventually, neurodegenerative disease. This will keep accelerating as technology improves.
During my discussion with Ray, I asked him when he expects we’ll reach “escape velocity…”
His answer? “I predict it’s likely just another 10 to 12 years before the general public will hit longevity escape velocity.”
“At that point, biotechnology is going to have taken over medicine,” Ray added. “The next decade is going to be a profound revolution.”
From there, Ray predicts that nanorobots will “basically finish the job of the immune system,” with the ability to seek and destroy cancerous cells and repair damaged organs.
As we head into this sci-fi-like future, your most important job for the next 15 years is to stay alive. “Wear your seatbelt until we get the self-driving cars going,” Ray jokes.
The implications for society will be profound. While the scarcity-minded in government will react by saying, “Social Security will be destroyed,” the more abundance-minded will realize that extending a person’s productive earning lifespan from 65 to 75 or 85 years old would be a massive boon to GDP.
3. Technology will help us define and actualize human freedoms.
The third dangerous idea from my conversation with Ray is about how technology will enhance our humanity, not detract from it.
You may have heard critics complain that technology is making us less human and increasingly disconnected.
Ray and I share a slightly different viewpoint: that technology enables us to tap into the very essence of what it means to be human.
“I don’t think humans even have to be biological,” explained Ray. “I think humans are the species that changes who we are.”
Ray argues that this began when humans developed the earliest technologies—fire and stone tools. These tools gave people new capabilities and became extensions of our physical bodies.
At its base level, technology is the means by which we change our environment and change ourselves. This will continue, even as the technologies themselves evolve.
“People say, ‘Well, do I really want to become part machine?’ You’re not even going to notice it,” Ray says, “because it’s going to be a sensible thing to do at each point.”
Today, we take medicine to fight disease and maintain good health and would likely consider it irresponsible if someone refused to take a proven, life-saving medicine.
In the future, this will still happen—except the medicine might contain nanobots that target disease or improve your memory so you can recall things more easily.
And because this new medicine works so well for so many, public perception will change. Eventually, it will become the norm… as ubiquitous as penicillin and ibuprofen are today.
In this way, ingesting nanorobots, uploading your brain to the cloud, and using devices like smart contact lenses can help humans become, well, better at being human.
Ray sums it up: “We are the species that changes who we are to become smarter and more profound, more beautiful, more creative, more musical, funnier, sexier.”
Speaking of sexuality and beauty, Ray also sees technology expanding these concepts. “In virtual reality, you can be someone else. Right now, actually changing your gender in real reality is a pretty significant, profound process, but you could do it in virtual reality much more easily and you can be someone else. A couple could become each other and discover their relationship from the other’s perspective.”
In the 2030s, when Ray predicts sensor-laden nanorobots will be able to go inside the nervous system, virtual or augmented reality will become exceptionally realistic, enabling us to “be someone else and have other kinds of experiences.”
Why Dangerous Ideas Matter
Why is it so important to discuss dangerous ideas?
I often say that the day before something is a breakthrough, it’s a crazy idea.
By consuming and considering a steady diet of “crazy ideas,” you train yourself to think bigger and bolder, a critical requirement for making an impact.
As humans, we are linear and scarcity-minded.
As entrepreneurs, we must think exponentially and abundantly.
At the end of the day, the formula for a true breakthrough is equal to “having a crazy idea” you believe in, plus the passion to pursue that idea against all naysayers and obstacles.
Around 50 million people worldwide are thought to have Alzheimer’s disease. And with rapidly aging populations in many countries, the number of sufferers is steadily rising.
We know that Alzheimer’s is caused by problems in the brain. Cells begin to lose their functions and eventually die, leading to memory loss, a decline in thinking abilities and even major personality changes. Specific regions of the brain also shrink, a process known as atrophy, causing a significant loss of brain volume. But what’s actually happening in the brain to cause this?
The main way the disease works is to disrupt communication between neurons, the specialized cells that process and transmit electrical and chemical signals between regions of the brain. This is what is responsible for the cell death in the brain—and we think it’s due to a build-up of two types of protein, called amyloid and tau. The exact interaction between these two proteins is largely unknown, but amyloid accumulates into sticky clusters known as beta-amyloid “plaques”, while tau builds up inside dying cells as “neurofibrillary tangles”.
One of the difficulties of diagnosing Alzheimer’s is that we’ve no reliable and accurate way of measuring this protein build-up during the early stages of the disease. In fact, we can’t definitively diagnose Alzheimer’s until after the patient has died, by examining their actual brain tissue.
Another problem we have is that beta-amyloid plaques can also be found in the brains of healthy patients. This suggests the presence of the amyloid and tau proteins may not tell the whole story of the disease.
More recent research suggests chronic inflammation may play a role. Inflammation is part of the body’s defence system against disease and occurs when white blood cells release chemicals to protect the body from foreign substances. But, over a long enough period, it can also cause damage.
In the brain, tissue-damaging long-term inflammation can also be caused by a build-up of cells known as microglia. In a healthy brain, these cells engulf and destroy waste and toxins. But in Alzheimer’s patients, the microglia fail to clear away this debris, which can include toxic tau tangles or amyloid plaques. The body then activates more microglia to try to clear the waste but this in turn causes inflammation. Long-term or chronic inflammation is particularly damaging to brain cells and ultimately leads to brain cell death.
Scientists recently identified a gene called TREM2 that could be responsible for this problem. Normally TREM2 acts to guide microglia to clear beta-amyloid plaques from the brain, and to help fight inflammation within the brain. But researchers have found that the brains of patients whose TREM2 gene doesn’t work properly have a build-up of beta-amyloid plaques between neurons.
Many Alzheimer’s patients also experience problems with their heart and circulatory system. Beta-amyloid deposits in the brain arteries, atherosclerosis (hardening of the arteries), and mini-strokes may also be at play.
These “vascular” problems can reduce blood flow in the brain even more and break down the blood-brain barrier, a structure that is critical for removing toxic waste from the brain. This can also prevent the brain from absorbing as much glucose—some studies have suggested this may actually occur before the onset of the toxic proteins associated with Alzheimer’s disease within the brain.
More recently, researchers have been looking deeper into the brain, specifically at the precise connections between neurons, known as synapses. A recent study published in Nature describes a process in the cells that may contribute to the breakdown of these synaptic communications between neurons. The findings indicate that this may happen when there isn’t enough of a specific synaptic protein (known as RBFOX1).
Thanks to this kind of research, there are now many new drugs in development and in clinical trials that could target one or more of the many brain-wide changes that occur with Alzheimer’s disease. Many researchers now believe that a more personalized approach to Alzheimer’s patients is the future.
This would involve a combination of drugs tailored to target several of the problems mentioned above, much like current treatments available for cancer. The hope is that this innovative research will challenge and pioneer a new way of treating this complex disease.
Swarms of drones buzz overhead, while robotic vehicles crawl across the landscape. Orbiting satellites snap high-resolution images of the scene far below. Not one human being can be seen in the pre-dawn glow spreading across the land.
This isn’t some post-apocalyptic vision of the future à la The Terminator. This is a snapshot of the farm of the future. Every phase of the operation—from seed to harvest—may someday be automated, without the need to ever get one’s fingernails dirty.
In fact, it’s science fiction already being engineered into reality. Today, robots empowered with artificial intelligence can zap weeds with preternatural precision, while autonomous tractors move with tireless efficiency across the farmland. Satellites can assess crop health from outer space, providing gobs of data to help produce the sort of business intelligence once accessible only to Fortune 500 companies.
“Precision agriculture is on the brink of a new phase of development involving smart machines that can operate by themselves, which will allow production agriculture to become significantly more efficient. Precision agriculture is becoming robotic agriculture,” said professor Simon Blackmore last year during a conference in Asia on the latest developments in robotic agriculture. Blackmore is head of engineering at Harper Adams University and head of the National Centre for Precision Farming in the UK.
It’s Blackmore’s university that recently showcased what may someday be possible. The project, dubbed Hands Free Hectare and led by researchers from Harper Adams and private industry, farmed one hectare (about 2.5 acres) of spring barley without one person ever setting foot in the field.
The team re-purposed, re-wired and roboticized farm equipment ranging from a Japanese tractor to a 25-year-old combine. Drones served as scouts to survey the operation and collect samples to help the team monitor the progress of the barley. At the end of the season, the robo farmers harvested about 4.5 tons of barley at a price tag of £200,000.
“This project aimed to prove that there’s no technological reason why a field can’t be farmed without humans working the land directly now, and we’ve done that,” said Martin Abell, mechatronics researcher for Precision Decisions, which partnered with Harper Adams, in a press release.
I, Robot Farmer
The Harper Adams experiment is the latest example of how machines are disrupting the agricultural industry. Around the same time that the Hands Free Hectare combine was harvesting barley, Deere & Company announced it would acquire a startup called Blue River Technology for a reported $305 million.
Blue River has developed a “see-and-spray” system that combines computer vision and artificial intelligence to discriminate between crops and weeds. It hits the former with fertilizer and blasts the latter with herbicides with such precision that it can eliminate 90 percent of the chemicals used in conventional agriculture.
It’s not just farmland that’s getting a helping hand from robots. A California company called Abundant Robotics, spun out of the nonprofit research institute SRI International, is developing robots capable of picking apples with vacuum-like arms that suck the fruit straight off the trees in the orchards.
“Traditional robots were designed to perform very specific tasks over and over again. But the robots that will be used in food and agricultural applications will have to be much more flexible than what we’ve seen in automotive manufacturing plants in order to deal with natural variation in food products or the outdoor environment,” Dan Harburg, an associate at venture capital firm Anterra Capital who previously worked at a Massachusetts-based startup making a robotic arm capable of grabbing fruit, told AgFunder News.
“This means ag-focused robotics startups have to design systems from the ground up, which can take time and money, and their robots have to be able to complete multiple tasks to avoid sitting on the shelf for a significant portion of the year,” he noted.
Eyes in the Sky
It will take more than an army of robotic tractors to grow a successful crop. The farm of the future will rely on drones, satellites, and other airborne instruments to provide data about their crops on the ground.
Companies like Descartes Labs, for instance, employ machine learning to analyze satellite imagery to forecast soy and corn yields. The Los Alamos, New Mexico startup collects five terabytes of data every day from multiple satellite constellations, including those of NASA and the European Space Agency. Combined with weather readings and other real-time inputs, Descartes Labs can predict cornfield yields with 99 percent accuracy. Its AI platform can even assess crop health from infrared readings.
The US agency DARPA recently granted Descartes Labs $1.5 million to monitor and analyze wheat yields in the Middle East and Africa. The idea is that accurate forecasts may help identify regions at risk of crop failure, which could lead to famine and political unrest. Another company called TellusLabs out of Somerville, Massachusetts also employs machine learning algorithms to predict corn and soy yields with similar accuracy from satellite imagery.
Farmers don’t have to reach orbit to get insights on their cropland. A startup in Oakland, Ceres Imaging, produces high-resolution imagery from multispectral cameras flown across fields aboard small planes. The snapshots capture the landscape at different wavelengths, revealing problems like water stress and providing estimates of chlorophyll and nitrogen levels. The geo-tagged images mean farmers can easily locate areas that need to be addressed.
Growing From the Inside
Even the best intelligence—whether from drones, satellites, or machine learning algorithms—will be challenged to predict the unpredictable issues posed by climate change. That’s one reason more and more companies are betting the farm on what’s called controlled environment agriculture. Today, that doesn’t just mean fancy greenhouses, but everything from warehouse-sized, automated vertical farms to grow rooms run by robots, located not in the emptiness of Kansas or Nebraska but smack dab in the middle of the main streets of America.
Proponents of these new concepts argue these high-tech indoor farms can produce much higher yields while drastically reducing water usage and synthetic inputs like fertilizer and herbicides.
Iron Ox, out of San Francisco, is developing one-acre urban greenhouses that will be operated by robots and reportedly capable of producing the equivalent of 30 acres of farmland. Powered by artificial intelligence, a team of three robots will run the entire operation of planting, nurturing, and harvesting the crops.
Vertical farming startup Plenty, also based in San Francisco, uses AI to automate its operations, and got a $200 million vote of confidence from the SoftBank Vision Fund earlier this year. The company claims its system uses only 1 percent of the water consumed in conventional agriculture while producing 350 times as much produce. Plenty is part of a new crop of urban-oriented farms, including Bowery Farming and AeroFarms.
“What I can envision is locating a larger scale indoor farm in the economically disadvantaged food desert, in order to stimulate a broader economic impact that could create jobs and generate income for that area,” said Dr. Gary Stutte, an expert in space agriculture and controlled environment agriculture, in an interview with AgFunder News. “The indoor agriculture model is adaptable to becoming an engine for economic growth and food security in both rural and urban food deserts.”
Still, the model is not without its own challenges and criticisms. Most of what these farms can produce falls into the “leafy greens” category and often comes with a premium price, which seems antithetical to the proposed mission of creating oases in the food deserts of cities. While water usage may be minimized, the electricity required to power the operation, especially the LEDs (which played a huge part in revolutionizing indoor agriculture), is not cheap.
Still, all of these advances, from robo farmers to automated greenhouses, may need to be part of a future where nearly 10 billion people will inhabit the planet by 2050. An oft-quoted statistic from the Food and Agriculture Organization of the United Nations says the world must boost food production by 70 percent to meet the needs of the population. Technology may not save the world, but it will help feed it.
“We cannot be conscious of what we are not conscious of.” – Julian Jaynes, The Origin of Consciousness in the Breakdown of the Bicameral Mind
Contrary to what the director leads you to believe, the protagonist of Ex Machina, Alex Garland’s 2015 masterpiece, isn’t Caleb, a young programmer tasked with evaluating machine consciousness. Rather, it’s his target Ava, a breathtaking humanoid AI with a seemingly child-like naïveté and an enigmatic mind.
Like most cerebral movies, Ex Machina leaves the conclusion up to the viewer: was Ava actually conscious? In doing so, it also cleverly avoids a thorny question that has challenged most AI-centric movies to date: what is consciousness, and can machines have it?
Hollywood producers aren’t the only people stumped. As machine intelligence barrels forward at breakneck speed—not only exceeding human performance on games such as DOTA and Go, but doing so without the need for human expertise—the question has once more entered the scientific mainstream.
Are machines on the verge of consciousness?
This week, in a review published in the prestigious journal Science, cognitive scientists Drs. Stanislas Dehaene, Hakwan Lau and Sid Kouider of the Collège de France, University of California, Los Angeles and PSL Research University, respectively, argue: not yet, but there is a clear path forward.
The reason? Consciousness is “resolutely computational,” the authors say, in that it results from specific types of information processing, made possible by the hardware of the brain.
There is no magic juice, no extra spark—in fact, an experiential component (“what is it like to be conscious?”) isn’t even necessary to implement consciousness.
If consciousness results purely from the computations within our three-pound organ, then endowing machines with a similar quality is just a matter of translating biology to code.
Much like the way current powerful machine learning techniques heavily borrow from neurobiology, the authors write, we may be able to achieve artificial consciousness by studying the structures in our own brains that generate consciousness and implementing those insights as computer algorithms.
From Brain to Bot
Without doubt, the field of AI has greatly benefited from insights into our own minds, both in form and function.
For example, deep neural networks, the architecture of algorithms that underlie AlphaGo’s breathtaking sweep against its human competitors, are loosely based on the multi-layered biological neural networks that our brain cells self-organize into.
Reinforcement learning, a type of “training” that teaches AIs to learn from millions of examples, has roots in a centuries-old technique familiar to anyone with a dog: if it moves toward the right response (or result), give a reward; otherwise ask it to try again.
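For readers who want that reward loop in concrete form, here is a toy version, assuming nothing beyond the dog-training analogy in the text. Real reinforcement learning systems, such as those behind AlphaGo, are vastly more elaborate, but the core mechanic of rewarding what works looks like this:

```python
# A toy version of the reward loop described above; real systems like
# AlphaGo's are vastly more elaborate, but the core "reward what
# works" mechanic looks like this. The "tricks" are invented.
import random

values = {"sit": 0.0, "roll_over": 0.0}  # estimated value of each action
LEARNING_RATE = 0.1

def reward(action: str) -> float:
    return 1.0 if action == "sit" else 0.0  # only "sit" earns a treat

random.seed(0)
for _ in range(200):
    # Mostly exploit the best-known action, but explore occasionally.
    if random.random() < 0.1:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    # Nudge the estimate toward the reward actually received.
    values[action] += LEARNING_RATE * (reward(action) - values[action])

print(values)  # the value of "sit" converges toward 1.0
```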
In this sense, translating the architecture of human consciousness to machines seems like an obvious route toward artificial consciousness. There’s just one big problem.
“Nobody in AI is working on building conscious machines because we just have nothing to go on. We just don’t have a clue about what to do,” said Dr. Stuart Russell, the author of Artificial Intelligence: A Modern Approach in a 2015 interview with Science.
The hard part, long before we can consider coding machine consciousness, is figuring out what consciousness actually is.
To Dehaene and colleagues, consciousness is a multilayered construct with two “dimensions”: C1, the information readily in mind, and C2, the ability to obtain and monitor information about oneself. Both are essential to consciousness, but one can exist without the other.
Say you’re driving a car and the low fuel light comes on. Here, the perception of the fuel-tank light is C1—a mental representation that we can play with: we notice it, act upon it (refill the gas tank) and recall and speak about it at a later date (“I ran out of gas in the boonies!”).
“The first meaning we want to separate (from consciousness) is the notion of global availability,” explains Dehaene in an interview with Science. When you’re conscious of a word, your whole brain is aware of it, in a sense that you can use the information across modalities, he adds.
But C1 is not just a “mental sketchpad.” It represents an entire architecture that allows the brain to draw multiple modalities of information from our senses or from memories of related events, for example.
Unlike subconscious processing, which often relies on specific “modules” competent at a defined set of tasks, C1 is a global workspace that allows the brain to integrate information, decide on an action, and follow through until the end.
Like The Hunger Games, what we call “conscious” is whatever representation, at one point in time, wins the competition to access this mental workspace. The winners are shared among different brain computation circuits and are kept in the spotlight for the duration of decision-making to guide behavior.
Because of these features, C1 consciousness is highly stable and global—all related brain circuits are triggered, the authors explain.
For a complex machine such as an intelligent car, C1 is a first step towards addressing an impending problem, such as a low fuel light. In this example, the light itself is a type of subconscious signal: when it flashes, all of the other processes in the machine remain uninformed, and the car—even if equipped with state-of-the-art visual processing networks—passes by gas stations without hesitation.
With C1 in place, the fuel tank would alert the car computer (allowing the light to enter the car’s “conscious mind”), which in turn checks the built-in GPS to search for the next gas station.
“We think in a machine this would translate into a system that takes information out of whatever processing module it’s encapsulated in, and make it available to any of the other processing modules so they can use the information,” says Dehaene. “It’s a first sense of consciousness.”
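That broadcast pattern maps naturally onto code. The sketch below is a bare-bones caricature, not anyone’s actual architecture: the module names and the fuel-light scenario simply mirror the car example above, and the point is only that a signal escapes its producing module and becomes globally available.

```python
# A bare-bones caricature of a C1-style global workspace: a signal that
# wins access is broadcast to every other module instead of staying
# trapped in the module that produced it. All names are illustrative.

class GlobalWorkspace:
    def __init__(self):
        self.modules = {}

    def register(self, name, handler):
        self.modules[name] = handler

    def broadcast(self, source, message):
        """Make one module's signal globally available (C1)."""
        for name, handler in self.modules.items():
            if name != source:
                handler(message)

workspace = GlobalWorkspace()
workspace.register("fuel_sensor", lambda msg: None)
workspace.register("gps", lambda msg: print(f"GPS: find gas station ({msg})"))
workspace.register("planner", lambda msg: print(f"Planner: reroute ({msg})"))

# Without C1, this signal would stay inside the fuel module.
workspace.broadcast("fuel_sensor", "fuel low")
```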
In a way, C1 reflects the mind’s capacity to access outside information. C2 goes introspective.
The authors define the second facet of consciousness, C2, as “meta-cognition”: reflecting on whether you know or perceive something, or whether you just made an error (“I think I may have filled my tank at the last gas station, but I forgot to keep a receipt to make sure”). This dimension reflects the link between consciousness and sense of self.
C2 is the level of consciousness that allows you to feel more or less confident about a decision when making a choice. In computational terms, it’s an algorithm that spews out the probability that a decision (or computation) is correct, even if it’s often experienced as a “gut feeling.”
C2 also has its claws in memory and curiosity. These self-monitoring algorithms allow us to know what we know or don’t know—so-called “meta-memory,” responsible for that feeling of having something at the tip of your tongue. Monitoring what we know (or don’t know) is particularly important for children, says Dehaene.
“Young children absolutely need to monitor what they know in order to…inquire and become curious and learn more,” he explains.
The two aspects of consciousness synergize to our benefit: C1 pulls relevant information into our mental workspace (while discarding other “probable” ideas or solutions), while C2 helps with long-term reflection on whether the conscious thought led to a helpful response.
Going back to the low fuel light example, C1 allows the car to solve the problem in the moment—these algorithms globalize the information, so that the car becomes aware of the problem.
But to solve the problem, the car would need a “catalog of its cognitive abilities”—a self-awareness of what resources it has readily available, for example, a GPS map of gas stations.
“A car with this sort of self-knowledge is what we call having C2,” says Dehaene. Because the signal is globally available and because it’s being monitored in a way that the machine is looking at itself, the car would care about the low gas light and behave like humans do—lower fuel consumption and find a gas station.
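Continuing the caricature from the C1 sketch above, C2-style self-monitoring might look like the following. Again, every name and number is invented; the sketch shows only the two ingredients the authors describe: a confidence estimate attached to each decision and a catalog of what the system does and doesn’t know.

```python
# A bare-bones caricature of C2-style self-monitoring: every decision
# comes with a confidence estimate, and the system keeps a catalog of
# what it does and doesn't know. All names and numbers are invented.

CAPABILITIES = {"gas_station_map": True, "tire_pressure_history": False}

def decide_with_confidence(options: dict[str, float]):
    """Return the best guess plus the probability it's correct."""
    best = max(options, key=options.get)
    return best, options[best]

def knows_about(resource: str) -> bool:
    """Meta-knowledge: the system knows what it knows."""
    return CAPABILITIES.get(resource, False)

choice, confidence = decide_with_confidence(
    {"refuel_now": 0.8, "keep_driving": 0.2})
print(choice, confidence)                    # a machine "gut feeling"
print(knows_about("gas_station_map"))        # True: it can act on the alert
print(knows_about("tire_pressure_history"))  # False: it knows it doesn't know
```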
“Most present-day machine learning systems are devoid of any self-monitoring,” the authors note.
But their theory seems to be on the right track. In the few examples where a self-monitoring system has been implemented—either within the structure of the algorithm or as a separate network—the AI has generated “internal models that are meta-cognitive in nature, making it possible for an agent to develop a (limited, implicit, practical) understanding of itself.”
Towards Conscious Machines
Would a machine endowed with C1 and C2 behave as if it were conscious? Very likely: a smartcar would “know” that it’s seeing something, express confidence in it, report it to others, and find the best solutions for problems. If its self-monitoring mechanisms break down, it may also suffer “hallucinations” or even experience visual illusions similar to humans.
Thanks to C1 it would be able to use the information it has and use it flexibly, and because of C2 it would know the limit of what it knows, says Dehaene. “I think (the machine) would be conscious,” and not just merely appearing so to humans.
If you’re left with a feeling that consciousness is far more than global information sharing and self-monitoring, you’re not alone.
“Such a purely functional definition of consciousness may leave some readers unsatisfied,” the authors acknowledge.
“But we’re trying to take a radical stance, maybe simplifying the problem. Consciousness is a functional property, and when we keep adding functions to machines, at some point these properties will characterize what we mean by consciousness,” Dehaene concludes.
Picture this: you’re at a boisterous party, trying to listen in on a group conversation. People are talking over each other and going a mile a minute, but you can only pick up snippets from one person at a time.
Confusing? Sure! Frustrating? Absolutely!
Yet this is how neuroscientists eavesdrop on all the electrical chatter going on in our heads. So much depends on understanding these neuronal conversations: deciphering their secret language is key to understanding—and manipulating—the memories, habits, and other cognitive processes that define us.
To monitor the signals zipping through a network of neurons, scientists often stick a tiny electrode into each contributing cell and track its activity. It’s not easy to tease out an entire conversation that way—the process is tedious and prone to serious misunderstandings.
“If you put an electrode in the brain, it’s like trying to understand a phone conversation by hearing only one person talk,” said Dr. Ed Boyden at MIT. A pioneer of optogenetics and the inflatable brain, the neuroscience wunderkind has spent the past decade developing creative neurotechnological toolkits that have sparked excitement and garnered praise.
Now Boyden may have a way to tap into an entire neuronal group chat.
With the help of a robot, the team designed a protein that tunnels into the outer shell, or membrane, of a neuron. If there’s a slight change in the voltage, as when the neuron fires, the protein immediately transforms into a fluorescent torch that’s easy to spot under a microscope.
With a whole network of neurons, the embedded sensors spark like fireworks.
“Now we can record the neural activity of many cells in a neural circuit and hear them as they talk to each other,” said Boyden.
But the new sensor isn’t even the big advance. The robotic system, pieced together from easily available components, allows other neuroscientists to develop their own sensors.
By releasing the blueprint in Nature Chemical Biology, Boyden and his team hope the community will rapidly evolve stronger and more sensitive activity probes for the brain, thereby lighting the way to finally figuring out what exactly is a thought, a decision, or a feeling.
The Neural Lighthouse
To be fair, Boyden is far from the first to come up with these so-called “voltage sensors.”
But finding the perfect one has eluded neuroscientists for two decades. To precisely report neuronal firing, these proteins need to be able to rapidly turn on their light beams after the neuron fires—with a reaction time measured in milliseconds, if not faster.
What’s more, they also need to be able to find the best seat in the house: smack on the neuronal membrane, where the voltage change happens, as opposed to inside a cell.
Finally, they need to shine long and bright. Lots of sensors lose their glow rapidly after exposure to light—a problem dubbed “photobleaching,” the bane of neural cartographers. To match neuronal activity to behaviors, the indicators need to stay bright for at least several seconds.
Developing these sensors has traditionally been an extremely tedious affair. Scientists often start with a known sensor, swap some of its constituent molecules with others like Lego pieces, test the resulting new sensor in cells, and hope for the best. The process can take weeks, if not months.
The new robotic approach works like this:
In a process that resembles accelerated evolution, the team started with a known light-sensitive sensor and randomly introduced mutations into the protein, making 1.5 million (!!) versions in total.
They then inserted all of the variants into mammalian cells—one variant per cell—and waited for the sensors to reach the cell’s membrane. Next, they programmed a microscope to automatically take photos of the cells.
The imaging software is built to be flexible. “This version was modified from previous versions to be compatible with any microscope…camera and/or other optional hardware,” the authors said.
Once the microscope identified each individual cell, a robot sucked up the cell into its own glass tube and examined whether the sensor variant satisfied all the requirements. Here, the team specifically focused on two criteria: the protein’s location and its brightness.
In this way, the team rapidly identified the top five candidates, and then subjected them to another round of mutations, generating eight million (!!!) new variants. With help from their trusty robot cell picker, they narrowed the best performers down to seven proteins, which they then characterized using good old electrical recordings to see how fast the sensors responded to voltage fluctuations.
In the end, only two sensors met all criteria, and the authors named them Archon1 and Archon2 respectively.
Normally it’s excruciatingly hard to find sensors that excel in multiple domains, the authors say. The robotic screen works so well because it acts like a multi-round game show. To remain a candidate, each variant has to stand out in each round of testing, whether for its brightness, location, or speed.
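To make that multi-round logic concrete, here is a minimal toy sketch in Python. Everything in it—the sequence model, the scoring functions, the thresholds—is an invented placeholder rather than the team’s actual software; it only illustrates how requiring every variant to clear every criterion in every round whittles millions of candidates down to a handful.

```python
# Toy sketch of multi-round, multi-criterion screening. All names, scores,
# and thresholds are invented placeholders -- not the team's real pipeline.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def mutate(parent, n_variants):
    """Generate random single-point mutants of a parent sequence (toy model)."""
    variants = []
    for _ in range(n_variants):
        seq = list(parent)
        pos = random.randrange(len(seq))
        seq[pos] = random.choice(AMINO_ACIDS)
        variants.append("".join(seq))
    return variants

def toy_score(variant, criterion):
    """Stand-in for a real measurement (membrane localization, brightness, ...)."""
    return (hash(variant + criterion) % 1000) / 1000.0

def survives(variant, criteria):
    """A variant stays in the game only if it clears *every* criterion."""
    return all(toy_score(variant, name) >= cutoff for name, cutoff in criteria)

def screening_round(parents, pool_size, criteria, keep_top):
    """One round: mutate the parents, test the whole pool, keep the best few."""
    pool = []
    for parent in parents:
        pool += mutate(parent, pool_size // len(parents))
    winners = [v for v in pool if survives(v, criteria)]
    winners.sort(key=lambda v: sum(toy_score(v, n) for n, _ in criteria), reverse=True)
    return winners[:keep_top]

criteria = [("membrane_localization", 0.8), ("brightness", 0.8)]
round_one = screening_round(["MVSKGEELFTG"], 10_000, criteria, keep_top=5)
round_two = screening_round(round_one, 10_000, criteria, keep_top=7)
print(f"{len(round_one)} candidates after round 1, {len(round_two)} after round 2")
```

The design choice the sketch highlights is the all-or-nothing filter: a variant that is dazzlingly bright but poorly localized never makes it to the next round, which is exactly why the survivors tend to excel across the board.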
“(It’s) a very clever high-throughput screening approach,” said Harvard professor Dr. Adam Cohen, who was not involved in this study. Cohen previously developed a sensor called QuasAr2 (get it?) that Boyden used here as a starting point to generate his mutant forms.
Putting Archon1 to the test, the team inserted the protein into the membranes of cortical neurons in mice. These cells come from the outermost region of the brain—the cortex—often considered the seat of higher cognitive functions.
Archon1 performed fabulously in brain slices from these mice. When stimulated with a reddish-orange light, the protein emitted a longer wavelength of red light that matched up to the neuron’s voltage swings—the brightness of the protein corresponds to a particular voltage.
The sensor was extremely quick on its feet, capable of reporting each time a neuron fired in near real time.
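To give a rough sense of what “reporting each firing” looks like computationally, here is a generic thresholding sketch—not the study’s actual analysis code, and the trace below is synthetic—showing how a fluorescence signal whose brightness tracks voltage can be scanned for the sudden jumps that mark firing events.

```python
# Generic sketch: detect firing events in a fluorescence trace whose
# brightness tracks membrane voltage. Data and thresholds are synthetic
# placeholders, not the study's analysis pipeline.
import numpy as np

def detect_spikes(trace, threshold_sd=4.0):
    """Flag frames where relative fluorescence jumps well above baseline noise."""
    baseline = np.median(trace)
    dff = (trace - baseline) / baseline          # delta-F over F
    noise_sd = np.std(dff)
    above = dff > threshold_sd * noise_sd
    # Keep only the first frame of each crossing so one event isn't counted twice.
    return np.flatnonzero(above & ~np.roll(above, 1))

# Toy trace: steady baseline fluorescence with three brief "spikes" added.
rng = np.random.default_rng(0)
trace = 100 + rng.normal(0, 1, 1000)
trace[[200, 450, 800]] += 20
print(detect_spikes(trace))                      # roughly -> [200 450 800]
```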
The team also tested Archon1 in two of neuroscience’s darling translucent animal models: a zebrafish and a tiny worm called C. elegans. Don’t underestimate these critters: zebrafish are often used to study how the brain encodes vision, hearing, movement, or fear, whereas C. elegans has shed light on the circuits that drive eating, socializing, and even sex.
Their see-through bodies make it particularly easy to watch neurons light up in action, thanks to the higher signal-to-noise ratio. As in the mouse brain, Archon1 performed beautifully, rapidly emitting light that lasted at least eight minutes.
“(This) supports recordings of neural activity over behaviorally relevant timescales,” the authors said.
Even cooler, Archon1 can be used in conjunction with optogenetic tools. In a proof-of-concept, the team used blue light to activate a neuron in C. elegans and watched Archon1 light up in response—striking visual feedback, especially since neuroscientists often have to rely on electrical recordings to see whether their optogenetic tricks worked.
The team is now looking to test the sensor in living mice as the animals perform certain behaviors and tasks.
The sensor “opens up the exciting possibility of simultaneous recordings of large populations of neurons” and of capturing each individual firing from every single neuron, the authors said. We’ll be watching neural computations happen in real time under the microscope.
And the best is yet to come. Scientific-grade cameras are increasingly capable of taking images at faster speeds and allowing for higher resolutions with a broader field of view. Mapping the brain with Archon1 and future generation sensors will no doubt yield buckets of new findings and theories about how the brain works.
“Over the next five years or so we’re going to try to solve some small brain circuits completely,” said Boyden.
It is one of the top 10 deadliest diseases in the United States, and it cannot be cured or prevented. But new studies are finding ways to diagnose Alzheimer’s disease in its earliest stages, while some of the latest research says technologies like artificial intelligence can detect dementia years before the first symptoms occur.
These advances, in turn, will help bolster clinical trials seeking a cure or therapies to slow or prevent the disease. Catching Alzheimer’s disease or other forms of dementia early in their progression can help ease symptoms in some cases.
“Often neurodegeneration is diagnosed late when massive brain damage has already occurred,” says professor Francis L Martin at the University of Central Lancashire in the UK, in an email to Singularity Hub. “As we know more about the molecular basis of the disease, there is the possibility of clinical interventions that might slow or halt the progress of the disease, i.e., before brain damage. Extending cognitive ability for even a number of years would have huge benefit.”
Martin and his colleagues used sensor-based technology with a diamond core to analyze about 550 blood samples. They identified specific chemical bonds within the blood after passing light through the diamond core and recording its interaction with the sample. The results were then compared against blood samples from cases of Alzheimer’s disease and other neurodegenerative diseases, along with those from healthy individuals.
“From a small drop of blood, we derive a fingerprint spectrum. That fingerprint spectrum contains numerical data, which can be inputted into a computational algorithm we have developed,” Martin explains. “This algorithm is validated for prediction of unknown samples. From this we determine sensitivity and specificity. Although not perfect, my clinical colleagues reliably tell me our results are far better than anything else they have seen.”
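As a schematic illustration of the workflow Martin describes—a spectral fingerprint per sample fed to a classifier, judged by sensitivity and specificity—here is a short Python sketch. The classifier choice and the random “spectra” are stand-ins for illustration only; this is not the group’s actual algorithm or data.

```python
# Schematic only: spectral fingerprints -> classifier -> sensitivity/specificity.
# The data are random and the model is a generic stand-in, not Martin's algorithm.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(42)
n_samples, n_wavenumbers = 550, 200                 # ~550 samples, one spectrum each
X = rng.normal(size=(n_samples, n_wavenumbers))     # stand-in fingerprint spectra
y = rng.integers(0, 2, n_samples)                   # 1 = Alzheimer's, 0 = control

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
sensitivity = tp / (tp + fn)   # fraction of true cases the test catches
specificity = tn / (tn + fp)   # fraction of healthy samples correctly cleared
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```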
Martin says the breakthrough is the result of more than 10 years developing sensor-based technologies for routine screening, monitoring, or diagnosing neurodegenerative diseases and cancers.
“My vision was to develop something low-cost that could be readily applied in a typical clinical setting to handle thousands of samples potentially per day or per week,” he says, adding that the technology also has applications in environmental science and food security.
The new test can also distinguish accurately between Alzheimer’s disease and other forms of neurodegeneration, such as Lewy body dementia, which is one of the most common causes of dementia after Alzheimer’s.
“To this point, other than at post-mortem, there has been no single approach towards classifying these pathologies,” Martin notes. “MRI scanning is often used but is labor-intensive, costly, difficult to apply to dementia patients, and not a routine point-of-care test.”
Canadian researchers at McGill University believe they can predict Alzheimer’s disease up to two years before its onset using big data and artificial intelligence. They developed an algorithm capable of recognizing the signatures of dementia using a single amyloid PET scan of the brains of patients at risk of developing the disease.
Alzheimer’s is caused by the accumulation of two proteins—amyloid beta and tau. The latest research suggests that amyloid beta leads to the buildup of tau, which is responsible for damaging nerve cells and connections between cells called synapses.
The work was recently published in the journal Neurobiology of Aging.
“Despite the availability of biomarkers capable of identifying the proteins causative of Alzheimer’s disease in living individuals, the current technologies cannot predict whether carriers of AD pathology in the brain will progress to dementia,” Sulantha Mathotaarachchi, lead author on the paper and an expert in artificial neural networks, tells Singularity Hub by email.
The algorithm, trained on a population with amnestic mild cognitive impairment observed over 24 months, proved accurate 84.5 percent of the time. Mathotaarachchi says the algorithm can be trained on different populations for different observational periods, meaning the system can grow more comprehensive with more data.
“The more biomarkers we incorporate, the more accurate the prediction could be,” Mathotaarachchi adds. “However, right now, acquiring [the] required amount of training data is the biggest challenge. … In Alzheimer’s disease, it is known that the amyloid protein deposition occurs decades before symptoms onset.”
Unfortunately, the same process occurs in normal aging as well. “The challenge is to identify the abnormal patterns of deposition that lead to the disease later on,” he says.
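The prediction task itself can be sketched in a few lines. The snippet below is a hedged illustration, not the McGill group’s published model: the “regional amyloid uptake” features and progression labels are synthetic, and the simple logistic regression merely shows the shape of the problem—one PET scan in, a 24-month progression prediction out.

```python
# Hedged sketch of the prediction task: amyloid PET features -> will this
# patient with mild cognitive impairment progress to dementia within 24 months?
# Features and labels are synthetic placeholders, not the McGill model or data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_patients = 300
# Stand-in features: regional amyloid uptake values from a single PET scan.
amyloid_uptake = rng.normal(1.2, 0.3, size=(n_patients, 12))
# Stand-in labels: 1 = progressed to dementia within 24 months, 0 = remained stable.
progressed = (amyloid_uptake.mean(axis=1) + rng.normal(0, 0.2, n_patients) > 1.3).astype(int)

model = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(model, amyloid_uptake, progressed, cv=5).mean()
print(f"cross-validated accuracy: {accuracy:.1%}")
```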
One of the key goals of the project is to improve research in Alzheimer’s disease by ensuring those patients with the highest probability of developing dementia are enrolled in clinical trials. That will increase the efficiency of clinical programs, according to Mathotaarachchi.
“One of the most important outcomes from our study was the pilot, online, real-time prediction tool,” he says. “This can be used as a framework for patient screening before recruiting for clinical trials. … If a disease-modifying therapy becomes available for patients, a predictive tool might have clinical applications as well, by providing to the physician information regarding clinical progression.”
Pixel by Pixel Prediction
Private industry is also working toward improving science’s predictive powers when it comes to detecting dementia early. One San Francisco startup called Darmiyan claims its proprietary software can pick up signals of Alzheimer’s disease up to 15 years before onset.
Darmiyan didn’t respond to a request for comment for this article. Venture Beat reported that the company’s MRI-analyzing software “detects cell abnormalities at a microscopic level to reveal what a standard MRI scan cannot” and that the “software measures and highlights subtle microscopic changes in the brain tissue represented in every pixel of the MRI image long before any symptoms arise.”
Darmiyan claims to have a 90 percent accuracy rate and says its software has been vetted by top academic institutions like New York University, Rockefeller University, and Stanford, according to Venture Beat. The startup is awaiting FDA approval to proceed further but is reportedly working with pharmaceutical companies like Amgen, Johnson & Johnson, and Pfizer on pilot programs.
“Our technology enables smarter drug selection in preclinical animal studies, better patient selection for clinical trials, and much better drug-effect monitoring,” Darmiyan cofounder and CEO Padideh Kamali-Zare told Venture Beat.
An estimated 5.5 million Americans have Alzheimer’s, and one in 10 people over age 65 have been diagnosed with the disease. By mid-century, the number of Alzheimer’s patients could rise to 16 million. Health care costs in 2017 alone are estimated to be $259 billion, and by 2050 the annual price tag could be more than $1 trillion.
In sum, it’s a disease that cripples people and the economy.
Researchers are always after more data as they look to improve outcomes, with the hope of one day developing a cure or preventing the onset of neurodegeneration altogether. If interested in seeing this medical research progress, you can help by signing up on the Brain Health Registry to improve the quality of clinical trials.