Picture this: you’re at a boisterous party, trying to listen in on a group conversation. People are talking over each other and going a mile a minute, but you can only pick up snippets from one person at a time.
Confusing? Sure! Frustrating? Absolutely!
Yet this is how neuroscientists eavesdrop on all the electrical chatter going on in our heads. So much depends on understanding these neuronal conversations: deciphering their secret language is key to understanding—and manipulating—the memories, habits, and other cognitive processes that define us.
To monitor the signals zipping through a network of neurons, scientists often stick a tiny electrode into each single contributor and track its activity. It’s not easy to tease out an entire conversation that way—the process is tedious and prone to serious misunderstandings.
“If you put an electrode in the brain, it’s like trying to understand a phone conversation by hearing only one person talk,” said Dr. Ed Boyden at MIT. A pioneer of optogenetics and expansion microscopy—a technique that physically inflates brain tissue for easier imaging—the neuroscience wunderkind has spent the past decade developing creative neurotechnological toolkits that have sparked excitement and garnered praise.
Now Boyden may have a way to tap into an entire neuronal group chat.
With the help of a robot, the team designed a protein that tunnels into the outer shell, or membrane, of a neuron. If there’s a slight change in the voltage, as when the neuron fires, the protein immediately transforms into a fluorescent torch that’s easy to spot under a microscope.
With a whole network of neurons, the embedded sensors spark like fireworks.
“Now we can record the neural activity of many cells in a neural circuit and hear them as they talk to each other,” said Boyden.
But the new sensor isn’t even the big advance. The robotic system, pieced together from easily available components, allows other neuroscientists to develop their own sensors.
By releasing the blueprint in Nature Chemical Biology, Boyden and his team hope the community will rapidly evolve stronger and more sensitive activity probes for the brain, thereby lighting the way to finally figuring out what exactly is a thought, a decision, or a feeling.
The Neural Lighthouse
To be fair, Boyden is far from the first to come up with these so-called “voltage sensors.”
But finding the perfect one has eluded neuroscientists for two decades. To precisely report neuronal firing, these proteins need to be able to rapidly turn on their light beams after the neuron fires—with a reaction time in the range of a thousandth of a second, if not faster.
What’s more, they also need to be able to find the best seat in the house: smack on the neuronal membrane, where the voltage change happens, as opposed to inside a cell.
Finally, they need to shine long and bright. Lots of sensors lose their glow rapidly after exposure to light—a problem dubbed “photobleaching,” the bane of neural cartographers. To match neuronal activity to behaviors, the indicators need to stay bright for at least several seconds.
Developing these sensors has traditionally been an extremely tedious affair. Scientists often start with a known sensor, swap some of its constituent molecules with others like Lego pieces, test the resulting new sensor in cells, and hope for the best. The process can take weeks, if not months.
Boyden’s team automated that grind with a robotic screening system. It works like this:
In a process that resembles accelerated evolution, the team started with a known light-sensitive sensor and randomly introduced mutations into the protein, making 1.5 million (!!) versions in total.
They then inserted all of the variants into mammalian cells—one variant per cell—and waited for the sensors to reach the cell’s membrane. Next, they programmed a microscope to automatically take photos of the cells.
The software driving it is powerful and portable, too. “This version was modified from previous versions to be compatible with any microscope…camera and/or other optional hardware,” the authors said.
Once the microscope identified each individual cell, a robotic pipette sucked the cell into its own glass tube and examined whether the sensor variant satisfied all the requirements. Here, the team specifically focused on two criteria: the protein’s location and its brightness.
In this way, the team rapidly identified the top five candidates, and then subjected them to another round of mutations, generating eight million (!!!) new variants. With help from their trusty robot cell picker, they narrowed the best performers down to seven proteins, which they then characterized using good old electrical recordings to see how fast the sensors responded to voltage fluctuations.
In the end, only two sensors met all criteria, and the authors named them Archon1 and Archon2 respectively.
Normally it’s excruciatingly hard to find sensors that excel in multiple domains, the authors say. The robotic screen works so well because it acts like a multi-round game show. To remain a candidate, each variant has to stand out in each round of testing, whether for its brightness, location, or speed.
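The logic of that multi-round game show can be sketched in a few lines of code. This is a toy simulation with made-up scores and thresholds, not the paper’s actual data or pipeline—but it shows why sequential filtering on separate criteria surfaces only all-around performers:

```python
import random

random.seed(0)

# Hypothetical variant library: each mutant gets a 0-to-1 score on the
# three properties the screen cares about. (Toy numbers.)
library = [
    {"id": i,
     "brightness": random.random(),
     "localization": random.random(),
     "speed": random.random()}
    for i in range(1000)
]
# Plant one known all-around performer so the screen has something to find.
library.append({"id": "archon-like", "brightness": 0.95,
                "localization": 0.92, "speed": 0.90})

def screening_round(pool, criterion, threshold):
    """One round of the 'game show': only variants that clear the bar
    for this one property stay in the running."""
    return [v for v in pool if v[criterion] >= threshold]

survivors = library
for criterion in ("localization", "brightness", "speed"):
    survivors = screening_round(survivors, criterion, threshold=0.85)

print(len(library), "->", len(survivors), "survivors")
```

A variant that is dazzlingly bright but mislocalized gets cut in round one, which is exactly the property that makes the real screen so effective.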
“(It’s) a very clever high-throughput screening approach,” said Harvard professor Dr. Adam Cohen, who was not involved in this study. Cohen previously developed a sensor called QuasAr2 (get it?) that Boyden used here as a starting point to generate his mutant forms.
Putting Archon1 to the test, the team expressed the protein in mouse cortical neurons, where it embedded in their membranes. These cells come from the outermost region of the brain—the cortex—often considered the seat of higher cognitive functions.
Archon1 performed fabulously in brain slices from these mice. When stimulated with a reddish-orange light, the protein emitted a longer wavelength of red light that matched the neuron’s voltage swings—the protein’s brightness corresponded to a particular voltage.
The sensor was extremely quick on its feet, capable of reporting each time a neuron fired in near real time.
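In practice, reading activity off a sensor like this reduces to thresholding a fluorescence trace: brightness jumps mark firing events. Here is a toy sketch with a fully synthetic trace (the spike times and the 40 percent brightness jump are invented for illustration, not real Archon1 numbers):

```python
# Synthetic fluorescence trace, one sample per millisecond: a steady
# baseline glow with brief bright transients where the neuron "fires".
baseline = 1.0
trace = [baseline] * 1000
for t in (120, 450, 800):            # hypothetical spike times (ms)
    trace[t] = baseline * 1.4        # brightness jump on firing

def detect_spikes(trace, baseline, dff_threshold=0.2):
    """Flag samples whose fractional fluorescence change (dF/F)
    rises above threshold - i.e., where the sensor lit up."""
    return [t for t, f in enumerate(trace)
            if (f - baseline) / baseline > dff_threshold]

print(detect_spikes(trace, baseline))  # -> [120, 450, 800]
```

The brighter and faster the sensor, the cleaner this thresholding gets—which is why brightness, speed, and photostability were the screen’s make-or-break criteria.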
The team also tested Archon1 in two of neuroscience’s darling translucent animal models: a zebrafish and a tiny worm called C. elegans. Don’t underestimate these critters: zebrafish are often used to study how the brain encodes vision, hearing, movement, or fear, whereas C. elegans has shed light on the circuits that drive eating, socializing, and even sex.
Their see-through bodies make it particularly easy to watch neurons light up in action, with a high signal-to-noise ratio. As in the mouse brain slices, Archon1 performed beautifully, rapidly emitting light with a signal that lasted at least eight minutes.
“(This) supports recordings of neural activity over behaviorally relevant timescales,” the authors said.
Even cooler, Archon1 can be used in conjunction with optogenetic tools. In a proof-of-concept, the team used blue light to activate a neuron in C. elegans and watched Archon1 light up in response—an amazing visual feedback, especially since neuroscientists often use electrical recordings to see whether their optogenetic tricks worked.
The team is now looking to test their sensor in living mice as the animals perform certain behaviors and tasks.
The sensor “opens up the exciting possibility of simultaneous recordings of large populations of neurons” and of capturing each individual firing from every single neuron, the authors said. We’ll be watching neural computations happen in real time under the microscope.
And the best is yet to come. Scientific-grade cameras are increasingly capable of taking images at faster speeds and allowing for higher resolutions with a broader field of view. Mapping the brain with Archon1 and future generation sensors will no doubt yield buckets of new findings and theories about how the brain works.
“Over the next five years or so we’re going to try to solve some small brain circuits completely,” said Boyden.
Alzheimer’s disease is one of the top 10 deadliest diseases in the United States, and it cannot be cured or prevented. But new studies are finding ways to diagnose it in its earliest stages, while some of the latest research says technologies like artificial intelligence can detect dementia years before the first symptoms occur.
These advances, in turn, will help bolster clinical trials seeking a cure or therapies to slow or prevent the disease. Catching Alzheimer’s disease or other forms of dementia early in their progression can help ease symptoms in some cases.
“Often neurodegeneration is diagnosed late when massive brain damage has already occurred,” says professor Francis L Martin at the University of Central Lancashire in the UK, in an email to Singularity Hub. “As we know more about the molecular basis of the disease, there is the possibility of clinical interventions that might slow or halt the progress of the disease, i.e., before brain damage. Extending cognitive ability for even a number of years would have huge benefit.”
The researchers used sensor-based technology with a diamond core to analyze about 550 blood samples. They identified specific chemical bonds within the blood after passing light through the diamond core and recording its interaction with the sample. The results were then compared against blood samples from cases of Alzheimer’s disease and other neurodegenerative diseases, along with those from healthy individuals.
“From a small drop of blood, we derive a fingerprint spectrum. That fingerprint spectrum contains numerical data, which can be inputted into a computational algorithm we have developed,” Martin explains. “This algorithm is validated for prediction of unknown samples. From this we determine sensitivity and specificity. Although not perfect, my clinical colleagues reliably tell me our results are far better than anything else they have seen.”
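Sensitivity and specificity, the two figures Martin cites, fall straight out of a diagnostic test’s confusion counts. A minimal illustration—the counts below are invented for the example, not the study’s actual results:

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Sensitivity: fraction of true disease cases the test flags.
    Specificity: fraction of healthy cases it correctly clears."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screen of 550 samples: 200 with disease, 350 healthy.
# tp/fn: disease samples flagged/missed; tn/fp: healthy cleared/flagged.
sens, spec = sensitivity_specificity(tp=180, fp=35, tn=315, fn=20)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
# -> sensitivity=0.90, specificity=0.90
```

Validating on held-out “unknown” samples, as Martin describes, simply means computing these two numbers on blood samples the algorithm never saw during training.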
Martin says the breakthrough is the result of more than 10 years developing sensor-based technologies for routine screening, monitoring, or diagnosing neurodegenerative diseases and cancers.
“My vision was to develop something low-cost that could be readily applied in a typical clinical setting to handle thousands of samples potentially per day or per week,” he says, adding that the technology also has applications in environmental science and food security.
The new test can also distinguish accurately between Alzheimer’s disease and other forms of neurodegeneration, such as Lewy body dementia, which is one of the most common causes of dementia after Alzheimer’s.
“To this point, other than at post-mortem, there has been no single approach towards classifying these pathologies,” Martin notes. “MRI scanning is often used but is labor-intensive, costly, difficult to apply to dementia patients, and not a routine point-of-care test.”
Canadian researchers at McGill University believe they can predict Alzheimer’s disease up to two years before its onset using big data and artificial intelligence. They developed an algorithm capable of recognizing the signatures of dementia using a single amyloid PET scan of the brain of patients at risk of developing the disease.
Alzheimer’s is caused by the accumulation of two proteins—amyloid beta and tau. The latest research suggests that amyloid beta leads to the buildup of tau, which is responsible for damaging nerve cells and connections between cells called synapses.
The work was recently published in the journal Neurobiology of Aging.
“Despite the availability of biomarkers capable of identifying the proteins causative of Alzheimer’s disease in living individuals, the current technologies cannot predict whether carriers of AD pathology in the brain will progress to dementia,” Sulantha Mathotaarachchi, lead author on the paper and an expert in artificial neural networks, tells Singularity Hub by email.
The algorithm, trained on a population with amnestic mild cognitive impairment observed over 24 months, proved accurate 84.5 percent of the time. Mathotaarachchi says the algorithm can be trained on different populations for different observational periods, meaning the system can grow more comprehensive with more data.
“The more biomarkers we incorporate, the more accurate the prediction could be,” Mathotaarachchi adds. “However, right now, acquiring [the] required amount of training data is the biggest challenge. … In Alzheimer’s disease, it is known that the amyloid protein deposition occurs decades before symptoms onset.”
Unfortunately, the same process occurs in normal aging as well. “The challenge is to identify the abnormal patterns of deposition that lead to the disease later on,” he says.
One of the key goals of the project is to improve the research in Alzheimer’s disease by ensuring those patients with the highest probability to develop dementia are enrolled in clinical trials. That will increase the efficiency of clinical programs, according to Mathotaarachchi.
“One of the most important outcomes from our study was the pilot, online, real-time prediction tool,” he says. “This can be used as a framework for patient screening before recruiting for clinical trials. … If a disease-modifying therapy becomes available for patients, a predictive tool might have clinical applications as well, by providing to the physician information regarding clinical progression.”
Pixel by Pixel Prediction
Private industry is also working toward improving science’s predictive powers when it comes to detecting dementia early. One San Francisco startup called Darmiyan claims its proprietary software can pick up signs of Alzheimer’s disease up to 15 years before onset.
Darmiyan didn’t respond to a request for comment for this article. VentureBeat reported that the company’s MRI-analyzing software “detects cell abnormalities at a microscopic level to reveal what a standard MRI scan cannot” and that the “software measures and highlights subtle microscopic changes in the brain tissue represented in every pixel of the MRI image long before any symptoms arise.”
Darmiyan claims to have a 90 percent accuracy rate and says its software has been vetted by top academic institutions like New York University, Rockefeller University, and Stanford, according to VentureBeat. The startup is awaiting FDA approval to proceed further but is reportedly working with pharmaceutical companies like Amgen, Johnson & Johnson, and Pfizer on pilot programs.
“Our technology enables smarter drug selection in preclinical animal studies, better patient selection for clinical trials, and much better drug-effect monitoring,” Darmiyan cofounder and CEO Padideh Kamali-Zare told VentureBeat.
An estimated 5.5 million Americans have Alzheimer’s, and one in 10 people over age 65 has been diagnosed with the disease. By mid-century, the number of Alzheimer’s patients could rise to 16 million. Health care costs in 2017 alone are estimated to be $259 billion, and by 2050 the annual price tag could be more than $1 trillion.
In sum, it’s a disease that cripples people and the economy.
Researchers are always after more data as they look to improve outcomes, with the hope of one day developing a cure or preventing the onset of neurodegeneration altogether. If interested in seeing this medical research progress, you can help by signing up on the Brain Health Registry to improve the quality of clinical trials.
In November 2017, a gunman entered a church in Sutherland Springs, Texas, where he killed 26 people and wounded 20 others. He escaped in his car, with police and residents in hot pursuit, before losing control of the vehicle and flipping it into a ditch. When the police got to the car, he was dead. The episode is horrifying enough without its unsettling epilogue. In the course of their investigations, the FBI reportedly pressed the gunman’s finger to the fingerprint-recognition feature on his iPhone in an attempt to unlock it. Regardless of who’s affected, it’s disquieting to think of the police using a corpse to break into someone’s digital afterlife.
Most democratic constitutions shield us from unwanted intrusions into our brains and bodies. They also enshrine our entitlement to freedom of thought and mental privacy. That’s why neurochemical drugs that interfere with cognitive functioning can’t be administered against a person’s will unless there’s a clear medical justification. Similarly, according to scholarly opinion, law-enforcement officials can’t compel someone to take a lie-detector test, because that would be an invasion of privacy and a violation of the right to remain silent.
But in the present era of ubiquitous technology, philosophers are beginning to ask whether biological anatomy really captures the entirety of who we are. Given the role they play in our lives, do our devices deserve the same protections as our brains and bodies?
After all, your smartphone is much more than just a phone. It can tell a more intimate story about you than your best friend. No other piece of hardware in history, not even your brain, contains the quality or quantity of information held on your phone: it ‘knows’ whom you speak to, when you speak to them, what you said, where you have been, your purchases, photos, biometric data, even your notes to yourself—and all this dating back years.
In 2014, the United States Supreme Court used this observation to justify the decision that police must obtain a warrant before rummaging through our smartphones. These devices “are now such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy,” as Chief Justice John Roberts observed in his written opinion.
The Chief Justice probably wasn’t making a metaphysical point—but the philosophers Andy Clark and David Chalmers were when they argued in “The Extended Mind” (1998) that technology is actually part of us. According to traditional cognitive science, “thinking” is a process of symbol manipulation or neural computation, which gets carried out by the brain. Clark and Chalmers broadly accept this computational theory of mind, but claim that tools can become seamlessly integrated into how we think. Objects such as smartphones or notepads are often just as functionally essential to our cognition as the synapses firing in our heads. They augment and extend our minds by increasing our cognitive power and freeing up internal resources.
If accepted, the extended mind thesis threatens widespread cultural assumptions about the inviolate nature of thought, which sits at the heart of most legal and social norms. As the US Supreme Court declared in 1942: “freedom to think is absolute of its own nature; the most tyrannical government is powerless to control the inward workings of the mind.” This view has its origins in thinkers such as John Locke and René Descartes, who argued that the human soul is locked in a physical body, but that our thoughts exist in an immaterial world, inaccessible to other people. One’s inner life thus needs protecting only when it is externalized, such as through speech. Many researchers in cognitive science still cling to this Cartesian conception—only, now, the private realm of thought coincides with activity in the brain.
But today’s legal institutions are straining against this narrow concept of the mind. They are trying to come to grips with how technology is changing what it means to be human, and to devise new normative boundaries to cope with this reality. Justice Roberts might not have known about the idea of the extended mind, but it supports his wry observation that smartphones have become part of our body. If our minds now encompass our phones, we are essentially cyborgs: part-biology, part-technology. Given how our smartphones have taken over what were once functions of our brains—remembering dates, phone numbers, addresses—perhaps the data they contain should be treated on a par with the information we hold in our heads. So if the law aims to protect mental privacy, its boundaries would need to be pushed outwards to give our cyborg anatomy the same protections as our brains.
This line of reasoning leads to some potentially radical conclusions. Some philosophers have argued that when we die, our digital devices should be handled as remains: if your smartphone is a part of who you are, then perhaps it should be treated more like your corpse than your couch. Similarly, one might argue that trashing someone’s smartphone should be seen as a form of “extended” assault, equivalent to a blow to the head, rather than just destruction of property. If your memories are erased because someone attacks you with a club, a court would have no trouble characterizing the episode as a violent incident. So if someone breaks your smartphone and wipes its contents, perhaps the perpetrator should be punished as they would be if they had caused a head trauma.
The extended mind thesis also challenges the law’s role in protecting both the content and the means of thought—that is, shielding what and how we think from undue influence. Regulation bars non-consensual interference in our neurochemistry (for example, through drugs), because that meddles with the contents of our mind. But if cognition encompasses devices, then arguably they should be subject to the same prohibitions. Perhaps some of the techniques that advertisers use to hijack our attention online, to nudge our decision-making or manipulate search results, should count as intrusions on our cognitive process. Similarly, in areas where the law protects the means of thought, it might need to guarantee access to tools such as smartphones—in the same way that freedom of expression protects people’s right not only to write or speak, but also to use computers and disseminate speech over the internet.
The courts are still some way from arriving at such decisions. Besides the headline-making cases of mass shooters, there are thousands of instances each year in which police authorities try to get access to encrypted devices. Although the Fifth Amendment to the US Constitution protects individuals’ right to remain silent (and therefore not give up a passcode), judges in several states have ruled that police can forcibly use fingerprints to unlock a user’s phone. (With the new facial-recognition feature on the iPhone X, police might only need to get an unwitting user to look at her phone.) These decisions reflect the traditional concept that the rights and freedoms of an individual end at the skin.
But the concept of personal rights and freedoms that guides our legal institutions is outdated. It is built on a model of a free individual who enjoys an untouchable inner life. Now, though, our thoughts can be invaded before they have even been developed—and in a way, perhaps this is nothing new. The Nobel Prize-winning physicist Richard Feynman used to say that he thought with his notebook. Without pen and paper, a great deal of complex reflection and analysis would never have been possible. If the extended mind view is right, then even simple technologies such as these would merit recognition and protection as a part of the essential toolkit of the mind.
This article was originally published at Aeon and has been republished under Creative Commons.
New planets found in distant corners of the galaxy. Climate models that may improve our understanding of sea level rise. The emergence of new antimalarial drugs. These scientific advances and discoveries have been in the news in recent months.
While representing wildly divergent disciplines, from astronomy to biotechnology, they all have one thing in common: Artificial intelligence played a key role in their scientific discovery.
One of the more recent and famous examples came out of NASA at the end of 2017. The US space agency had announced an eighth planet discovered in the Kepler-90 system. Scientists had trained a neural network—a computer with a “brain” modeled on the human mind—to re-examine data from Kepler, a space-borne telescope with a four-year mission to seek out new life and new civilizations. Or, more precisely, to find habitable planets where life might just exist.
The researchers trained the artificial neural network on a set of 15,000 previously vetted signals until it could identify true planets and false positives 96 percent of the time. It then went to work on weaker signals from nearly 700 star systems with known planets.
The machine detected Kepler-90i—a hot, rocky planet that orbits its sun about every two Earth weeks—through a nearly imperceptible change in brightness captured when a planet passes in front of its star. It also found a sixth Earth-sized planet in the Kepler-80 system.
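The signal the network learns to recognize is simple to state: a small, strictly periodic dip in a star’s brightness each time the planet crosses in front of it. A stdlib-only toy detector on a synthetic light curve—nothing like NASA’s actual neural network, just the underlying idea:

```python
# A transiting planet shows up as a tiny periodic dip in the star's
# light curve. Toy light curve: flux 1.0 with a 1% dip every 20 samples.
period, depth = 20, 0.01
flux = [1.0 - (depth if t % period == 0 else 0.0) for t in range(200)]

def dip_times(flux, threshold=0.005):
    """Return samples where brightness drops below baseline by more
    than the threshold."""
    return [t for t, f in enumerate(flux) if 1.0 - f > threshold]

dips = dip_times(flux)
# Evenly spaced dips are the hallmark of a real planet; a one-off dip
# is more likely an instrumental false positive.
intervals = {b - a for a, b in zip(dips, dips[1:])}
print(dips[:3], sorted(intervals))  # -> [0, 20, 40] [20]
```

The hard part—the reason for the neural network—is that real Kepler dips sit under noise far larger than the dip itself, which is where learned filters beat hand-set thresholds.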
AI Handles Big Data
The application of AI to science is being driven by three great advances in technology, according to Ross King from the Manchester Institute of Biotechnology at the University of Manchester, leader of a team that developed an artificially intelligent “scientist” called Eve.
Those three advances include much faster computers, big datasets, and improved AI methods, King said. “These advances increasingly give AI superhuman reasoning abilities,” he told Singularity Hub by email.
AI systems can remember vast numbers of facts and extract information effortlessly from millions of scientific papers, not to mention exhibit flawless logical reasoning and near-optimal probabilistic reasoning, King says.
AI systems also beat humans when it comes to dealing with huge, diverse amounts of data.
That’s partly what attracted a team of glaciologists to turn to machine learning to untangle the factors involved in how heat from Earth’s interior might influence the ice sheet that blankets Greenland.
Algorithms juggled 22 geologic variables—such as bedrock topography, crustal thickness, magnetic anomalies, rock types, and proximity to features like trenches, ridges, young rifts, and volcanoes—to predict geothermal heat flux under the ice sheet throughout Greenland.
The machine learning model, for example, predicts elevated heat flux upstream of Jakobshavn Glacier, the fastest-moving glacier in the world.
“The major advantage is that we can incorporate so many different types of data,” explains Leigh Stearns, associate professor of geology at the University of Kansas, whose research takes her to the polar regions to understand how and why Earth’s great ice sheets are changing, questions directly related to future sea level rise.
“All of the other models just rely on one parameter to determine heat flux, but the [machine learning] approach incorporates all of them,” Stearns told Singularity Hub in an email. “Interestingly, we found that there is not just one parameter…that determines the heat flux, but a combination of many factors.”
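Stearns’ point—that no single parameter determines heat flux, but a combination of many does—is the core reason to reach for machine learning here. A minimal hand-rolled linear model makes the contrast concrete. All features, weights, and site values below are invented for illustration, not the study’s data:

```python
# Hypothetical, normalized geologic features for three locations:
# (crustal thickness, magnetic anomaly, distance to nearest volcano)
sites = {
    "site_A": (0.2, 0.9, 0.1),
    "site_B": (0.8, 0.1, 0.9),
    "site_C": (0.5, 0.5, 0.5),
}

# Illustrative weights: predicted flux rises with magnetic anomaly and
# falls with crustal thickness and volcano distance. (Made-up numbers.)
weights = (-0.4, 0.7, -0.3)
bias = 0.5

def predict_flux(features):
    """Linear combination of many inputs - the 'combination of many
    factors' a single-parameter model cannot express."""
    return bias + sum(w * x for w, x in zip(weights, features))

for name, feats in sites.items():
    print(name, round(predict_flux(feats), 2))
```

The real study used a far richer machine learning model over 22 variables, but the structural advantage is the same: every input gets a say in the prediction.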
The research was published last month in Geophysical Research Letters.
Stearns says her team hopes to apply high-powered machine learning to characterize glacier behavior over both short and long-term timescales, thanks to the large amounts of data that she and others have collected over the last 20 years.
Emergence of Robot Scientists
While Stearns sees machine learning as another tool to augment her research, King believes artificial intelligence can play a much bigger role in scientific discoveries in the future.
“I am interested in developing AI systems that autonomously do science—robot scientists,” he said. Such systems, King explained, would automatically originate hypotheses to explain observations, devise experiments to test those hypotheses, physically run the experiments using laboratory robotics, and even interpret the results. The conclusions would then influence the next cycle of hypotheses and experiments.
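King’s closed loop—originate a hypothesis, run an experiment, interpret the result, feed it back—can be sketched as a simple optimization loop. Everything here is a stand-in: the “experiment” is just a scoring function with a hidden target, not laboratory robotics:

```python
import random

random.seed(1)

def run_experiment(hypothesis):
    """Stand-in for the lab robotics: score how well a hypothesis
    (here just a number) explains a hidden 'truth' of 0.73."""
    truth = 0.73
    return 1.0 - abs(hypothesis - truth)

def robot_scientist(cycles=300):
    """Hypothesize -> experiment -> interpret -> refine, in a loop;
    each cycle's conclusion seeds the next round of hypotheses."""
    best_h = random.random()              # initial guess
    best_score = run_experiment(best_h)
    for _ in range(cycles):
        hypothesis = best_h + random.uniform(-0.1, 0.1)  # originate
        score = run_experiment(hypothesis)               # experiment
        if score > best_score:                           # interpret
            best_h, best_score = hypothesis, score       # feed back
    return best_h

print(round(robot_scientist(), 2))  # homes in on the hidden truth
```

A system like Eve replaces the toy scoring function with actual assays run by lab robots, but the cycle structure is the same.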
His AI scientist Eve recently helped researchers discover that triclosan, an ingredient commonly found in toothpaste, could be used as an antimalarial drug against certain strains that have developed a resistance to other common drug therapies. The research was published in the journal Scientific Reports.
Automation using artificial intelligence for drug discovery has become a growing area of research, as the machines can work orders of magnitude faster than any human. AI is also being applied in related areas, such as synthetic biology for the rapid design and manufacture of microorganisms for industrial uses.
King argues that machines are better suited to unravel the complexities of biological systems, since even the most “simple” organisms host thousands of genes, proteins, and small molecules that interact in complicated ways.
“Robot scientists and semi-automated AI tools are essential for the future of biology, as there are simply not enough human biologists to do the necessary work,” he said.
Creating Shockwaves in Science
The use of machine learning, neural networks, and other AI methods can often get better results in a fraction of the time it would normally take to crunch data.
For instance, scientists at the National Center for Supercomputing Applications, located at the University of Illinois at Urbana-Champaign, have a deep learning system for the rapid detection and characterization of gravitational waves. Gravitational waves are disturbances in spacetime, emanating from big, high-energy cosmic events, such as the massive explosion of a star known as a supernova. The “Holy Grail” of this type of research is to detect gravitational waves from the Big Bang.
Dubbed Deep Filtering, the method allows real-time processing of data from LIGO, a gravitational wave observatory made up of two enormous laser interferometers located thousands of miles apart in Washington State and Louisiana. The research was published in Physics Letters B. You can watch a trippy visualization of the results below.
In a more down-to-earth example, scientists published a paper last month in Science Advances on the development of a neural network called ConvNetQuake to detect and locate minor earthquakes from ground motion measurements called seismograms.
ConvNetQuake uncovered 17 times more earthquakes than traditional methods. Scientists say the new method is particularly useful in monitoring small-scale seismic activity, which has become more frequent, possibly due to fracking activities that involve injecting wastewater deep underground. You can learn more about ConvNetQuake in this video:
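Stripped of the deep learning, the core operation is the “conv” in ConvNetQuake: slide a filter along the seismogram and flag where its response spikes. A hand-rolled, stdlib-only sketch—this is the general template-matching idea, not the paper’s architecture, and the wavelet and event timing are invented:

```python
# Toy seismogram: a flat, noise-free baseline with a small 'quake'
# wavelet buried starting at sample 60, at 30% amplitude.
wavelet = [0.0, 1.0, -1.0, 0.5]
signal = [0.0] * 100
for i, w in enumerate(wavelet):
    signal[60 + i] = 0.3 * w

def correlate(signal, kernel):
    """Valid-mode cross-correlation: one learned-filter 'channel'
    of a 1D conv net, computed by hand."""
    n = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(n))
            for i in range(len(signal) - n + 1)]

response = correlate(signal, wavelet)
detection = max(range(len(response)), key=response.__getitem__)
print(detection)  # -> 60: the response peaks at the event's onset
```

A trained network learns many such filters from labeled seismograms rather than being handed the wavelet, which is how it catches the weak events traditional catalogs miss.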
King says he believes that in the long term there will be no limit to what AI can accomplish in science. He and his team, including Eve, are currently working on developing cancer therapies under a grant from DARPA.
“Robot scientists are getting smarter and smarter; human scientists are not,” he says. “Indeed, there is arguably a case that human scientists are less good. I don’t see any scientist alive today of the stature of a Newton or Einstein—despite the vast number of living scientists. The Physics Nobel [laureate] Frank Wilczek is on record as saying (10 years ago) that in 100 years’ time the best physicist will be a machine. I agree.”
Dr. Shoshana Ungerleider is an expert in the field of end-of-life care and is working to overhaul patient treatment at this life stage. She practices internal medicine at California Pacific Medical Center in San Francisco and is the founder of the End Well project, a new symposium focused on using human-centered design principles to improve the end-of-life experience.
While explaining the current status quo of end-of-life care, Ungerleider said, “It’s really important for people to understand that, by default, you will receive aggressive invasive care no matter how old you are, no matter how sick you are, and even if it won’t help you. That’s our default protocol in the United States.”
Having standardized medical protocols for many conditions is crucial, but when it comes to end-of-life care, these impersonal and uniform treatment plans fail to honor the needs of the individual at hand. This shouldn’t come as a surprise because what it means to “end well” is unique for everyone. Because of this, Ungerleider’s core message is that there cannot be a one-size-fits-all treatment plan for the end-of-life experience.
Ungerleider said, “As a physician, it’s really all about making sure that the care people receive is care that they really want, and that they understand. It’s about honoring the way that people have lived their lives and looking at what’s most important to them and what are their goals and values for living.”
By bringing together communities of designers, technologists, healthcare professionals, and activists at the End Well project, Ungerleider hopes to overhaul the current medical approach to end-of-life care—and to ultimately make it a more human experience.
The term “silver bullet” gets tossed around a lot, but cancer vaccines are just that. Unlike the flu vaccines we’re familiar with, cancer vaccines don’t just seek to prevent cancers from forming; in many cases, they also treat tumors already in the body.
What unites cancer vaccines is this: these agents, ranging from chemicals to DNA-like molecules to cells, all give the immune system a boost so that it better recognizes and attacks cancer cells.
To Dr. Ronald Levy, an oncologist at Stanford University, cancer immunotherapy is the way to go. You may have heard of some of these treatments already. CAR-T, which genetically enhances a patient’s immune cells to better target cancers, was approved last year to treat certain types of blood cancers.
“All of these immunotherapy advances are changing medical practice,” he says.
And a tidal wave is coming. Just this month, two studies explored completely new ways to shock the immune system back into action. The first, a Stanford study published in Cell Stem Cell, surprisingly found that induced pluripotent stem cells (iPSCs) from a patient can “train” the immune system into attacking or preventing tumors in mice.
“The concept itself is pretty simple,” says study author Dr. Joseph Wu, “we would take your blood, make iPSCs and then inject the cells to prevent future cancers.”
The second, led by Levy and published in Science Advances, found a simple, ready-to-use system to boost immune T cells. By injecting two molecules directly into solid tumors, they reinvigorated confused T cells, transforming them into super soldiers that wiped out both local cancer cells and those that had already spread.
The best thing? These approaches aren’t mutually exclusive. We could envision a treatment regimen whereby a patient first receives a personalized iPSC cancer vaccine, followed by a universal “booster shot” that further enhances T cell efficacy.
“I’m very excited about the future possibilities,” says Wu.
T Cell Boot Camp
Cancer cells exist in a sort of limbo. When they first begin to mutate, swapping normal surface proteins with cancerous ones, it throws the immune system into red alert. T cells perk up, infiltrate the tumor mass, and begin sweeping the area clean of the dangerous mutants.
Yet at some point, cancer cells fight back. They learn to evade the T cells’ surveillance system, or even figure out ways to keep those immune soldiers from doing their jobs.
The thing is, it takes a while for rookie T cells to realize that something’s awry. Trained soldiers are another story—these guys spring into action, beating back cancers before they have a chance to grow.
What if we could take “naïve” T cells to boot camp?
In one early study, a team identified proteins on melanoma cells (a common but deadly type of skin cancer) that were specific to the cancer. These cells came from tumors surgically removed from patients who were at high risk of recurrence.
The team synthesized molecules that resembled these melanoma markers and injected them back into the patients to “train” their immune systems. Twenty-five months later, four out of six treated patients remained cancer-free—a small win, but a huge proof of concept for the field.
But the approach has a drawback: it requires complex computer algorithms to tease out which markers to use as bait, and making them from scratch is expensive.
The Stem Cell Solution
In a surprising twist, Wu and colleagues found another Trojan horse draped in cancer-like surface markers—iPSCs.
By comparing the gene expression profiles of cancer and iPSCs, the team found remarkable similarities, suggesting that the two cell types may share surface markers that could act as “red flags” to the immune system. In fact, iPSCs can often form a type of tumor called teratomas when injected into mice, and like cancer cells, they’re free from growth restrictions normally built into healthy, adult cells.
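The gene-set comparison behind this idea can be illustrated with a toy calculation: how much do two cell types’ lists of highly expressed genes overlap? This is a minimal sketch, not the study’s actual analysis, and the gene names below are hypothetical placeholders:

```python
def jaccard_overlap(genes_a, genes_b):
    """Jaccard similarity between two sets of expressed genes."""
    a, b = set(genes_a), set(genes_b)
    return len(a & b) / len(a | b)

# Hypothetical marker lists, for illustration only
ipsc_markers = {"GENE_A", "GENE_B", "GENE_C", "GENE_D"}
tumor_markers = {"GENE_B", "GENE_C", "GENE_D", "GENE_E"}

shared = ipsc_markers & tumor_markers          # candidate "red flag" antigens
similarity = jaccard_overlap(ipsc_markers, tumor_markers)
print(sorted(shared), similarity)
```

In practice, the real comparison runs over genome-wide expression profiles with statistical tests, but the intuition is the same: the larger the shared set, the more the immune system trained on one cell type may recognize the other.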
To see if iPSCs can act as a vaccine, the team injected mice with four doses of iPSCs over a month. These cells were converted from the mice’s own skin cells and irradiated to prevent them from forming teratomas. As with many cancer therapies, the team also added a generic immune-stimulating chemical to the cells, which by itself had no observable effect.
The mice were then transplanted with mouse breast cancer cells. One week later, the saline control group developed large, dramatic tumors at the injection site. In contrast, tumor growth slowed in 70 percent of the vaccinated mice, and two animals completely beat back the cancer, living cancer-free for over a year.
“Once activated, the immune system is on alert to target cancers as they develop throughout the body,” says study author Nigel Kooreman, adding that the technique is especially powerful because we can simultaneously “train” the immune system on multiple types of cancer markers.
The Universal Booster
But what if the cancer had already formed?
Levy’s new (relatively) cheap-and-easy vaccine uses two molecules to reboot T cells snoozing inside tumors back into action.
The first is a short piece of DNA dubbed a CpG oligonucleotide. It’s like installing a turbo on a diesel engine—the molecule causes T cells to up their expression of a molecule called OX40 (the turbo).
The second shot is like giving fuel to the new engine. The shot contains a molecule that binds OX40, which causes T cells to rev their engines to a full roar.
The vaccine worked shockingly well in mice transplanted with lymphoma tumors in two places in their bodies. In 87 out of 90 mice, a three-dose treatment at one site eliminated cancer cells from both locations. Similarly, transplanted mouse breast, colon, and melanoma tumors were also beaten back with the vaccine.
The shot also worked for animals genetically engineered to spontaneously develop breast cancer, wiping out cancer cells both at the site of injection and those that sprang up farther away. What’s more, the shot lowered the chance of the animals developing future tumors and boosted their survival rate. At 15 weeks after the shot, roughly 80 percent of the vaccinated animals survived, whereas all those in the control group perished.
Even more incredibly, the team found that the treatment was extremely specific. T cells reinvigorated to attack lymphoma cells did not harm colon cancer cells or other normal cells. This is likely because the drugs only activate T cells already present inside the injected tumor—in other words, T cells capable of recognizing and infiltrating a specific type of cancer cell.
Unlike previous cancer immunotherapies, Levy’s vaccine is extremely elegant in its simplicity. It doesn’t need to identify cancer markers specific for each patient, or customize each patient’s T cells as in CAR-T.
Even better: both molecules in the vaccine are already approved individually for human use.
Levy is now recruiting about 15 patients with lymphoma to test the therapy. If it works, patients with multiple tumors could receive a shot and let the vaccine do the work. Surgeons in the future could inoculate a patient before removing the tumor tissue, guarding against the cancer springing back to life.
There are limitations. A big one: the vaccine currently only targets solid tumors. For now, it doesn’t work on blood cancers such as leukemia.
Still, cancers beware. All those with sleeper T cells buried within are now potentially in the firing line.
“I don’t think there’s a limit to the type of tumor we could potentially treat,” says Levy.
Save for the occasional burning pain that accompanies a run, most people don’t pay much attention to the two-leafed organ puffing away in our chests.
But lungs are feats of engineering wonder: with over 40 types of cells embedded in a delicate but supple matrix, they continuously pump oxygen into the bloodstream over an area the size of a tennis court. Their exquisite tree-like structure optimizes gas exchange efficiency; unfortunately, it also makes engineering healthy replacement lungs a near-impossible task.
Rather than building lungs from scratch, scientists take a “replace and refresh approach”: they take a diseased lung, flush out its sickly, inflamed cells and reseed the empty matrix with healthy ones.
It’s an intricate procedure, and the delicate branches of blood vessels are often completely destroyed in the process. Without blood to deliver nutrients and molecules to the newly seeded cells, the graft fails.
What if, thought Dr. Gordana Vunjak-Novakovic at Columbia University, rather than removing all cells from a donor lung, we gently clean out only the diseased cells in the airway without touching blood circulation?
This week, Vunjak-Novakovic’s team published a “radically new approach” to bioengineering lungs: making scaffolds with blood vessels intact.
When researchers added back therapeutic human cells that line the lung’s airways—epithelial cells—to a rat lung scaffold, the foreign cells homed to the correct location, attached, and thrived.
Because lung failure often stems from diseased epithelial cells, says study author Dr. N. Valerio Dorrello, this new method allows us to regenerate lungs by treating just the injured cells.
Dr. Matthew Bacchetta, who also worked on the project, sees the method as a “transformative” way to obtain lungs ready for transplant. Because lungs are notoriously bad at repairing themselves, in severe cases the only real option is a transplant.
It’s a hard sell—only up to 20 percent of patients are still alive ten years later, the procedure is expensive, and the demand for donor lungs far exceeds the supply.
These new “vascularized” lungs bring us one step closer to the ultimate goal: transplanting lungs made from a patient’s own cells, seeded onto a donor scaffold from a cadaver or even a primate or pig.
The patients’ cells give the scaffold a complete immune makeover, lowering the risk of immune rejection—a main reason why transplants fail.
“As a lung transplant surgeon, I am very excited about the great potential of our technique,” says Bacchetta.
Engineering functional lungs is nothing short of a moonshot, even in the ambitious field of regenerative medicine.
The lung is a real jungle: at the microscopic level, the tree-like airways contain alveoli, tiny bubble-like structures where the lungs exchange gas with our blood. Both arteries and veins enwrap the alveoli like two sets of mesh pockets.
At least a half dozen cellular denizens work in tandem to keep the alveoli spheres inflated, to guard the organ against infections, and to reinforce the structure of its many branches.
This three-dimensional complexity is why we ruled out the possibility of growing lungs from scratch, explains Dr. Laura Niklason, a biomedical engineer at Yale University who was not involved in the new study.
Back in 2010, Niklason had a brilliant idea: rather than relying on synthetic templates that mimic the organ’s intricate structure—a “very tall order,” she says—scientists could use nature’s own template, the lung’s matrix, as a jumping off point.
Niklason’s approach is similar to stripping down a house to its bare bones—weight-bearing beams, struts and bolts—and reworking the rest to its new owner’s tastes.
As a proof-of-concept, Niklason’s team used a detergent that washed away the cells and blood vessels from a rat lung. They then soaked the lung matrix scaffold inside a “bioreactor” that mimics the conditions of a growing fetus.
When the team reseeded the scaffold with a cocktail of cells, the lung regrew its blood vessels, alveoli and tiny airways with the right types of cells—all within four days.
In the ultimate test of functionality, Niklason’s team transplanted the regrown lungs back into living rats. Within seconds, the lungs inflated, turning bright red as they took in blood and oxygen.
It’s just an initial step, the team wrote at the time. The lungs only survived up to two hours in the recipient’s body, and subsequent analysis revealed bleeding and blood clots within the airway and regrown capillaries.
One potential reason is this: the blood vessels may not have formed proper junctions with the alveoli. While still allowing gas exchange, this eventually causes blood leaks into the lungs.
Breath of Fresh Air
If newly grown blood vessels form faulty junctions, why not preserve the originals instead?
That’s exactly what Vunjak-Novakovic’s team tackled in the new study published in Science Advances.
Adapting Niklason’s technique, the team inserted a tube into the airway of a newly harvested rat lung and pumped through a gentle detergent that only removed the lung’s epithelial cells—the inner lining.
Blood vessels, in contrast, were washed with an electrolyte solution similar to Gatorade.
With this small change, we removed over 70 percent of epithelial cells—which are often the root of lung diseases—but maintained the vasculature, the authors say.
Like cartographers mapping a new land, the team next probed the integrity of the vessels. Injecting tiny beads that glow under UV light into the lung’s main artery, they watched as the beads flooded the twisting capillaries, glowing bright within the larger vessels.
In contrast, there were no obvious signs of glowing beads within the airway or alveoli, suggesting that the blood vessels were intact—no leakage!
With scaffold in hand, the team next marinated the structure with human lung epithelium cells. As a bonus, they also used lung cells derived from induced pluripotent stem cells (iPSCs). iPSCs are made from a patient’s own cells—often skin cells—and can be coaxed to become nearly any other cell type with the right cocktail of signals.
Because iPSCs retain the person’s immune profile, scaffolds seeded with these cells have a much lower chance of being rejected.
Within a mere 24 hours, the team detected signs of the newly seeded cells within the lung scaffolds. Under the microscope, the newcomers attached to the right spots, stabilized, and began rapidly dividing to repopulate the missing cells.
The lung grafts also had a boost in breathing power—they could expand more fully—gaining back roughly 50 percent of what was lost during the detergent treatment.
A Breath Away?
The study stops short of the final test: transplanting the engineered lung back into a recipient. As with older-generation scaffolds, the newly minted lungs could also develop deadly blood clots or bleeding once reintroduced into a living, breathing animal.
What’s more, the team only used a mild detergent in their preparation to preserve the lung’s integrity. The result was a partial cleanout with some of the rats’ own epithelial cells still intact.
These injured stragglers may provide important information to the new, healthy cells, so this could be an unexpected bonus, the authors explain. Whether they are friend or foe will have to be tested in a future study.
The technology needs a lot more work before it could be used in humans, but Vunjak-Novakovic and colleagues are already excited about potential new treatment options.
This study provides proof-of-concept evidence that our approach works, the authors write. We show, for the first time, that it’s possible to wash out diseased lung epithelial cells without touching blood vessels.
What really gets the team excited is this: although freshly harvested rat lungs were used in this study, in theory the method could be used without removing the lung.
This is “transformative”: patients with injured lung epithelial cells could have their lungs irrigated with the detergent to remove the sickly cells. Doctors could then harvest their skin cells and transform them into healthy lung cells to reseed the lung.
“Every day, I see children in intensive care with severe lung disease who depend on mechanical ventilation support,” says Dorrello. We may be on our way to an entirely new treatment that regenerates these patients’ damaged lungs, he says.
The advance of CRISPR gene editing technology, which uses an RNA strand to guide an enzyme called Cas9 to cut a specific portion of DNA, has raised concerns and sparked debate as people envision a not-so-distant future populated by bioengineered super-crops, genetically flawless pets, and customized babies. While the method could be used for these purposes, it’s also showing potential as a valuable medical tool, with a seemingly new condition added each week to the list of what CRISPR may one day cure.
One recent addition to that list is Duchenne muscular dystrophy (DMD). In a study from University of Texas Southwestern Medical Center, researchers used CRISPR to make a single cut at a few strategic points along DNA in cells derived from DMD patients, potentially correcting most of the roughly 3,000 gene mutations that cause DMD.
DMD is a genetic disorder characterized by progressive muscle degeneration and weakness. It mostly affects boys and is caused by defects in the gene that makes dystrophin, a protein that helps strengthen muscle fibers in skeletal and cardiac muscles. Many patients end up in wheelchairs, on respirators, or both, and while advances in cardiac and respiratory care have increased life expectancy into the early 30s, there’s still no cure for the condition.
The study on CRISPR for DMD was the cover story of this month’s Science Advances, and it builds on previous studies led by Dr. Eric Olson, director of UT Southwestern’s Hamon Center for Regenerative Science and Medicine, in which CRISPR was used to correct a single gene mutation that caused DMD in mice.
The new study showed that various DMD-related mutations can be corrected in human cells by eliminating flawed splice sites in genomic DNA. These splice sites instruct genes to build abnormal dystrophin molecules. The protein then doesn’t function as it should to keep muscle cells intact, and muscles start to break down.
Researchers developed 12 guide RNAs to find mutation sites along the dystrophin gene. They cut the DNA at these locations and, in doing so, directed the cellular machinery to skip over the faulty protein sequences. Once the gene was successfully edited, it started building functional dystrophin protein, enhancing the function of muscle tissue.
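As a rough illustration of how guide RNAs are targeted (this is not the study’s actual sequences or pipeline): the commonly used SpCas9 enzyme cuts about 3 base pairs upstream of a 3-letter “NGG” PAM motif in the target DNA, so each candidate guide is the 20 bases immediately preceding an NGG. A minimal sketch, using a made-up DNA string:

```python
def find_guides(dna, guide_len=20):
    """Return (guide, cut_site) pairs for each NGG PAM preceded by a full guide.

    SpCas9 recognizes a protospacer-adjacent motif (PAM) of the form N-G-G and
    cuts roughly 3 bp on the 5' side of it; the 20-nt stretch just upstream of
    the PAM is the candidate guide (protospacer) sequence.
    """
    guides = []
    for i in range(guide_len, len(dna) - 2):   # i = position of the PAM's "N"
        if dna[i + 1 : i + 3] == "GG":
            guide = dna[i - guide_len : i]     # 20 nt immediately 5' of the PAM
            cut_site = i - 3                   # approximate cut position
            guides.append((guide, cut_site))
    return guides

# Hypothetical target sequence, for illustration only
dna = "ACGTACGTACGTACGTACGT" + "AGG"
print(find_guides(dna))
```

Real guide design also checks the opposite DNA strand, scores guides for off-target matches elsewhere in the genome, and—as in this study—positions the cut so that the repaired gene skips the faulty splice site.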
“We found that correcting less than half of the cardiomyocytes (heart muscle cells) was enough to rescue cardiac function to near-normal levels in human-engineered heart tissue,” said Dr. Chengzu Long, lead author of the study and assistant professor of medicine at New York University Langone Health.
This single-cut method is an efficient alternative to developing a separate molecular treatment for each one of the gene mutations that cause DMD, and could potentially be used to correct the mutations behind other single-gene diseases like cystic fibrosis or sickle cell anemia.
“Not only did we find a practical way of treating many mutations, we have developed a less disruptive method that skips over defective DNA instead of removing it,” said Dr. Rhonda Bassel-Duby, co-author of the study and professor of molecular biology at UT Southwestern. “The genome is highly structured and you don’t want to remove DNA that could potentially be important.” She added that while single-cut editing may be useful for treating other single-gene diseases, the genes involved must still be able to function after certain DNA or RNA sequences are removed.
Before we sing CRISPR’s praises too loudly or start banking on it curing all our ailments, though, we must keep in mind that the tool is still very new, and we don’t really know what long-term results or late-onset side effects its use could engender. In fact, we’re not even sure it’ll always work in its current form on humans; one recent study found that some people may be “immune” to CRISPR, as an adaptive immune response can be triggered in people who have previously been exposed to the bacteria from which CRISPR’s Cas9 proteins are derived.
Clinical trials using CRISPR to cure blood disorders and sickle-cell disease in humans are slated to start this year in the US. Human trials have already begun in China, where CRISPR is being used to treat cancer and HIV. No peer-reviewed studies from these trials have been published yet, but doctors claim the tool has succeeded in improving some patients’ conditions.
Dr. Olson’s lab will continue testing its DMD method for side effects and will also look for ways to improve the precision of the guide RNAs. The team’s work led to the creation of biotech company Exonics Therapeutics, which has licensed the technology from UT Southwestern and is working to optimize the approach and extend it to other neuromuscular diseases.
“This is a major advance,” Dr. Bassel-Duby said. “Many different therapies have been put forward, but this one provides real hope to extend and improve the quality of patients’ lives.”