The Secret to Living Past 120 Years Old? Nanobots

In The Singularity Is Nearer: When We Merge With AI, the spiritual sequel to his (in)famous 2005 book, Ray Kurzweil doubles down on the promise of immortality.

We are now in the later stages of the first generation of life extension, which involves applying the current class of pharmaceutical and nutritional knowledge to overcoming health challenges. In the 2020s we are starting the second phase of life extension, which is the merger of biotechnology with AI. The 2030s will usher in the third phase of life extension, which will be to use nanotechnology to overcome the limitations of our biological organs altogether. As we enter this phase, we’ll greatly extend our lives, allowing people to far transcend the normal human limit of 120 years.

Only one person, Jeanne Calment—a French woman who survived to age 122—is documented to have lived longer than 120 years. So why is this such a hard limit to human longevity? One might guess that the reasons people don’t make it past this age are statistical—that elderly people face a certain risk of Alzheimer’s, stroke, heart attack, or cancer every year, and that after enough years of exposure to these risks, everyone eventually dies of something. But that’s not what’s happening. Actuarial data shows that from age 90 to 110, a person’s chances of dying in the following year increase by about 2 percentage points annually. For example, an American man at age 97 has about a 30 percent chance of dying before 98, and if he makes it that far he will have a 32 percent chance of dying before 99. But from age 110 onward, the risk of death rises by about 3.5 percentage points a year.
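To see why those increments make 120 such a hard wall, here is a toy calculation using only the figures above—a sketch of the shape of the curve, not real actuarial tables:

```python
# Toy survival model from the figures above: ~30% annual death risk at 97,
# rising ~2 percentage points per year through age 109, then ~3.5 points
# per year from 110 on. Illustrative only, not actuarial data.
def annual_death_prob(age):
    if age < 110:
        p = 0.30 + 0.02 * (age - 97)
    else:
        p = 0.30 + 0.02 * (110 - 97) + 0.035 * (age - 110)
    return min(max(p, 0.0), 1.0)

def survival_from_90(target_age):
    """Probability that a 90-year-old reaches target_age under this model."""
    prob = 1.0
    for age in range(90, target_age):
        prob *= 1.0 - annual_death_prob(age)
    return prob

print(f"P(reach 110 | alive at 90) = {survival_from_90(110):.2e}")
print(f"P(reach 122 | alive at 90) = {survival_from_90(122):.2e}")
```

Under this model the odds of a 90-year-old reaching Calment's age of 122 come out vanishingly small—compound annual risks of this size leave essentially no survivors past 120.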

Doctors have offered an explanation: At around age 110, the bodies of the oldest people start breaking down in ways that are qualitatively different from the aging of younger senior citizens. Supercentenarian (110-plus) aging is not simply a continuation or worsening of the same kinds of statistical risks of late adulthood. While people at that age also have an annual risk from ordinary diseases (although the worsening of these risks may decelerate in the very old), they additionally face new challenges like kidney failure and respiratory failure. These often seem to happen spontaneously—not as a result of lifestyle factors or any disease onset. The body apparently just starts breaking down.

Over the past decade, scientists and investors have started giving much more serious attention to finding out why. One of the leading researchers in this field is biogerontologist Aubrey de Grey, founder of the LEV (Longevity Escape Velocity) foundation. As de Grey explains, aging is like the wear on the engine of an automobile—it is damage that accumulates as a result of the system’s normal operation. In the human body’s case, that damage largely comes from a combination of cellular metabolism and cellular reproduction. Metabolism creates waste in and around cells and damages structures through oxidation (much like the rusting of a car!). When we’re young, our bodies are able to remove this waste and repair the damage efficiently. But as we get older, most of our cells reproduce over and over, and errors accumulate. Eventually the damage starts piling up faster than the body can fix it.

The only solution, longevity researchers argue, is to cure aging itself. In short, we need the ability to repair damage from aging at the level of individual cells and local tissues. There are a number of possibilities being explored for how to achieve this, but I believe the most promising ultimate solution is nanorobots.

And we don’t need to wait until these technologies are fully mature in order to benefit. If you can live long enough for anti-aging research to start adding at least one year to your remaining life expectancy annually, that will buy enough time for nanomedicine to cure any remaining facets of aging. This is longevity escape velocity. This is why there is sound logic behind Aubrey de Grey’s sensational declaration that the first person to live to 1,000 years has likely already been born. If the nanotechnology of 2050 solves enough issues of aging for 100-year-olds to start living to 150, we’ll then have until 2100 to solve whatever new problems may crop up at that age. With AI playing a key role in research by then, progress during that time will be exponential. So even though these projections are admittedly startling—and even sound absurd to our intuitive linear thinking—we have solid reasons to see this as a likely future.

I’ve had many conversations over the years about life extension, and the idea often meets resistance. People become upset when they hear of an individual whose life has been cut short by a disease, yet when confronted with the possibility of generally extending all human life, they react negatively. “Life is too difficult to contemplate going on indefinitely” is a common response. But people generally do not want to end their lives at any point unless they are in enormous pain—physically, mentally, or spiritually. And if they were to absorb the ongoing improvements of life in all its dimensions, most such afflictions would be alleviated. That is, extending human life would also mean vastly improving it.

But how will nanotechnology actually make this possible? In my view, the long-term goal is medical nanorobots. These will be made from diamondoid parts with onboard sensors, manipulators, computers, communicators, and possibly power supplies. It is intuitive to imagine nanobots as tiny metal robotic submarines chugging through the bloodstream, but physics at the nanoscale requires a substantially different approach. At this scale, water is a powerful solvent, and oxidant molecules are highly reactive, so strong materials like diamondoid will be needed.

And whereas macro-scale submarines can smoothly propel themselves through liquids, for nanoscale objects, fluid dynamics are dominated by sticky frictional forces. Imagine trying to swim through peanut butter! So nanobots will need to harness different principles of propulsion. Likewise, nanobots probably won’t be able to store enough onboard energy or computing power to accomplish all their tasks independently, so they will need to be designed to draw energy from their surroundings and either obey outside control signals or collaborate with one another to do computation.

To maintain our bodies and otherwise counteract health problems, we will all need a huge number of nanobots, each about the size of a cell. The best available estimates say that the human body is made of several tens of trillions of biological cells. If we augment ourselves with just 1 nanobot per 100 cells, this would amount to several hundred billion nanobots. It remains to be seen, though, what ratio is optimal. It might turn out, for example, that advanced nanobots could be effective even at a cell-to-nanobot ratio several orders of magnitude greater.
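The arithmetic behind those figures is straightforward; a quick sketch, taking 37 trillion as a representative value for "several tens of trillions" of cells:

```python
CELLS = 37e12  # assumed cell count; ~37 trillion is a commonly cited estimate

# Nanobot counts at several candidate cell-to-nanobot ratios.
for cells_per_bot in (100, 1_000, 10_000, 100_000):
    nanobots = CELLS / cells_per_bot
    print(f"1 nanobot per {cells_per_bot:>7,} cells -> {nanobots:.1e} nanobots")
```

At 1 nanobot per 100 cells this gives several hundred billion nanobots, as stated; a ratio a few orders of magnitude sparser would still leave hundreds of millions of devices in the body.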

One of the main effects of aging is degrading organ performance, so a key role of these nanobots will be to repair and augment them. Other than expanding our neocortex, this will mainly involve helping our nonsensory organs to efficiently place substances into the blood supply (or lymph system) or remove them. By monitoring the supply of these vital substances, adjusting their levels as needed, and maintaining organ structures, nanobots can keep a person’s body in good health indefinitely. Ultimately, nanobots will be able to replace biological organs altogether, if needed or desired.

But nanobots won’t be limited to preserving the body’s normal function. They could also be used to adjust concentrations of various substances in our blood to levels more optimal than what would normally occur in the body. Hormones could be tweaked to give us more energy and focus, or speed up the body’s natural healing and repair. If optimizing hormones could make our sleep more efficient, it would in effect be “backdoor life extension.” If you just go from needing eight hours of sleep a night to seven hours, that adds as much waking existence to the average life as five more years of lifespan!
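The sleep arithmetic checks out exactly, under the round-number assumptions of an 80-year lifespan and a 16-hour waking day:

```python
LIFESPAN_YEARS = 80   # assumed average lifespan, for round numbers
DAYS = 365
WAKING_HOURS = 16     # waking hours per day, assuming 8 hours of sleep

# One reclaimed hour per night, every night of an 80-year life...
extra_hours = LIFESPAN_YEARS * DAYS * 1

# ...versus the waking hours contained in 5 additional years of lifespan.
five_years_of_waking = 5 * DAYS * WAKING_HOURS

print(extra_hours, five_years_of_waking)  # both equal 29,200 hours
```

Both quantities come to 29,200 hours, which is why trimming one hour of nightly sleep is equivalent, in waking existence, to five extra years of life.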

Eventually, using nanobots for body maintenance and optimization should prevent major diseases from even arising. Once nanobots can selectively repair or destroy individual cells, we will fully master our biology, and medicine will become the exact science it has long aspired to be.

Achieving this will also entail gaining complete control over our genes. In our natural state, cells reproduce by copying the DNA in each nucleus. If there is a problem with the DNA sequence in a group of cells, there is no way to address it without updating it in every individual cell. This is an advantage in unenhanced biological organisms, because random mutations within individual cells are unlikely to cause fatal damage to the whole body. If any mutation in any cell in our bodies were instantly copied to every other cell, we wouldn’t be able to survive. But the decentralized robustness of biology is a major challenge to a species (like ours) that can edit individual cells’ DNA fairly well but has not yet mastered the nanotechnology needed to edit DNA effectively throughout the whole body.

If instead each cell’s DNA code were controlled by a central server (as many electronic systems are), then we could change the DNA code by simply updating it once from that “central server.” To do this, we would augment each cell’s nucleus with a nanoengineered counterpart—a system that would receive the DNA code from the central server and then produce a sequence of amino acids from this code. I use “central server” here as a shorthand for a more centralized broadcast architecture, but this probably does not mean every nanobot getting direct instructions from literally one computer. The physical challenges of nanoscale engineering might ultimately dictate that a more localized broadcast system is preferable. But even if there are hundreds or thousands of micro-scale (as opposed to nanoscale) control units placed around our bodies (which would be large enough for more complex communications with an overall control computer), this would be orders of magnitude more centralization than the status quo: independent functioning by tens of trillions of cells.

The other parts of the protein synthesis system, such as the ribosome, could be augmented in the same fashion. In this way we could simply turn off activity from malfunctioning DNA, whether it is responsible for cancer or genetic disorders. The nanocomputer maintaining this process would also implement the biological algorithms that govern epigenetics—how genes are expressed and activated. As of the early 2020s, we still have a lot to learn about gene expression, but AI will allow us to simulate it in enough detail by the time nanotechnology is mature that nanobots will be able to precisely regulate it. With this technology we’ll also be able to prevent and reverse the accumulation of DNA transcription errors, which are a major cause of aging.

Nanobots will also be useful for neutralizing urgent threats to the body—destroying bacteria and viruses, halting autoimmune reactions, or drilling through clogged arteries. In fact, researchers at Stanford and Michigan State University have already created a nanoparticle that finds the monocytes and macrophages that cause atherosclerotic plaque and eliminates those cells. Smart nanobots will be vastly more effective. Initially such treatments would be initiated by humans, but ultimately they will be carried out autonomously; the nanobots will perform tasks on their own and report their activities (via a controlling AI interface) to humans monitoring them.

As AI gains greater ability to understand human biology, it will be possible to send nanobots to address problems at the cellular level long before they would be detectable by today’s doctors. In many cases this will allow prevention of conditions that remain unexplained in 2023. Today, for example, about 25 percent of ischemic strokes are “cryptogenic”—they have no detectable cause. But we know they must happen for some reason. Nanobots patrolling the bloodstream could detect small plaques or structural defects at risk of creating stroke-causing clots, break up forming clots, or raise the alarm if a stroke is silently unfolding.

Just as with hormone optimization, though, nanomaterials will allow us to not just restore normal body function but augment it beyond what our biology alone makes possible. Biological systems are limited in strength and speed because they must be constructed from protein. Although these proteins are three-dimensional, they have to be folded from a one-dimensional string of amino acids. Engineered nanomaterials won’t have this limitation. Nanobots built from diamondoid gears and rotors would be thousands of times faster and stronger than biological materials, and designed from scratch to perform optimally.

Thanks to these advantages, even our blood supply may be replaced by nanobots. Robert A. Freitas, founding nanotechnology cochair of Singularity University, has designed an artificial red blood cell called the respirocyte. According to Freitas’ calculations, someone with respirocytes in his bloodstream could hold his breath for about four hours. In addition to artificial blood cells, we’ll eventually be able to engineer artificial lungs to oxygenate them more efficiently than the respiratory system that biology has given us. Ultimately, even hearts made from nanomaterials will make people immune to heart attacks and make cardiac arrest due to trauma much rarer.

Yet the most important role of nanotech in our bodies will be augmenting the brain—which will eventually become more than 99.9 percent nonbiological. There are two distinct pathways by which this will happen. One is the gradual introduction of nanobots to the brain tissue itself. These may be used to repair damage or replace neurons that have stopped working. The other is connecting the brain to computers, which will both provide the ability to control machines directly with our thoughts and allow us to integrate digital layers of neocortex in the cloud. This will go far beyond just better memory or faster thinking.

A deeper virtual neocortex will give us the ability to think thoughts more complex and abstract than we can currently comprehend. As a dimly suggestive example, imagine being able to clearly and intuitively visualize and reason about 10-dimensional shapes. That sort of facility will be possible across many domains of cognition. For comparison, the cerebral cortex (which is mainly made up of the neocortex) has an average of 16 billion neurons, in a volume of roughly half a liter. Ralph Merkle’s design for a nanoscale mechanical computing system could theoretically pack more than 80 quintillion logic gates into the same amount of space. And the speed advantage would be enormous: The electrochemical switching speed of mammalian neuron firing probably averages within an order of magnitude of once per second, as compared with likely around 100 million to 1 billion cycles per second for nanoengineered computation. Even if only a minuscule fraction of these values is achievable in practice, it is clear that such technology will allow the digital parts of our brain (stored on nonbiological computing substrates) to vastly outnumber and outperform the biological ones.
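Putting those comparisons side by side makes the gap concrete. A back-of-the-envelope sketch using the figures above (80 quintillion gates, 16 billion cortical neurons, ~1 firing per second versus 10⁸–10⁹ cycles per second):

```python
NEURONS = 16e9          # neurons in the cerebral cortex (from the text)
GATES = 80e18           # Merkle-style logic gates in the same half-liter volume
FIRE_RATE = 1.0         # assumed ~1 firing/second for a biological neuron
CLOCK_LOW, CLOCK_HIGH = 1e8, 1e9  # assumed cycles/second for nanomechanical logic

density_advantage = GATES / NEURONS  # how many more switching elements fit
throughput_low = density_advantage * (CLOCK_LOW / FIRE_RATE)
throughput_high = density_advantage * (CLOCK_HIGH / FIRE_RATE)

print(f"density advantage:    {density_advantage:.0e}x")
print(f"throughput advantage: {throughput_low:.0e}x to {throughput_high:.0e}x")
```

The density advantage alone is about five billion to one, and multiplying by the clock-speed gap puts the raw throughput advantage in the range of 10¹⁷ to 10¹⁸—which is why even a tiny realized fraction of these theoretical figures would dwarf the biological cortex.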

My estimate is that the number of computations inside the human brain (at the level of neurons) is on the order of 10¹⁴ per second. As of early 2023, $1,000 of computing power could perform up to 48 trillion computations per second. Based on the 2000–2022 trend, by 2053 about $1,000 of computing power (in 2023 dollars) will be enough to perform more than 1 million times as many computations per second as the unenhanced human brain. If it turns out, as I suspect, that only a fraction of the brain’s neurons are necessary to digitize the conscious mind (e.g., if we don’t have to simulate the actions of many cells that govern the actions of the body’s other organs), this point could be reached several years sooner. And even if it turns out that digitizing our conscious minds requires simulating every protein in every neuron (which I think is unlikely), it might take a few more decades to reach that level of affordability—but it’s still something that would happen within the lifetimes of many people living today. In other words, because this future depends on fundamental exponential trends, even if we greatly change our assumptions about how easy it will be to affordably digitize ourselves, that won’t vastly change the date by which this milestone will be reached.
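The growth rate implied by that 2053 projection can be checked directly. Starting from the figures above, this sketch solves for the annual price-performance multiplier needed to get from 48 trillion operations per second per $1,000 in 2023 to a million brain-equivalents per $1,000 in 2053:

```python
import math

BRAIN_OPS = 1e14          # estimated computations/second in the human brain
OPS_2023 = 48e12          # what $1,000 of compute bought in early 2023
TARGET = 1e6 * BRAIN_OPS  # a million brain-equivalents per $1,000
YEARS = 2053 - 2023

growth = (TARGET / OPS_2023) ** (1 / YEARS)     # required annual multiplier
doubling_time = math.log(2) / math.log(growth)  # implied doubling time, years

# Roughly 62% per year, doubling about every 1.4 years — in line with the
# historical price-performance trend the projection extrapolates.
print(f"~{(growth - 1) * 100:.0f}% per year, doubling every {doubling_time:.1f} years")
```

The required doubling time of about 1.4 years is what makes the projection an extrapolation of the existing trend rather than an assumption of acceleration.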

In the 2040s and 2050s, we will rebuild our bodies and brains to go vastly beyond what our biological bodies are capable of, including the ability to back them up. As nanotechnology takes off, we will be able to produce an optimized body at will: We’ll be able to run much faster and longer, swim and breathe under the ocean like fish, and even give ourselves working wings if we want them. We will think millions of times faster, but most importantly, we will not be dependent on the survival of any of our bodies for our selves to survive.

From The Singularity Is Nearer by Ray Kurzweil, to be published on June 25, 2024, by Viking, an imprint of Penguin Publishing Group, a division of Penguin Random House, LLC. Copyright © 2024 by Ray Kurzweil.

Updated 06-14-24, 5:45 am ET: The story was updated to correct the description of Kurzweil's 2005 book and the estimated number of computations in the human brain.

