Aging Reversal

http://www.longevityreporter.org/blog/2016/4/22/promising-results-from-the-first-human-gene-therapy-against-aging

I just read this article on the results of the first anti-aging gene therapy. If it’s legitimate (it looks like it is, but I don’t understand why this hasn’t been talked about in the mainstream media), this is really great news. We may actually get rid of aging even sooner than most people expect.

I feel that the transhumanist movement should probably give less attention to uploading and more to this sort of thing. What do you think?


Elizabeth Parrish had to do this treatment on herself in Colombia without permission, and without the knowledge of her staff. The mainstream considers this to be fringe science done by rogue scientists, which is why they don’t report much about it – unless it goes horribly wrong. :frowning: Unfortunately, progress all too often depends on pioneers going ahead with seemingly foolish attempts in spite of all the naysayers.

That is indeed a possibility. Opinions are split on whether fixing telomeres will help a lot or hardly at all. I want to be optimistic, but I don’t have a very informed opinion on this matter.

Longevity treatments look to be closer on the horizon than uploading at the moment. It certainly seems to be the right time to focus on longevity rather than more far-out technologies like artificial general intelligence or uploading. If we can fix aging, we will be able to use our human potential to the maximum – at the time when we need it most, because we will face the biggest challenges humanity has ever faced.


Sounds kinda cool. Really cool, actually.

Well, I guess we’ll really have to wait to see.

I’ve actually been thinking of changing the Fractal Cosmos timeline so that uploading and AGI only emerge in the mid-22nd century instead of the mid-21st century, as most transhumanists assume will happen. We’ve only just started understanding how the brain works, and we’re definitely not going to learn everything in the blink of an eye. Research currently seems to be exposing more problems than it’s solving, and I think this will be a far harder task than we used to think.

This change would allow Fractal Cosmos to set itself apart from the typical transhumanist view of the future, thus making it a lot more original. I think that people have never really given much thought to the prospect of a universe where biological immortality and human enhancement (both genetic and cybernetic) are real but AGIs and uploading aren’t there just yet. We’d be exploring fresh territory.

We could probably do this while keeping all our major ideas intact, but it would require some effort on our part to keep things coherent.

What do you think? Could this be a way of bringing some life back to the project, or is it an overly radical change?

That’s an interesting angle, but I think it would be too extreme. I can well envision a world in which biological immortality is the norm, but in which strong AI and uploading aren’t solved problems yet. A sufficiently long break in Moore’s law might be a possible reason for that. Still, your scenario is not entirely plausible. If it turns out that implementing complex minds in silicon is too complicated, scientists will resort to other strategies. They might start trying to use DNA as a compact organic computation substrate (which it actually is), optimize it for certain types of computation, eventually gain Turing completeness, and finally reach the complexity of the human brain. That approach might very well take more than 50 years, but I don’t see it taking 100 years, given all the progress happening in bioinformatics and related areas.

Perhaps we won’t upload into silicon chips, but into machines built mostly from DNA – wait, aren’t we there already? :wink: Ah, we are still lacking the protocols that enable effective transfer of data from one brain to another brain, or another substrate. Someone should definitely work on that.


OK, in that case, let’s try to fine-tune my idea a little bit.

I don’t really think that the problem will be implementing minds in silicon, but understanding how minds actually work. To me, that seems to be the real challenge here, and we’re probably not going to solve it very soon.

Still, let’s say we manage to do that in around half a century. That would mean that the Organic Artificial Intelligences you describe (mostly brains built from scratch) would appear around the beginning of the 22nd century. That would already be pretty good, but it probably wouldn’t be enough to give us uploading right away. Research suggests that, to actually replicate a mind in a different body/substrate, one would have to know in detail not just every connection between neurons but also the chemical composition of each neuron. That’s quite a lot of information and, while OAIs could certainly handle it, we’d still have to find a way of quickly “scanning” the brain in order to obtain all that information. I think this means that there will be a gap of a decade or two between the invention of the first OAI and the emergence of uploading, even with the OAIs accelerating the process (they may very well not be as easy to evolve as “traditional” Artificial Intelligence).

So, I guess that the first human to be uploaded would be someone living in the 2120s, coinciding with the early years of system V and the Exaltation (I sense good material for a story).

What do you think? Does it seem more likely this way?

Understanding how minds work is certainly a very tough challenge. But do we really need to solve it? I mean, we can make computer programs that are better than any human at playing chess, even though they work quite differently from human brains. We probably don’t need to understand how the human brain works to build something with similar capabilities. The first near-AGIs may be superior to human brains in many aspects, but still inferior in many others. That doesn’t really matter much, as long as we keep improving our methods for making those AIs better in the aspects they are still lacking in. Even without being able to create a full human-equivalent AGI, the AIs that come close, yet still surpass humans in many respects, will have a tremendous impact on just about everything.

Alternatively, we might opt to replicate as much of the brain as possible in a synthetic artefact, even without fully understanding how the brain does what it does. We don’t need to understand the brain to create good replicas; we only need to scan and simulate it. The replicas might not be quite as good as the originals, but they might help us understand the human mind a lot better.

It may be the case that we will actually reach some kind of soft ceiling when our human mind simulacra come close to the general capabilities of actual human brains, at which point it becomes very hard to improve on what we already have – it’s actually quite difficult for us to improve our human brains while we are still using them, so why would it be so much easier to improve artificial human brains? This holds even though the brain has such cool capabilities as rewiring itself in real time and creating new neurons that get integrated into existing circuits. The crucial advantages of artificial thinking substrates are better information about, and control over, what is going on inside them. But with nanomachines embedded in the human brain, we get those advantages for our legacy wetware, too.

Eventually we will develop a mind substrate that is better in every salient respect than human brains, but it’s not easy to guess how soon that will be. In the meantime, however, very interesting things will happen. Machine intelligence will become more and more important and integrated into our lives. New forms of hardware and software will emerge that will be great game changers.

You seem to be thinking about scenarios in which the Singularity gets delayed by more than sixty years, or even cancelled outright. Delayed or cancelled by what, exactly? There are so many forces pushing for better, convergent technologies that it seems unlikely that we won’t see a massive surge in our technological capabilities, say around 2050. You’d really need to come up with multiple big reasons why we might not make fast progress. Or with a single reason strong enough to basically put humanity back into the medieval age.