AlPater

Every-other-day feeding extends lifespan but fails to delay many symptoms of aging in mice


Every-other-day feeding extends lifespan but fails to delay many symptoms of aging in mice.

Xie K, Neff F, Markert A, Rozman J, Aguilar-Pimentel JA, Amarie OV, Becker L, Brommage R, Garrett L, Henzel KS, Hölter SM, Janik D, Lehmann I, Moreth K, Pearson BL, Racz I, Rathkolb B, Ryan DP, Schröder S, Treise I, Bekeredjian R, Busch DH, Graw J, Ehninger G, Klingenspor M, Klopstock T, Ollert M, Sandholzer M, Schmidt-Weber C, Weiergräber M, Wolf E, Wurst W, Zimmer A, Gailus-Durner V, Fuchs H, Hrabě de Angelis M, Ehninger D.

Nat Commun. 2017 Jul 24;8(1):155. doi: 10.1038/s41467-017-00178-3.

PMID: 28761067

https://www.nature.com/articles/s41467-017-00178-3

https://www.nature.com/articles/s41467-017-00178-3.pdf

http://www.nature.com.sci-hub.cc/articles/s41467-017-00178-3

Abstract

Dietary restriction regimes extend lifespan in various animal models. Here we show that longevity in male C57BL/6J mice subjected to every-other-day feeding is associated with a delayed onset of neoplastic disease that naturally limits lifespan in these animals. We compare more than 200 phenotypes in over 20 tissues in aged animals fed with a lifelong every-other-day feeding or ad libitum access to food diet to determine whether molecular, cellular, physiological and histopathological aging features develop more slowly in every-other-day feeding mice than in controls. We also analyze the effects of every-other-day feeding on young mice on shorter-term every-other-day feeding or ad libitum to account for possible aging-independent restriction effects. Our large-scale analysis reveals overall only limited evidence for a retardation of the aging rate in every-other-day feeding mice. The data indicate that every-other-day feeding-induced longevity is sufficiently explained by delays in life-limiting neoplastic disorders and is not associated with a more general slowing of the aging process in mice.

Dietary restriction can extend the life of various model organisms. Here, Xie et al. show that intermittent periods of fasting achieved through every-other-day feeding protect mice against neoplastic disease but do not broadly delay organismal aging in animals.


For me this is an important paper teasing CR from eating schedules.

 

Does anyone know the amount of CR of the EOD group?

 

Is this the same strain that Mattson used in 2003 that ignited the whole IF thing in the blogosphere?

 

Randy


For me this is an important paper teasing CR from eating schedules.

Yes — a very good find, Al! Properly done studies have clearly shown that intermittent fasting only extends lifespan to the extent that it cuts Calories (and see this followup post); this study now raises the question of whether the anti-aging effect suggested by those lifespan gains is actually as robust as with "regular" CR. I really wish they had included a "regular" CR arm as a positive control.

 

Unfortunately (or, I suppose, fortunately!), we can't take this result very seriously. First, "Animals fed AL were granted unlimited access to food anytime" (rather than a 10-15% restriction to avoid obesity), so with a very light level of net CR (see below), this may merely be obesity avoidance, not a real anti-aging effect (and thus the lack of retardation of functional declines). Consistent with this, the study is contaminated by the Original Sin of Biogerontology: short-lived controls. "we observed significant lifespan extension in EOD mice (Fig. 1d). Mean lifespan was prolonged by 102 days (AL: 806 days, EOD: 908 days) and maximum lifespan, calculated based on the longest living 20%, was extended by 122 days (AL: 980 days, EOD: 1102 days) (Fig. 1e), which is within the range of lifespan extension observed in previous DR studies".  Mean LS for nonobese, well-cared-for, otherwise-healthy but aging mice should be ≈900 days; maximum (at 10% survival) should be 1100 days or so. Here they give 980 d for the longest-lived 20% of AL mice, and you can see that their very last AL mouse just barely passed the thousand-day mark, so I would be surprised if their 10% survival were even 100 d longer.
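For scale, the relative gains implied by those figures are easy to check. A quick back-of-the-envelope sketch (plain arithmetic on the numbers quoted above; the function name is just for illustration):

```python
# Relative lifespan extension of EOD over AL, computed from the mean
# and longest-lived-20% figures quoted from the paper's results.
def pct_extension(al_days: float, eod_days: float) -> float:
    """Percent lifespan gain of EOD mice relative to AL controls."""
    return 100.0 * (eod_days - al_days) / al_days

mean_gain = pct_extension(806, 908)     # mean lifespan: AL 806 d, EOD 908 d
top20_gain = pct_extension(980, 1102)   # longest-lived 20%: AL 980 d, EOD 1102 d

print(f"mean: +{mean_gain:.1f}%, top 20%: +{top20_gain:.1f}%")
# prints "mean: +12.7%, top 20%: +12.4%"
```

So both the mean and the tail move by roughly the same ~12-13% relative margin, which is on the modest end of classical CR results.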

 

Does anyone know the amount of CR of the EOD group?

 

Al gave the link to the free full text. "Over the course of their entire lifespan, average calorie intake of EOD mice was reduced by 7.5% ... Body weights remained reduced in EOD mice during much of their adult life (average reduction of ~17.1%) (Fig. 1c). Organ weights and body dimensions were often reduced in EOD mice compared to AL controls (Supplementary Fig. 1)."
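To relate that 7.5% net restriction to the lifespan gains reported above, one crude yardstick (my own illustrative arithmetic, not a calculation from the paper):

```python
# Crude yardstick: mean lifespan gain per percentage point of net CR,
# using only numbers quoted in this thread. Not a dose-response model.
net_cr_pct = 7.5                            # lifelong average calorie reduction
mean_gain_pct = 100.0 * (908 - 806) / 806   # mean lifespan gain, ~12.7%

gain_per_pct_cr = mean_gain_pct / net_cr_pct
print(f"~{gain_per_pct_cr:.1f}% lifespan gain per % of net CR")
# prints "~1.7% lifespan gain per % of net CR"
```

Whether that ratio reflects a genuine restriction effect or mostly obesity avoidance in the freely fed AL controls is the open question raised above.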

 

 

Is this the same strain that Mattson used in 2003 that ignited the whole IF thing in the blogosphere?

Yes: plain Jane C57BL/6J (in whom the proportionality of IF-induced LS gains to net CR has been demonstrated repeatedly).


On the contrary, I personally find no value in this study, if we measure value by how relevant it is to humans. A particular strain of mice, a feeding regimen that could not be duplicated in humans in practical terms, and the massive variable of extraordinary vulnerability to neoplasms in these organisms. Meh. Nothing to see here folks.


An alternate-day IF regimen involving eating only vegetables and some olive oil on alternate days has been suggested by Fontana & Berrino in their recent book, The Great Way. Have to check the references. It's not a total fast on alternate days, rather Longo-like fast-mimicking on alternate days.

Edited by mccoy


Why do you think the feeding regimen could not be duplicated in humans?

 

"lifelong" - who here did "lifelong" IF?

 

"young" - who here went back in time to do IF when young?

 

= cannot be duplicated in humans.


Why do you think the feeding regimen could not be duplicated in humans?

 

"lifelong" - who here did "lifelong" IF?

 

"young" - who here went back in time to do IF when young?

 

= cannot be duplicated in humans.

But - This particular eating schedule, EOD, has been promoted for humans by a number of authors/researchers (Krista Varady for one). The fact that some folks will only start past youth would *probably* just make it even less effective.

But this particular rodent strain tends to compensate more on eating days, humans maybe not as much.

 

Randy


 


But - This particular eating schedule, EOD, has been promoted for humans by a number of authors/researchers (Krista Varady for one). The fact that some folks will only start past youth would *probably* just make it even less effective.

But this particular rodent strain tends to compensate more on eating days, humans maybe not as much.

 

Randy

 

But I was not discussing IF in general, Varady and whatnot. I was discussing *this* study. This study is of little value to humans, because it doesn't describe IF as it happens in humans; it describes IF as it happens in this strain of mice, for that period of time - neither of which obtains for humans - and speculating that it will be only a little less effective in humans for a shorter period of time is speculation which has nothing to do with THIS STUDY. I am discussing the study the OP posted, as that's on topic. If I wanted to discuss IF in general, which is NOT the subject of this thread, then I'd have done so. But this is not what it's about. Therefore, what I said stands: THIS STUDY is of little relevance to humans because the conditions described IN THIS STUDY are not duplicated in humans. Logical, no?

Edited by TomBAvoider


 

But - This particular eating schedule, EOD, has been promoted for humans by a number of authors/researchers (Krista Varady for one). The fact that some folks will only start past youth would *probably* just make it even less effective.

But this particular rodent strain tends to compensate more on eating days, humans maybe not as much.

But I was not discussing IF in general, Varady and whatnot. I was discussing *this* study. This study is of little value to humans, because it doesn't describe IF as it happens in humans; it describes IF as it happens in this strain of mice, for that period of time - neither of which obtains for humans - and speculating that it will be only a little less effective in humans for a shorter period of time is speculation which has nothing to do with THIS STUDY. [...] Therefore, what I said stands: THIS STUDY is of little relevance to humans because the conditions described IN THIS STUDY are not duplicated in humans. Logical, no?

 

Tom, you're being unreasonable. Of course the conditions described IN THIS STUDY are not duplicated in humans if you're going to be as literal about it as criticizing the fact that it was done in a strain of mouse! What would you suggest? We can't do lifespan studies in humans: mice and other model organisms is what we've got. And by definition, any experiment has to be controlled in various ways. More controls are better than fewer, up until the point you're ready to do Phase IV followup trials.


I don't think Tom's being unreasonable, and tend to agree with him here.

 

...What would you suggest? We can't do lifespan studies in humans...:

No, evidently "we" can't perform birth to death human lifespan studies; but we can pursue scientifically creative short term, controlled "intermittent fasting" (defined) experiments amongst honest, reasonable people who are seeking health and life extension hints.

 

Any one of us (especially y'all retired ex-engineer types) could "easily" thunk up us some citizen science. But evidently writers here and writers there prefer waiting around for government- and academia-supported projects (projects chiefly designed for publishing success stories and bolstering personal reputations and incomes, hence "mice and other model organisms is [sic] what we've got...")

 

If we were interested (dreaming now is sthira, oh look, y'all stand back, like dreamtime drum circle fantasies, too far out) then we could devise citizen science projects amongst like-minded, honest people.

 

We don't.

 

So face truth instead -- we're really lazy if we honestly look within, obviously myself included, and so as a deadly consequence we'll all grow old and withered and sad just like our ancestors.

 

Anyway, keep on beating that funding drum and we'll sit on our asses and await The Great Oz (I just rewatched Wizard of Oz, haha, and it reminds me of rodent studies and all that mainstream sciency stuff...)


Yes: we are necessarily limited in lifespan studies, perhaps forever. No: it does not follow that we can substitute X [mice, rats, flies, pigs, non-humans] and draw conclusions from it wrt. human lifespan.

 

Just because we can't do human trials doesn't magically transform mice into suitable subjects from which to draw conclusions about human lifespan studies. Some animals are closer to us than others. Most folks have slowly come around to dismissing studies in yeast as saying much about lifespan in humans - and we've just about finally reached consensus that the same is true of worms (Cynthia Kenyon notwithstanding), same for flies and zebrafish. Time to take the same hammer to mammals. The fact is, specific mammals can be very poor models for humans in various respects - for example, rabbits are poor models for cholesterol and atherosclerosis studies in humans. And I thought we had reached consensus - at least on this board - that short-lived rodents, such as rats and mice, are poor models for lifespan studies in humans. And mice are even worse than rats from that point of view. And short-lived strains of mice are even worse. It's bad to use rodents, and it only gets worse from there by the time we reach a particular strain of mice. Even the lay public has cottoned on to the fact that, for example, results in mice when it comes to cancer drugs usually translate very poorly to humans - angiogenesis is an evolving story - and when someone excitedly cites a drug that cured cancer in mice, the only appropriate reaction is to say "good for the mice". Same here - mice are a particularly inapt model for human lifespan studies, and while there are better and worse strains (cancer-prone, short-lived etc.), none of them are applicable.

 

My point - as rabbits are bad models for cholesterol studies, and mice are bad models for cancer studies, so too are rodents bad models for human lifespan studies. Therefore touting studies in mice is pointless and inapplicable here - as I pointed out. I don't think I'm being unreasonable. And as soon as we do studies in other mammals that are closer to us, such as monkeys, we immediately see much more heterogeneous results compared to mice - I thought, given the impact of monkey studies on the CR Society, we now understand very well that we must ignore mice studies in toto (when it comes to lifespan, cancer etc.).

 

I stand by my points.  


Screw human lifespan studies; obviously they're past us. And leave alone the near-extinction non-hominoid simians and great ape brothers and sisters -- god knows we've fucked them to near oblivion anyway. Give them habitats and peace and quiet.

 

Artificial intelligence models for human lifespan extension (like weather prediction models, idk) might be part of "a solution": you add one plus one equals two, and calculate this must be Calico's thinking? Oh The Great Oz.

 

Or, citizen science amongst fellow geeky tinkerers. Let's do it!

 

Meanwhile, I've toned down my own over-fasting, toned it down to one five day fast per two months, that is, and I only eat one meal per day, from 2pm to 5pm (god knows why) so I let 21 hours elapse (like numerology) between refeedings (who the hell knows if this is beneficial n=1 human behavior... but it's fun and interesting and I'm aging right on schedule with everyone else around).


We are bumping up against the limitations of our methodologies to date. That's why we are seeing so many seemingly contradictory results in medical science. I think a new era is upon us when it comes to medicine, where we appreciate the importance of individual differences (personalized medical intervention) and the necessity to pay greater attention to the limitations of our models (including animal studies). This is a natural progression. Once upon a time we didn't even particularly care about how the conditions in which we kept study animals impacted the study results; today we are much more acutely aware of husbandry issues. I think the same is happening slowly with regard to translation from animal results to humans. It took decades before we beat it into people's heads that there is a giant leap from in vitro to in vivo (even though pop science reporting still occasionally touts in vitro results). It'll take a while before people start routinely regarding mouse lifespan studies with the same attitude - as at best an indicator of where future studies might be fruitful, but of literally zero direct relevance to humans. That's what I do - as soon as I see that a study of human lifespan was done in "mice", I stop reading. Saves a ton of time and if anything enhances one's understanding without useless noise and distraction.


I don't think Tom's being unreasonable, and tend to agree with him here.

 

...What would you suggest? We can't do lifespan studies in humans...:

No, evidently "we" can't perform birth to death human lifespan studies; but we can pursue scientifically creative short term, controlled "intermittent fasting" (defined) experiments amongst honest, reasonable people who are seeking health and life extension hints.

 

Er, and where exactly will you get those hints? What change in what parameter(s) might suggest you were having an effect on aging, without reference to animal models where effects on aging have already been demonstrated using the same intervention?

 

(Note, I asked about aging: don't come back to me with LDL-C and blood pressure and the like unless you're going to make reference to animal aging studies. Of course we can monitor things linked to reduced risk of premature suffering with individual diseases of aging).

 

Human studies on putative anti-aging interventions can be informative only because we can compare changes in humans subject to some manipulation to changes in other species that have shown slowed aging under the same intervention. If the variables linked to retarded aging in the animal model are also changed in the same direction by the same (or a similar-enough, TomBNitpicker ;) ) intervention in humans, that's evidence of the translatability of the intervention. Without such a positive control, you don't know what to look for as an outcome in a short-term study.

 

Sure, that's far from the best kind of evidence — but it's the best we've got, and the best we're going to get.

 

 

...What would you suggest? We can't do lifespan studies in humans...:

Yes: we are necessarily limited in lifespan studies, perhaps forever. No: it does not follow that we can substitute X [mice, rats, flies, pigs, non-humans] and draw conclusions from it wrt. human lifespan.

Just because we can't do human trials doesn't magically transform mice into suitable subjects from which to draw conclusions about human lifespan studies.

No one is suggesting that it does. You can go with the evidence that you've got (inclusive of translational data), for which the animal studies are the foundation — or you can forget about intervening in aging until we have actual rejuvenation biotechnologies proven in clinical trials (and, by the way: initially incomplete, short-term trials!), decades from now if and only if the funding becomes available starting now.

 

Some animals are closer to us than others. And I thought we had reached consensus - at least on this board - that short-lived rodents, such as rats and mice, are poor models for lifespan studies in humans. And mice are even worse than rats from that point of view.

I'm reminded of Winston Churchill on democracy as the worst kind of government, except for all the others. Sure, some animals are closer to us than others. Again: are you going to wait for the results of a yet-to-be-even-contemplated nonhuman primate lifespan study, which would take 40 years to complete once finally begun, and might be terminated at any time by budget priority changes or an animal rights attack? No? Then, again: this is the best evidence we've got (again, combined with translational evidence from short-term human studies). You can wait for better, and accept your ongoing aging until after the Paris Climate Treaty expires — or you can use the data you've got, doing your best to bear in mind its limitations.

 

And short-lived strains of mice are even worse.

Agreed, of course. Note, in case there's confusion here, that this was not a short-lived strain of mice. The mice did, however, live less long than they should have, probably for reasons I suggested above, and that's a serious limitation of the study. Happily, we have a lot of other rodent data against which to contextualize these findings — and it would only take 5 years or so for a better-designed study to be done.

 

My point - as rabbits are bad models for cholesterol studies, and mice are bad models for cancer studies, so too are rodents bad models for human lifespan studies.

And your evidence for this is ...? (Not just "mice are not humans," please. Something equivalent to rabbits and cholesterol). (And by the way: we still test cancer drugs in mice, and for good reason).

 

And as soon as we do studies in other mammals that are closer to us, such as monkeys, we immediately see much more heterogeneous results compared to mice - I thought, given the impact of monkey studies on the CR Society, we now understand very well that we must ignore mice studies in toto (when it comes to lifespan, cancer etc.).

That would be a ridiculous conclusion to draw. We ran two flawed experiments in nonhuman primates, and discovered that what we thought was another, stronger link in the chain of evidence that CR would translate (the positive-seeming WUSTL results) was actually too messed up to tell us anything. The fact that we can't make as much use of the nonhuman primate data as it initially seemed makes us more reliant on the mice, not less.

 

So face truth instead -- we're really lazy if we honestly look within, obviously myself included, and so as a deadly consequence we'll all grow old and withered and sad just like our ancestors.

Certainly, we should face the truth about our existential situation. Now, having accepted your existential situation: what are you going to do about it? If you aren't prepared to accept that we'll all grow old and withered and sad just like our ancestors (and, for that matter, our peers), you use the best data you've got — and that's largely the eight decades of work that's been done in rodents, and the additional rodent studies that are going to keep coming down the pipes from now until we each reach Escape Velocity or the grave.


Even in rodents, there are other options.

Those are useful for comparative studies, but for intervention studies they are even less useful, particularly if you want to make use of the information today. 20-year lifespans = 25+-year lifespan studies start-to-finish, after you've secured the funding. Again: are you going to wait that long?

 

Mice and rats are also well-characterized genetically and in every other way; work on these other animals is far more preliminary, and outside of the naked mole-rat (NMR) their aging is particularly poorly understood. So designing studies and interpreting results is even harder than it is in mice and rats.

 

(And, seriously: a controlled, two-decade study in captive beavers ...? Imagine the vivarium ...)


Er, and where exactly will you get those hints? What change in what parameter(s) might suggest you were having an effect on aging, without reference to animal models where effects on aging have already been demonstrated using the same intervention?

In eight decades of completed work in animal studies (deep bow to the animals), what changes in which parameters are relevant to aging humans? If few, then why, as Tom may be suggesting, continue with this model?

 

Facebook shut down an artificial intelligence engine after developers discovered that the AI had created its own unique language that humans can’t understand. So if that's true, doors to AI models may open to supplement or supersede animal models. So, do any SENS-funded projects plan to advance or incorporate AI models? Might they in future? Given Dr. de Grey's background, ya'd think it'd be obvious, but... If so or if not, Calico seems like a promise for a progressive model, no?

 

Perhaps you don't think so, or perhaps I'm misunderstanding this bit of darkness:

 

Sure, that's [rodents' evidence] far from the best kind of evidence — but it's the best we've got, and the best we're going to get.

I doubt many people here fail to recognize and pay tribute to the countless animals of many species who've contributed their suffering and their lives to advance human health (mice for penicillin; guinea pigs for TB; the eyes of monkeys, cats and rabbits for macular degeneration; frogs for asthma; rabbits for meningitis; dogs and pigs for kidney transplants; insulin in dogs...)

 

But clearly there's a better way forward for both humans and wildlife, and artificial intelligence seems less cruel and perhaps more relevant.

 

You can wait for better, and accept your ongoing aging until after the Paris Climate Treaty expires — or you can use the data you've got, doing your best to bear in mind its limitations.

The Wizard of Oz: "Child, you're talking to a man who's laughed in the face of death, sneered at doom, and chuckled at catastrophe..."

 

 

So face truth instead -- we're really lazy if we honestly look within, obviously myself included, and so as a deadly consequence we'll all grow old and withered and sad just like our ancestors.

Certainly, we should face the truth about our existential situation. Now, having accepted your existential situation: what are you going to do about it? If you aren't prepared to accept that we'll all grow old and withered and sad just like our ancestors (and, for that matter, our peers), you use the best data you've got — and that's largely the eight decades of work that's been done in rodents, and the additional rodent studies that are going to keep coming down the pipes from now until we each reach Escape Velocity or the grave.

Additional rodent studies? That's it? Are not AI advances in algorithms and software currently improving the quality and availability of healthcare services? Won't these continue, lol? Unless we're all drowned in filth by climate disruptions, there's little reason for that model not to replace the animal model in human medicine. Where is SENS in the AI jam?

 

Oh, and why not citizen science amongst us science literate CR ppl to investigate "intermittent fasting?" Which parameters relevant to aging humans did we learn from eight decades of animal data? Let's use those while we muggles await Hogwarts interventions.

Edited by Sthira


 

Er, and where exactly will you get those hints? What change in what parameter(s) might suggest you were having an effect on aging, without reference to animal models where effects on aging have already been demonstrated using the same intervention?

In eight decades of completed work in animal studies (deep bow to the animals), what changes in which parameters are relevant to aging humans?

 

Changes consistently seen for a given intervention between mouse and human are consistent with the intervention's translatability; those that aren't weigh against it.

 

If few, then why, as Tom may be suggesting, continue with this model?

Because it is all we've got.

 

Facebook shut down an artificial intelligence engine after developers discovered that the AI had created its own unique language that humans can’t understand. So if that's true

First, it's false.

 

Second: suppose that it were true. What then? If you ask the AI of Oz how to retard aging in humans and it replies in gibberish, how is that helpful?

 

doors to AI models may open to supplement or supersede animal models. So, do any SENS-funded projects plan to advance or incorporate AI models? Might they in future? Given Dr. de Grey's background, ya'd think it'd be obvious, but... If so or if not, Calico seems like a promise for a progressive model, no?

I see absolutely no way for AI to tell us how to intervene in aging in a way that would preëmpt animal testing. If you ask the AI of Oz how to retard aging in humans and it replies "take 1500 mg of extract of untested plant X and microdose cyanide daily," is any sensible person going to take it without first testing it in rodents? And where is the AI going to get its data, if not from rodent studies?

 

 

Additional rodent studies? That's it? Are not AI advances in algorithms and software currently improving the quality and availability of healthcare services? Won't these continue, lol?

AI helps with healthcare because we can feed into it data from clinical trials and healthcare registries for various clinical outcomes. This is already useful, and will become more so as more and more electronic medical record data becomes mineable. That will improve cancer care and other kinds of ordinary medicine, but it doesn't help us with aging, because we have no equivalent data for aging — except in rodents, and translational studies built up from rodents.


Hmm, I realize this is a thread about intermittent fasting, which I practice, so I apologize for the swing out. Me, I've no problem with the natural flows of conversation in online discussions, especially in dead zones like this one. Have you considered a Reddit AMA, Michael? You seem to know about AI's future impact on human aging, and it might be fun in a livelier setting. Meanwhile:

 

 

 

Er, and where exactly will you get those hints? What change in what parameter(s) might suggest you were having an effect on aging, without reference to animal models where effects on aging have already been demonstrated using the same intervention?

In eight decades of completed work in animal studies (deep bow to the animals), what changes in which parameters are relevant to aging humans?

Changes consistently seen for a given intervention between mouse and human are consistent with the intervention's translatability; those that aren't weigh against it.

Animal studies obviously have been of historical use in medicine, as I noted above. So the suggestion isn't that we ditch all prior work or quit paying attention to current animal studies as foundational and practical contributions. Rather, the suggestion is that humans will keep looking for different ways to reduce suffering and improve life. Animal models have taken us up to a point, and soon they'll drop off and become even more irrelevant than they already are, like your horse-and-buggy analogies in the Dr. de Grey book.

 

AI will be medically useful during our lifetimes because it already is; outright dismissal, I take it, isn't what you're asserting. Nevertheless, about animal model translatability, you write:

 

 

If few, then why, as Tom may be suggesting, continue with this model?

Because it is all we've got.

It's not all we've got. Robotic surgeons, for example, are here and they're not going away. I must be misunderstanding you.

 

 

Facebook shut down an artificial intelligence engine after developers discovered that the AI had created its own unique language that humans can’t understand. So if that's true

First, it's false.

That's interesting good news. I certainly don't see AI potentiality in dark and threatening movie tones -- eek, violent sci-fi films, I hate them. I'm excited about possibilities for AI's potential use in helping to solve not only human suffering and damage repair, but also in wider issues like ecosystem degradation, species extinction, alternative energy solutions, global climate change reversals, extraterrestrial travel...

 

From the Snopes article you linked, this: "...We are in infant stages now, but I think we will subsume AI and make it part of ourselves; better to control it. Implanting neural nets, within our brains that are connected to it, etc. Now that raises all kinds of as yet unseen “have and have not” issues. But that’s another subject for another time."

 

Second: suppose that it [chatbots who created their own language] were true. What then? If you ask the AI of Oz how to retard aging in humans and it replies in gibberish, how is that helpful?

Gibberish isn't helpful, that's why the chatbotting gibberish was halted. AI won't stay stuck in place. The things must crawl before they walk, though, and before their usefulness to us unfurls. And they're already here anyway.

 

 

doors to AI models may open to supplement or supersede animal models. So, do any SENS-funded projects plan to advance or incorporate AI models? Might they in future? Given Dr. de Grey's background, ya'd think it'd be obvious, but... If so or if not, Calico seems like a promise for a progressive model, no?

I see absolutely no way for AI to tell us how to intervene in aging in a way that would preempt animal testing. If you ask the AI of Oz how to retard aging in humans and it replies "take 1500 mg of extract of untested plant X and microdose cyanide daily," is any sensible person going to take it without first testing it in rodents? And where is the AI going to get its data, if not from rodent studies?

Then you're not looking carefully enough. Are these opinions a general SENS position? No way, right? I mean, do SENS researchers see absolutely no way for future advances in AI to assist us in intervening in human aging? Of course, preemption of animal models won't occur until more intelligent AI models develop and mature.

For example, eventually Google will map out every metabolic activity that occurs within your body. Probably static at first, like an internal Google Maps; they're already working on this part. Then progress will lead to dynamics: live views of your own personal chemical reactions as they occur within your body right now. It's hard for me to imagine that this isn't one of Calico's aims. Baby steps first, though, and they ain't talking publicly because why should they? Fruit ain't ripe. SENS ain't talking to us much, either, are y'all?


Additional rodent studies? That's it? Are not AI advances in algorithms and software currently improving the quality and availability of healthcare services? Won't these continue, lol?

AI helps with healthcare because we can feed into it data from clinical trials and healthcare registries for various clinical outcomes. This is already useful, and will become more so as more and more electronic medical record data becomes mineable. That will improve cancer care and other kinds of ordinary medicine, but it doesn't help us with aging, because we have no equivalent data for aging — except in rodents, and translational studies built up from rodents.

Sure, we feed these things data from clinical trials and healthcare registries until they can think for themselves. They'll be doing that, thinking independently, and hopefully in the directions we steer them, as your Snopes link suggests. One direction we may steer AI is into the repair of aging-damaged human bodies. We'll do this because we must do this -- the trillions in healthcare costs from these aging-damaged baby boomers will spur faster progress.


Hmm, I realize this is a thread about intermittent fasting, which I practice, so I apologize for the digression.

Agreed: if no one objects, I may split this discussion off to one side.


Changes seen consistently for a given intervention in both mouse and human support the intervention's translatability; inconsistent changes weigh against it.

Animal studies obviously have been of historical use in medicine, as I noted above. So the suggestion isn't that we ditch all prior work or quit paying attention to current animal studies as foundational and practical contributions. Rather, the suggestion is that humans will keep looking for different ways to reduce suffering and improve life.

I think you've forgotten the suggestion ;) — or, alternatively, are unconsciously moving the goalposts. Remember, this side-discussion got started with Tom's perhaps not well-thought-out suggestion that we should ditch all prior and current work in animals, on the basis that it was supposedly meaningless to humans — an assertion with which you expressed agreement. Pressed on the issue, you've progressively retreated to the idea that AI will at some point in the future replace animal studies, and then to the position that AI is of some use today and will be in the future for general medical research and practice, and that robotic surgeons (which are not AI, and not mice) are useful tools — which is true, but so is "Carbon dioxide is essential to life."

Taking it back to the actual hundred-yard line: if you want to decide today about what you might do to intervene in your own aging, the existing animal studies are a critical source of evidence, and it's unreasonable to simply dismiss such studies individually or collectively solely on the basis that they weren't done in humans. Agreed?

