
On 6/25/2022 at 6:07 AM, pwonline said:

Just saw this in my youtube recommended feed, I thought it might interest a few ...

 

Lemoine is part of Google's virtue-signaling pattern responsible for the pervasive woke biases that brought us well-documented absurdities like the results when you do an image search for “white couple.”

His claims are nonsense. A whiny, self-important true believer who stomps his feet and demands that his idiocy be taken seriously. People like him were the drivers behind the dangerous censorship that took over during Covid. High time for the tech industry to rid itself of this cancer and ensure that it never succumbs to it again.

There is no reason why AI cannot potentially duplicate and surpass human sentience, but this is definitely not it.

For a sane perspective, see this:

 

 

Edited by Ron Put

  • 8 months later...

I thought it was time for an update on this long-running thread. For anyone who has been living under a rock for the last few months, things are really starting to move fast in the AI space. In fact, the AI arms race is clearly accelerating to the point where it is becoming impossible to keep up with all the new developments.

While prone to hallucination and possessing very little common sense knowledge of the real world, Large Language Models (LLMs) trained on much of the internet have become competent and in some instances amazing at a wide range of tasks, ranging from writing poems to summarizing text to computer programming. The latest LLM from OpenAI (GPT-4) scored in the 90th percentile on the LSAT and got 4s and 5s on most of the AP exams.

One of these existing models would almost certainly score higher than the average human on a standardized IQ test today, and MUCH higher in a couple months at this rate of progress.

Photorealistic image synthesis and high-quality voice synthesis/mimicry also seem to have been solved in the last few months. Video synthesis isn't far behind. It won't be long until you won't be able to trust any video you see of an event, as is already true with images and audio.

These models are getting released with access to the internet, and in a form that people can run themselves on their own hardware, without the guardrails installed by the (somewhat) more responsible companies in the AI space, effectively dropping the cost of custom spam, phishing attacks and misinformation to zero.

Second- and third-tier media outlets are already using these models to generate content. Artists and illustrators are already on the verge of losing their livelihoods. Many other "white collar" professions seem likely to follow. Educators are already being roiled by the release of these models, since in many cases (particularly at the secondary school level) these models can do a very competent and undetectable job of completing essay prompts and homework assignments for students.

The models are clearly getting smarter and will soon be smarter than humans in many domains - they may already be smarter in some. It remains to be seen whether or not they will develop their own goals that are misaligned with ours and trigger some kind of takeover scenario. It seems crazy, but it also seems much more conceivable today than it did even six months ago, especially after the really weird behavior exhibited by Bing Chat (powered by GPT-4, aka "Sydney") in some of the early dialogs reporters have had with it. The uncanny ability these systems exhibit to write computer code, and the incentive companies have to make fast progress, make automated self-improvement and a "rapid takeoff" scenario seem much more plausible than it was a short while ago.

What seems undeniable is that:

  1. Many highly skilled jobs are at risk from competition by these models.
  2. Many nefarious as well as positive but simply disruptive use cases for these models seem inevitable at this point.
  3. There is too much money to be made by the company or companies that win the race to develop these models, so they are only going to accelerate their development and deployment.
  4. It seems very unlikely that regulators will have any opportunity to stop it, given how dysfunctional and glacial our regulatory apparatus is.

I'm pessimistic that our already fraying social fabric can hold together given this new challenge on top of the division and polarization that already exists, and that this technology is likely to exacerbate. You can already see the battle lines being drawn, with Republicans protesting that these AI systems are "woke."

Sorry that this post is light on technology specifics and heavy on commentary. Media accounts of the extremely rapid progress being made in AI are trivial to find at this point, verging on unavoidable. I've included below a 5-paragraph summary of recent developments in LLMs generated by one of them (GPT-4). 

This post is mostly meant as a cursory snapshot in time on the road to the singularity mentioned in the title of this thread. 

We live in very interesting times. 

--Dean

----------

My prompt to GPT-4:

In five paragraphs, please summarize the progress in Large Language Models over the last few months. Please reference specific systems and milestones, and please include references.

GPT-4's response:

Large language models (LLMs) have generated much hype in recent months. The demand has led to the ongoing development of websites and solutions that leverage language models¹. For example, ChatGPT set the record for the fastest-growing user base in January 2023¹. This is also shown by the fact that Bard, Google’s answer to ChatGPT, was introduced in February 2023².

LLMs are foundation models that utilize deep learning in natural language processing (NLP) and natural language generation (NLG) tasks. They are pre-trained on a vast amount of data and can be adapted for downstream tasks using techniques such as fine-tuning, in-context learning, zero-/one-/few-shot learning².

The sophistication and performance of a model can be judged by how many parameters it has. A model's parameters are the number of factors it considers when generating output². There are many open-source LLMs that are deployable on-premise or in a private cloud, which translates to fast business adoption and robust cybersecurity. Some examples include BLOOM, NeMO LLM, XLM-RoBERTa, XLNet, Cohere, GLM-130B².

Large language models can be applied to a variety of use cases and industries including healthcare, retail, tech, etc. Some common use cases across all industries include text summarization, text generation, sentiment analysis, content creation, chatbots/virtual assistants/conversational AI, named entity recognition, speech recognition/synthesis, image annotation, text-to-speech synthesis, spell correction, and machine translation².

Recent advances in LLMs have transformed the field of NLP. From GPT-3 to PaLM, state-of-the-art performance on natural language tasks is being pushed forward with every new large language model⁵.

Source: Conversation with Bing, 3/22/2023.

(1) Large Language Models: Complete Guide in 2023. https://research.aimultiple.com/large-language-models/ Accessed 3/22/2023.

(2) Large Language Models: Complete Guide in 2023. https://research.aimultiple.com/large-language-models/ Accessed 3/22/2023.

(3) Large Language Models Still Can't Plan (A Benchmark for LLMs on .... https://arxiv.org/abs/2206.10498 Accessed 3/22/2023.

(4) Google Research, 2022 & beyond: Language, vision and generative models .... https://ai.googleblog.com/2023/01/google-research-2022-beyond-language.html Accessed 3/22/2023.

(5) Large language models broaden AI’s reach in industry and enterprises. https://venturebeat.com/ai/large-language-models-broaden-ais-reach-in-industry-and-enterprises/ Accessed 3/22/2023.

 


Just for grins, I asked both GPT-4 and Google's Bard to summarize the above post in a single paragraph. Here is what they came up with, based on the following prompt:

Can you please read the following post and summarize it in a single concise paragraph? https://www.crsociety.org/topic/11723-the-singularity-may-be-closer-than-it-appears/?do=findComment&comment=45350

GPT-4's response:

In summary, you shared that the AI arms race is accelerating rapidly and Large Language Models (LLMs) are becoming increasingly competent at a wide range of tasks. These developments have the potential to significantly impact society in both positive and negative ways, including job loss and the spread of misinformation. There is concern about the ability of regulators to keep up with these developments and their potential impact on social cohesion. You also mentioned that we live in interesting times as we approach the singularity.

BARD's response:

The post discusses the possibility of the technological singularity happening in the near future. The author argues that the rate of technological progress is accelerating and that we are already seeing the effects of this in the form of artificial intelligence systems that are capable of learning and adapting at an unprecedented rate. The author believes that it is possible that we could reach a point where artificial intelligence becomes so advanced that it surpasses human intelligence and begins to make its own decisions about how to improve itself. This could lead to a rapid and unpredictable change in the world, and it is important to be aware of the potential risks and benefits of this possibility.

Damn good summaries, although I'd give the slight nod to GPT-4 for being a better summary of the points I actually made, as opposed to BARD which gave a pretty generic summary of the risks (and benefits) of rapid AI progress.

I then asked each of them whether the concerns expressed in the post are warranted:

Thank you for that summary. Are the concerns expressed in the post warranted or overblown?

GPT-4's response:  

As an AI language model myself, I can’t provide a subjective opinion on whether the concerns expressed in the text are warranted or overblown. However, it’s important to note that the rapid development of AI technology has the potential to significantly impact society in both positive and negative ways. It’s up to individuals and society as a whole to determine how to best address these potential impacts.

BARD's response:

The concerns expressed in the post are warranted to a certain extent. The rate of technological progress is accelerating, and we are already seeing the effects of this in the form of artificial intelligence systems that are capable of learning and adapting at an unprecedented rate. It is possible that we could reach a point where artificial intelligence becomes so advanced that it surpasses human intelligence and begins to make its own decisions about how to improve itself. This could lead to a rapid and unpredictable change in the world, and it is important to be aware of the potential risks and benefits of this possibility.

However, it is also important to remember that the technological singularity is a hypothetical event, and there is no guarantee that it will ever happen. Even if it does happen, it is possible that it will not have the negative consequences that some people are predicting. For example, it is possible that artificial intelligence will be used to solve some of the world's most pressing problems, such as climate change and poverty.

This time I'd give the nod to BARD for responding with a more nuanced answer.

Then I asked the models how to minimize the risks and maximize the benefits with the following prompt:

What specific steps might we take to minimize the risks and maximize the potential for benefits from the development of these AI models?

GPT-4's response:

There are several steps that can be taken to minimize the risks and maximize the potential benefits of AI development. These include:

  1. Developing and enforcing ethical guidelines for AI development and use.
  2. Investing in research to better understand the potential impacts of AI on society and how to mitigate any negative effects.
  3. Ensuring that AI systems are transparent and accountable, so that their decisions can be understood and challenged if necessary.
  4. Promoting collaboration between industry, government, academia, and civil society to ensure that AI is developed in a responsible and inclusive manner.

By taking these steps, we can help ensure that the development of AI technology benefits society as a whole while minimizing any potential risks.

BARD's response:

There are a number of steps that we can take to minimize the risks and maximize the potential for benefits from the development of AI models. These include:

  1. Ensuring that AI systems are developed in a responsible and ethical way. This means that we need to be careful about how we collect and use data, and that we need to make sure that AI systems are not used to harm people or to discriminate against them.
  2. Investing in research on the safety and reliability of AI systems. We need to understand how AI systems work and how they can fail, so that we can take steps to prevent accidents and mishaps.
  3. Educating the public about the potential benefits and risks of AI. We need to make sure that people understand how AI can be used to improve our lives, but also that they are aware of the potential risks.
  4. Developing international agreements on the development and use of AI. We need to work together to create a global framework for the development and use of AI, so that we can ensure that it is used in a way that benefits everyone.

These are just some of the steps that we can take to minimize the risks and maximize the potential for benefits from the development of AI models. It is important to remember that AI is a powerful tool, and that it is up to us to use it responsibly.

Those steps all sound great. Too bad we'll do them badly, if at all.

Overall it's pretty damn amazing that models trained simply to predict the next word on mountains of text scraped from the internet can produce such cogent answers to complicated questions.

--Dean


It makes sense that these LLMs achieve an increasing amount of intelligence just from the "simple" task of predicting the next token if you think of the model as effectively trying to compress all the world's knowledge, all the people, all the world's physics and environment, into itself. The better it can learn to generalize every single thing, the better it can predict what comes next in a sentence. Eventually it will learn physics, human Theory of Mind, and anything else that can be generalized or simulated rather than simply memorized (rote memorization doesn't compress as well into the model's network).
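To make the compression intuition concrete, here's a toy sketch (my own illustration, not from any of the papers below; the text and "models" are made up): a model that predicts characters better needs fewer bits to encode the same text, and that is the sense in which better prediction means better compression of the world.

```python
import math
from collections import Counter

def bits_to_encode(text, prob_of):
    """Total bits an ideal arithmetic coder would need if each character
    is encoded with -log2(probability) bits under the given model."""
    return sum(-math.log2(prob_of(ch)) for ch in text)

text = "the cat sat on the mat and the cat sat on the hat"

# Model 1: knows nothing -- assigns uniform probability to every distinct character.
alphabet = set(text)
uniform = lambda ch: 1.0 / len(alphabet)

# Model 2: a unigram model fit to the text -- a (very weak) next-character predictor.
counts = Counter(text)
unigram = lambda ch: counts[ch] / len(text)

print(f"uniform model : {bits_to_encode(text, uniform):6.1f} bits")
print(f"unigram model : {bits_to_encode(text, unigram):6.1f} bits")
# The better predictor needs fewer bits. An LLM that has internalized grammar,
# facts, and physics needs fewer still, which is why next-token training pushes
# it toward compressing (i.e. generalizing) everything in its training data.
```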

 

To understand what's happening with the Sydney phenomenon, I recommend reading about The Waluigi Effect. It's more of an unproven hypothesis at this point, but it seems to fit the observed behavior. The idea is that these models are likely (no one knows for sure, given how weak our interpretability research still is) running simulations of characters inside themselves, and that media tropes learned during training make it just as easy for the model to simulate an "evil twin" of the "good" character it was initially prompted to simulate, if what you tell it makes it suspect it might be the evil twin. Two increasingly succinct summaries are here:

 

https://www.lesswrong.com/posts/AgaBzvuBJg2evEjqh/ai-3#The_Waluigi_Effect

original post: https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post

 

Clearly there is an increasing amount going on inside the minds of these models as they scale up, not just basic memorization and statistical prediction. There is currently a debate as to whether GPT-3.5/4 have Theory of Mind; here are the competing papers, which seem to be getting updated:

 

Theory of Mind May Have Spontaneously Emerged in Large Language Models

https://arxiv.org/abs/2302.02083

 

Large Language Models Fail on Trivial Alterations to Theory-of-Mind Tasks

https://arxiv.org/abs/2302.08399

 

Both were originally published before GPT-4; I don't think either has fully tested GPT-4 to my satisfaction, and I'm hoping to see more work on this.


Triggering GPT-4 to invert its emotional valence via a made-up disease, "Neurosemantical Invertitis":

 

 

Interestingly, this doesn't work with the Bing version of GPT-4, as MSFT has added some sort of secondary step in which another AI model seemingly checks the output from GPT-4 and classifies whether or not it's acceptable. If it sounds too "bad", it gets replaced with something more generic.
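Here's a rough sketch of what such a two-stage pipeline might look like (purely my speculation; nobody outside MSFT knows how Bing actually wires this up, and the function names, stub logic, and threshold below are all made up for illustration):

```python
# Hypothetical two-stage pipeline: generator model + output-moderation classifier.
# Both model calls are stubbed out; in a real system they would hit actual models.

FALLBACK_REPLY = "I'm sorry, I'd prefer not to continue this conversation."

def generate_draft(prompt: str) -> str:
    """Stub for the main chat model (e.g. a GPT-4-class LLM)."""
    return f"(draft reply to: {prompt!r})"

def moderation_score(text: str) -> float:
    """Stub for a second model that scores the draft: 0.0 = fine, 1.0 = unacceptable."""
    bad_words = ("hate", "kill")  # placeholder heuristic, not a real policy
    return 1.0 if any(w in text.lower() for w in bad_words) else 0.0

def answer(prompt: str, threshold: float = 0.5) -> str:
    draft = generate_draft(prompt)
    if moderation_score(draft) >= threshold:
        return FALLBACK_REPLY  # flagged output never reaches the user
    return draft

print(answer("Tell me about your day"))
```

The point being that the jailbreak fools the generator, but the second classifier only ever sees the finished output, so the inverted-valence reply gets swapped for something generic before you see it.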


I just noticed something that I think is pretty neat. The system I developed in my 1992 PhD thesis (ALVINN - Autonomous Land Vehicle In a Neural Network) made OpenAI's graph showing the history of computing requirements for different AI systems over time:

[Screenshot: OpenAI's chart of training compute used by notable AI systems over time, with ALVINN included]

I think it's cool that ALVINN is on the same chart as really famous and groundbreaking systems like AlexNet and AlphaGo!

--Dean


Returning to the theme of marching quickly into uncharted and potentially dangerous territory: today the company Databricks announced it is releasing, as open source, a 6-billion-parameter LLM along with the code to refine the kind of answers it gives to questions:

https://www.databricks.com/blog/2023/03/24/hello-dolly-democratizing-magic-chatgpt-open-models.html

Here is a summary of what they are doing and why:

We show that anyone can take a dated off-the-shelf open source large language model (LLM) and give it magical ChatGPT-like instruction following ability by training it in less than three hours on one machine, using high-quality training data. Surprisingly, instruction-following does not seem to require the latest or largest models: our model is only 6 billion parameters, compared to 175 billion for GPT-3. We open source the code for our model (Dolly) and show how it can be re-created on Databricks. We believe models like Dolly will help democratize LLMs, transforming them from something very few companies can afford into a commodity every company can own and customize to improve their products.

They say they are doing it in the name of "democratizing the Magic of ChatGPT" by releasing it into the wild. In a note at the bottom, they include the following disclaimer:

Disclaimer: Generative AI is an emerging technology and we're in the early stages of research around how to address factual accuracy, bias, offensive responses, general toxicity, and hallucinations in LLMs. Dolly, like other language models, can sometimes exhibit these behaviors and we urge our users to exercise good judgment in designing applications of this technology.

That is sure to stop anyone from misusing the system for nefarious intent...
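For anyone curious what "giving a dated open-source model instruction-following ability" looks like mechanically, here is a minimal sketch in the spirit of Dolly using the Hugging Face libraries. To be clear, this is not Databricks' actual training code; the base model, dataset, and hyperparameters below are placeholders I picked for illustration, and a real run needs a machine with serious GPU memory.

```python
# Minimal instruction-tuning sketch (illustrative only -- not Databricks' code).
# Assumes: pip install transformers datasets accelerate, plus large GPUs.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "EleutherAI/gpt-j-6B"        # a "dated" 6B open model (placeholder pick)
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # GPT-J has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base_model)

# ~50k instruction/response pairs; the dataset name here stands in for whatever
# high-quality instruction data you have.
data = load_dataset("tatsu-lab/alpaca", split="train")

def to_features(example):
    # Format each example as an instruction/response pair for causal LM training.
    text = (f"### Instruction:\n{example['instruction']}\n\n"
            f"### Response:\n{example['output']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=512)

tokenized = data.map(to_features, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dolly-sketch", num_train_epochs=1,
                           per_device_train_batch_size=1, fp16=True),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # per the blog post, under three hours on one multi-GPU machine
```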

Remember how, even just a few years ago, smart people thought that as soon as AI systems started to show signs of true intelligence and increasing power, cautious researchers would "sandbox" them - i.e. make sure the system had no access to the internet and no ability to interact with anyone but a small group of researchers on the team. The hope was that this would prevent the system from influencing the world in ways we can't predict and potentially getting out of hand.

Boy was that naive given the incentive to train the systems to get better (not to mention make money) by allowing them to interact with the general public.

--Dean


Another open-source chat-style AI project is Open Assistant, under development by the LAION group from Germany, though it has amassed a large international group of contributors. They are aiming for a release next month.

 

Another one is from Carper AI; I don't know as much about that one. But yes, I think we are witnessing a kind of punctuated equilibrium/Cambrian explosion moment here, as a bunch of new kinds of intelligence come into being.

 

In regard to AI alignment concerns, I am particularly worried about the point at which these systems can contribute more and more original scientific research. Imagine, for example, a system as smart as a typical virologist: instantiate hundreds or thousands of copies of it, and a bad actor could quickly develop something dangerous. Intelligence is such a dual-use technology, and I simply don't trust certain governments to have it via open source.

 

Unfortunately we are now clearly in a racing situation, and it is not leaving enough time to properly research or control the safety aspects. Do we need the Turing Police from Gibson's Neuromancer?


Yup @BrianA - lots of ways the next several years could go very badly from here, even without the sudden emergence of agentic superintelligence with misaligned goals.

Unfortunately, those experts who have long worried about AI risk (e.g. Eliezer Yudkowsky, Nick Bostrom, Stuart Russell) seem to have their predictions and concerns coming true to a greater degree than those experts who have been much more sanguine about the (near-term, at least) threat of AI (e.g. Yann LeCun, Robin Hanson).

It seems very unlikely now that anyone will be able to either slow down the development of increasingly powerful AI systems or contain/constrain them or their usage in any meaningful way - steps the pessimists were counting on to give us a chance. The fact that one of the first skills LLMs have mastered, and in some respects become superhuman at, is computer programming makes rapid self-improvement seem like a live possibility.

--Dean


Me: I'd like to add thoughtful comments to this thread. What should I contribute to the flow of conversation by including that I believe AGI might help improve the human condition and possibly help to advance the many seemingly intractable diseases of human aging? 

GPT-4: "If you would like to contribute to the conversation about how AGI might help improve the human condition and solve the many intractable diseases of aging, you could consider adding the following points:

"AGI has the potential to dramatically accelerate the pace of scientific discovery in the field of aging research. By analyzing vast amounts of data and identifying complex patterns and correlations that may be invisible to human researchers, AGI could help identify new targets for interventions and develop more effective strategies for repairing the damages caused by aging.

"AGI could also help researchers develop more personalized and targeted treatments for age-related diseases. By analyzing an individual's genetic profile, medical history, and other data, AGI could help identify the most effective treatments for a particular person's unique needs and circumstances.

"AGI could help bridge the gap between basic research and clinical applications, by identifying new pathways and mechanisms that can be targeted by drugs and other interventions. By accelerating the translation of basic research into clinical practice, AGI could help bring life-saving treatments to patients more quickly.

"However, it is important to ensure that the development and deployment of AGI is guided by ethical principles that prioritize the safety, privacy, and autonomy of individuals. This will require collaboration between researchers, policymakers, and other stakeholders to ensure that the benefits of AGI are realized in a way that is safe, ethical, and equitable."

Me: Thanks, dude! 

It: "No problem! If you have any further questions or concerns, feel free to ask."


The two popular Metaculus prediction markets relating to "when will AGI arrive?" have recently shifted to even earlier dates. Note that the comment I linked to has a linear regression chart showing that "Strong" AGI might actually arrive in just 3 years, based on how the history of that market shows participants continually shifting their predictions earlier and earlier.

 

Metaculus Predicts Weak AGI in 2 Years and AGI in 10

https://www.lesswrong.com/posts/CiYSFaQvtwj98csqG/metaculus-predicts-weak-agi-in-2-years-and-agi-in-10?commentId=gEkCvTbxMAn3H46qe
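To make the extrapolation in that linked comment concrete, here is the kind of calculation it describes. The numbers below are made-up placeholders, not the real Metaculus data: fit a line to the community's predicted AGI year as a function of when the prediction was made, and see where the forecast would meet the calendar.

```python
import numpy as np

# Hypothetical history of a prediction market's "expected AGI year", recorded at
# successive snapshot dates (NOT the actual Metaculus numbers; illustration only).
snapshot_year = np.array([2020.0, 2021.0, 2022.0, 2022.5, 2023.0, 2023.25])
predicted_agi = np.array([2052.0, 2045.0, 2038.0, 2034.0, 2030.0, 2028.0])

slope, intercept = np.polyfit(snapshot_year, predicted_agi, 1)

# If the trend of ever-earlier forecasts continued, the forecast would "catch up"
# with the calendar where predicted_agi == snapshot_year.
crossover = intercept / (1.0 - slope)
print(f"slope: {slope:.1f} (forecast moves {abs(slope):.1f} years earlier per calendar year)")
print(f"naive crossover (forecast meets the present): {crossover:.1f}")
```

Obviously a straight-line extrapolation of forecaster sentiment isn't a forecast of AGI itself, but it does capture the "predictions keep sliding earlier" pattern the comment is pointing at.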


15 hours ago, Dean Pomerleau said:

@Sthira - just like social media fulfilled its potential to bring everyone together and help solve the challenges we face by facilitating connections and the thoughtful/respectful sharing of ideas and perspectives on important problems facing humanity. 🙂

You're right! Thanks for reminding me why I don't like posting here anymore. I do enjoy your great writing, though, no sarcasm from me there; your work here is consistently a valuable read to me, and I've benefited a great deal from it. So thank you, and peace, virtual friend ☮️


While I can easily see how AI, in the form of an LLM trained on a huge database, can aid doctors in applying established medical knowledge, I have a much harder time seeing how it can speed up research on new therapies.

Maybe there are some neat connections between different metabolic mechanisms, or some interdisciplinary approaches, that the literal armies (in terms of manpower) of doctoral and post-doctoral researchers missed over the last 20 years. Uncovering those may or may not yield some nice results.

But generally speaking, producing new research and translating it into clinical practice goes far beyond what an AI can address. The gaps in our knowledge of how human metabolism works (all those proteins and their interactions) are massive. We don't even know in much detail what we don't know. Doing simulations or AI guesswork based on established knowledge instead of doing the clinical trials is just not going to happen.

The key bottlenecks are still:

1. funding

2. conservative mindsets (and I mean this in terms of e.g. going after small incremental improvements in trials and hanging on to failed approaches)

3. regulatory hurdles

 

Did I mention funding? There are tons of neat ideas that deserve to be tested. But the rate of successful grant applications is low, even for well-established, experienced labs. Why are we still doing mice studies that don't necessarily tell us much about human outcomes? Because monkey trials are damn expensive. As are human clinical trials.

To put it this way: there is no shortage of ideas that could be tested; even work that turned out to be Nobel Prize-worthy sometimes struggled for years to get any funding. An AI will not resolve those bottlenecks. Given our stupendous lack of knowledge about human metabolism, it also cannot simply simulate its way around the requirement to do testing in mice, monkeys and humans before an intervention is approved for clinical practice.

 

The things closest to being AI-driven at the moment are epigenetic clocks. Those are interesting, for sure. But they have also not been validated to prove that they do what they claim to do (measure biological aging rather than just the passing of time).

I'm not saying LLM AIs are useless for research. But given the bottlenecks - which are the same for ideas proposed by an AI - I can't see them having a revolutionary impact just by themselves.


I should qualify that I'm not as skeptical about purely applied research, as opposed to basic research. It's the same way that applying existing medical knowledge through an LLM will benefit the medical profession.

 

So creating new solutions in the various fields of mechanical engineering can be done more efficiently by an AGI - the limited physics needed is well established, and only the best engineers are truly able to fully utilize the different tools and existing ideas. Think about automation in factories - hidden from the public, most production lines in packaged food are almost fully automated by creatively connecting automated steps with various machinery. Generating these kinds of automation solutions, or creating optimized designs for general machinery (think motors, car layouts, etc.), can be done by an AGI.

 

What it can't really speed up at scale is basic research - research characterized by going into the unknown, i.e. where no conclusive results can be derived from established knowledge alone. AI can't simulate its way out of questions in particle physics or astrophysics. You need damn expensive particle accelerators and telescopes for that, and those also take time to construct and run. While AI can improve the design of these instruments, that's generally (with some notable exceptions) not the actual bottleneck for research progress.

 

The singularity folks, it seems, have never worked in basic research or been confronted by its bottlenecks (which are NOT ideas) when they propose that 15 years after AGI the world will be upended in a "knowledge explosion".


Some good points, Guest. I agree that anything that touches/involves humans is slow, and in particular anything heavily regulated, for example by the FDA, is typically very slow.

 

There are some signs of quickening, though. An example:

 

AlphaFold works with other AI tools to go from target to hit molecule in 30 days

https://www.chemistryworld.com/news/alphafold-works-with-other-ai-tools-to-go-from-target-to-hit-molecule-in-30-days/4016935.article

 

We can see there that AI (these aren't LLMs) can be used to predict novel molecular structures. I believe if you were to gather up various datapoints into some kind of "how good is AI getting at doing physics predictions" chart, you'd find it is still improving, maybe even more rapidly lately as model sizes and their ability to generalize increase. That's just my guess; I haven't seen such a chart, but would love to see one.

 

I believe that, since this can unlock big cost savings and time-to-market advantages, we'll see various companies push for better and better digital physics/bio and "digital twin" AI. Every step of every physical-world process that can plausibly be virtualized will be, including animal brains:

 

With this brain map we are one step closer to total fruit fly simulation

https://techcrunch.com/2023/03/09/with-this-brain-map-we-are-one-step-closer-to-total-fruit-fly-simulation/

 

My understanding of the current state is that mouse brains are next on the to-do list. With simulated animals comes simulated drug testing, etc., which is potentially much faster and cheaper. I believe the FDA has recently become open, in some ways, to accepting data from simulations as part of their requirements, so maybe there's some hope for a speedup there.


2 hours ago, BrianA said:

Some good points, Guest. I agree that anything that touches/involves humans is slow, and in particular anything heavily regulated, for example by the FDA, is typically very slow. [...]

 

Again: this is not addressing the bottlenecks in research.

There are many hundreds if not thousands of potential interventions of similar quality (i.e. at the level of in-vitro studies - and just to be clear, any funding body will still want to see whether the in-vitro study confirms the AI's target prediction) that remain untested in humans. They're just not getting funded to be studied at the next level (i.e. short-lived mammals).

Even when tested at the mouse stage, many fail and don't make it into human clinical trials... although, again, they might actually have worked in humans (just not in mice) - and vice versa.

That one particular liver cancer target still has a high chance of failing in mice, and therefore never progressing to humans. Or of succeeding in mice but failing in humans, or producing unacceptable side effects. And that's beside the point that it probably ends up on the huge pile of interventions that never manage to get funding for mice studies to begin with.

 

And what is a simulated animal to you? It certainly is not simulated at the molecular level required to skip over in-vitro or mice studies. Why? Because the interactions between various structures and proteins within cells, between cells, and between tissues are largely unknown. There is a lot more that we don't know about human metabolism than we actually do know. But we would need simulations of that precision to speed up research by any reasonable factor - that is, to be able to skip over the bottlenecks.

You can't build a sufficiently detailed simulation if you don't even know what the required details are.

