

In regard to our recent bio discussion: I ran across this interesting blog post and paper from the past week, specifically about a way to speed up mapping cell "circuits" of exactly the type Guest was describing (I think) with the glucosamine example. The new method can test hundreds of thousands of different substances and map their resulting cell phenotypes all in one experimental run. It's kind of like the "fuzzing" approach from cybersecurity that I described earlier, but for cells.
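To make the fuzzing analogy concrete, here is a toy sketch in Python. Everything in it is made up for illustration: `black_box` stands in for an experimental assay, and the numeric "doses" stand in for the substances being screened; nothing here is taken from the paper.

```python
# Toy "fuzzing" loop: sample many random inputs, record each observed outcome.
# black_box() is a hypothetical stand-in for an assay; in the paper's setting
# the inputs would be genetic-part compositions and the readout a phenotype.
import random

def black_box(dose: int) -> str:
    """Hypothetical opaque 'circuit': responds only in a narrow dose window."""
    return "fluorescent" if 40 <= dose < 60 else "dark"

random.seed(42)  # reproducible sampling
samples = [random.randrange(100) for _ in range(10_000)]
phenotype_map = {dose: black_box(dose) for dose in samples}

hits = sum(1 for p in phenotype_map.values() if p == "fluorescent")
print(f"{len(phenotype_map)} distinct doses mapped, {hits} produced a response")
```

The point is just the shape of the method: sample the input space broadly, record every observed phenotype, and hand the resulting map to downstream modeling.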


The blog author and the paper's authors believe this could lead to an AI "cell" model, or at least an AI that can make predictions about substantial parts of cell circuits and behavior.


Accelerating genetic design
Mixing and matching sequencing tools to build biological circuits faster


"Currently, OpenAI is demonstrating to the world in real-time how the Scaling Laws for AI work. With more data, more compute, and more parameters, Large Language Models like GPT-4 start to demonstrate incredible emergent capabilities, like scoring in the 90th percentile on the Uniform Bar Exam.

I’m not sure why this won’t also be the case for synthetic biology. We already have powerful models of specific parts. Now we’re learning models of parts composed together into circuits. What about models of combinations of circuits—getting us closer to predictive models of whole cells? Or combinations of cells—getting us closer to models of tissues?"


Ultra-high throughput mapping of genetic design space


"While extensive recent work has used ML approaches to develop sequence-to-function models for various classes of genetic parts our work serves as a starting point for developing AI-based models of gene circuit function that use part compositions as learned features. While our current work has focused on mapping a design space of 10^5 compositions, it may be possible to create predictive models for more complex circuits with far more expansive design spaces by using data acquired with CLASSIC to train high capacity deep-learning algorithms (e.g., transformers) which require much larger datasets than currently exist."
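As a minimal sketch of what "part compositions as learned features" could look like, here is a toy encoding in Python. The part names and the three-slot circuit layout are invented for illustration; they are not from the paper.

```python
# Illustrative only: each circuit design is a choice of one part per slot,
# and the design space is the cross product of the part libraries. A one-hot
# encoding of the chosen parts gives a model features it can learn
# composition -> function mappings from.
from itertools import product

# Hypothetical part libraries for a three-slot circuit
promoters = ["pTac", "pTet", "pBAD"]
rbs_sites = ["B0030", "B0032"]
cds       = ["GFP", "RFP"]

designs = list(product(promoters, rbs_sites, cds))
print(len(designs))  # 3 * 2 * 2 = 12 compositions

def one_hot(design):
    """Encode a (promoter, rbs, cds) triple as a flat binary feature vector."""
    vec = []
    for choice, library in zip(design, (promoters, rbs_sites, cds)):
        vec += [1 if part == choice else 0 for part in library]
    return vec

print(one_hot(("pTet", "B0030", "RFP")))  # [0, 1, 0, 1, 0, 0, 1]
```

With realistically sized libraries the cross product explodes combinatorially, which is why the paper's ~10^5 measured compositions matter: they supply training data for models that can interpolate across the unmeasured rest of the space.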

Link to comment
Share on other sites


I was able to 'corner' GPT-4 into admitting that some scientific theories can be viewed as a leap of faith. 

You wrote above that supersymmetry is a reasoned inference based on the available evidence and the application of rigorous scientific methods. Yet the available evidence is zero and scientific methods have been unsuccessful. A logical human being would conclude that supersymmetry is similar to a leap of faith.

I apologize for the confusion in my previous response. I agree that the statement I made regarding supersymmetry was inaccurate. While supersymmetry is a theoretical framework that is based on certain mathematical and physical principles, and has been the subject of extensive theoretical study and speculation, it has not been experimentally confirmed despite many attempts to detect supersymmetric particles.

As such, some scientists may view belief in supersymmetry as a "leap of faith" in the sense that it involves accepting a theoretical construct that has yet to be empirically verified. Others may argue that the theoretical and mathematical foundations of supersymmetry are strong enough to warrant continued investigation and experimentation, even in the absence of direct experimental confirmation.

Ultimately, the question of whether a belief in supersymmetry is a "leap of faith" depends on one's personal epistemological and philosophical perspectives, and the criteria one uses to evaluate the validity of scientific theories and models.


GPT-4 has some chemistry skills when aided by the now-typical "agentizing" helper programs. This study from CMU found it could successfully come up with ways to synthesize various compounds, including some illegal/controlled substances. The authors were concerned enough to include a large purple warning box:


"Limitations, Safety Recommendations, and a Call to Action

We strongly believe that guardrails must be put in place to prevent this type of potential dual-use of large language models. We call for the AI community to engage in prioritizing safety of these powerful models. We call upon OpenAI, Microsoft, Google, Meta, Deepmind, Anthropic, and all the other major players to push the strongest possible efforts on safety of their LLMs. We call upon the physical sciences community to be engaged with the players involved in developing LLMs to assist them in developing those guardrails"



Emergent autonomous scientific research capabilities of large language models



Thanks @BrianA. That is a really troubling development. Yet another threat to human survival.

Given how these models have been released into the wild without adequate testing or safeguards, it seems to me that long before they develop aims of their own that could be dangerous, these models will likely be tasked with actions that threaten our civilization by human actors with malicious or simply nihilistic intentions. This paper demonstrates that such actions can be much more consequential than merely flooding the internet with spam, fake news or revenge porn.



  • 3 weeks later...

The current "agentizing" attempts to get LLMs to plan out courses of action so far aren't super successful. It seems to be a genuine weak spot of the current LLMs, even GPT-4. They often get stuck in loops or other snafus.


So today I was reading this paper on combining LLMs (GPT-3.5 in this case) with classical planning algorithms: the LLM translates the problem to be solved into a PDDL file, a symbolic solver solves the problem, and then the LLM translates the results back into plain language.
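A rough sketch of that pipeline in Python, with the LLM calls stubbed out (the function names are mine, not the paper's, and the "planner" here is a toy breadth-first search rather than a real PDDL solver such as Fast Downward):

```python
# LLM+P-style pipeline sketch: the LLM only translates at the boundaries;
# a symbolic planner does the actual search in the middle.
from collections import deque

def nl_to_state(nl: str):
    """Stand-in for the LLM step: parse the task into a formal start/goal pair.

    A real system would prompt an LLM to emit a PDDL problem file here.
    """
    return ("CAB", "ABC")  # toy encoding: blocks listed bottom-to-top

def plan(start: str, goal: str):
    """Toy symbolic planner: BFS over states reachable by adjacent swaps."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if state == goal:
            return steps
        for i in range(len(state) - 1):
            nxt = state[:i] + state[i + 1] + state[i] + state[i + 2:]
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, steps + [(state[i], state[i + 1])]))
    return None  # goal unreachable

def plan_to_nl(steps):
    """Stand-in for the LLM step: render the symbolic plan back as prose."""
    return "; ".join(f"swap {a} and {b}" for a, b in steps)

start, goal = nl_to_state("get the blocks into alphabetical order")
print(plan_to_nl(plan(start, goal)))  # BFS guarantees a shortest plan
```

The division of labor is the key idea: the LLM handles only the natural-language boundaries, while soundness and optimality come from the symbolic search in the middle.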


Was wondering if Dean knows anything about these kinds of solvers because I understand they are more commonly used in robotic tasks, and would this significantly expand what LLM agents can accomplish? Or is it more of a parlor trick level thing?


LLM+P: Empowering Large Language Models with Optimal Planning Proficiency



In other news, DeepMind has moved their virtual soccer agents into physical bodies:



1 minute ago, BrianA said:

Was wondering if Dean knows anything about these kinds of solvers because I understand they are more commonly used in robotic tasks, and would this significantly expand what LLM agents can accomplish?

No I'm not familiar with that kind of solver, but a hybrid approach like that sounds like it would be an improvement over the pretty ineffectual agents like AutoGPT. Of course someone will probably figure out how to train a system with gradient descent to do end-to-end planning and execution, making hybrid approaches obsolete. 


Geoff Hinton, a pioneer in the field of neural networks, just left Google, where he had been a researcher for the last decade, ostensibly so he could speak freely about the risks of AI without being encumbered by his obligations to Google.

Here is the NYT's interview with him about it published today. 

He expressed many of the same concerns last month in a CBS interview:

On Twitter today he clarified that quitting wasn't meant as a criticism of Google:

The strange thing about Geoff's clarification is that he talked candidly and very publicly about the dangers of AI in the CBS interview above only a month ago. Did he get heat at Google for that interview, due to its clear contradiction of the rosy future put forth by Google folks (including CEO Sundar Pichai - notice the irony of his last name, which looks strikingly like "Pitch AI" 🙂) on another CBS show (60 Minutes) a couple of weeks after Geoff's interview?


If, as he claims in his tweet, Geoff thinks Google is acting responsibly, why not keep doing what he's doing, and maybe even help steer Google towards remaining responsible from the inside? He clearly sees dangers, and evidently didn't think staying at Google was the right way to address or mitigate them.

Geoff is known to be a man of very high principles. He famously left CMU for a position at the University of Toronto in 1987 (the year I started my PhD at CMU, in part because Geoff was there...) due to his moral objection to the funding CMU and other US research universities were getting (and still get) from the military-industrial complex.

It seems to me the most likely explanation is that he sees where things are headed within Google and the industry as a whole, and doesn't want to be part of it anymore due to his moral concerns about the major disruption to society they are unleashing without sufficient safeguards / mitigation strategies. 

In other words, what he's thinking and expressing by quitting is:

Google is trying its best to be responsible and hopefully will continue to act that way, but this technology is inherently risky, so I don't want to be involved with any further development of it.

Alternatively, or in addition: in the last month, as part of the fire alarm triggered at Google by falling behind OpenAI/Microsoft in the race to AGI, Google merged DeepMind and the Google Brain team. On paper at least, Geoff was one of the leaders of the Google Brain team, although I get the impression from people I know at Google that he was pretty much left on his own to pursue blue-sky research on new neural network algorithms. Perhaps Geoff was asked to take a more active role in Google's efforts to catch up by developing, testing, and releasing a more competent LLM than Bard, one that could challenge the dominance of GPT-4/ChatGPT/Bing Chat. He may not have felt comfortable with that role, both because of his concern about AI risks and because he's not really cut out to be part of a large team. So he quit.

Of course, as someone pointed out below Geoff's tweet, his statement exonerating Google may be just his non-disparagement clause talking. This 2017 NYT article says that Google's employment contracts typically contain such clauses.

It will be interesting to see if Geoff continues to express concern about the risks of AI (and perhaps criticize Google when it inevitably releases a better LLM) now that he is unencumbered by Google and whether he joins any of the efforts to address the risks.

But the bottom line seems to me to be the following: if someone with as much deep knowledge and inside information about what's happening in AI as Geoff Hinton is concerned enough to quit a great, high-paying job rather than keep contributing to AI progress, it says a lot about the seriousness of the situation.



Interesting info, thanks for writing all that. While my mind isn't fully made up regarding the rewards vs. risks, I tip my metaphorical hat to Hinton for sticking to his principles and voting with his feet. I would definitely like to see a larger share of the money currently flowing into AI go towards safety orgs, so I wonder if he might at some point join one of those or start a new one.


I do think that now that AGI is looking like a much nearer-term, "realer" thing, people in the industry may be reassessing how they will be seen by history. By exiting now, Hinton preserves his ability to be credited with helping advance AGI if/when it accomplishes amazing things in the future; alternatively, if there are negative consequences, he avoids blame for them by leaving and making a statement. Kind of a win/win either way it goes, although if we all die, that's not a win.


8 minutes ago, BrianA said:

I do think that now that AGI is looking like a much nearer-term, "realer" thing, people in the industry may be reassessing how they will be seen by history. By exiting now, Hinton preserves his ability to be credited with helping advance AGI if/when it accomplishes amazing things in the future; alternatively, if there are negative consequences, he avoids blame for them by leaving and making a statement. Kind of a win/win either way it goes, although if we all die, that's not a win.

It's really a very difficult situation for those on the inside. I've been talking to some of my friends who are in a similar position to Hinton's. They are getting paid incredibly well; the work is fun and exciting; there is the ever-present draw of exploration and of inventing something new/important, for fame and/or curiosity's sake; and they believe that if they don't do the research to improve the state of the art, someone else will - someone who likely won't share their good intentions and level of care about the potential harms/consequences. So everyone races ahead to make the next big breakthrough. Not a good situation, but one that seems very difficult to get out of.


1 hour ago, BrianA said:

Classic Moloch situation unfortunately.

Exactly. Coincidentally, I was exchanging emails today with a concerned AI-researcher friend of mine who was asking if I had any advice. He said he found Max Tegmark's analogy between the current AI situation and the movie "Don't Look Up" to be pretty accurate:


I said I thought a better analogy was Moloch or even Roko's Basilisk.

I sent him this video by Daniel Schmachtenberger about Moloch, AI, and multipolar traps:

I told him the following:

I've never taken the Roko's Basilisk argument seriously before. But now I see it as spot on:

A not-yet-existing superintelligence is drawing itself into existence via its promise to make whoever invents it extremely powerful and via its counter promise to make whoever fails to invent it miserable by disempowering them. Folks at [big AI company] have now seen/heard the Basilisk's argument and I expect most of you won't be able to resist the effort to help bring it into existence. I suspect Geoff [Hinton] is a rare individual able to "look up" and say no. The alternative is just too tempting/compelling for most researchers in the field. 



In other news, I continue to be stumped as to what the net effect on the overall unemployment situation will be in the coming years. On one hand, we have announcements like the one from IBM yesterday that they are explicitly reducing hiring for certain jobs in anticipation of AI being able to fill in for humans.


But historically we also have evidence that as people lose one type of job, other jobs open up. And AI seems very good at helping people learn more quickly; here are two relevant tweets from today:


AI + Apple AR = instant job training


Khan Academy super tutor


What ultimately will be the net effect on jobs?


Dean, I'm curious if any of your friends have considered moving into AI alignment work. While it's a tiny field now, it appears to be growing rapidly, and it seems likely to become a quite high-status and important area of work. The field is arguably still at a pre-paradigmatic stage, so there are many areas to explore and claim.


Kamala Harris to discuss A.I. in meeting with Google, Microsoft, OpenAI and Anthropic CEOs



An example of how many currently unknown things exist in just one corner of AI alignment:


200 Concrete Open Problems in Mechanistic Interpretability: Introduction


Edited by BrianA


I'm trying to nudge them in that direction, or at least towards not continuing to contribute to AI advances and thereby further accelerating the race to AGI. It is a delicate dance - I don't want to come off as too critical or confrontational, for fear they will become defensive and resist taking the risks seriously.

Thanks for the link to the Harris meeting with the AI leaders. It will be interesting to see if anything besides platitudes comes out of it.


As a complete layman in AI, I wonder if the alignment problem will only matter if weapon systems come under the control of autonomous or networked, agent-driven AI systems.

Simply put:

as long as the government retains full control of the military and weaponized law enforcement (from the FBI to the TSA), it has the power to shut down server farms and the internet, and to seize control of every "hard factor" that any AI system may wish to take over.

That leaves only indirect means of control that an AI can pursue, i.e. posing as helpful to humans so that they voluntarily deploy AI systems. That is: it will only survive if it provides a benefit. Any attempt to directly harm humans or take over infrastructure in a way that openly causes damage can easily be countered by the government, which controls the weapons in the hands of law enforcement and the military. The government can order the internet shut down - by force if necessary (as you can see in quite a number of third-world countries during "elections" and protests). The government can direct companies to do its bidding - by force if necessary.

Now, an AI may try to blackmail politicians into not exercising the power of government. But that only goes so far. As long as the military doesn't allow networked, AI-controlled systems and law enforcement is done by humans only, there is always the risk that the AI will get shut down after the next election (with a new bunch of politicians on whom the blackmail might not work) if it is posing as harmful.


For the opposite scenario, see the great 1970 movie Colossus: The Forbin Project, in which an AI uses nuclear blackmail to take over the world and orders the execution of opposing scientists:


Edited by Guest

@Guest Unfortunately, military and police forces in the US, to say nothing of China, are already relying more and more on AI technology. Just look at some of Palantir's recent products. So assuming that weapon systems will always remain under human control is wishful thinking. The advantage of AI on the battlefield, or in our Ring doorbells, is too great to ignore.

Furthermore, it is unfortunately not only via the exercise of such "hard" power that bad actors - both foreign and domestic, human and potentially silicon-based - can undermine society or even trigger a human-extinction-level event.

An engineered virus with the lethality of Ebola and the transmissibility of SARS-CoV-2, released simultaneously at ten of the biggest airports around the world, would be one such example. Crashing the stock market, or fomenting widespread social unrest in a nation via misinformation and pinning it convincingly on that nation's enemy, could easily trigger a global conflagration. A superintelligent AI could think of many more such examples. Sadly, finding humans to help facilitate and carry out such plans, wittingly or unwittingly, wouldn't be that hard for such an AI.

And no, "shutting down the internet" will never be a viable option. Nor will destroying all the server farms capable of running AI models. If it comes down to trying to implement measures like those, the negative externalities will be too great to follow through on them, and it will be too late anyway.



Good points. And regarding how convincing an AI could be in persuading humans to release it from their control, a classic experiment from many years ago was Yudkowsky's AI Box:




In this experiment, IIRC, he had people bet a small amount of money that they would have to pay if they let an "AI in a box" out of the box. Yudkowsky (who is not even a superintelligent AI) took the role of the AI, and they conversed over IRC until either the person was persuaded to let the AI free (and paid the bet), or it was decided after some time that the person could not be persuaded. I think something like 50% of people let the AI out.


Basically, the human mind is not a secure device, particularly when manipulated by something smarter than us.


Looking at the news today, you can see what is quite possibly a "false flag" operation by Russia: to stir up domestic support for the war in Ukraine, to serve as a pretext for further attacks against Ukraine (and potentially an assassination attempt on Zelensky himself), and to blame the US for the Kremlin drone attack:


It is easy to imagine that something much more lethal could be planned and carried out (or faked) by an AGI with the help of a few human confederates, leading to a rapid escalation of international tensions and potentially a global war.

We live in very unstable times. It wouldn't take much for an AGI to destabilize them further, with potentially dire consequences.



On 5/3/2023 at 6:35 PM, Dean Pomerleau said:

Thanks for the link to the Harris meeting with the AI leaders. It will be interesting to see if anything besides platitudes comes out of it.

Only platitudes, at least publicly:



And so it goes... 


But even as Mr. Biden was issuing his warning, Pentagon officials, speaking at technology forums, said they thought the idea of a six-month pause in developing the next generations of ChatGPT and similar software was a bad idea: The Chinese won’t wait, and neither will the Russians.

“If we stop, guess who’s not going to stop: potential adversaries overseas,” the Pentagon’s chief information officer, John Sherman, said on Wednesday. “We’ve got to keep moving.”

NYT: The Next Fear on A.I.: Hollywood’s Killer Robots Become the Military’s Tools

