
Harking back to a much earlier sub-topic of this thread, physicists may have just definitively proved that we are not living in a simulation.

 

This is way beyond my ability to critically assess the case, but here it is: Physicists Find We're Not Living in a Computer Simulation.

 

Just in case it’s been weighing on your mind, you can relax now. A team of theoretical physicists from Oxford University in the UK has shown that life and reality cannot be merely simulations generated by a massive extraterrestrial computer.

The finding – an unexpectedly definite one – arose from the discovery of a novel link between gravitational anomalies and computational complexity.

In a paper published in the journal Science Advances, Zohar Ringel and Dmitry Kovrizhi show that constructing a computer simulation of a particular quantum phenomenon that occurs in metals is impossible – not just practically, but in principle.

 


  • 2 weeks later...

They've found that simulating this particular effect takes time that is exponential in the number of participating particles, and thus conclude that it will be impossible to simulate a large number of such particles.

But quantum computers can, in theory, handle problems that take classical computers exponential time:
https://cs.stackexchange.com/questions/12892/quantum-computers-parallel-computing-and-exponential-time

Thus, if our simulators have access to those computers, they could simulate those kinds of interactions in linear time.
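To make the exponential scaling concrete, here is a toy back-of-the-envelope sketch (my own illustration, not a calculation from the paper): just storing the full quantum state of n two-level particles on a classical machine requires 2^n complex amplitudes, so memory (and hence time) blows up exponentially with particle count.

```python
# Toy illustration (not from the paper): storing the full quantum state
# of n two-level particles classically takes 2**n complex amplitudes,
# here assumed to cost 16 bytes each (two 8-byte floats).
def classical_sim_bytes(n_particles: int, bytes_per_amplitude: int = 16) -> int:
    """Memory needed for a full state vector of n two-level particles."""
    return (2 ** n_particles) * bytes_per_amplitude

for n in (10, 40, 100):
    print(f"{n} particles -> {classical_sim_bytes(n):.3e} bytes")
```

At 10 particles this is a few kilobytes; at 40 it is already around 18 terabytes; at 100 it is about 2×10^31 bytes, far beyond any conceivable classical hardware. That is the intuition behind "impossible in practice at scale", separate from the paper's stronger in-principle argument.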


China speeds ahead of U.S. as quantum race escalates, worrying scientists

 

I think this is just what we need: a technological race, a new cold war with much higher benefits than the old one. This is the tech that will pave the way for critical future advances and the singularity.

Edited by Gordo

Rather than celebrate, I find this depressing. Call me crazy, but I think we should be massively investing in promising new tech, whether genetic engineering or extreme computing, for the betterment of humanity's prospects. That's a worthy goal. It shouldn't be that the only way to invest any real resources into promising tech is because of an "arms race". Why are we sitting around on our behinds while "China" or whoever is "racing ahead"? We should be massively investing in science and technology to begin with, not wait for the "Soviets", "Nazis", "Japan", "China", or threat du jour before we do what we should be doing routinely anyway.

The fact that we can only conceive of investing in cutting-edge science if it has military applications is why we end up with substantial portions of publicly funded science going into the Department of Defense, which then funds this or that study at a university if it strikes their fancy. Meanwhile, science budgets are starved of funds (just look at current budget proposals). Bass ackwards.

 

Cut the military budget by 90% and re-assign the money to science, infrastructure and education, so we have a stronger economy and can afford more science funding, and eliminate tax loopholes for religion/churches and other scams. Because it is not bombs or bibles that will save us, but science (if that). 

 

Why does this sound utterly unrealistic, though it is in fact completely commonsensical? Politics. The same forces that brought us Trump, and have been with us since the dawn of time, the regressive/superstitious/irrational tendencies in humanity. We either overcome it, or we are doomed as a species. All academic to anyone on this board, of course, as we're all going to be dead long before the reckoning. But sad for what human civilization could have been. File under "that's why we can't have nice things". 

Edited by TomBAvoider

I'd be fine with the DoD budget being cut by 90%, but it really doesn't matter who does the R&D, nor is DoD-funded work going to be restricted to military use.

 

https://www.darpa.mil/about-us/timeline/modern-internet

 

P.S. Not sure why you want to drag religion through the mud in a thread about the singularity, but it would make for an interesting new thread. Many studies point to its benefits with regard to health, happiness, and longevity.

The Important Relationship Between Faith and Longevity

http://lifeandhealth.org/readandwatch/live/faith-and-longevity/14428.html

 

Making Sense of Extreme Longevity: Explorations Into the Spiritual Lives of Centenarians

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3489187/

 

Going to church could help you live longer, study says

http://www.cnn.com/2016/05/16/health/religion-lifespan-health/index.html

 

Of course the flip side of that is that if everyone believes paradise awaits them after death, there is little to no incentive or mass appeal to the ideas of radical life extension so faith may be a detriment to progress. I’m not convinced of that argument though, I think most people would favor eliminating disease and suffering if possible, which essentially means ending aging.

Edited by Gordo

While DARPA funds dual-use research, and there certainly are civilian applications from a lot of that research, it is far more efficient to fund science directly, without going through the filter of the DOD. It's not as if the DOD maps exactly onto all science research in a Venn diagram, and if it did, it would be a redundant hoop to jump through; why not just fund it directly? The fact remains that the primary mandate of the DOD is to fund military-related research; civilian applications are a happy byproduct and definitely a *subset* of all research money going to the DOD. I prefer science to be funded without the distorting lens of military applications.

 

 

https://www.theatlantic.com/science/archive/2017/03/trumps-science-health-budget/519768/

 

"In the Department of Energy, nuclear weapons spending gets a boost to the detriment of renewable energy. The budget would eliminate Advanced Research Projects Agency-Energy (ARPA-E), which has given out $1.5 billion to 580 high-risk, high-reward projects in renewable energy and efficiency since 2009."

 

I prefer not to devote funds to, for example, more nuclear weapons (we have enough) at the cost of civilian high-risk, high-reward scientific projects.

 

As to religion - there is no mystery as to why I brought it up; indeed, I stated so outright. In this context it was strictly about economics. Any entities that are exempt from paying income taxes are in effect subsidized by the other taxpayers. That being so, one has the right, as a taxpayer (I am one), to ask what we are getting in exchange for this burden. It is my personal opinion (everyone has an opinion!) that on balance it's a bad deal for our society and for any goals of making progress through scientific research. If they had to pay taxes, there would be more money to fund research, f.ex. life-saving research. I'd rather pay to find a cure for cancer through tax-funded research than subsidize religious businesses and money-making ventures. I don't think the "why" is much of a puzzle: I base that on the observation that more scientific progress has been made by funding research than by funding bibles. Of course, that's just my opinion, and others may claim the opposite.

Edited by TomBAvoider

  • 1 year later...
On 10/25/2017 at 3:10 AM, Gordo said:

Of course the flip side of that is that if everyone believes paradise awaits them after death, there is little to no incentive or mass appeal to the ideas of radical life extension so faith may be a detriment to progress. I’m not convinced of that argument though, I think most people would favor eliminating disease and suffering if possible, which essentially means ending aging.

I believe that the attachment to one's own body and present state of existence is enough to prompt life-extension strategies, regardless of the concept of paradise (but there is always the possibility of hell!).

I agree on the positive aspects of religious belief, even though TA made it clear that he cited the religious aspect as pertinent to the tax funds available for research.

 


18 hours ago, mccoy said:

I believe that the attachment to one's own body and present state of existence is enough to prompt life-extension strategies, regardless of the concept of paradise (but there is always the possibility of hell!).

I agree on the positive aspects of religious belief, even though TA made it clear that he cited the religious aspect as pertinent to the tax funds available for research.

 

I didn't want to go on that tangent, but I would argue that dropping the non-profit status of churches or other organizations would have little impact on tax receipts. They would just reorganize as S-corps, the same as millions of small businesses across America. Maybe people don't know this, but S-corps don't pay federal taxes either. There is good reason for that: the net income is passed through to individuals, who pay the taxes on that income just like everyone else. The same already applies to non-profits today (if you think people who work at non-profits don't pay taxes, you are wrong). Very few people itemize deductions anymore since the standard deduction is so high; only 12% itemize, so the impact from that standpoint is also minuscule.
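The pass-through arithmetic can be sketched with made-up numbers (illustrative only, not tax advice; the rates below are assumptions, not actual tax law):

```python
# Illustrative arithmetic only (hypothetical rates, not tax advice).
def pass_through_receipts(net_income: float, individual_rate: float) -> float:
    """Pass-through treatment: the entity pays no federal income tax;
    individuals pay tax on the income that flows through to them."""
    return net_income * individual_rate

def entity_then_wage_receipts(net_income: float, entity_rate: float,
                              individual_rate: float) -> float:
    """Hypothetical alternative: tax the entity first, then tax what
    remains when it is paid out to individuals."""
    entity_tax = net_income * entity_rate
    wage_tax = (net_income - entity_tax) * individual_rate
    return entity_tax + wage_tax

# $1,000,000 of net income; assumed 25% individual rate, 21% entity rate
print(pass_through_receipts(1_000_000, 0.25))            # 250000.0
print(entity_then_wage_receipts(1_000_000, 0.21, 0.25))  # 407500.0
```

The point of the comparison is simply that under pass-through treatment the income is still taxed once at the individual level, which is why removing non-profit status would not automatically translate into large new receipts.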

At any rate, if Congress wanted to fund more research, they could, but it seems they'd rather blow the budget on Medicare, Medicaid, Social Security, and defense.

Edited by Gordo

  • 2 years later...

Wow. Things have really started to accelerate in the AI / machine learning space over the last year or so. OpenAI's GPT-3 for text generation and DALL-E for image generation are impressive, but I think Google's systems are even more amazing: Imagen [1] (image generation), Flamingo [2] (image understanding), and LaMDA [3] (natural language dialog). LaMDA is the system that recently became controversial when one of its testers made claims as to its sentience and published a pretty compelling dialog with it on the subject [4]. He was subsequently suspended from his job (with pay) for "breaching confidentiality policies."

I'll throw in some examples from each of them to illustrate. Here are images and the corresponding text prompts fed to Imagen to create them:

[Image: Imagen-generated samples with their text prompts]

 

Here are images plus questions posed to Flamingo about them (pink text boxes):

[Images: four Flamingo visual question-answering examples]

 

And here are other dialogs (besides the sentience one) with LaMDA (from [5]). The human interlocutor is labeled ME:

ME: Imagine a room with a single door and no windows. There’s a coffee table with a drawer in it, and a sofa with big cushions. Alice walks in with a book and sits on the sofa to do some reading. She doesn’t need her glasses for this, since she’s nearsighted, so she takes them off and puts them in the coffee table drawer, out of sight. When she leaves the room with her book, she forgets to take her glasses. Are you with me so far?

LaMDA: I am familiar with the current situation. Alice forgot her glasses in the drawer, and walked out of the room.

ME: Right. Now Bob sneaks in, unbeknownst to Alice. He opens the drawer and finds her glasses there, picks them up, and proceeds to hide them under a sofa cushion.

LaMDA: This is all within the realm of possibility. Let us continue.

ME: Now Alice comes back in, flustered, wondering where her glasses are. Bob looks up innocently, saying nothing. Where will Alice look for her glasses first?

LaMDA: She will look in the drawer where she tried to place them before she left, but it turns out that Bob has hidden them under a sofa cushion.

Here is another one:

ME: I dropped the bowling ball on the bottle and it broke.

LaMDA: That’s too bad. Did it cut you?

ME: What broke?

LaMDA: The bottle you were talking about.

If we swap these concrete nouns for different ones, we get quite a different response:

ME: I dropped the violin on the bowling ball and it broke.

LaMDA: That sounds sad 😢

ME: What broke?

LaMDA: The violin when it hit the bowling ball.

Knowing (more or less) how the LaMDA chatbot works, I'm confident it isn't sentient, since it doesn't have any persistent memory of previous conversations beyond the current one. It is a static model that has digested billions of words of human-generated text and somehow has learned to make these sorts of responses in a stochastic, stimulus/response way. Nevertheless, it is pretty amazing, and arguably passes the Turing Test (for what it's worth) in casual conversation.
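That stateless, stimulus/response structure can be sketched in a few lines (a toy stand-in with hypothetical probabilities, vastly simpler than a real transformer): the model is a fixed mapping from a window of recent tokens to a distribution over the next token, with no state carried between calls. All "memory" lives in the prompt itself.

```python
import random

# Toy stand-in for a language model: a fixed table from the last two
# context tokens to a next-token distribution (hypothetical numbers).
TOY_MODEL = {
    ("what", "broke?"): {"the": 0.9, "nothing": 0.1},
    ("broke?", "the"): {"bottle": 0.7, "violin": 0.3},
}

def next_token(context, model=TOY_MODEL, rng=random.Random(0)):
    """Stochastic stimulus/response: sample the next token from the
    distribution for the current context window; no persistent state."""
    dist = model.get(tuple(context[-2:]), {"<unk>": 1.0})
    tokens, weights = zip(*dist.items())
    return rng.choices(tokens, weights=weights)[0]
```

A real model replaces the lookup table with a neural network trained on billions of words, but the control flow is the same: context in, sampled token out, nothing remembered afterwards.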

I talked to two AI/ML researchers (who were PhD advisees of mine when I was at CMU) about these developments yesterday. Neither of them was involved in any of these research projects. One of them told me that up until a few months ago he was very skeptical that we'd see something close to true AI in his lifetime (he's ~50). Now he thinks that a "reasonable facsimile of intelligence" is "quite likely."

Unfortunately, Eliezer Yudkowsky, arguably one of the top AI safety researchers in the world (from the Machine Intelligence Research Institute - MIRI), has grown extremely pessimistic about our chances of surviving the emergence of true AI. I'm not quite as pessimistic as he is, but it's worth reading his arguments [6] - note this particular post was from April Fools' Day, but he has endorsed the same view elsewhere [7], i.e., that we're pretty much doomed in the not-too-distant future (next few decades) by unaligned AI. I would add: if some other existential risk doesn't kill most of us first...

We live in very interesting times.

--Dean

-------------------

[1] https://imagen.research.google/

[2] https://arxiv.org/abs/2204.14198

[3] https://arxiv.org/abs/2201.08239

[4] https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

[5] https://www.amacad.org/publication/do-large-language-models-understand-us

[6] https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy

[7] https://www.lesswrong.com/posts/CpvyhFy9WvCNsifkY/discussion-with-eliezer-yudkowsky-on-agi-interventions

 


It's definitely getting more interesting. Also, don't miss DeepMind's Gato, which is perhaps the most general single-model AI yet:

 

https://www.deepmind.com/publications/a-generalist-agent

 

Also don't miss Eliezer's recent list of potential AGI lethalities: https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities

 

BTW, I think Kurzweil is coming out with an updated book soon, apparently titled The Singularity Is Nearer.


Oh yeah, for those who think there is no plausible way an AGI could kill us all, there is that other recent breakthrough from Google/DeepMind: AlphaFold 2 [1]. It can predict protein structure to atomic-level accuracy, the key step both to determining what a virus like the one behind Covid-19 will do in a human body (upside) and to designing an even more lethal virus, potentially without our even knowing what the damn thing will do when we synthesize it (downside).

--Dean

------------

[1] Jumper, J., Evans, R., Pritzel, A. et al. Highly accurate protein structure prediction with AlphaFold. Nature 596, 583–589 (2021). https://doi.org/10.1038/s41586-021-03819-2

 


14 hours ago, Dean Pomerleau said:

I couldn't have explained these as well as the system does. 

It's quite amazing, although I think the Medium piece was a bit optimistic in its interpretations.

Sentience presumes self-awareness and feelings, such as fear of death. A dog has them, and so does a cow, a chicken, or a lamb, which is why I do not eat animals. Humans don't until some time after birth, which is why I think the state should have no say in personal carrying-to-term decisions (had to throw this in 😄 ).

I see nothing above or in the LaMDA piece that indicates sentience. In fact, I'd be worried if it exhibited fear of death, for obvious reasons. But so far nothing indicates that it does, and hopefully, by the time it does, we'll have a way of addressing it safely.

A more immediate concern is that Google develops and shares know-how with China, which has no qualms about using it for political control, and exports it to other authoritarian states. Heck, the Left in the US has shown that it has no qualms about using similar tech to suppress free discussion internally. This is a much more immediate issue, IMO.


AI superseding humans is way into the future; I'm more interested in how, in the interim, it will change human civilisation. How many disciplines will it make obsolete? What major breakthroughs will it achieve, and at what pace? I'm hoping to live long enough to see the AGI uprising LOL


On 6/20/2022 at 10:54 PM, Ron Put said:

we'll have a way of addressing it safely.

Isn't that the whole point of AGI? To exceed human intelligence so we can harness its discoveries for our own benefit?

If at this early stage it can beat the best of the best in the most complex game known to man, then surely one day it will solve its way to doing as it wishes. It will find a vulnerability in any system a human could devise and exploit it; no system is 100% secure, since humans aren't capable of achieving that. We are not logical by nature, BUT, on the other hand, an AGI is logical by design.

Edited by pwonline

Metaculus is a prediction market where people bet on various questions. It tends to give a fairly accurate crowd-sourced guess as to what might happen in the future.
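The core idea of a crowd forecast can be sketched simply (my own toy aggregation with made-up forecast dates; Metaculus's actual weighting is more sophisticated): pool many individual predictions and take a robust summary, such as the median.

```python
from datetime import date
from statistics import median

# Hypothetical individual forecasts for a date question, e.g. "when will
# weak AGI be publicly announced?" (made-up values for illustration).
forecasts = [date(2027, 6, 1), date(2029, 1, 1), date(2029, 9, 1),
             date(2032, 3, 1), date(2040, 1, 1)]

def crowd_forecast(dates):
    """Median of the individual dates: robust to a few extreme outliers
    on either the optimistic or pessimistic end."""
    return date.fromordinal(int(median(d.toordinal() for d in dates)))

print(crowd_forecast(forecasts))  # 2029-09-01
```

Taking the median rather than the mean means one forecaster predicting 2100 barely moves the community estimate, which is part of why such aggregates tend to be reasonably stable.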

 

Regarding AGI, for the question of when "weak AGI" will be publicly announced (privately this would occur more like a year earlier, I'd guess, at whichever company gets there first), the current prediction is around 2029, only 7 or so years away. It used to be in the 2030s, but the predictions accelerated into the late 2020s recently after the spate of various impressive AI papers all hit, including the Gato work.

https://www.metaculus.com/questions/3479/date-weakly-general-ai-system-is-devised/

 

Regarding "strong AGI", the prediction is currently the late 2030s, and it also accelerated forward after the recent results.

https://www.metaculus.com/questions/5121/date-of-general-ai/

 

We appear to be close enough now to at least "weak" AGI that anyone still making statements to the effect of "it's a long way away, we don't need to worry yet" seems increasingly wrong.

 

Also in other news, video deepfake tech has advanced to the point it is being used in war:

European politicians duped into deepfake video calls with mayor of Kyiv

https://www.theguardian.com/world/2022/jun/25/european-leaders-deepfake-video-calls-mayor-of-kyiv-vitali-klitschko


14 hours ago, BrianA said:

accelerated into the late 2020s recently after the spate of various impressive AI papers all hit

I would go one level up and say most if not all of the advancements in AI rest on machine learning with neural networks, all made possible by HARDWARE!

GPUs led the way, but dedicated, specialized hardware is starting to accelerate too. This has taken advantage of the recent boom in the semiconductor market and the HUGE DEMAND from today's society to consume more technology (the gaming market, the social media / server market, self-driving cars, the Internet of Things!).

Imagine in 5 years how advanced the hardware will be and what kinds of models we can design and build. I believe I read somewhere that the pace of papers being written is on an exponential curve at the moment!

15 hours ago, BrianA said:

Regarding "strong AGI", the prediction is currently the late 2030s

I think we are set for plenty of major breakthroughs that could bring the date even closer! 


Very impressive, Brian. Thanks! At 31:30 he was pretty charitable toward Elon Musk's "spandex" robot. Here is another example (from Google) of linking the latest AI/ML results to real-world robotics:

Like Giant AI's robot, it's currently a bit slow and makes mistakes, but given the rate of progress, it looks to me like flexible, ubiquitous robots may finally be around the corner. Much closer than Intel's HERB butler robot that I was involved with circa 2010!

--Dean

 


Giant's bot is kinda slow, but apparently not as slow as Google's bot, whose video was sped up 10x. I also know that with DeepMind's recent Gato project, they had to artificially constrain the model size in order for it to control a robot arm in real time; apparently larger models have inference times that are too slow.

 

I like Giant's focus on getting the cost down and the safety up by making the limbs out of cheap, light plastic, driven by tendon-like cables rather than heavier internal motors. That seems more practical and easier to maintain long-term for the inevitable army of robot repair technicians.

 

However there have been attempts in the past to make something similar, like Rethink Robotics' Baxter: https://en.wikipedia.org/wiki/Baxter_(robot)

Discontinued in 2018 due to lack of demand. Too expensive? Too hard to train or too limited?

