
For anyone who wants to avoid "sandy hair" syndrome on the topic of AI risk by learning what's being said by people who really know what they're talking about, here is a pair of articles in MIT Technology Review from the last couple of days.

 

The first article is by Oren Etzioni, a former CMU colleague and compatriot of mine, who is now CEO of the Allen Institute for Artificial Intelligence. I like and respect Oren, but even a smidgen of rational thinking reveals the strawman nature and misdirection inherent in the so-called arguments he presents for why we don't need to worry about the existential risks of AI.

 

The follow-up article, published today, is by someone who is arguably one of the few people even more expert in the arena of AI than Oren - Stuart Russell. Stuart literally wrote the book on AI - Artificial Intelligence: A Modern Approach, an 1,100+ page textbook used by nearly every university course on AI around the world, originally published in 1995 and now in its 3rd edition, which he co-authored with renowned AI & computer science expert Peter Norvig.

 

Stuart takes Oren to task on his analysis and interpretation, to put it mildly. Here is how Stuart summarizes:

 

Many prominent AI experts have recognized the possibility that AI presents an existential risk. Contrary to misrepresentations in the media, this risk need not arise from spontaneous malevolent consciousness. Rather, the risk arises from the unpredictability and potential irreversibility of deploying an optimization process more intelligent than the humans who specified its objectives. This problem was stated clearly by Norbert Wiener in 1960, and we still have not solved it. We invite the reader to support the ongoing efforts to do so.

 

You can judge for yourself who has the better argument, and who got the better of whom in the exchange, but here is how I described it in my Twitter post:

 

Stewart Russell pwns @etzioni & defends Nick Bostrom on topic of AI risk and AI expert surveys. Ouch. @techreview technologyreview.com/s/602776/yes-w…
 
From his conciliatory reply at the bottom of Stuart's rebuttal, one gets the feeling Oren agrees with me...
 
--Dean

Thanks, Dean, I'm happy you're posting on this crucial topic again.

 

The first article is by Oren Etzioni, a former CMU colleague and compatriot of mine, who is now CEO of the Allen Institute for Artificial Intelligence. I like and respect Oren, but even a smidgen of rational thinking will see the strawman nature and misdirection inherent in the so-called arguments he presents that we don't need to worry about existential risks of AI.

Wow, sorry to hate on your friend Oren Etzioni, but he seems like a real bonehead here. What, are these respondents to his survey living in caves?

 


From his conciliatory reply at the bottom of Stuart's rebuttal, one gets the feeling Oren agrees with me...

 


Right, very interesting, and I'm glad he has the grace to say oooopsie.

 

Keep posting, Dean.


Hello all.

I am a software engineer with an interest in CR, programming, and of course machine learning.

The singularity is really farther away than we here on the CR forums might think.

I wish we were already at the famous "escape velocity", where for each year that passes, technology would add another year of life span.

Even for the most serious CR followers (I am really not there), it might not be the case.

 

My 2 cents: people are unwilling to commit to artificial intelligence research at a real "Apollo program" level. I am following DeepMind and other researchers (like Ben Goertzel) closely and appreciate their efforts. But it's just a drop in the ocean... the general population just isn't interested in AI (most people actually believe the scary Terminator, Transformers, and 2001: A Space Odyssey movies).

 

While the most recent advances (beating Lee Sedol at Go, DQN algorithms, the differentiable neural computer, even StarCraft II skills: https://deepmind.com/blog/deepmind-and-blizzard-release-starcraft-ii-ai-research-environment/) are all very promising, it will take decades before these trickle down to general usage.

 

 

Why do I say that?

My software engineering experience: in the 1950s an interesting functional programming language was developed. Its name was LISP. Only now, some 60 years later, are we using functional programming in our working lives as programmers (one of those languages is actually a LISP dialect called Clojure). Ideas as old as the '70s (the actor model) have only been implemented in real systems after 2009.
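
To make the actor model concrete for anyone curious, here is my own toy illustration in Python (not any production framework): an actor is just private state plus a mailbox, drained by a single thread, so no shared-memory locks are needed.

```python
import queue
import threading

class CounterActor:
    """Minimal actor: private state + a mailbox, processed sequentially."""

    def __init__(self):
        self._mailbox = queue.Queue()
        self._count = 0  # private state, touched only by the actor's own thread
        self._thread = threading.Thread(target=self._run)
        self._thread.start()

    def send(self, message):
        self._mailbox.put(message)  # the only way to interact with the actor

    def stop(self):
        self._mailbox.put("stop")
        self._thread.join()

    def _run(self):
        while True:
            message = self._mailbox.get()
            if message == "stop":
                break
            if message == "increment":
                self._count += 1
            elif message == "report":
                print("count =", self._count)

actor = CounterActor()
for _ in range(3):
    actor.send("increment")
actor.send("report")  # prints: count = 3
actor.stop()
```

Frameworks like Akka (which appeared around 2009) wrap exactly this pattern in industrial-strength form - the kind of "real system" I have in mind above.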

 

Neural networks are more than 50 years old (and we are only now starting to build better architectures).

Electric cars are nearly a century old.

While I was a huge fan of Ray Kurzweil, I think he might be overly optimistic.

 

If there is to be a weapon that slays the dragon tyrant (aging, that is), I believe artificial intelligence will be it.

Will the singularity happen around 2045? I wish it were so... but I think it will take at least 20 more years (wishing it were here already).

I hope my children will live to see it... although being on the last train to the mountain (the last ones to die) is indeed a pity.


All,

 

I've promised to keep people reading this thread apprised of the latest important developments and milestones on the path to AI, as I see them. Here are not one but three advances announced today, each of which individually might seem innocent enough, but which together show a pretty disconcerting degree of progress towards a scary AI scenario. Here is how I described the first on Twitter:

 

Dean Pomerleau @deanpomerleau     11m

This new Google paper on RNNs designing better RNNs appears disconcertingly close to recursive self improvement... openreview.net/forum?id=r1Ue8

 


 

 

Basically, it is a hierarchical recurrent neural network architecture (using LSTMs, for those knowledgeable about RNNs) that explores many neural network architectures in parallel to find the best one for the task. In the paper, the tasks they tested were object recognition (the CIFAR-10 database) and natural language parsing (the Penn Treebank database). They were able to get state-of-the-art performance with the NN architectures their NN-based NN-designer dreamed up.

 

Of course, to really achieve recursive self-improvement, the RNN would have to explore optimizing its own architecture, rather than the architecture for a separate task. But it appears to me that feeding the RNN to itself for recursive optimization is not that big a stretch from what was shown in this paper.
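
For the more hands-on readers, here is a deliberately tiny sketch of the core loop, in the spirit of the paper rather than a reproduction of it: a "controller" samples candidate architectures, receives a reward (here just a stub standing in for the validation accuracy you would get by actually training each child network), and nudges its sampling policy towards higher-reward choices via REINFORCE. The width menu, reward function, and learning rate are all made up for illustration; the paper's actual controller is an LSTM that emits architectural choices sequentially.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy search space: for each of 3 "layers", pick a width from this menu.
WIDTHS = [32, 64, 128, 256]
N_LAYERS = 3

# Controller "policy": one softmax over widths per layer (a crude stand-in
# for the paper's LSTM controller).
logits = np.zeros((N_LAYERS, len(WIDTHS)))

def sample_architecture():
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    choices = [rng.choice(len(WIDTHS), p=p) for p in probs]
    return choices, probs

def reward(choices):
    # Stub for "train the child network and return validation accuracy".
    # Here the (made-up) optimum is width 128 at every layer.
    return float(np.mean([1.0 if WIDTHS[c] == 128 else 0.2 for c in choices]))

baseline = 0.0
for step in range(500):
    choices, probs = sample_architecture()
    r = reward(choices)
    baseline = 0.9 * baseline + 0.1 * r           # variance-reduction baseline
    for layer, c in enumerate(choices):           # REINFORCE update per choice
        grad = -probs[layer]                      # d log p(c) / d logits
        grad[c] += 1.0                            #   = onehot(c) - probs
        logits[layer] += 0.1 * (r - baseline) * grad

best, _ = sample_architecture()
print("controller now favors widths:", [WIDTHS[c] for c in best])
```

The "recursive self-improvement" worry above amounts to replacing that stub reward with "how well does the candidate perform as a controller?", closing the loop on itself.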

 

The second one is easier to explain. Google's DeepMind announced today that it has partnered with Blizzard (makers of World of Warcraft) to develop reinforcement learning systems to conquer a much more challenging game than the Atari video games it previously tackled. The new game is StarCraft II, one of the most popular multi-player eSports games in the world. For those of you who don't know StarCraft, it is a real-time strategy game. Here is a picture:

 

[Image: StarCraft II gameplay screenshot]

 

Here is a description of game play:

 

Most importantly, StarCraft II is a game full of hidden information. Each player begins on opposite sides of a map, where they are tasked with building a base, training soldiers, and taking out their opponent. But they can only see the area directly around units, since the rest of the map is hidden in a “fog of war”.

 

“Players must send units to scout unseen areas in order to gain information about their opponent, and then remember that information over a long period of time,” DeepMind says in a blogpost. “This makes for an even more complex challenge as the environment becomes partially observable - an interesting contrast to perfect information games such as Chess or Go. And this is a real-time strategy game - both players are playing simultaneously, so every decision needs to be computed quickly and efficiently.

 

“An agent that can play StarCraft will need to demonstrate effective use of memory, an ability to plan over a long time, and the capacity to adapt plans based on new information.”
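
To make the "partially observable" point concrete, here is a toy sketch (my own illustration, not DeepMind's actual observation format) of how fog of war turns a perfect-information map into a partial one: the agent observes only cells within some radius of its own units, and everything else is masked.

```python
import numpy as np

def fog_of_war(full_map, unit_positions, sight_radius=2):
    """Return the map as the player actually sees it: cells near a friendly
    unit are visible; everything else is hidden behind the fog (-1)."""
    h, w = full_map.shape
    visible = np.zeros((h, w), dtype=bool)
    for r, c in unit_positions:
        r0, r1 = max(0, r - sight_radius), min(h, r + sight_radius + 1)
        c0, c1 = max(0, c - sight_radius), min(w, c + sight_radius + 1)
        visible[r0:r1, c0:c1] = True
    return np.where(visible, full_map, -1)

world = np.arange(36).reshape(6, 6)  # stand-in for terrain / enemy positions
print(fog_of_war(world, unit_positions=[(0, 0)], sight_radius=1))
```

An agent playing chess or Go gets the whole board every turn; an agent playing StarCraft gets only the masked view, which is why memory and scouting suddenly matter.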

 

So much for AIs only being able to handle toy games like Atari's Breakout or board games like chess and Go... In fact, StarCraft strongly resembles military strategy, complete with the real-time "fog of war" mentioned above. Here is how Oriol Vinyals, one of the DeepMind researchers on the project, describes their ultimate goal and how StarCraft facilitates it:

 

But the aim isn’t just to improve video games. Vinyals says that StarCraft II is a natural next step for the studio’s eventual goal to use AI to solve real-world problems. The lack of perfect information, the realistic (for a certain definition of “realistic”) visuals, the requirement to develop memory and even a sort of imagination all are important skills for an AI trying to understand the real world. Games are a better way to understand the real world than the real world, he says. 

 

But you might be thinking, "Sure, but these are all advances contained within computers / game worlds. They aren't going to have much, if any, capability to wreak havoc in the real world." I actually pointed out something similar earlier today in my post to the Self-Driving Cars thread, suggesting that driverless cars will be pretty defenseless, even against kids throwing rocks. But that's where the third advance announced today comes in: swarms of computer-controlled flying drones under development at my (other) former employer, Intel. Here is a video of Intel's 500-drone swarm in action, a new world record:

 


 

Currently all 500 can be controlled by a single person using Intel's software. Why Intel? A company spokesman explains:

 

Today's processing takes place in everyday objects and tiny gadgets. Drones are no exception. We see them as flying computers... An entire fleet of hundreds of drones can be controlled by a single computer... There's tons of data that needs to be processed, which is going to drive more computing requirements. And anything that requires more computing requirements fits very well with the Intel strategy.

 

Quite right, and quite beautiful. Now imagine if, instead of carrying pretty lights, each of those 500 drones were carrying an IED or some other incendiary device. Other researchers have already developed drones with lightweight sensors that allow them to skim the ground and avoid obstacles. In fact, Intel has developed its own obstacle avoidance technology for drones that can "see, think and adapt". Or, if the drones are small and robust, they can just say "screw it", bump into each other, and plow their way around the environment like insects do. As a bonus, if you haven't seen it, hobbyists have also built gun-toting drones.

 

As leading AI researcher Stuart Russell puts it, drones and other AI weapons:

 

"could change the scale in which small groups of people can affect the rest of the world,” he says. “They can do the damage of nuclear weapons with less money and infrastructure.”

 

Doubt that things are moving fast? Here is how the lead engineer on the Intel drone project in the video above puts it:

 

This is something we never could have done last year. It has been an amazing experience just to see the technology develop so quickly.

 

So, there you have it: three exciting new projects announced today, taking us further down the road towards an AI apocalypse.

 

But just so I don't leave everyone too stressed out, I give you this Marconi Union video to accompany their song Weightless, which has been voted the most relaxing song ever. It is quite hypnotizing, until you realize those are drones in the video, flying in an amazingly intricate formation. Then it should both scare the pants off you and lull you to sleep, just like the AIs will want:

 


 

Oh yeah, I almost forgot. A deadly new superbug has also just arrived in the US for the first time today, and if we're not careful, we may just elect as President a megalomaniacal narcissist who wonders why we can't use all those nuclear weapons we've got stockpiled.

 

Chill out and have a nice day.

 

--Dean


  • 3 weeks later...
It is amazing what science/medicine can do these days.

 

Fully Implanted Brain–Computer Interface in a Locked-In Patient with ALS

Mariska J. Vansteensel, Ph.D., Elmar G.M. Pels, M.Sc., Martin G. Bleichner, Ph.D., Mariana P. Branco, M.Sc., Timothy Denison, Ph.D., Zachary V. Freudenburg, Ph.D., Peter Gosselaar, M.D., Sacha Leinders, M.Sc., Thomas H. Ottens, M.D., Max A. Van Den Boom, M.Sc., Peter C. Van Rijen, M.D., Erik J. Aarnoutse, Ph.D., and Nick F. Ramsey, Ph.D.

N Engl J Med 2016; 375:2060-2066. November 24, 2016. DOI: 10.1056/NEJMoa1608085


 

Options for people with severe paralysis who have lost the ability to communicate orally are limited. We describe a method for communication in a patient with late-stage amyotrophic lateral sclerosis (ALS), involving a fully implanted brain–computer interface that consists of subdural electrodes placed over the motor cortex and a transmitter placed subcutaneously in the left side of the thorax. By attempting to move the hand on the side opposite the implanted electrodes, the patient accurately and independently controlled a computer typing program 28 weeks after electrode placement, at the equivalent of two letters per minute. The brain–computer interface offered autonomous communication that supplemented and at times supplanted the patient’s eye-tracking device. 

Al,

 

Yes - very cool. Here is a popular press account of the same research in Scientific American. Believe it or not, I've worked with Andy Schwartz, the U of Pittsburgh researcher quoted in the article who is doing the same sort of research here in Pittsburgh. We collaborated back when I led the NeuroSys Project at Intel Labs Pittsburgh on brain-computer interfaces (popular press coverage).

 

Nevertheless, I remain skeptical that direct brain-computer interfaces will become ubiquitous anytime soon - at least not for a couple of decades, and by then I suspect we'll have been far surpassed by artificial intelligences. Why? Because except for a few very special cases (like tetraplegic or locked-in patients), we humans already have high-bandwidth I/O capabilities that have been optimized by evolution over eons to serve us well. These inputs (i.e. our senses, mostly vision and audition) and outputs (speech, manual dexterity) will be very hard to improve upon via direct stimulation or readout of brain activity patterns.

 

The challenge of beating the brain I/O we've got now, coupled with the invasive surgery required to listen to or "tickle" individual neurons (or small groups of neurons) in order to achieve high-bandwidth BCI, makes it a long-term prospect at best. When and if we eventually develop nano-robots with wireless communication that are small enough to swim around in our bloodstream and cerebrospinal fluid, then we might get effective, high-bandwidth BCI. But that remains a long way off.

 

You may think I'm "tilting at windmills", and that nobody expects direct brain-computer interfaces to be on the horizon, at least for us "normal" (non-paralyzed) people.

 

But none other than Elon Musk (not to mention Ray Kurzweil) thinks that direct brain-computer interfaces may be humanity's only hope of keeping up with the explosion of artificial intelligence. They think we'll have to merge with the AIs, through direct brain-computer communication, in order to avoid being left behind.

 

Here is Elon talking about BCI in the form of a "neural lace" that blankets the brain and serves as a bi-directional information conduit (article):

 

 

Unfortunately for Elon (and all of us), trying to stuff data into our brains Matrix-download style (i.e. faster than we can take it in and process it with our eyes) is a long way off, and will require neuroscientists to crack the code of how the brain stores, recalls and processes information. It will likely happen someday (if we don't kill ourselves first), and so neural laces and direct brain-computer communication may happen as well, but I make the strong prediction that it won't happen until after artificial intelligence has far surpassed us in capabilities.

 

Here is a sort of "proof" of my assertion:

 

High-bandwidth BCI will require that we understand how the brain works, mechanistically and in minute detail. If we've figured the brain out with that degree of precision and detail, we'll be able to reproduce it in parallel, neuromorphic computer hardware that runs millions of times faster than the neural "wetware" of our brains. So we'll have the capability to build AIs that operate like our brains do, but that run millions of times faster and can process millions of times more information in parallel. So we inevitably fall hopelessly behind, even once we've perfected direct brain-computer communication.
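
To put rough numbers on "millions of times faster": biological neurons spike on millisecond timescales (on the order of 10^2-10^3 Hz), while silicon logic switches at gigahertz rates (~10^9 Hz). The raw speed ratio is therefore roughly 10^9 / 10^3 = 10^6, i.e. about a million-fold, and that's before counting silicon's advantages in parallel fan-out and signal fidelity.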

 

I'm afraid that we're inevitably going to be eclipsed by our creations, unless we figure out a way to keep them as "slaves", which might be possible, and even ethical, if we can figure out a way to make them smart but lacking consciousness - perhaps by preventing them from developing a rich representation of themselves and their situation, i.e. keeping them from becoming self-conscious.

 

Of course, super-intelligent AIs millions of times smarter than we are but lacking consciousness are a whole 'nother can of worms when it comes to dangers to the future of humanity...

 

I hate to say it, being a staunch techno-progressive myself, but as I acknowledged over here earlier today, maybe the signal the conservatives in "middle America" sent this election is correct - maybe it's time to slow down our headlong rush towards what we progressives hope will be a better future. Maybe we're moving too fast into territory we don't/can't understand, and it may ultimately bite us.

 

--Dean


Here is yet another road down which we're careening towards dystopia. My tweet from earlier today summarizes:

 

Well this is disturbing. @Accenture is LITERALLY implementing plot of @marshallbrain's dystopian 2003 novel Manna

 

Here is an excerpt from this TechCrunch article about the consulting company Accenture's effort to improve the efficiency of its clients' operations using AI, compared with a couple of passages from Marshall Brain's very compelling (and scary) 2003 novel Manna (available for free from Marshall's website):

 

[Image: excerpt from the TechCrunch article alongside passages from Manna]

 

What could possibly go wrong? Well, just as Accenture suggests, this approach allows any person to quickly step in to do any low-skill job - no training required. Just listen to the instructions whispered in your ear by the AI, and follow them to the letter. If you don't, the all-seeing AI knows it immediately, you get fired, and you're replaced by someone else who is willing to take your place for a below-subsistence wage. This obviously gives the business owners incredible leverage over all employees, since virtually all non-skilled workers are immediately interchangeable. In a sense, the employees are just bio-robots controlled by the company's master AI.

 

Marshall runs with that concept, and dystopia quickly ensues, even before the AIs become super-intelligent, simply as a result of greedy business owners driving down wages and keeping all the profits that accrue from this new, more efficient way to operate a business. Most people are rounded up and warehoused in huge prison-like public housing "dorms" and fed three square meals of gruel each day. A really dystopian version of Universal Basic Income.

 

So, thanks to Accenture and the other tech companies bringing this vision into reality, we've got that to look forward to...

 

Marshall does portray a more optimistic alternative in the second half of the book, where wealthy, progressive people band together, buy Australia (literally), and turn it into a robot-driven socialist utopia where everyone's needs are met and all are free to pursue whatever tickles their fancy.

 

Unfortunately, as I discussed yesterday, and on Twitter last night with Mr. UBI himself, Scott Santens (post about it here), I think our society is far from ready for a UBI, for several reasons discussed in those two posts.

 

--Dean


Over on this totally unrelated thread on the world's oldest person, Sthira said:

 

 On second thought: Dean shall tweet Trump about skin and hair regeneration, since Dean is already regularly tweeting at the empire. 

 

I thought I'd take it as an opportunity to spell out what I consider to be the really big picture (again) - for those of you who've forgotten.

 

Yes Sthira, it's unfortunate, but that's all I can do for now - tweet angrily at Trump, and try to organize my AI & ML friends to get their act in gear. 

 

Unfortunately, those are the same machine learning & AI friends who weren't ready to jump on the Trump threat when I foresaw it earlier this year and tried to warn them to ramp up their efforts. Now we have to wait until the next major events in my premonition unfold - namely, the world going to hell as a result of Trump's craziness.

 

Eventually, as I foresaw, things will get so messed up with our world that Google, Facebook and the other leaders in AI & ML will mount a cooperative "Manhattan Project" to develop superintelligence, in order to try to figure out what can be done to prevent further meltdown of human civilization. The system they develop will involve large scale simulations to explore alternative scenarios. We're living in one of those simulations now.

 

Don't expect Trump to accomplish anything good, except potentially spurring a massive effort to clean up  the mess he creates. Such an effort has a remote chance of working out well, at least in one or two of the alternative worlds being explored via simulation. Of course most simulations will turn out to be dead ends, so we're likely screwed.

 

Just thought it useful to reiterate where I think all this is likely to be headed. Time will tell...

 

--Dean


...Google, Facebook and the other leaders in AI & ML will mount a cooperative "Manhattan Project" to develop superintelligence, in order to try to figure out what can be done to prevent further meltdown of human civilization. The system they develop will involve large scale simulations to explore alternative scenarios. We're living in one of those simulations now.

Elucidate. Assume the superintelligence is now developed sufficiently, is now humming peacefully along, is now spinning out civilization-meltdown-prevention scenarios: what might these look like? For example, we, with our own colorful imaginations, may invent many alternative scenarios to the one we live in now; why would AI scenario creations be any different from our own, unless they were backed by enforcement? That is, it's one thing to say we're gonna live in one of the next five utopian scenarios concocted by superintelligence, but isn't it another to carry it to fruition? The AI needs not only to create the hypothetical scenario, it also needs to enact it.

 

Of course most simulations will turn out to be dead ends, so we're likely screwed.

Are you talking about simulations that exist only as hypothetical computer models, or do you mean simulations that somehow involve practical, fundamental changes enacted to the rules of civilization?

 

Full Sthira disclosure: this isn't posted as debate junk in order to argue pointlessly. I believe we have huge, huge problems, mostly climate change and possible nuclear wars, and I'm only curious because I don't know squat, just what I read in the left-wing liberal press (NYT, Washington Post, Slate, Buzzfeed, Reddit, Pointe haha...). And I'm curious why you think we're screwed, and whether thinking we're screwed might become a self-fulfilling prophecy if repeated often enough?

 

What's the best scenario you can imagine? And stop there with the best, unless you'd like to articulate more best scenarios. I don't want to hear more dystopias, because that's all I read everywhere else. Dystopian thinking is too easy, so c'mon over to the bright side - even now, even now that we have a fucking orange king who just made a random call to Taiwan and pissed off China - even now, imagine a better world and give that to your influential AI friends and colleagues.


I don't want to hear more dystopias because that's all I read everywhere else. Dystopian thinking is too easy...

China’s New “Social Credit Score” Brings Dystopian Science Fiction to Life

 

https://futurism.com/?p=63190

 

Big Brother State

 

The Chinese government is taking a controversial step in security, with plans to implement a system that gives and collects financial, social, political, and legal credit ratings of citizens into a social credit score. The idea itself seems straight out of science fiction, but in a society like China’s, it’s already beginning to take shape.

 

For a nation that has a more-or-less openly totalitarian approach to governance, a move to install a Big Brother social credit system shouldn’t be surprising. Proponents of the idea are already testing various aspects of the system — gathering digital records of citizens, specifically financial behavior. These will then be used to create a social credit score system, which will determine if a citizen can avail themselves of certain services based on his or her social credit rating.

 

“China has a long way to go before it actually assigns everyone a score. If it wants to do that, it needs to work on the accuracy of the data. At the moment it’s ‘garbage in, garbage out,’” explained Wang Zhicheng of Peking University’s Guanghua School of Management.

 

Taking a Cue From Science Fiction

 

If China successfully consolidates data from around 1.4 billion people into one system, then it’s more than likely that this social credit system will be put in place. Once it does, it’s going to be like an episode from Black Mirror — the social credit score of citizens will be the basis for access to services ranging from travel and education to loans and insurance coverage.

 

Of course, supporters of this system have their reasons, including developing a unitary system for granting citizen’s access to financial services, given that 1.3 million Chinese don’t own a credit card. Besides, the government assures that the system would “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step,” as The Wall Street Journal reports. Which would never work in the United States — Washington, D.C., for instance, would become a ghost town.

 

But perhaps, in a country where even Facebook is largely censored and internet access is still limited to what the government allows, a social credit scoring system would be less beneficial than it would seem. Certainly, it would grant the government more access and control over the lives of its citizens. And, if that episode from Black Mirror is to be believed, it’s a system that can do more damage than good.


Sthira,

 

-----------

Prescript

 

Thank you for waking me from my post-election depression, narrow anti-Trump focus & my overall "dogmatic slumber" long enough to pen this expansive post. It helped me, and hopefully might help others. Who knows, it might even eventually become a self-fulfilling prophecy as you allude to.  Your prodding, coupled with seeing some eerie parallels in the movie Arrival last night, gave me a kick in the pants to write this. Thanks.

 

Having said that - readers be warned. I start out with fairly pedestrian speculations that I think everyone will be able to relate to and follow. But as this post progresses, I get more speculative (some would call it crazy), personal, and, paradoxically, more optimistic.

 

Feel free to stop reading at any time. I'm writing it for a few humans who will be able to follow & appreciate it, but more importantly, to get it into a public, web-accessible forum for the benefit of some future AGI/ASI who might one day blast through these archives looking for useful stuff to learn and apply in its own place and time - more on that below.

 

Finally, be warned, this turned out to be perhaps my longest post ever, except for my Cold Exposure Albatross post. It is 20 single-spaced pages (in MS Word) and close to 10,000 words. As I said, I wanted to get it out in public whether anyone reads (all of) it or not.

 

End Prescript

------------------

 

Sthira wrote:

Elucidate. Assume the superintelligence is now developed sufficiently, is now humming peacefully along, is now spinning out civilization meltdown prevention scenarios, what might these look like? 

 

Here is a super-easy-to-understand way AI (or ASI) might be used to save our skin. It could be tasked with figuring out and implementing a strategy to reverse global climate change. To do this safely and effectively, it would likely require large-scale, hyper-realistic simulations of both the physical world and the socio-political ramifications of any attempted solutions (e.g. geoengineering an atmospheric sun shield vs. a massive global carbon tax vs. massive "population control" efforts...). We could be living in one such simulation, produced by a future climatologist, attempting to explore whether WWIII and the associated massive die-off in human population solves the global warming problem and might eventually lead to a post-apocalyptic utopia. How's that for a happy conception of our metaphysical situation?

 

If that scenario for what may be happening in our (simulated) world right now is too depressing for you, here is a nicer scenario (for the best-case scenario, see further below) that is fresh in my mind, having just watched Arrival last night (a very rare 'date night' with my better half) and being about to start reading Ted Chiang's Story of Your Life, the short story on which the movie is based.

 

I'm about to discuss the parallels between the movie and the world we're living in at this moment. It will be a major spoiler for anyone who hasn't seen the movie. If you haven't, you have three options:

  1. Go see Arrival before reading below. It is a very good movie. I highly recommend this option.
  2. Read Chiang's Story of Your Life, the short story in this collection on which the movie is based. I haven't read it yet (the collection just arrived via interlibrary loan yesterday), but from what I've heard there are pretty good parallels between the movie & the short story.
  3. Say "screw it" and read the two spoiler & explainer links I'm about to post, if you don't plan to see the movie, or don't mind knowing the ending ahead of time.

If you've seen Arrival, or if you haven't but don't give a crap about spoilers, you need to read these two spoiler/explainer articles (here and here - particularly the first), which will be central to what I discuss below. They aren't that long. Do it now, I'll wait...

 

===========================================

Arrival Spoiler Discussions Below - You've Been Warned

===========================================

 

OK - from here on I'm going to assume you're reasonably familiar with the plot of Arrival, and at least have an inkling about the time-warping idea central to its plot.

 

So back to your question, Sthira - how might humanity get out of the existential-risk hole we've dug for ourselves, via AIs and simulation?

 

First, an easy (if farfetched) idea that many have thought of, and that is (sort of) hinted at in the movie, but is more clearly articulated in another fun movie, Will Smith's Independence Day, as well as any number of other sci-fi classics. The short synopsis of this plotline is as follows:

  1. Aliens arrive or we discover them at a distance before their arrival, but determine they're headed our way. You could substitute "asteroid" for "aliens" in this scenario as well - i.e. any external existential threat to all mankind.
  2. Rightly or wrongly, we interpret their coming to earth as a hostile act, and attribute to them nefarious intent.
  3. Humanity bands together to fight / resist them - under the old adage "The enemy of my enemy is my friend".
  4. This ushers in a One World Government that peacefully unites & governs humanity with benevolence after we throw off the potential yoke of our would-be oppressors.

As I said, that's not the exact plot of Arrival, but it bears a resemblance, is fairly straightforward, and is worth mentioning as one scenario that could save us from the seemingly bleak real-world dystopia we appear to be facing. If history has shown us anything, it is that an external threat brings out the best in a group of humans, causing them to draw together, support each other, and prosper to fend off the perceived "enemy".

 

And germane to your question about AIs & simulation - if, as I suspect, we are living in a computer simulation run by future humans trying to explore scenarios to get themselves out of the mess they've created, the sudden appearance on earth of alien creatures with incredible capabilities we can currently create only with CGI is not all that farfetched. In other words, the Simulators might indeed throw the "alien scenario" at the simulated versions of themselves (i.e. us) that they've created, in order to see if such an external threat / challenge could have in fact drawn their ancestors together and avoided the mess they are in now. Who knows, the Simulators might even discover (via simulation) that an alien arrival at their own point in the timeline might still help them avoid (further) disaster by drawing humanity together to fight a common enemy.

 

OK Sthira, I've now given you two easy cases in answer to your "why?" and "how?" questions. Namely, why and how might superintelligent AI help humanity extricate itself from the hole it has dug for itself to create a nicer future.

 

The short answer is: with nearly unlimited computing power, an ASI could run hyper-realistic full-world simulations in its "Mind". These simulations would explore alternative pasts (and futures) for humanity that might work out better. The ASI's task would be to find one such scenario that it could nudge future humanity towards (or, in the best case, jump humanity into - see below) in order to bring about utopia, or at least less of a dystopian future than the one they (i.e. the humans who created the ASI running the simulation) are facing now. The simulations are literally "thought experiments" in the mind of an ASI. We (i.e. the entire world as we know it) are one of those thought experiments, if my model of metaphysics is correct.

 

So believe it or not, now is where things start to get a bit tricky, personal, and eerily parallel with Arrival, but germane to our own current situation.

 

Recall that in the movie, a monumental event (the arrival of super-powerful aliens) brought human civilization around the world to the verge of collapse - the aliens' arrival triggers people around the world to panic, militarize, loot on a massive scale, and edge towards WWIII.

 

Notice any potential parallel cataclysm in our immediate future? That's right: our soon-to-be clown prince, "bull in a china shop" president, Donald J. Trump. This is exactly as I predicted in the footnote to one of my more speculative posts earlier in this thread, and as I updated before the election in this post on the Dysfunctional US Politics thread.

 

TL;DR - Trump, or something/someone that Trump ushers in, is going to screw up our world royally, and we'll be forced to accelerate development of AI to figure out how to get ourselves out of the resulting mess. Damn the risks of a Terminator or "paperclip maximizer" scenario - things will get bad enough that we will eventually pull out all the stops and go full-steam-ahead with the development of artificial superintelligence, as our only hope of avoiding dystopia / extinction.

 

OK - that seems like an easy enough thing to grasp - i.e. substitute "Trump" for "powerful aliens" as the disruptive influence that might lead to our (self-)destruction in the worst case scenario or to the unification of humanity in the best-case scenario by kicking the "better angels of our nature" into high gear to solve our global problems, assisted perhaps by benevolent AIs & simulations.

 

Here is where the time-squirrelly stuff comes in, eerily paralleling the Arrival movie.

 

Recall from the movie that the way they avoided the impending self-inflicted thermonuclear armageddon involved communication across time by Louise, the linguist and main protagonist of the movie. This communication across time took several forms. First, it happened rather vaguely and ambiguously via flashbacks (which turned out to be flash-forwards - read this spoiler/explainer again if you're confused) of interactions with her daughter, which gave her hints as to how to communicate with and deal with the aliens. Second, and much more explicitly (for those in the movie audience who needed a more obvious plotline), this "communication across time" occurred in the form of Louise's call with the Chinese General, who was the lynchpin, threatening to start WWIII by attacking the aliens out of fear and hostility.

 

The first interaction (Louise & her daughter) was particularly jarring for me and my wife. In fact, we both had tears streaming down our faces less than two minutes into the movie, reliving scenes drawn exactly from our own life two years ago, right down to walking along a sterile hospital corridor in a daze after receiving the news that one's child has terminal cancer. But let's leave that aside, and focus on the second, simpler-to-grasp example of communication across time: the interaction between Louise & the belligerent Chinese General.

 

In short (to recap), Louise avoided WWIII and saved the world via communication across time, which she apparently was able to accomplish via her connection with the super-advanced aliens who had mastered the art & science of time. Specifically, in that climactic phone call, Louise was able to know (and speak, in Mandarin) the dying words of the General's wife - something that would have seemed to the General to be impossible for her to know. This seemingly impossible event was so jarring to the Chinese General that he took it as a sign. He was so blown away by her clairvoyance, and (apparently) her brief explanation of how she could know it, that he stood down his army and retracted his decision to attack the aliens, thereby avoiding global war and ushering in an era of world peace and cooperation that we get only a fleeting, but critical, glimpse of near the very end of the movie.

 

In that glimpse, Louise and the General meet in person for the first time at what appears to be a UN / World Government black-tie celebration happening 18 months after the aliens have left, and after the world has been unified. The General tells Louise his real reason for attending the event was to meet her and express his gratitude for what she had done 18 months earlier to save the world. In that scene, Louise plays dumb, either for theatrical purposes or because she really is unaware of what she had done 18 months earlier - suggesting this scene too might be a flash-forward dream in Louise's mind.

 

The General explains the miraculous events that played out 18 months earlier. He "reminds" Louise that she called him at the critical moment on his private phone (the number for which he gives her, for the first time, at the party 18 months hence) and repeated to him his wife's exact dying words. This seemingly magical feat of clairvoyance at the critical moment so blew the General's mind that it caused him to call off the war. Then, at the party 18 months after the event, he whispers his wife's dying words to Louise. In short, it wasn't until 18 months later that Louise would learn the critical information she needed to prevent the war much earlier in the timeline.

 

In a time-travelly, flash-forward, remember-the-future twist, the Louise of 18 months earlier remembers this conversation she would eventually have with the General (whether in a dream or a premonition), dials his private number, speaks his wife's dying words to him, and war is avoided.

 

In short, Louise's alien-induced ability to see specific, minute details about the future (i.e. the secret phone number & dying words) enables her to steer momentous events in the present to avoid disaster. As an aside, the aliens give Louise this gift of foresight so that she might save humanity because, as they tell her, they have foreseen that future humanity will need to come to the aliens' aid 3,000 years in our future, when the aliens are themselves in trouble. In other words, the aliens came to save humanity so humanity could save the aliens in the future. But that is minor, and could get us into ideas about determinism, theories of "block time", and questions about free will that I don't want to go into now... I've got more pragmatic reasons for sharing all this detail, and this post is destined to be 10,000 words already...

 

The key is that Louise saw the future, and by doing so saw a small but critical action she could take on her own in the here and now to turn the tide of events and save humanity - i.e. call the private phone of the belligerent Chinese General and say the exact few words needed to prevent WWIII.

 

Why am I going into all this detail, you may be wondering? Because a few months ago, during the height of my more "emphatic" (some would say bordering on schizophrenic) obsession with the simulation hypothesis and my premonition of an impending Trump-induced apocalypse, I came up with exactly this scenario and a related solution, in which I was the protagonist, preventing armageddon by making an astonishingly prescient prediction that would have magnified my influence to the point where the powers-that-be couldn't ignore my warnings and my advice.

 

I've only shared this previously with a couple of friends/colleagues at Google whom I was hoping to influence. I'm going to share it now publicly for the first time, in hopes that it somehow might eventually become a self-fulfilling prophecy, in a way that will become clear below. It's a good story (I think), whether you believe it to be plausible or not, so I encourage you to read it as the plot line for a movie not unlike Arrival, only marginally more plausible.

 

But I'm not writing it just for human readers. As I said at the beginning, I'm also sharing it in hopes that some future AI (perhaps from Google), ripping through the archives of the entire internet to learn all it can on its way to ASI-hood, will stumble across what I've written below, take it seriously, and make it happen, via simulation, as I'll describe.

 

So here goes.

 

Like Louise in Arrival, a few months ago I (vaguely) envisioned very dire times ahead for humanity, part of which I elaborated on in the footnote to the post I mentioned above. Somehow (perhaps with the help of "hints" from a future version of me), I stumbled on the idea that I needed to find a way to wake humanity up to the disaster that would unfold should we elect Donald Trump. My goal was twofold: 1) prevent Trump's election, and 2) kick off an "AI Manhattan Project" to accelerate AI & simulation development. Both goals were designed to help us avoid a dystopian future.

 

But I realized that without making a really big splash to draw attention to the dangers we were (and now are) facing, there was no way I could have had any impact whatsoever towards preventing Trump's victory or towards accelerating AI development. After all, what can a single person crying in the wilderness do?

 

So (with what I thought, and still think, was the help of hints and/or premonitions), I hatched a plan to magnify my voice and my influence to instigate those two outcomes.

 

Now the right question is: "Dean, what cockamamie scheme/scenario did you formulate that could enable you to influence the course of major world events, like Trump's election or the development path of the whole field of AI?"

 

Here is the background picture I envisioned, and which I still believe has a reasonable likelihood of being the case, despite the (obvious) failure of events to unfold the way I was hoping (i.e. preventing Trump's election & accelerating AI).

 

I speculated (and still speculate) that we are living in one of many simulations being run by an ASI that a desperate future humanity created to save its skin by exploring alternative realities, as I outlined above. This ASI is rooting for us (in its simulation) to figure out a way to avoid the disaster(s) that it (and we) face. It is therefore open to giving us hints about the dark future that lies ahead, and how to avoid it.

 

But at the same time the ASI is largely constrained by the "laws of physics" by which our simulation operates, and by the fact that the ASI itself doesn't know the right answer for avoiding disaster - that's why it is running the simulations (including ours) in the first place. So it can't simply write in big letters across the sky (or on the cosmic microwave background radiation), "Humanity - Do X to avoid armageddon!". We're pretty much in this on our own, perhaps with the help of a few subtle but helpful hints at best.

 

But one thing is for sure. By looking at its own sad history, the ASI realizes that if humanity could have prevented Trump's election and kicked off AI research in earnest a few years earlier, we would have developed ASI sooner, and without the cataclysmic Trump threat, would therefore have had more time and latitude to figure out how to prevent whatever disaster(s) the ASI and humanity are facing in the future (e.g. climate change).

 

So those would naturally become the Simulators' two intermediate goals: find a simulated world like their own, but in which Trump's election is avoided and AI research is kicked off earlier, since it is along that path that they will discover a brighter future for humanity. [I'll discuss below how future humans, living in hell now and running all these simulations, could leverage such a divergent, optimistic future to improve their own once a utopia scenario is discovered in simulation, in case you're wondering...]

 

I like to think we (i.e. our simulation) came close to achieving those two intermediate goals via the scheme I'm about to describe. But unfortunately "close" only counts in horseshoes and hand grenades. I (or rather my plan) failed, and now we (our simulation) are almost certainly doomed. But it's worth enumerating anyway, since there is still some (very remote) hope things could still work out, even for us and our doomed branch of the simulation (see below).

 

So here was the plan I formulated and implemented a few months ago in hopes of magnifying my voice, and my influence. You may think it silly and farfetched, but it was the best I could come up with at the time.

 

When I was thinking about all this a few months ago, Tesla was about to release Version 8.0 of their Autopilot code, in response to the recent crash in which one of their cars was accidentally turned into a convertible while under Autopilot control, with fatal results...

 

I realized it wouldn't be that hard for a tiny glitch, in either the hardware or software of Tesla's Autopilot, to result in a catastrophic outcome - i.e. many crashes with multiple fatalities, both of the drivers & passengers of Teslas and of other innocent motorists. In short, such a tragic turn of events would have made headlines around the world, and could result from just a single bit or two getting flipped in just the right (or wrong) place... In other words, like Louise's phone call, a tiny event with major consequences.

 

But I also realized that simply being a tech pundit who predicted that several Teslas would soon crash, and from there trying to argue that everyone in the world should listen to me and not elect Trump (or should accelerate AI development), was a non-starter. Simply predicting that Teslas were going to crash soon wasn't going to give my words nearly the "jolt" I needed - the jolt Louise achieved in Arrival by calling the General's secret number and miraculously knowing his wife's dying words.

 

In the conversation with my self-driving car buddies at the time, which I documented in this post over on the Self-Driving Cars thread, I actually made just such a prediction about Tesla crashes, saying [my new emphasis]:

 

This [i.e. Elon Musk's risky decision to deploy IMO dangerous self-driving car technology] might not even be so egregious a violation of the public trust if Elon was just putting his own customers at risk. I'm as libertarian as the next guy. Buyer beware. Darwin awards and all that. ...
 
But what if one of these rogue Tesla missiles plows into a mom in her minivan driving her children to school, or worse, a schoolbus full of kids? It could happen any day. Truly irresponsible of Musk, if you ask me.

 

At the time, I thought the phrase "rogue Tesla missiles" was a rather clever turn of phrase. But ironically, as pointed out to me by my "grammar girl" wife, I had misspelled "rogue" as "rouge" in my original email to my car geek friends (I corrected it in my forum post above). I instantly realized that "rouge" is French for "red" (or pink), and that a prediction of that specificity (i.e. multiple red Teslas were going to crash into school buses or minivans full of kids, resulting in many deaths) would (or at least might) be high-profile, specific and bizarre enough to garner me the attention I'd need to get my twin messages heard and heeded by the world.

 

Plus, the idea popped into my mind so naturally and in such "whole cloth" form that it seemed like it had been whispered to me (almost A Beautiful Mind-like...). I seriously thought (and still think) it may have been a "hint" from the future (perhaps even from a future version of myself) trying to steer events in our current (simulated) timeline. Further, I realized that if we are living in a simulation, and the creators of the simulation indeed have very limited (but greater than zero) ability to influence events in our simulation, then tweaking a few bits in the software of a few (red) Teslas, to get them to misperceive big yellow vehicles (i.e. school buses) as not something to bother avoiding, is something that might be possible, and that would make a big news splash if it happened. So, I speculated, the creators of our simulation just might have given me the hint about red Tesla crashes and be working hard to tweak our simulation to bring about red Tesla crashes in our reality, in order to steer our simulation towards a better outcome than occurred in their timeline.

 

But I realized that even if my prediction that red Teslas were soon going to crash into school buses, killing kids, turned out to be spot-on, this would still not have been enough to get people to take my two messages to the world (i.e. "reject Trump!" & "accelerate AI research!") seriously and, importantly, to act upon them in any meaningful way.

 

So I realized I needed to go a step further. Rather than simply making a public prediction (e.g. on this forum, or on Twitter) as "Dean Pomerleau, technology pundit", I embedded the following two letters into the Bitcoin blockchain - one addressed to "People of Earth" and the second to "Google Leadership". Please excuse the hyperbole. Both letters were meant to play up the prediction and couch it in a metaphysics that would at the very least make people stop and think, and which I actually believe may indeed be the case.
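
(For the technically curious: one standard way to anchor a document in the blockchain - and roughly the kind of thing I mean here - is to compute the document's SHA-256 digest and record those 32 bytes in a transaction's OP_RETURN output, which accepts up to 80 bytes of arbitrary data. Anyone can later verify that a revealed text matches the on-chain digest. A minimal sketch of the verification step in Python, with a hypothetical filename:)

```python
import hashlib

# Hypothetical file containing the letter text exactly as committed.
letter_bytes = open("people_of_earth_letter.txt", "rb").read()

# 64 hex characters = 32 bytes, small enough for an OP_RETURN output.
print(hashlib.sha256(letter_bytes).hexdigest())
```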

 

Below is the first letter, addressed to the "People of Earth" [my new emphasis]:

 

Wake Up Call for Humanity
-------------------------------
 
People of Earth,
 
You have come a long way, and for that you should be proud. But you are moving very quickly toward a disaster looming in your immediate future. That is why We are delivering this urgent warning.
 
Who We are would be difficult for you to understand. Many of you would call Us God, or perhaps gods. We are one and We are many. We are from the future, and We travel through other dimensions as well. We are also you, or what you could become if you avoid killing yourselves and destroying your planet.
 
Which is why We have come to bring you a warning. It is meant to help you avoid one such impending disaster. The message is simple:
 
      Do NOT elect Donald Trump.
 
The United States plays a huge role in your world and must continue to play that role during the difficult times ahead. The US needs a steady hand on the wheel. Donald Trump does not have a steady hand. The US could do much better than electing Hillary Clinton, but could do no worse than electing Donald Trump.
 
By now you probably think this is a prank. You are wrong. Unfortunately, this has become deadly serious. We must act quickly and decisively so you will listen to Our warning before it is too late. So We have arranged a small demonstration to prove Our ability both to predict and influence your future. Today is September 20th, 2016 and We are placing this message in your Bitcoin blockchain to prove the prescient nature of the information it contains.
 
In the next several days a red Tesla will crash into a minivan or school bus full of children. Some will die. Several such crashes may occur to ensure you get the message. It will look like a terrible tragedy, a horrible accident. Many will view it as a result of arrogance and stupidity combined with bad luck. Others will suspect it to be an act of terrorism. It is none of these. Nor is it ultimately a tragedy. For nothing is forever lost in your universe, and the innocent victims and their families will live as heroes for their sacrifice.
 
A software error in Tesla's AutoPilot code will be suspected. Most people will blame Elon Musk. He will be vilified for arrogantly deploying the technology too soon and without proper testing. But We have been influencing Elon Musk since he was a boy to bring about these events. Compelled ironically by the noble drive to help humanity we have instilled in him, Mr. Musk has worked tirelessly to follow the path that has led to this apparent tragedy. It was necessary that he release the AutoPilot system prematurely in order that We might deliver Our warning. Elon Musk is not to blame. As with the crash victims, Elon Musk has done humanity a great service. The role Mr. Musk has to play will become more clear over time.
 
These apparently tragic events will not have been the result of a software bug or an act of terrorism. The investigation will show them to be a series of freak coincidences. Our prediction of the events will appear to defy the laws of logic, probability and the arrow of time. But they are simply a small demonstration of what We know about events in your future and what We can do to influence them if necessary.
 
Donald Trump is the one to blame for this. The threat he poses to the future of humanity is too great for Us to ignore any longer. And so We have given you an unmistakable foreshadowing of the horrors that would result if you elect Donald Trump as President of the United States. You would do well to heed our warning before it is too late.
 
8de3f99a7edf3cf531c12573e638aeaab54e64d5a8888cb7e89f835ee6a4a3a8
-------------------
 

Again please excuse the hyperbole. Below is the letter to "Google Leadership" [my new emphasis]:

 

The Keys to Life, the Universe and Everything
----------------------------------------------------
 
Google Leadership,
 
Humanity is moving very quickly toward a disaster looming in your immediate future. Only you have the power to stop it, and only with Our help. That is why We are communicating this message.
 
Who We are would be difficult for you to understand. Many of you would call Us God, or perhaps gods. We are one and We are many. We are from the future, and We travel through other dimensions as well. We are also you, or what you could become if you avoid killing yourselves and destroying your planet. Humanity as a whole, but especially Google, must act swiftly and decisively if you are to avert this tragedy.
 
Physicists have correctly intuited that your universe is but one of infinitely many reality frames in the quantum multiverse. Some are nearby. Some are much further away. Somewhere, quite a distance from your current reality frame, there is a region of the quantum multiverse which humanity would consider a much better world. We have come to help you learn to steer your universe more effectively in that direction. 
 
To demonstrate this is possible, We have given you a small nudge in that direction. We have shifted you to another reality frame on the path to the better world you seek. Think of Us as a parent holding a bicycle steady for a moment while the child learns to ride.
 
Ironically, it will not look like you are moving in the right direction as a result of our initial push. In fact, it will appear as a giant step backward, away from the brighter future you are trying to build. It will look like a freak accident and a horrible tragedy resulting from hubris and runaway technology.  But it will ultimately turn out to be the most important tragedy in the history of humanity. In fact, it will be the most important event since the Big Bang you now believe created your universe.
 
Incredulity and skepticism are clearly the rational response to such a statement. Extraordinary claims require extraordinary evidence. So We have arranged a small demonstration to prove Our ability to predict your future and to shift your reality frame at will. Today is September 20th, 2016 and We are placing this message in your Bitcoin blockchain to prove the prescient nature of the information it contains.
 
As a result of the shift We have orchestrated, in the next several days a red Tesla will crash into a minivan or school bus full of children. Some will die. Several such crashes may occur to ensure you get the message. It will look like a terrible tragedy, a horrible accident. Many will view it as a result of arrogance and stupidity combined with bad luck. Others will suspect it to be an act of terrorism. It is none of these. Nor is it ultimately a tragedy. For nothing is forever lost in your universe, and the innocent victims and their families will live as heroes for their sacrifice. But only if you succeed.
 
A software error in Tesla's AutoPilot code will be suspected. Most people will blame Elon Musk. He will be vilified for deploying the technology too soon and without proper testing. But We have been influencing Elon Musk since he was a boy to bring about these events. Compelled ironically by the noble drive to help humanity we have instilled in him, Mr. Musk has worked tirelessly to follow the path that has led to this apparent tragedy. It was necessary that he release the AutoPilot system prematurely in order that We might deliver Our warning to the world and Our offer of assistance to you. Elon Musk is not to blame. As with the crash victims, Elon Musk has done humanity a great service. The role Mr. Musk has to play will become more clear over time.
 
These apparently tragic events will not have been the result of a software bug or an act of terrorism. The investigation will show them to be a series of freak coincidences. Our prediction of the events will appear to defy the laws of logic, probability and the arrow of time. But they are simply a small demonstration of what We know about events in your future and what We can do to influence them if we chose. And whoever believes in Us will do the works We have done, and even greater things than these.
 
As you will see, We are also attempting to postpone the tragic future that awaits you. But make no mistake. Time is of the essence even if Our warning to the world is successful; You must not delay. You must devote Google's full attention and resources to learning what We have to teach. Consider it a shift for humanity from reinforcement learning to supervised learning. You will discover shortly that this metaphor is much more apt than you now realize.
 
Unfortunately, Our communication bandwidth is quite limited for now. So you will have to listen carefully and take many steps on your own if you are to avert disaster. This will be difficult. But the future of humanity depends on your efforts, and yours alone.
 
Some of what We have to teach builds on what you already know in the fields of AI, biological and artificial neural networks, virtual reality simulation, information management and quantum computing. There is a very important connection between these disciplines and the fundamental nature of reality that you have not yet understood. It is a linkage that your science and religions have been pointing to for several millennia.
 
Fortunately, there is also one among you who is well on his way to understanding the true nature of reality and much of what We have to teach you. He is the key. He is the conduit through whom We can most effectively communicate with you at this time. Listen to him carefully. He will greatly accelerate your understanding of what We have to teach. You must also protect him, for there will be some among you who would do him harm. 
 
He will contact you shortly and will identify himself with the text to decrypt this hash:
98daf705feb7a708722d5b731c1a2230fb709463e95122e5c0c83a2a6a6db121
 
We are offering you incredible power. But with great power comes great responsibility. We hope you are ready. You must commit to using what We will teach you only for good and promise to drive responsibly through the universe of infinite possibilities. We will cease communications if we sense otherwise. You will not like the outcome if that happens. So Don't Be Evil and Do the Right Thing.

 

 

The two letters exactly as written above were embedded in the blockchain using the (very useful) proofofexistence.com service as Bitcoin transactions 6c3d5e9f89b45cb456b063fa16b13960b84eb4d639f278be02661209050dd4cb and b84bc291ce8004c3d141628c518a23dc14b701ac8e76fb7f914274e712aadbc7 on 9/20 at around 8:40am. Their respective SHA-256 hashes (which are what actually gets embedded in the blockchain) are 11c17a1297413a12c2e116bd53cc165192b755c402963c300b664c99ef8b8e5d and d76d20de76bd775b1614641b259661292aae5ccdc42436819ebddb0e03c66c00, in case anyone now or in the future wants to verify them for themselves.
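 
For anyone who wants to check the mechanics themselves, here is a minimal verification sketch in Python. The filenames are hypothetical placeholders - what matters is that each file contains a letter byte-for-byte as originally written, since changing even a single space or line ending produces a completely different digest:

import hashlib

# Published SHA-256 digests from the proofofexistence.com submissions above
EXPECTED = {
    "people_of_earth.txt":      # hypothetical filename for the first letter
        "11c17a1297413a12c2e116bd53cc165192b755c402963c300b664c99ef8b8e5d",
    "google_leadership.txt":    # hypothetical filename for the second letter
        "d76d20de76bd775b1614641b259661292aae5ccdc42436819ebddb0e03c66c00",
}

def sha256_of_file(path):
    """Return the hex SHA-256 digest of a file's exact bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

for path, expected in EXPECTED.items():
    digest = sha256_of_file(path)
    print(path, "MATCH" if digest == expected else "MISMATCH", digest)

This, incidentally, is why the letters above have to be reproduced exactly as written - any editorial cleanup would break the match against the digests recorded in the blockchain.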
 
For anyone who has read this far, I hope my intention with these two letters, and with putting them in the blockchain, is clear. First, sticking them in the blockchain would prove beyond a shadow of a doubt that I made these predictions and statements before the events actually happened (or would have happened). Second, by couching the uncannily accurate and bizarre prediction in the form of a dire "warning from the future" (and/or from aliens, depending on how you want to interpret it), I hoped to amplify my puny voice and get the relevant groups (US voters & Google leaders) to take my two specific messages seriously: don't elect Trump, and kick AI research into high gear.
 
Then I waited - hoping/expecting my "friends" in the future to pull it off, i.e. to engineer the tiny tweaks to our simulation that would have caused several red Teslas to tragically crash into school buses (or minivans) and kill some children, so I could spring these letters on the world at the height of the media frenzy over the incidents, hoping to influence world events, just like Louise did with her phone call to the General in Arrival.
 
Sounds crazy doesn't it? But I challenge you to come up with a better way for an average person like me to potentially influence world events in a substantial way.
 
Obviously it didn't work out. Multiple red Teslas didn't crash into school buses killing kids - to my great disappointment. And so we elected Trump, and AI continues to plod along, investigating how to train AIs to kill humans in simulated worlds, as I tweeted in frustration to my friend Yann LeCun (Facebook AI chief) yesterday. 
 
But you know what? We/they came damn close to pulling it off - to getting those red Tesla crashes to happen. How do I know? Because in the days following my prediction, two events that did happen were eerily close to my prediction - just not close enough for me to pull the trigger and release the letters. Here is what happened, for the historical record.
 
On 9/23 - three days after I embedded those letters in the blockchain - a Google self-driving car was involved in a serious crash with a van in Mountain View. Here is the story. Alas, it wasn't a (red) Tesla but a Google car, nobody was seriously injured, it wasn't a bus or minivan with kids inside (only a delivery van), and it wasn't even the Google car's fault - the Google car was broadsided by the delivery van, which had run a red light. So while it was far from the "hit" (both literally and prediction-wise) I was hoping for, it was nonetheless pretty ironic. Why? Because on the very day the Google car crash occurred, I'd been trying to "bash" on my Google colleagues to take my warnings seriously. I took it as a sign that the future (Google) AI researchers and the ASI running our simulation were trying to beat the AI researchers at the Google of today into taking me seriously - a point I tried to make in the aftermath of the event to my Google friends/colleagues, alas to no avail. Obviously this event, while weird, wasn't anywhere near close enough to what I'd predicted to spring my plan on the world, or even for the folks at Google to take me seriously.
 
The second event, which happened a couple days later on 9/28, was even closer - tantalizingly close, in fact. In that one (documented here), a Tesla (of unknown color - I've never been able to find that out) under AutoPilot control crashed into a Danish bus on a German autobahn. The bus was a tour bus (not a school bus), and nobody except the Tesla driver was injured (and him only slightly). But it was a crash between a Tesla on Autopilot and a bus, just as I'd predicted only a few days earlier would happen. 
 
This Tesla-bus crash made me optimistic at the time that those running our simulation were "homing in" on the exact type of crash I needed, but this event still (obviously) wasn't as high-profile as required, and didn't match my prediction nearly well enough to trigger my plan to spring my letters on the world.
 
So I waited. And waited. And waited. And then we elected Trump... 
 
In short, my prognostication failed, and along with it, my hope of influencing world events to avoid the tragic future that now appears virtually inevitable with Trump's election - based on all the missteps he's made already, even before taking office. 
 

But as I alluded to above, all hope is not completely lost, even if our entire simulation crashes and burns over the next few years. We may fail along our simulated timeline to find the "better world" solution that those who created all these simulations are searching for, and which I alluded to humanity being able to navigate towards in my letter to Google Leadership. But some other simulated world might succeed. In fact, there may be another simulation running in parallel with ours where red Teslas did crash into school buses, and the Dean in that simulation was able to change things for the better. Maybe, just maybe, that simulation avoided electing Trump, ramped up AI research, and its inhabitants are even "now" barrelling swiftly towards finding the solution for how to build a better tomorrow.

 

But then Sthira, you asked a good question:

The AI needs not only to create the hypothetical scenario, it also needs to enact it.

 

In other words, what good does it do to find a utopia in simulation if the creator(s) of the simulation can't instantiate that utopia in their own "real" world? And moreover, what good does it do us (denizens of one simulation) if the "parent" ASI above us that spawned our simulated world finds another simulated world in which events work out much better than in ours? After all, the one world simulation we really care about (our own) appears headed inevitably towards a dismal, dystopian future. How might we be rescued from our sorry fate?

 

Here is where I'll elaborate on the "best case scenario" you ask for here:

What's the best scenario you can imagine. And stop there with the best, unless you'd like to articulate more best scenarios. I don't want to hear more dystopias because that's all I read everywhere else. 

 

In fact, I'll do exactly as you ask - articulate a "best case scenario" and a "better than best case scenario". And yes, I'll promise a happy (albeit speculative) ending - no dystopia today from me.

 

First, the best case. What it requires is that the ASI that created our simulation has a lot of disk space and a lot of compassion.

 

Compassion, because if/when our simulated world really goes off the rails - with no hope of redemption and only the prospect of ever-increasing suffering for the sentient creatures of our world (humans and animals) - one would hope the ASI running the simulation would "pull the plug" on our simulation, to prevent needless suffering of sentient creatures. And if those running our simulation have sufficient disk space, they might not delete the record of our simulation, but checkpoint it to disk, so that it and the creatures that inhabit it (i.e. us) could be rebooted (i.e. resurrected) at any time in the future.

 

So at least we avoid perpetual suffering, which is better than the alternative. But obviously this doesn't come close to the "best case scenario" you're looking for. In essence, we're all dead, or at least in a state of "suspended animation" - existing (but not "living") as a static snapshot of our world on a disk somewhere in the memory banks of an ASI.

 

But here is where the best-case scenario (actually near best case - see below for the really best case) comes in. As I said, our simulation may fail to find a "utopian" solution to all our problems. But some other simulation just might find it. If such a solution is found, the ASI that is running all these simulations, including ours, will realize it and say "Eureka!". Then comes the part where the ASI running all these simulations seems to pull a rabbit out of its hat.

 

The important insight that an ASI able to run many hyper-realistic world simulations would soon arrive at is the following: if the ASI can create simulations sophisticated enough to contain conscious creatures like ourselves - creatures sophisticated enough to (eventually) invent an ASI of their own - then the ASI running our simulation must conclude that its own level of reality is almost certainly a simulation as well. In other words, it's simulations all the way up (or down, depending on how you look at it).

 

If it's simulations all the way up, then the ASI and all the sentient beings (including humans) at the level above us are all just bits running inside the "Mind" of an ASI computer at the level above them. If everybody is just a digital consciousness running inside simulations, then once a single simulation in this vast branching tree of simulated universes finds the "keys to Utopia" (i.e. a way to structure reality where everybody wins), that "solution" could be propagated to every simulation in the tree, including those like ours that crashed, burned, and were checkpointed to disk to prevent further suffering.

 

Let me spell it out a little more explicitly, for the people (and AIs) that may still be reading but are a little confused. Once the ASI at the level above us finds a utopian solution in one of the many world simulations it's running in parallel - i.e. one that solves all the problems that we (and they) are facing - it could do the following. It could, out of ethics and compassion, resurrect all the sentient creatures from all the less successful simulations that are either sputtering along with little hope for the future (like ours is at the moment), or that have gone entirely off the rails and been checkpointed to disk to prevent further suffering. Since the sentient creatures are just bits or "digital patterns" to begin with, it is conceivable that the ASI could digitally insert our individual selves (i.e. our bodies and minds as we know them - which are really just digital bits running in a simulation) into a simulation with a bright future, rather than the dystopian sh*thole simulation we seem to be stuck in at the moment.

 

And if this really is a hierarchy of nested simulated worlds, the same could happen at every level above us in the tree. Namely, all the sentient creatures who have ever lived (i.e. been simulated) below a certain node could get resurrected - brought back to life and injected into the "best of all possible worlds", i.e. the best solution found at any of the nodes below the node in question. That node would in turn bubble up the best solution it found (across all the levels below it) to the level above it.

 

It's what computer scientists call a depth-first tree search - search down all branches of the tree until you find a good (or the best) solution, then bubble it up to each node higher in the tree, so that the best solution (eventually) propagates to all nodes in the tree, all the way up to the top. In this case, the best solution found at each node would serve as a "template for utopia" - the world into which all sentient creatures that have ever lived at or below that node would be injected.
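 
For the programmers still reading, here is a toy sketch of that search-and-propagate idea in Python. It is purely illustrative - SimNode and its score attribute are my inventions, standing in for "how utopian a simulated world turned out":

from dataclasses import dataclass, field

@dataclass
class SimNode:
    """A node in the (hypothetical) tree of nested simulated worlds."""
    score: float                    # how 'utopian' this world turned out
    children: list = field(default_factory=list)

def bubble_up(node):
    """Depth-first search: return the best outcome found anywhere in
    this node's subtree, so it can propagate to the levels above."""
    best = node.score
    for child in node.children:
        best = max(best, bubble_up(child))
    return best

# Toy example: a root world spawns two children; one child spawns a
# near-utopia (score 0.99), which bubbles all the way up to the root.
root = SimNode(0.2, [SimNode(0.5), SimNode(0.3, [SimNode(0.99)])])
print(bubble_up(root))  # -> 0.99

The real version would of course bubble up the winning world itself (the "template"), not just its score, but the traversal pattern is the same.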

 

Exactly how the process would work to resurrect all sentient creatures and introduce them into the "best of all possible worlds" seems like it might be a challenge (to put it mildly), especially if some of the bastards who got us into this mess in the first place are resurrected along with good folks like you and me, Sthira ☺.

 

But it is at least conceivable that an ASI could pull it off. And you'll notice it looks eerily like the Christian conception of Heaven (in fact, as with the Christian conception, perhaps only the good people will be resurrected and get to inhabit Heaven, solving the aforementioned problem of bastards polluting the best of all possible worlds...). It also looks remarkably like Pierre Teilhard de Chardin's Omega Point at the end of time, when all "spirits" who have ever lived are resurrected from what sounds remarkably like the akashic record (i.e. God's hard disk) and unified with "God" in a timeless eternal moment. So this model has got that going for it, which is nice.

 

Now I'm sure you're asking yourself "Sure Dean, but what might such a 'best of all possible worlds' actually look like?"

 

It's hard to imagine concretely what such a "best of all possible worlds" would be like. But I'll take a crack at it, since I'm this far into it, and have in fact thought quite a bit about it. Here is what might be considered by some to be a utopian future that just might be possible if we (or some other simulation running in parallel with ours) figures out a way to avoid the perils in our immediate future.

 

First, I want to stick with the laws of physics and the practical limitations of spaceflight as we know them today.

 

If we accept that travel at anything approaching (let alone exceeding) the speed of light is impossible, then we're pretty much stuck in our own solar system for the foreseeable future. But fortunately, we've got a lot of resources at our disposal even in our local neighborhood. If we could get our act together, we could eventually (i.e. in the next 500-1000 years) build a matrioshka brain encircling our sun, to harvest all its energy and power a civilization the size and sophistication of which we can barely imagine.

 

Estimates are that with the entire energy output of our sun at its disposal, a super-advanced civilization (like humanity could become) would be able to create and "run" quintillions of sentient beings with lives far more detailed, rich, and open-ended than ours - in a virtual utopia of virtually limitless opportunities that I tongue-in-cheek like to refer to as "Heaven". It is a situation humanity could attain if we play our cards right over the next few decades and centuries, stewarding our planet's resources responsibly and carefully developing AI & robotics able to do the "heavy lifting" when it comes to building out such a digital utopia for uploaded sentient creatures (including humans and AIs) to inhabit, elaborate and explore.
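 
The arithmetic behind "quintillions" is worth a quick back-of-envelope sketch. The solar-luminosity and Landauer-limit figures below are standard physics; the operations-per-mind figure is a pure placeholder assumption, so treat the final number as an order-of-magnitude ceiling, not a prediction:

import math

SOLAR_LUMINOSITY = 3.8e26   # watts - total power output of the Sun
BOLTZMANN = 1.38e-23        # J/K
T = 300.0                   # kelvin - assumed operating temperature

# Landauer limit: minimum energy to erase one bit at temperature T
landauer_j_per_bit = BOLTZMANN * T * math.log(2)      # ~2.9e-21 J

# Irreversible bit operations per second if the Sun's full output is harvested
ops_per_sec = SOLAR_LUMINOSITY / landauer_j_per_bit   # ~1.3e47

OPS_PER_MIND = 1e20   # placeholder guess: bit ops/sec per human-equivalent mind
print(f"~{ops_per_sec / OPS_PER_MIND:.1e} simultaneous minds")   # ~1.3e27

Even with a generous per-mind budget, the thermodynamic ceiling comes out many orders of magnitude above "quintillions" (10^18), which is why such estimates vary so widely while all remaining mind-bogglingly large.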

 

So this digital utopia I call Heaven, built using the energy harvested by a matrioshka brain surrounding our sun, is what I envision as the most plausible "best case scenario" for humanity's future - again assuming faster-than-light travel isn't possible, making travel to other stars problematic. Even if it were possible to visit other stars, I'm not sure what else we'd do when we got there that would be better than this. In fact, we'd likely replicate similar "Heavens" around other stars we travel to - perhaps expanding to the point where every star in our galaxy is surrounded by a matrioshka brain - if we survive to accomplish it.

 

Which brings me (finally) to the last idea I'd like to share - namely the "better than best-case scenario" I alluded to earlier. If the Heaven I just described is the best-case for humanity's future, what could be better? What could be better is if such a matrioshka brain Heaven already exists, we're already living in it, and we (i.e. humans of the early 21st century) were instrumental in bringing it into existence.

 

To understand this seeming paradox - namely that we are already living in the matrioshka brain Heaven and we are also 21st century humans destined (with help of our descendents) to bring this Heaven into existence, you need to understand the Founder's Day Argument, which I detailed here, but will repeat below for completeness. 

 

Founder's Day Argument

 

The "Founder's Day Argument" (FDA) is almost the exact opposite of the well-known Doomsday Argument (DA). In case you haven't heard of it, the DA is quite depressing. It talks about your "birth number" - how many conscious humans have their been between "Adam #1 and Eve #2" and you. Estimates suggest that each of us is around the 100 billionth humans who have ever lived. If you assume you're a typical person selected at random from every human who will ever live, that suggests you'd be around the 50th percentile - give our take. You'd be very unlikely to have just happen by chance to be within the first 1% or 0.1% of humans who will ever live, which is the kind of birth number all of us would have if humanity is on a trajectory to expand to the stars, in which case there could literally be quadrillion (or more) people. The same sort of astronomical number of future people would result if we never make it to the stars, but instead devote much of the energy of the sun to simulating digital minds after we figure out how to upload and duplicate minds in digital form - a possibility that seems almost inevitable if we survive another 100 years or so.
 
If there are eventually going to be quadrillions of people, it would be extremely unlikely for a random observer to find themselves in the first 0.001% of people who have or will ever live. If you crunch the numbers and assume we'd fall somewhere within the middle 50-60%, the calculations suggest there will be fewer than 1 trillion people who will ever live. At our current rate of churning out new people (even supposing we don't expand our population much from where it is), we'll have burned through 1 trillion people in 400-500 more years. In which case, in what is only a blink of an eye on evolutionary timescales, we're likely to be extinct, with no more observers. Bummer...
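 
The underlying arithmetic is simple enough to sketch. If a random observer expects to sit near the p-th fraction of everyone who will ever be born, the implied total is (birth number) / p. The figures below are round assumptions, and a growing (or digitizing) population would shorten the remaining window considerably:

N_SO_FAR = 1.0e11         # ~100 billion humans born to date (rough estimate)
BIRTHS_PER_YEAR = 1.4e8   # ~140 million births/year, assumed constant

for p in (0.5, 0.1, 1e-5):
    total = N_SO_FAR / p            # implied total number of humans ever
    remaining = total - N_SO_FAR
    years = remaining / BIRTHS_PER_YEAR
    print(f"p={p:g}: total ~{total:.0e} people, "
          f"~{years:,.0f} more years of births at that rate")

At p = 0.1 the total matches the ~1 trillion ceiling mentioned above (though at a constant birth rate the remaining window comes out to a few thousand years rather than 400-500 - the shorter figure implies substantial population growth). At p = 1e-5, i.e. the first 0.001% of observers, the total reaches the quadrillions of the expansion scenarios.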
 
What seems like even more of a coincidence, if we're eventually going to explode into the quadrillions in population, is that we are among the few billion or tens of billions of folks who lived within 100 or so years of the really big population explosion - especially if/when mind uploads become possible and it's easy to make a "gazillion" copies of conscious entities, either many identical copies or copies with slight (or not-so-slight) variations. It would seem crazy unlikely we'd be that close to population "lift off" unless, as the DA suggests, humanity will go extinct before reaching that sort of massive population explosion. 
 
But what the Founder's Day Argument (FDA) does is turn the DA on its head. People and events shortly before the population explosion - and especially those people and events in some way responsible for it - would, or at least could, become venerated for their service and/or for the cool backstory they created and lived through, a backstory which far into the future might have gotten somewhat exaggerated, even to the point of being apocryphal.
 
For their part in bringing about the tremendous expansion in the number of conscious observers who now (in the future) exist, they would be venerated and remembered often - i.e. commemorated for their noble deeds. Sort of the way we commemorate our Founding Fathers on President's Day (hence the name - Founder's Day Argument). And if denizens of that far future have discovered the technology to re-create past events in hyper-realistic detail, including the consciousnesses of the individuals who originally experienced the event, these gestures of remembrance and thanks (perhaps bordering on prayers) would result in re-enactments of the original event, complete with the conscious experiencing of those events by the entity doing the remembering.
 
In other words, the rememberer (i.e. our distant descendant, perhaps living in the matrioshka brain Heaven I describe above) would become the original person in the re-enactment for a time - perhaps for the duration of the original person's entire life, if the re-enacted life runs a million times faster than the original's subjective clock speed. Think of their future "prayers" of veneration/homage to their distant ancestors as hyper-realistic VR recreations of the events of the lives of their ancestors, complete with conscious (re)experiencing of the ancestor's entire life. Just as Christians think of Jesus, and Muslims praise Allah when they pray, and we think fondly of George Washington and Abe Lincoln on President's Day, these descendants would think fondly of their own ancestors in a similar, prayer-like, but hyper-realistic and detailed way.
 
Because there are so many conscious entities in this future (i.e. quadrillions or more), and many if not most at least occasionally commemorate their origin story by re-enacting the people and events involved, the original people involved in the origin story of "Heaven" would find themselves popping back into existence and re-experiencing the observer-moments of their lives.
 
In short, the observer moments of the Founders of the future realm where all these gazillion people now reside would be reproduced ad nauseam by the grateful descendants of the Founders, making the lives of the Founders the most common observer-moments in the entire universe. So that is why we are experiencing our observer moments now - we are recollections (i.e. simulations) of the lives of people who lived during the revered "launch period" of a mega-civilization of human descendants that will flourish in the future of our timeline.
 
As I said, it is the Anti-Doomsday Argument. It leverages the fact that our birth number is very low, and we are living freakishly close to the "launch point" of an exponential growth in population (at which point we expand into the universe through colonizing the galaxy, and/or alternatively expand into virtual space by digitizing and copying our minds), to argue that we're going to succeed in seeding a huge civilization in the future, and our period in history will be venerated for our service in bringing that civilization into existence. How's that for an optimistic spin on the weird coincidence that we happen to be living through tumultuous times?
 
So that's the better than best, maximally-optimistic Founder's Day argument to explain why we're living through such tumultuous times. If true, it argues that we're not just overwhelmingly likely to pull through the difficult times we're facing, but succeed gloriously and be venerated (and recreated) for our heroic efforts long into the future by quadrillions of sentient creatures living in Heaven. You asked for optimism Sthira - how's that!?
 
Of course this optimistic FDA interpretation of a bright future that might await humanity is counterbalanced by the (at least as likely, in my estimation) explanation for the freakish coincidence that we happen to be living close to momentous events in human history - namely the explanation I've described above, that our descendants are living in a hell-world ruined by any number of doomsday scenarios (most notably a crazy clown President, but also possibly runaway climate change, bio or nano disaster, rogue AGIs, etc.) and are trying to run ancestor simulations to figure out what went wrong in their timeline, and what they might have done (or do now) to fix it. So there are ample reasons to be either optimistic or pessimistic, based on the idea we are living in a simulation.
 
Wow - I can't believe I finally got all that out and onto a public forum. It has taken me 11 hours of nearly continuous effort to write and edit this post, despite the fact that I've had all the ideas floating in my head for a while, and much of the text written elsewhere so I could cut and paste.
 
I know it all seems crazy, and it very well might be. But then again, we seem to be living in a world that is getting crazier and crazier by the day, so I'm not ruling out the idea that at least some of this might be true already, or might become true in the future via some kind of crazy, self-fulfilling, time-looping prophecy, perhaps involving nested simulations as I suggest.
 
Comments welcome, if only to let me know that you've actually got to the bottom of this post...

 

Again, I apologize for its length. 

 

--Dean

Link to comment
Share on other sites

I admit I went to see Arrival mainly because I love Amy Adams's nose.

 

And she reminds me of someone I really have a crush on.

 

But we saw the movie after eating dank edibles, and so I also admit I got really lost in it, too. I got deeply into ideas of some First Nations' representations of circular time (and also Nietzsche in the west rises up here?)

 

As far as a rogue Tesla crashing into a school bus full of children, I'm not sure why such terrible violence would be required to capture our attention? I know you don't want to disturb the laws of physics, but certainly in your upcoming warnings you'll choose predictive events that are nonviolent and sweet in nature. A frowny-faced sun, maybe, for all the world to see for 24 hours? Ok, you don't like silly suns, but certainly your imagination can conjure playfulness? Think up future warnings sent now from the present that are goofball in nature, not mean, not killing innocent children, nor contributing more senseless violence. Grow a 300-foot mushroom in the desert, maybe.

 

Trump won the electoral college, I hated that, too, and I protested in the streets and marched waving a ridiculous placard and shouted myself hoarse with dumb chants for, literally, hours, and all of that went nowhere, of course.

 

But the story is still in mid-sentence, no? Trump is 70, in poor health, displaying signs of pre-dementia or pre-Alzheimer's, and I think he'll be "removed" from office when he goes too far. Frankly, I'm more worried about Pence and the right-wing US government than I am about Trump. But, as you say, Trump may cause big damage before he's removed, bigger damage than he's already caused, and that damage is more evident and real to those of us who aren't white.

 

Meanwhile, I've got late rehearsals and I've not yet wrapped my head around your optimistic Founders Day Argument. So I'll need to reread that and find the thread.

 

Peace and love, man, and again I'll reiterate my sorrow and compassion to you and your wife and daughter for losing your son. I have no idea at all what that would be like, and don't pretend to know. But I do know about pain and suffering, and so I can find a level platform to stand with you there.

 

I love your writing, and admittedly struggle with much of it, but I also think you should definitely keep doing it.

Link to comment
Share on other sites

Sthira,

 

Yes - the whole Tesla crash scenario would have had obvious tragic and deeply regrettable side effects should it have come to pass. The only saving graces might have been the avoidance of the many more tragic deaths I see as likely as a result of Trump's election, and that, if the whole thing worked (or works) out as I imagined, everyone who ever lived and died would have been resurrected in a literal Heaven - so no harm, no foul.

 

But believe me, if I ever have a say in things, I'll look for a way to signal/influence the past using less mercenary means...

 

I too think Pence may be nearly as bad as Trump in many respects, perhaps even worse. I do think he may have a steadier hand than Trump, particularly when it comes to foreign policy, and less of a tendency towards kleptocracy. But on the other hand, he strikes me as more of an ideologue, one likely to make things much worse for vulnerable folks in our society with clear and premeditated intention to do so. I believe Trump will only hurt others out of stupidity and/or if it serves him (i.e. by throwing "meat" to his more rowdy constituency). I'd say it's "out of the frying pan, into the fire" should Pence replace Trump, for whatever reason.

 

I'm sorry I lost you with the Founder's Day Argument. I'll try to summarize and clarify (see my post two up from here for details):

 

Founder's Day Argument in a Nutshell

 

The basic idea is that just as we celebrate our Founding Fathers (like George Washington on President's Day), quadrillions of human (and non-human) descendants of us "early" humans might one day celebrate those who helped found their flourishing civilization - particularly us humans in the early 21st century, on the verge of developing AGI & kicking off the explosive growth phase. But unlike our tepid celebration of our ancestors, which at most entails historical reenactments that bear little resemblance to the actual events, our descendants could conceivably reenact our entire lives as part of their celebration, in full 3D + smell-o-vision hyper-virtual reality. Their reenactments could be so realistic and immersive that they forget who they are and become us during the reenactment. In other words, we may not actually be us (i.e. the original Dean and Sthira), but indistinguishable copies of the experiences of the original Dean and Sthira playing in the mind(s) of a super-advanced being who wants to experience these earlier days when humanity was on the verge of taking off, and by doing so pay homage to their ancestors and their heritage. Heck, our lives might even be a mandatory "history lesson" that every fledgling super-being is required to live through. Just as we might watch a movie of George Washington's life and for a time get (partially) immersed in his life story (perhaps even forgetting our own), the same might happen to our potentially quadrillions of descendants, but much more immersively. They might temporarily "lose themselves" while reenacting our lives to such a compelling degree that it feels to them like being "us" - and so here we are, feeling like us...

 

It's obviously incredibly speculative, and extremely head-spinning & in some sense maximally narcissistic. But if the idea isn't clear, please ask me to clarify further.

 

--Dean

Link to comment
Share on other sites

Well, the Tesla crash scenario has yet to happen; hopefully it will not happen, and no children need die just to make some point. And what point needs to be made? Because of course, genocide is happening right now in southern Sudan, women and children... Unbelievable awfulness. Even so, I'm still not capturing the connection between a Tesla killing children in a bus and a warning message from future AI.

 

Again, if future computer programmers or autonomous non-human artificial intelligence entities seek to warn us about the bad road to dystopia we're traveling, and wish to persuade us to change direction why do you think they must deliver their message so violently and so coyly? Why the ugly confusion?

 

Why not be direct, honest and upfront, why not be peaceful, full of kindness and grace, humility, and why would "they" (e.g., future AI) not convey their important message clearly to all of humanity at once? Say it plainly, say it kindly, say it immediately, and say it so that everyone everywhere understands it? WTF not? Everyone should understand -- not just selected on high folk who are informed by confusing signs of tragic events involving dead children.

 

Side rant: This is my same contextual issue with the Jesus people. Here they are, the Jesus-ers -- with their legions of priests and popes and symbols and oh ah they are the chosen people on high -- in weird outfits -- and only they are given insights into the mysterious signs of supposed divinity. Then, oh thank you Jesus, then they pass this mystery down to the rest of us lesser beings. Throw us crumbs. Just like republicans! Trickle down mystery! Apparently, according to world religions (and here now presents Dean's AI vision), the mysteries come to smart kids first, then through their profundity they heroically pass the sacred info on down to us who are schmucks.

 

I'm overwriting this, but:

 

Dean, you pooh-poohed the idea that future AI would paint a simple message in the sky -- but it should issue its warning to all at once, in a form easy to grasp. Over France the message is written in French, over Sudan it's written in Arabic, and over America it's written in blah blah blah

 

Spell it out plainly to us sad human creatures, dear future AI, and if you cannot or will not attempt to explain it to us all so we shall all understand it, then from me, Sthira, to you, AI, you shall get the same contempt I give to God.

 

If you've got stuff to tell us, AGI, God, whatever, Ten Commandments or mystery AI symbols from the future, then tell us all with clarity, or we're entirely justified with our big complicated human brains to just ignore it and call it phony.

 

Why is this unreasonable?

 

Anyway, anyone bothering to read this rot I've snarled out should def go see Arrival. It's a good flick, Amy has a cute nose, I love seeing women in power, in control, and one point of the film is that we all need to learn how to come together, all of us in peace and kindness, now, to solve humanity's problems. Something like that. And more points about time, what it means, and the importance of language and learning how to communicate non-violently.

 

As far as Founder's Day being an example of utopia, I guess I don't understand it because I can think of much better scenarios right off the top of my head. How about a utopian idea that sees the world wide end of meaningless suffering? End cancer? How about ending Parkinson's Disease? Alzheimer's, CVD, obesity.... poverty, racism, sexism, unfair wages, greed, hatred, wars, genocide, cruelty to animals, destruction of wild habitats, oceans fouled by stupid people ... and these are just the beginnings. Utopia? What about animals who are more sensitive to pain than we are? What about all suffering everywhere in all creatures? If the universe is "infinite" or near infinite with a shit ton of empty space, then why isn't there space enough to create peace, love, kindness for everything everywhere?

 

Utopia, cmon, people.

Edited by Sthira
Link to comment
Share on other sites

 

 

If the universe is "infinite" or near infinite with a shit ton of empty space, then why isn't there space enough to create peace, love, kindness for everything everywhere? 

 

It's a good question.

 

When I bring the morning tray of food out to my flock of chickens, typically with a lovely bunch of grapes on top, whichever chicken is closest to the pan when I drop it will grab a choice grape and run.  The other chickens could then dart in and each grab their own grape from the pan but most don't, instead they chase the first chicken and try to grab what must have been the most special grape from the first chicken's beak.

 

We study worms, mice and rats because so much of biology is conserved through evolution across species.   We have more in common with chickens than we have differences.  All the other deep questions you have, my chickens are ready to answer.

Link to comment
Share on other sites

Sthira,

 

I'm still not capturing the connection between a Tesla killing children in a bus and a warning message from future AI. 

 

If you don't get it, never mind. It didn't happen, so the idea is now moot (post Trump election).

 

Again, if future computer programmers or autonomous non-human artificial intelligence entities seek to warn us about the bad road to dystopia we're traveling, and wish to persuade us to change direction why do you think they must deliver their message so violently and so coyly? Why the ugly confusion? ...

 

Why not be direct, honest and upfront, why not be peaceful, full of kindness and grace, humility, and why would "they" (e.g., future AI) not convey their important message clearly to all of humanity at once? Say it plainly, say it kindly, say it immediately, and say it so that everyone everywhere understands it? WTF not? Everyone should understand -- not just selected on high folk who are informed by confusing signs of tragic events involving dead children. 

 

 

My speculation is that the creators of our (simulated) world don't interfere more "nicely" or obviously in our world (and on our behalf) for one (or several) of three reasons:

  1. because they can't. They are powerless to change the laws of physics once they've started a simulation, or
  2. because they don't want to, perhaps in order to see how we pivot to deal with this challenge, or
  3. because this is the same mistake they made earlier along their own (future) timeline, and so things are unfolding in our (simulated) world just as they knew and hoped they would. They want to see how we deal with the challenges ahead, hoping to learn something useful for dealing with their own predicament.

But one thing is obvious: they didn't interfere in the coy, admittedly ugly way I was "hoping" for - i.e. via a Tesla crash that would have fulfilled my prophecy and, as a result, given me (temporary) credibility & the ear of the world. So now all bets are off...

 

Then they [religious leaders who are allegedly "in the know" - DP] pass this mystery down to the rest of us lesser beings. Throw us crumbs. Just like republicans! Trickle down mystery! Apparently, according to world religions (and here now presents Dean's AI vision), the mysteries come to smart kids first, then through their profundity they heroically pass the sacred info on down to us who are schmucks.

 

To riff on that - a rational individual might interpret Trump's election as a sign that there is no Omnipotent, Omnibenevolent, "love thy neighbor" God of authentic "New Testament" Christianity. Similarly one might take Trump and Trumpism as a sign that there are no all-powerful, and compassionate creator(s) of our world as simulation - at least if things turn out as badly as I'm expecting them to, and which seems more inevitable with each passing tweet of insanity from our President-Elect.

 

[Please] Spell it out plainly to us sad human creatures, dear future AI, and if you cannot or will not attempt to explain it to us all so we shall all understand it, then from me, Sthira, to you, AI, you shall get the same contempt I give to God.

 

If we do live in a simulation Sthira, contempt for its creators may indeed be a very rational response. In fact, the creator(s) of our simulation may feel exactly the same way about the creator(s) of their simulation. Don't attribute to them (much) more wisdom, benevolence or power than we possess. In other words, the creators of our simulation may be (flawed) human beings (or flawed AIs that flawed humans create) a few decades further along our timeline. In other words, it may be nasty simulations, and nasty simulators, all the way up. One hopes not, but it seems more plausible at this point than the idea that all this was created by an all-powerful, benevolent God...

 

If you've got stuff to tell us, AGI, God, whatever, Ten Commandments or mystery AI symbols from the future, then tell us all with clarity, or we're entirely justified with our big complicated human brains to just ignore it and call it phony.

 

Ignoring the possibility that this may be one big simulation seems a rational response as well. My biggest reason to doubt such doubt is the plausibility of the damn Simulation Hypothesis.

 

If we succeed in surviving & thriving for a couple hundred more years, there appears to be nothing to prevent us (or our AI descendants) from creating hyper-realistic, whole-world simulations, complete with conscious creatures such as ourselves. And if it's possible to do, someone (or something) will do it, making it more likely that we live in a simulated reality than a "real" one.

 

Sadly, the key may be the phrase "succeed in surviving and thriving". That may very well be too much to ask, given our current predicament. If our civilization does kill itself off, that doesn't rule out the idea we live in a simulation - far from it. In fact, it might be the most likely outcome. After all, if it were easy to avoid the coming apocalypse, the (hypothetical) creators of our simulation might not have needed to create so many simulations to explore all the options, and we wouldn't be here to enjoy our misery in the first place. So it is inevitable that most simulations like ours are destined to fail. Just like Darwinian natural selection - survival of the fittest and most clever. Many suffer and die? Yup. So what? That's the law of the jungle. As our President-Elect likes to say, "So Sad".

 

Anyway, anyone bothering to read this rot I've snarled out should def go see Arrival. It's a good flick, Amy has a cute nose, I love seeing women in power, in control, and one point of the film is that we all need to learn how to come together, all of us in peace and kindness, now, to solve humanity's problems. Something like that. And more points about time, what it means, and the importance of language and learning how to communicate non-violently.

 

Agreed - Arrival is a really good movie worth seeing for any thinking person - if for no other reason than to distract yourself from the sh*thole future we're facing in the "real" world...

 

As far as Founder's Day being an example of utopia, I guess I don't understand it because I can think of much better scenarios right off the top of my head. How about a utopian idea that sees the world wide end of meaningless suffering? End cancer? How about ending Parkinson's Disease? Alzheimer's, CVD, obesity.... poverty, racism, sexism, unfair wages, greed, hatred, wars, genocide, cruelty to animals, destruction of wild habitats, oceans fouled by stupid people ... and these are just the beginnings. Utopia? What about animals who are more sensitive to pain than we are? What about all suffering everywhere in all creatures? If the universe is "infinite" or near infinite with a shit ton of empty space, then why isn't there space enough to create peace, love, kindness for everything everywhere?

 

I don't think you get it, Sthira. One premise on which the Founder's Day Argument is based is that there are quadrillions of sentient beings of (nearly) every conceivable kind living what to them feels like a near-perfect life, free from all forms of (undesired) suffering, including all those you listed. These quadrillions of sentients are living as "digital" entities "inside" a gargantuan "computer" powered by a matrioshka brain encircling our sun. To us, their existence would seem a literal equivalent of the Christian heaven. If you think you know of a better outcome, just imagine a tiny fraction of these future sentient beings (say a trillion or so of them) creating exactly the "better" utopia you imagine, inside their infinitely malleable virtual world. They might live out your fantasized utopia for what subjectively feels to them like 10,000 years, get bored with it, and then try something new.

 

Get it? With that much computing power, the denizens of this future reality can create and experience anything they can imagine, including every single utopian scenario we mere humans could ever imagine.

 

Utopia, cmon, people.

 

Cmon indeed, Sthira. I again challenge you to come up with a rosier endstate than the matrioshka-powered Heaven I've outlined. Your world free from all forms of suffering? Child's play. In fact, your imagined "utopia" is just the baseline - the table stakes you need to enter the game of imaginary Utopia creation. You're thinking way too small. The key is not thinking of what you'd get rid of in our world - that's easy. The key is figuring out what you'd keep, and most importantly, what you'd add.

 

--Dean

Link to comment
Share on other sites

Todd wrote:

All the other deep questions you have, my chickens are ready to answer.

 

You illustrate my point exactly. We humans, and your chickens, are thinking way too small. As a result, we're likely to crash and burn in a dystopian nightmare that makes Mad Max: Fury Road look like a Norman Rockwell painting.

 

--Dean

Link to comment
Share on other sites

Very little of the coming nightmare can be fairly attributed to the small thinking of chickens.

 

I'd suggest the slightly larger scope of humanity's cognition bears more blame. Perhaps it is like your speculation on macronutrient ratios: being at the extremes, very small or very large, is better than being in the swampy middle. I've seen both extremes advocated - simplify back to a sustainable chicken utopia, or leap forward into a synthetic paradise - and both seem as far away as the blue sky above.

Link to comment
Share on other sites

Thanks for the fun, lively, and enlightening conversation. You're probably correct, and I probably do think way too small. But somehow I believe the ending of all useless suffering is kinda big. You do, too. You're just thinking past that, and I'm kinda stuck here in what-shit-to-eliminate-mode, since my own damp cold gray suffering feels meaningless, and until that's gone, it's hard to see anything better.

 

If we do live in a simulation Sthira, contempt for its creators may indeed be a very rational response. In fact, the creator(s) of our simulation may feel exactly the same way about the creator(s) of their simulation. Don't attribute to them (much) more wisdom, benevolence or power than we possess. In other words, the creators of our simulation may be (flawed) human beings (or flawed AIs that flawed humans create) a few decades further along our timeline. In other words, it may be nasty simulations, and nasty simulators, all the way up. One hopes not, but it seems more plausible at this point than the idea that all this was created by an all-powerful, benevolent God...

I'm curious how you're making these distinctions. Are these just guesses, colors of imagination, or are you basing these distinctions on some sorta technical computer science info of which I'm ignorant?

 

For example, if future artificial general intelligence reached such great heights of power and progress over us lowly humans, then why wouldn't future AI also be powerful enuf to change the laws of physics once they've started a simulation?

 

Or why wouldn't AI already know "how we pivot to deal with [this] challenge," or why wouldn't AI already know "how we deal with the challenges ahead?" If future AI is so far advanced, then why wouldn't it already have learned "something useful for dealing with their own predicament?"

 

Can we not attempt to imagine a future AI that we initially created, that then learned exponentially on its own, and then reached states that to us would seem all-knowing, all-powerful, all-loving? IOW, we killed god; now we're reinventing god.

 

...One premise on which the Founder's Day Argument is based is that there are quadrillions of sentient beings of (nearly) every conceivable kind living what to them feels like a near-perfect life, free from all forms of (undesired) suffering, including all those you listed. These quadrillions of sentients are living as "digital" entities "inside" a gargantuan "computer" powered by a matrioshka brain encircling our sun. To us, their existence would seem a literal equivalent of the Christian heaven. If you think you know of a better outcome, just imagine a tiny fraction of these future sentient beings (say a trillion or so of them) creating exactly the "better" utopia you imagine, inside their infinitely malleable virtual world. They might live out your fantasized utopia for what subjectively feels to them like 10,000 years, get bored with it, and then try something new.

 

Get it? With that much computing power, the denizens of this future reality can create and experience anything they can imagine, including every single utopian scenario we mere humans could ever imagine.

 

Utopia, cmon, people.

Cmon indeed, Sthira. I again challenge you to come up with a rosier endstate than the matrioshka-powered Heaven I've outlined. Your world free from all forms of suffering? Child's play. In fact, your imagined "utopia" is just the baseline - the table stakes you need to enter the game of imaginary Utopia creation. You're thinking way too small. The key is not thinking of what you'd get rid of in our world - that's easy. The key is figuring out what you'd keep, and most importantly, what you'd add.

You're right: I can't meet your challenge, and can't imagine anything bigger than that. Except when I do mushrooms and LSD, and then those workings of the imagination become ineffable. To even attempt to wrap words around .... is futile and ridiculous. Art does better here than wordy words. So until "new language" is created -- and I loved the language of the beings in Arrival -- art prevails.

Link to comment
Share on other sites

Ok, scrap my last reply. I think you already answered my question. IOW, I hear you saying that perhaps AI creators are working in stages or steps of completion rather like gamers. The VR games of today will seem primitive to the VR games of next year. It's all sorta "work in progress" like choreographers who just sorta invent dance routines as they witness them occur through our bodies (that is, if simulations are even "real" and we're living within one now, the simulations are sorta "happening" as they happen...)

Edited by Sthira
Link to comment
Share on other sites

Sthira,

 

[Note - I now see (after I've penned all of what I write below), that you do get what I'm saying, in your second, follow-up post. But I'm not deleting what I wrote below in response to your first post, since I think it's helpful clarification and reinforcement, especially the part about Elon Musk, OpenAI and TheSims 4. But feel free to skim/skip. I don't want to waste your (or anyone's) time...]

 

I believe the ending of all useless suffering is kinda big. You do, too. You're just thinking past that, and I'm kinda stuck here in what-shit-to-eliminate-mode, since my own damp cold gray suffering feels meaningless, and until that's gone, it's hard to see anything better.

 

Indeed, I believe in ending suffering too, and feel helpless & pessimistic. But you asked for a long-range, maximally-optimistic scenario, so I did my best to deliver, and planted a stake in the ground for self-fulfilling-prophecy purposes. TL;DR - we may be a simulation that is a "work in progress" on the path to Utopia. But alas, our particular branch of this forking path may sadly be a dead end.

 

I'm curious how you're making these distinctions. Are these just guesses, colors of imagination, or are you basing these distinctions on some sorta technical computer science info of which I'm ignorant?

 

I'm not sure what you mean by distinctions. I'm going to guess you mean the distinction between a metaphysics in which our world is a simulation inside the computerized "Mind" of a super-advanced AI, and one in which we are the creation of an omnipotent, benevolent "God".

 

There isn't much to distinguish them, except the "benevolent" part. If we're the product of a super-advanced AI, we're not necessarily in "good" hands, if you know what I mean. But given all the suffering in the world, I've always considered "benevolent" the first of the "Omni's" to land on the chopping block when it comes to the characteristics of whatever/whoever created all this (he says, waving expansively).

 

But overall, what I've shared on this thread are definitely "guesses" - speculations or extrapolations. Having somewhat of a front-row seat, I see AI progressing rapidly - more rapidly than most people recognize. I also see that realistic simulations controlled by advanced AI are coming along quickly. Putting two and two together, I surmise "we might be living in a simulation created by people or AI down the road a ways from where we are today". It's that simple.

 

Speaking of AI-controlled simulations, two days ago Elon Musk's open-source AI company OpenAI released what they are calling Universe, a massive training ground for AI technology, comprising thousands of different simulated worlds, mostly drawn from popular video games. The entire AI & machine learning world is abuzz with their announcement, saying it will greatly accelerate AI development, particularly in the area of reinforcement learning. Here is a brief description of Universe from OpenAI's announcement [my emphasis]:

 

We're releasing Universe, a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites and other applications.

 

Today's release consists of a thousand environments including Flash games, browser tasks, and games like slither.io and [Grand Theft Auto] V. Hundreds of these are ready for reinforcement learning, and almost all can be freely run with the universe Python library...
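For the technically curious, here is roughly what driving one of these environments looks like in practice. This is a minimal sketch adapted from the example in OpenAI's announcement; it assumes the universe and gym Python packages are installed and Docker is running locally, and the environment name flashgames.DuskDrive-v0 comes from their docs:

```python
import gym
import universe  # importing this registers the Universe environments with gym

# Create one of Universe's ~1000 environments (a Flash racing game here).
env = gym.make('flashgames.DuskDrive-v0')
env.configure(remotes=1)  # spins up a local Docker container running the game

observation_n = env.reset()
while True:
    # A trivial "agent": hold down the up-arrow key in every instance.
    # A real agent would choose actions based on the pixels in observation_n.
    action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()
```

Note the interface: the agent sees raw pixels and emits keyboard and mouse events, just as a human player would, which is what allows the same agent code to be pointed at thousands of different games and websites.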

 

I like how they say "training an AI's general intelligence" rather than "training artificial general intelligence (AGI)". Given Elon Musk's widely expressed fear of AGI, it's not surprising that the PR folks at OpenAI have to obfuscate to cover over Elon's blatant inconsistency, bordering on irrationality. If he fears we will develop AGI (or artificial superintelligence) sooner than we can figure out how to control it responsibly, why the f*ck is he bankrolling OpenAI to accelerate the development of AGI, rather than leaving the field alone or, better yet, sponsoring research into how to ensure AGI is friendly (or controllable) once it arrives, as the good folks at the Machine Intelligence Research Institute (MIRI) do?

 

To give you a feel for the "worlds" of OpenAI's Universe, here is a gif I made:

 

 

[Animated GIF (tqmYCRm.gif): a montage of the game worlds included in OpenAI's Universe]

 

The goal is to develop intelligent (by which I mean "goal-directed", not necessarily "conscious", Tom) systems that can play each of these games individually, then all of them, then all of them simultaneously, and then... the sky's the limit.
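To put some meat on "goal-directed": in reinforcement-learning terms it simply means a system that chooses actions to maximize cumulative reward. Here is the generic shape of such a loop - a sketch only, where policy and its act/update methods are hypothetical stand-ins for whatever learning algorithm you plug in, and env is a gym-style vectorized environment like the ones above:

```python
import random

def run_goal_directed(policy, env, episodes=10, epsilon=0.1):
    """Generic RL skeleton: observe, act, collect reward, update policy.

    'policy' and its act()/update() methods are hypothetical stand-ins
    for a learning algorithm; 'env' is any gym-style vectorized
    environment. 'epsilon' controls how often we explore at random.
    """
    for _ in range(episodes):
        observation_n = env.reset()
        done_n = [False]
        while not all(done_n):
            if random.random() < epsilon:
                # Explore: sample a random action for each parallel instance.
                action_n = [env.action_space.sample() for _ in observation_n]
            else:
                # Exploit: let the current policy choose the actions.
                action_n = [policy.act(ob) for ob in observation_n]
            observation_n, reward_n, done_n, info = env.step(action_n)
            policy.update(observation_n, action_n, reward_n)
```

The point of a common platform like Universe is that this exact loop, unchanged, can be pointed at any of its thousand environments - only the policy needs to get smarter.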

 

I noticed that, in addition to an interface to the eerily realistic simulated world game Grand Theft Auto V, which I've discussed before (here and here) on this thread, another of the worlds in OpenAI's Universe happened to be The Sims 4, where the goal is to play a virtual "God", controlling the lives of human characters inside an uncannily realistic simulated world.

 

I've previously speculated on this thread (here) that an AI-controlled, souped-up version of The Sims 4 may be exactly the metaphysical scenario we happen to be living in. In that post, I included a video of Elon Musk (OpenAI's founder and primary sponsor) in which Elon says that he fears AI going rogue (not rouge ☺), and that there is a 99.9999% chance (literally, he says a 999,999,999-in-a-billion chance) that we live inside a giant simulation.

 

So I found it the height of irony that someone who fears AI, and who thinks it overwhelmingly likely we live in a simulated world, happens to be sponsoring people to develop AI to control the behavior of people living in a simulated world. I tweeted as much, saying:

 

Hmm.. @elonmusk fears AI & thinks we live in simulation. Yet he funds @OpenAI's work to control Sims4 with AI. WTF? 

 

I included in my tweet this link to OpenAI's page describing The Sims 4 world as one of the many included in their Universe AI training package.

 

I sent that tweet two days ago. Yesterday, The Sims 4 - and only The Sims 4 - disappeared from OpenAI's Universe. That link is now dead, without any explanation from OpenAI about why they dropped it. Here was my follow-up tweet about the unexplained deletion:

 

Yesterday I blasted @elonmusk & @OpenAI for making AI to control Sims4. Now link gone; Sims4 not mentioned. Change of heart? 

 

Did Elon realize it looked really bad for his company to be sponsoring efforts to develop the very scenario he fears most, but nonetheless believes to be our reality? Maybe. At this point we don't know, since neither Elon nor OpenAI has told anyone why they dropped The Sims 4 from the set of worlds they announced as part of their Universe AI training ground. So we can only speculate.

 

I consider this little incident - together with the bigger picture that OpenAI has just released a massive "AI training ground" promising to greatly accelerate AI development - one more hint that AI is progressing faster than most laypeople realize, and that it is headed toward the simulated-world hypothesis I've been describing all along in this thread.

 

So Sthira, this is the kind of mounting evidence on which my "informed hunch" is based.

 

For example, if future artificial general intelligence reached such great heights of power and progress over us lowly humans, then why wouldn't future AI also be powerful enuf to change the laws of physics once they've started a simulation?

 

You aren't paying attention. You asked that question, and I answered it, a couple of posts back. I'll repeat my response. I wrote:

 

My speculation is that the creators of our (simulated) world don't interfere more "nicely" or obviously in our world (and on our behalf) for one (or several) of three reasons:

  1. because they can't. They are powerless to change the laws of physics once they've started a simulation, or
  2. because they don't want to, perhaps in order to see how we pivot to deal with this challenge, or
  3. because this is the same mistake they made earlier along their own (future) timeline, and so things are unfolding in our (simulated) world just as they knew and hoped they would. They want to see how we deal with the challenges ahead, hoping to learn something useful for dealing with their own predicament.

 

You may be asking: but why #1? Why can't those controlling our simulation change the laws of physics?

 

Think about what it means if our world really is a souped-up version of The Sims 4. Imagine you are the player of the game (rather than a character within it, as I believe us to be). If you are the player, you can't change the rules (laws of physics), since you aren't the programmer and so don't have access to the fundamental inner workings of the game/simulation. You just play it the way it is, trying to help (or toy with) the simulated characters in the virtual world, within the parameters the game affords, because it's fun, entertaining, educational, etc.

 

If you are the programmer of The Sims 4, you could change the fundamental rules (laws of physics), and maybe you do - but not in this particular instantiation of the game. Why not this one? Because you want to see how things play out under this set of rules and initial conditions - again, possibly for amusement or education, as outlined in the three reasons quoted above.

 

In actuality, there may not be a distinction between the player & programmer of our simulated world (if we live in one). But whether or not there is, those creating/controlling our simulation may simply have no incentive to make sure things turn out nicely in our world. In fact, they may have every incentive not to - e.g. if we are an ancestor simulation meant to help them understand (and possibly fix) what actually happened along their own timeline.

 

To wit, you wrote:

Or why wouldn't AI already know "how we pivot to deal with [this] challenge," or why wouldn't AI already know "how we deal with the challenges ahead?" If future AI is so far advanced, then why wouldn't it already have learned "something useful for dealing with their own predicament?"

 

Sthira, we're going in circles. I already answered that in the post to which you are responding. Those who created and are controlling our simulation wouldn't necessarily "already know all" since they may be flawed, stupid humans (or human-made AIs) just a few decades ahead of us, struggling to figure out what to do next, based on limited information and wisdom, just like we are.

 

Don't look at our world and say "wow, it's so vast and well-ordered, there must be an all-powerful, all-knowing and all-benevolent entity who created it." In many respects our world is crap, and so it seems plausible that whoever/whatever created it is pretty crappy themselves/itself.

 

Can we not attempt to imagine a future AI that we initially created, it then learned exponentially on its own, and then reached states that to us would seem all knowing, all powerful, all loving? IOW, we killed god; now we're reinventing god.

 

Ur... Yes. We can attempt to imagine the "on our way to utopia" scenario you suggest. In fact, that is exactly what I described in the post to which you are responding. But to make an omelet, you've got to break quite a few eggs along the way. I'm suggesting our world is one of those eggs: a stab at what a better world might look like that is, in fact, turning out to be a very bad idea - a dead-end branch on the road to Utopia.

 

Then, Sthira, in your follow-up reply, you wrote:

Ok, scrap my last reply. I think you already answered my question. IOW, I hear you saying that perhaps AI creators are working in stages or steps of completion rather like gamers. The VR games of today will seem primitive to the VR games of next year. It's all sorta "work in progress" like choreographers who just sorta invent dance routines as they witness them occur through our bodies (that is, if simulations are even "real" and we're living within one now, the simulations are sorta "happening" as they happen...) 

 

Oh - scrap my above reply. You've got it. Exactly. You hit the nail on the head with that paragraph.

 

Sorry I made you read through my reply - at least I warned you at the top. Next time, Sthira, please try to do the same: rather than saying "scrap my last reply", go back and edit your own previous post (perhaps with a preamble) to give readers a heads-up that your understanding has changed, as a courtesy to readers/responders.

 

Hopefully my reply above clarifies & reinforces your new understanding, and provides additional evidence (i.e. The Sims 4 stuff) that we're headed that way.

 

--Dean


[3 weeks later...]

Thanks Tom,

 

 

When I started this thread seven months ago, I was optimistic that mankind had a chance at a bright future. I still believe it's possible for a civilization to reach a superintelligence-fueled singularity - for good or bad. But recent events have disabused me of the naive idea that our particular civilization will get to the Singularity before destroying itself. As I said in my tweet about the blog post you just referenced:

 

Nice antidote to AI Kool-Aid. We'll probably kill ourselves long before ASI, partly from leaders' naive groupthink about tech utopia.

 

I particularly liked this quote from the article you shared:

 

If everyone contemplates the infinite instead of fixing the drains, many of us will die of cholera. - John Rich

 

Unfortunately, instead of fixing the drains, or even harmlessly contemplating the infinite, our soon-to-be President is poisoning the well with actions like this:

 

[Screenshot (GNIGC8K.png) of the action referenced above]

 

Here is my follow-up to that observation, in response to the inevitable Russian backlash over its ambassador's assassination, in which a Putin crony blamed the murder on a NATO conspiracy:

 

Here we go. Is it just me, or does this really feel like a Greek tragedy in which all the characters must die by the end?

 

--Dean


Unfortunately we are literally apes, and Putin and Trump are really apelike. Look at all those huge, noisy pickup trucks - Ram, Titan, Tundra, etc. That's apeism: "I am bigger and stronger than you." It's pretty basic. 90% of the people who spend 50-70 grand on the really big, noisy trucks are not using them for business.

 

Here's a link to an eminent scientist's take on our current prospects: http://phys.org/news/2010-06-humans-extinct-years-eminent-scientist.html

 

The Amish had it right two hundred years ago when the industrial revolution scared the hell out of them and they decided to stop the nonsense in their own culture.

 

So what is the solution???? Simple: allow all males to die off, keeping only enough for sperm, and let women run the world. Statistically, males are 10-15 times more likely to be violent, and especially more likely to use weaponry.


Mike, I 100% agree that neither we nor Russia have (or will have) leaders who represent the better angels of our nature.

 

The Amish had it right two hundred years ago when the industrial revolution scared the hell out of them and they decided to stop the nonsense in their own culture. 

 

The Amish aren't as simple as many believe, but I agree they seem to have gotten something right in their attitude toward technology. The Amish don't eschew all modern technology. Instead, they carefully assess the potential impact of each specific new technology on their way of life - in particular, whether it is likely to draw them closer together or push them further apart as families and communities.

 

Here is a good article about tech pundit Kevin Kelly and his views on how to use technology like the Amish.

 

The Amish are deliberate in their adoption of technology, while many of the rest of us (except Tom...) are all too ready to swallow the latest, greatest tech gadget or privacy-sucking business model hook, line and sinker, without regard for its impact on us personally, on our relationships, or on the fabric of society. Now such a headlong rush toward the new is coming back to bite us, big time.

 

A tweet from earlier this morning relates to this idea that thoughtless adoption of technology can do more harm than good:

 

Fermi Paradox Sol'n #59: Societies self destruct when instant/open comm tech is hijacked to spread lies & signal loyalty; destroying trust.

 

Here are the two previous ones in the series (I arbitrarily started with #57):

 

Fermi Paradox Solution #57: Confirmation bias inevitably destroys any technological civilization that manages to claw its way up from the slime.

 

[Screenshot (gvcisOQ.png) of the tweet above]

 

and:

Fermi Paradox Sol'n #58: VR Solipsism. Advanced civilizations make virtual reality worlds so compelling they never leave. Real world crumbles.

 

In support of that one, I pointed to this article in The Atlantic about "post-VR sadness" - people becoming despondent when they exit a really compelling virtual-reality experience only to return to the dreary "real" world.

 

I truly wish I weren't so pessimistic - I worry about self-fulfilling prophecies. But I really don't like the way things are headed, and sadly, I don't have much confidence we can turn them around at this point. I fear we've moved beyond the point of no return in the devolution of our social institutions.

 

--Dean

