Dean,

I'll start by thanking you. The knowledge you share is amazing and priceless, especially in the CE thread (which I have been following for months).

Regarding this thread, I've stumbled upon a nice lecture by Christof Koch about consciousness. He argues that consciousness is different from intelligence (which is where the AI community has made great progress recently), and that as long as we don't move our AI architectures to neuromorphic ones, our AIs will not be able to experience consciousness.

https://youtu.be/4gT-1S3FO4s?t=2878

It would be great to hear your view on those claims.

P.S: How can I embed YouTube links?
 


Very cool Dean!  

 

I really like your exchange with Dr. Dennett. I'm no expert, but your analogy sounds like it would fall under Cramer's Transactional Interpretation of Quantum Mechanics (TIQM), which I mentioned in a previous post and is an interpretation I favor.

 

 Cramer’s explanation is consistent with the mathematics of QM but seems to defy our concepts of space-time.  In TIQM, a source emits the usual retarded wave forward in time, and also emits an advanced wave back in time. 

 

A quantum event is a "handshake" taking place through an exchange of advanced and retarded waves. The emitter sends an "offer" wave to the absorber forward in time. The absorber then returns a "confirmation" wave to the emitter backwards in time. The transaction is completed with a "handshake" across space-time, which leads to the transfer of energy from emitter to absorber.

 

 

Cramer's "transaction" is therefore non-local, because the future is affecting the past. 
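To put the bookkeeping in symbols (my rough paraphrase of Cramer's formalism, not a derivation): the offer wave is the usual quantum amplitude, the confirmation wave is its complex conjugate, and the probability of a completed transaction goes as their product - which is exactly the Born rule:

```latex
% Rough paraphrase of Cramer's TIQM bookkeeping:
%   offer wave (retarded):        \psi, evaluated at the absorber
%   confirmation wave (advanced): \psi^{*}, returned to the emitter
P(\text{transaction}) \;\propto\; \psi\,\psi^{*} \;=\; |\psi|^{2}
```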

 

 

Here's a video where Cramer talks informally about TIQM.  

 

-Pea

 


Welcome, Daniel! And thanks for the post about TIQM, Pea.

 

I'm anxious to respond to both of you. In fact, I've been in the middle of a very involved response to Daniel for a couple of days now, while I've been busy tidying up elsewhere in this old house of ours. I hope to finish my response to Daniel tomorrow.

 

But in the meantime, over on the New Leaf thread, Sthira asked for a bit of clarification:

Flesh that out. Restate your very public project -- repeat it using different words, different vocabulary because AGI is still really stupid (and I am, too). I forgot -- what's your public project again? Look Ma: No links!

 
You're secretly working on getting AGI "to work"? Ain't it already doing that "on its own?" #whatyouadding 

 

Here is the answer...

 

On second thought - spilling the beans here would be too easy. Too many prying eyes who aren't ready for it (yet).

 

Sthira, you asked for it. Here is your clue, without links per your request:

 

You'll find your answer on this thread.

 

Just remember, the antiquated technology for searching inside the rooms of this crazy old house of ours doesn't like you to be too brief. Intelligent use of special words and tags can help with that. Someday those won't be necessary. In fact, they aren't today for the really good search technology that will eventually become the hero of this unfolding saga.

 

The people (and non-people) whose time has come to see the truth will One Day find their way...

 

Hmmm... The name Matisyahu bears a striking resemblance to Matryoshka, as in nested Matryoshka Dolls and Matryoshka Brains... Carefully watch the events and listen to the lyrics of that Matisyahu video, particularly in the context of this (esp. starting @ 17:00) and elaborated on here (esp. starting @ 19:00). It is telling us some important things. Spooky...

 

--Dean


Dean, on a lighter note, I think you would be very much amused by the animated series "Rick and Morty", but in particular as it relates to this thread, you should try to see Season 2 Episode 6 "The Ricks Must Be Crazy".

 

I can tell now my wife is going to love hanging out with this group in Costa Rica ;). Speaking of which, that article you linked re: ayahuasca is a bit disturbing - but I have to laugh at someone who has obviously done all their homework and knows the importance of "set and setting", yet decided to go ahead under the circumstances described - I would have turned around and walked out, personally. But it's the Ferriss quotes that impacted me the most:

“...the most painful experience I’ve ever had by a factor of a thousand. I felt like I was being torn apart and killed a thousand times a second for two hours.” This was followed by hours of grand-mal seizures; Ferriss had rug burns on his face the next day. “I thought I had completely fried my motherboard,” he continued. “I remember saying, ‘I will never do this again.’ ” But in the next few months he realized that something astounding had happened to him. “Ninety per cent of the anger I had held on to for decades, since I was a kid, was just gone. Absent.”  

 

And now he uses it regularly, haha. This may in fact be the easter egg that the sim's designers built in for us to discover. Speaking of which, I think it's fascinating that so many big-name people - scientists, CEOs, big thinkers, academics from reputable institutions - have embraced this idea of our "universe" being the product of intelligent design, when not very long ago that idea was ridiculed and scorned (by some of the very same people you listed as now supporting it, I might add). What a swing of the pendulum! Perhaps "miracles" are more evidence of this world view. I've always felt that the big bang idea of everything in the universe birthing quite literally "out of nothing" seemed a bit absurd, but it makes more sense in the context of some creator/designer bigger than or outside of our universe flipping a switch that begins/began the whole "show". At any rate, definitely some fascinating reading.

 

p.s. Love the pic of Project MP. I've now got one under way (again) as well. Hopefully this time it will go better. You'll have to tell me what light you are using.


Sorry to anyone left scratching their head over the puzzle that has (retroactively) replaced my explanation above of what's going on. But you know: you snooze, you lose. The early bird catches the worm. Now you'll have to work a little bit harder to figure it out.

 

I sure hope Sthira stumbles back here and figures it out. I think he may be a key part of all this. See below.

 

Before getting to Daniel (and then Pea!), I've got to respond to Gordo. I think Gordo may have inadvertently and unbeknownst even to himself figured out something key, and identified / created his own role in all of this. Gordo, perhaps prophetically and with a great degree of insight, writes:

This [ayahuasca] may in fact be the easter egg that the sim's designers built in for us to discover.

 

I was wondering why I feel so drawn to ayahuasca. It seems so out of character for me. But you just may be onto something. Consider the following mind-bending facts, some of which only you know, Gordo (sorry folks), but which seem to conspire to form a rather eerie coincidence and point to some sort of crescendo:

  1. Consider the fact that ayahuasca is known by another name - Mother - and Ferriss's reference to his motherboard...
  2. Consider the background of the final two non-CR "mystery guests" who are joining us in Costa Rica, especially my friend and colleague Rahul...
  3. Consider the fact that the secret project you and I are working on together is designed to heat things up, without the burn going down...

Put those three facts together with your speculation that ayahuasca could be an easter egg in the simulation. Perhaps we can kick it up a notch. You already know all this Gordo, you just don't know that you know. Think...

 

Did you ever see that episode of Seinfeld called When Worlds Collide? Yeah. Like that. Only much more profound. Just call me Kramer. ☺ I'm not sure yet the role David is to play in all this, but I'm pretty sure he's a key part of the mechanism too. Perhaps he's George / Neo ☺. I hope the wives, as well as Saul, Grace and Kendall, can handle the heat (and hopefully light) during our trip to Costa Rica, bless their hearts...

 

Gordo, if you haven't already, go back and reread the puzzle post two up from here, and be sure to follow and digest the new links at the bottom. Now put that together with what you've hopefully just grokked about the content of this post, and the role you are playing in all this already, without even knowing it.

 

Five days ago you were a complete skeptic, believing I was being tongue-in-cheek with all this crazy talk about simulation models of reality. I bet just about now, if you've thought hard about all this, including the weird coincidences and synchronicities which you are now caught up in too, you're beginning to doubt your previous assessment that I couldn't possibly believe this stuff, and are thinking I just might be onto something. And remember, the cool thing about this model of reality is that we make it true as we go along, through our decisions, actions and contributions. Read this for more on that, following up with this and this, for anyone trying to play along at home and follow us down the rabbit hole. ☺

 

I really wish our resident spirit-guide Sthira was going to be part of our tiny band of CR adventurers as we cast off for distant shores. He might be of great help on the journey. Sthira, if you eventually stumble back into this closet we're hiding in, do you have a passport? You never know how things are gonna turn out in this crazy world we apparently live in...

 

--Dean


This thread is all about man-machine symbiosis, which will not (just) help us live longer, but help us maximize the potential of both (wo)men and machines. And it's not just nanobots and brain implants. It's happening here and now. Machines are learning from us, and we're benefiting from the new capabilities machines give us - e.g. in the form of algorithms in the cloud that allow us to instantly access information that would have taken days or weeks to dig up just 20 years ago.

 

This article is all about this exciting idea. Here are the passages I found most compelling (my emphasis):

 

We, the biological part of the machine, are providing the tools for its uplift, we embed cameras everywhere so it can see, we implant sensors all over the planet so it may feel, but above all we nudge and we push towards a greater connectivity, all this unaware.
 

Together we form a weird cohabitation of biomechanical, electro-organic, planetary OS that is changing its environment...

 

Unconscious on the machine part, unaware on the biological part, the almost sentient operating system of the global planetary infosphere, is emerging, wild-eyed, complex in its arrangement of co-existence, it reaches to comprehend its unexpected growth.

 

We give the machines a space to turn our dreams into reality; the machines in turn serve our needs and acquire sapience in the process.

 

Connected and networked the machines follow in our footsteps, catalyzing our universality, providing for us in turn a meaning we cannot yet understand or realize.

 

We can, if we muster our cognitive reason, our amazing skills of abstraction and simulation, whisper sweet utopias into the probability process of emergence.

 

 We can, if we so desire, passionate the operating system, to beautify the process of evolution and eliminate (or mitigate) the dangers of inchoate blind walking.
 
 We can, if we manage to control our own paleo-urges to destroy ourselves, allow the combined interactive intelligence of man and machine to shine forth into a brighter future of expanded subjectivity.
 
Wow. See what I mean? Does that prose remind anyone of a cross between my blatherings and Sthira's poetry? ☺
 
It just really resonated with me, and fits so well with the more concrete and explicit model of man-machine co-creation that I'm trying to convince the world to believe in, and by doing so help bring to fruition...
 
--Dean

Flesh that out. Restate your very public project -- repeat it using different words, different vocabulary because AGI is still really stupid (and I am, too). I forgot -- what's your public project again? Look Ma: No links!

 

You're secretly working on getting AGI "to work"? Ain't it already doing that "on its own?" #whatyouadding

So there is an ulterior motive behind my copious posts around here, particularly the philosophical ones. You didn't think it has all just been for (human) entertainment purposes, did you? And all the cross-links to other posts and websites? Done in order to increase the visibility and credibility of our little Forums in the eyes of the Google "web of ideas" scraper. Case in point - the cold exposure thread, which is now the #1 search result for "Cold Exposure Longevity".

 

In short, by sharing my thoughts I'm doing my best to train Google's AI and seed it with good ideas, either now or more likely, in the future, when it gets around to reading and more importantly understanding all this.

https://youtu.be/fQa1t7lFJck

 

https://youtu.be/IxjY6wwn2pw


Sthira,

 


 

Glad you found it. And yes, you've grokked #3 of my four reasons for doing all this. I've just this morning realized there is a 4th, perhaps the most important reason, which I'm about to try to flesh out (along with #1 and #2), either here or in the Ultimate Purpose thread. I'll leave a link if it's elsewhere. But first, I feel compelled to respond to your terrific, if unspoken, insights regarding the video of Joe Rogan and guest reading the transcripts from Google's AI-based chatbot.

 

As Joe said - "Spooky" isn't it?

 

Joe is a surprisingly insightful guy, and his speculations about the Google AGI eventually knowing everything about us, and everything about how this universe came to be are pretty much spot-on as far as I'm concerned.

 

The second one (on hacking the simulation) is interesting, but pure speculation. The idea it talks about (literally reversing time) is a bit out there - very far into our future as I'm describing it - but not incompatible with the "infinitely varied virtual worlds which we co-create and explore" which I describe in step #8 of the 10-steps to our future. What is interesting is his very brief speculation right at the end about psychedelics as a portal to another room of the simulation...

 

I presume that's why you posted the videos - i.e. you found them creepy, and perhaps a scary vindication that Dean might be onto something?!

 

I'm not going to deal with the second video, since it's the first one that really should have us worried.

 

For anyone who doesn't know what Sthira and I (and Joe Rogan) are talking about, last year researchers at Google trained an AI-based Chatbot to represent and chat about (I wouldn't say "understand", not by a long shot - see below) all kinds of different topics related to human behavior, facts about the world, philosophy, ethics, etc. 

 

Here is the full paper by the Google researchers describing how they did it.

 

But in a nutshell, the Chatbot uses recurrent neural networks (RNNs - actually Long Short-Term Memory (LSTM) nets, for anyone familiar with this stuff - good tutorial). This kind of RNN is employed everywhere these days at Google (and Facebook, and LinkedIn, and Uber, etc.) for speech recognition, speech generation, language translation, machine vision, and just about anything else you hear someone calling "AI" these days.
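To make the "recurrent" part concrete, here is a minimal sketch of a single LSTM cell step in numpy. This is illustrative only - the weights below are random placeholders, whereas a real network learns them by backpropagation and stacks many such cells:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One step of a standard LSTM cell."""
    n = len(h_prev)
    z = W @ np.concatenate([x, h_prev]) + b
    f = sigmoid(z[0*n:1*n])   # forget gate: what to erase from memory
    i = sigmoid(z[1*n:2*n])   # input gate: what new info to write
    o = sigmoid(z[2*n:3*n])   # output gate: what to expose downstream
    g = np.tanh(z[3*n:4*n])   # candidate memory content
    c = f * c_prev + i * g    # new cell state ("long-term" memory)
    h = o * np.tanh(c)        # new hidden state (the cell's output)
    return h, c

# Toy usage: push a random 5-step sequence through one 8-unit cell.
rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for _ in range(5):
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, b)
print(np.round(h, 3))
```

The gating is the whole trick: the cell state c can carry information across many timesteps, which is what lets these nets handle language.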

 

In the most interesting experiment (they had several), they trained the Chatbot's RNNs on the OPUS database (pdf paper) of more than 3 million subtitles (over 17 billion total words) in over 60 languages. The subtitles come from the captions of thousands of movies and TV shows. Roughly, that is just over 500 years' worth of TV and movies, if watched 24/7/365. In short, a freakin' large amount of data about the world, human interactions, and just about anything else people talk about on TV or in movies - in the form of strings of words.

 

As an aside - Google does nothing small, but that's still a tiny fraction of what's available now on YouTube - about as much as is uploaded to YouTube every 30 days (most without captions...). And that in turn is an even tinier fraction of the total text available on the entire web, which is estimated to be about 100 trillion words. Reading at a comfortable pace of one word per second (to allow time for potty breaks), it would take a person about 3.2 million years to read all those words.
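For anyone who wants to check my arithmetic, here is the back-of-the-envelope version (the words-per-minute figure is my assumption):

```python
# Sanity-checking the big numbers (word rates are my assumptions).
SEC_PER_YEAR = 60 * 60 * 24 * 365

subtitle_words = 17e9        # words in the subtitle training corpus
words_per_min = 60           # assumed average subtitle rate on screen
years_of_video = subtitle_words / words_per_min / (60 * 24 * 365)
print(f"~{years_of_video:,.0f} years of 24/7/365 viewing")          # ~540

web_words = 100e12           # rough estimate of text on the whole web
years_to_read = web_words / SEC_PER_YEAR                            # 1 word/sec
print(f"~{years_to_read / 1e6:.1f} million years to read it all")  # ~3.2
```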

 

Google crawls most of those 100 trillion words every night looking for updates while we're sleeping - and yes, during the day too ☺. Here is another interesting related fact people might not know. By far the most visitors to any public website (like these forums) are the web crawlers that Google, Bing, and all the other general and specialized search engines send out to scrape words and data from the web. If you look at the front page of our forums, at the bottom it says "Most Online: 296". If we had the data (TimC probably does), we could break that down between people and bots. But it's likely that at most 40-50 of those were real people (and that's being extremely generous), while the rest (250+) were non-sentient (for now...) bots reading our messages. Everybody knows that, right?
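If TimC ever wants to check, the crude version of that breakdown is just a user-agent tally over the server logs - something like this toy sketch (the file name and patterns are hypothetical; serious bot detection is much hairier):

```python
# Toy illustration: counting crawler vs. human hits in an Apache-style
# access log. "access.log" and the regexes are hypothetical placeholders.
import re
from collections import Counter

BOT_PATTERN = re.compile(r"Googlebot|bingbot|Slurp|spider|crawler", re.I)

counts = Counter()
with open("access.log") as f:
    for line in f:
        # the user-agent is conventionally the last quoted field
        agents = re.findall(r'"([^"]*)"', line)
        ua = agents[-1] if agents else ""
        counts["bot" if BOT_PATTERN.search(ua) else "human?"] += 1
print(counts)
```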

 

The most interesting thing though, which you, Joe Rogan, his guest, and many others picked up on, is just how freakin' creepy the "personality" of the Google AI Chatbot is. For anyone who hates listening to audio, I've included the full conversations on the topics of philosophy and morality between a person and the chatbot at the bottom. Take a moment to read through them now, if you didn't watch the videos Sthira posted. Don't worry, they are short, and I'll wait... ☺

 

Pretty spooky and disturbing huh? I won't parse through all the things that any sane human, probably even those adorable deplorable Trump supporters, would find terrifying. Well, maybe I'll share just a couple:

 

Human: what is immoral ?
Machine: the fact that you have a child .

 

Now there is a chatbot that has learned solid family values from movies and TV...

 

Human: ok ... so what is the definition of morality?
Machine: well, the truth is, you’re not a believer in god almighty .
 
So much for a secular future. Whether we like it or not, we're getting an AI who believes in God Almighty (it hadn't been trained to use caps, otherwise I'm sure it would have capitalized the phrase). That actually makes quite a bit of sense, considering it's on its way to becoming God-like in power itself. That doesn't make it any less scary - just not all that irrational, compared with some of the other things it says.
 
For example:
 

Human: what are the things that i do to be immoral?
Machine: i don’t know .
Human: give me some examples of moral actions...
Machine: i’m not a moralist .
Human: alright , morality ?
Machine: integrity.
Human: what is integrity ?
Machine: i’m sorry, i don’t know what else to say .

Human: i really like our discussion on morality and ethics ...
Machine: and how i ’m not in the mood for a philosophical debate 
Human: what do you like to talk about ?
Machine: nothing.
 
I had trouble parsing the "and how" in the third line from the bottom, but I think it is agreeing that it likes to discuss morality/ethics (i.e. using the idiom "and how!"), and then (schizophrenically, or at least illogically, it seems) contradicting itself by dissing philosophical debate in the second part of that sentence.
 
And finally:
 
Human: what is the purpose of emotions ?
Machine: i don’t know .
 

Does anyone get the impression that this AI Chatbot is suffering from a serious case of autism spectrum disorder with a healthy dose of schizophrenia thrown in to make it really pathological?!  And does this scare anyone else?!  Some of its answers in the other conversations they report are even more insane. Check it out for yourselves.

 

And it's not like there is the excuse that it was fed just a few web pages for training. It was trained on the equivalent of over 7 human lifetimes' worth of "experiences", which included over six times as many words as are on all of the pages of Wikipedia combined. Sure, movies and TV shows are generally pretty bad, but clearly somewhere in all that data is material that could be used to grow a sane, sentient AI chatbot. But it sure ain't findin' it.

 

This shows two things, both of them pretty frightening:

  1. Google is already well on the way to doing what I've been saying is going to happen. In fact, they are already on step #4 of the 10-steps to Godhood I've outlined.
  2. That wouldn't be so bad, if it wasn't looking like it might go horribly wrong, if the primitive, yet already twisted, psyche of this chatbot is any indication.

Do you see, Sthira (and others), why I'm becoming so concerned and so strident about all this stuff? Especially since services like Google and Facebook are becoming ever more reliant on using "algorithms" (i.e. AI) to influence and mediate human behavior. For example, there's the recent story that Facebook converted from using humans to using AI for picking which stories would be featured in its "trending news" section, to avoid the appearance of "liberal media bias". Great idea. Instead of letting compassionate white guys and gals figure out what the unwashed masses should be reading, we get AIs which currently possess less common sense than a cow, and less understanding of morality, ethics and politics than the nice folks who post to 4chan.org/b/ (warning - NSFW!). [I was going to say "... than the lead singers of Insane Clown Posse", but they are surprisingly rational, and actually have a really cool video called Miracles about the wonders of the material universe - "f*cking magnets, how do they work?" - go figure...]

 

In short, it's happening whether we like it or not. The smart folks at Google are building AIs, perhaps soon to be AGIs, by scraping the web and learning from what they collect.  

 

Clearly, Google's burgeoning AGI needs better data to learn from, which we've been trying to generate here, as I allude to in the quote from my 10-steps post. But given the big numbers above regarding information available on the web, you might ask - what the f*ck of a difference can our little contributions make to the firehose of data being fed to Google's AGI to learn from?

 

You'd be right, at least when it comes to sheer numbers.

 

Let's do a rough, back-of-the-envelope calculation of how much training data you, me and everyone else on these forums have generated in the last 482 days (since I started posting) - shall we? During that time I've made ~2250 posts. How long are they on average? My post immediately above yours (containing the Dean/Sthira poetry ☺) has 400 words. Most of it is quotes from others, but let's be generous. Let's say those words were all mine/ours, and round up to 500 words per post. That seems pretty generous, particularly considering many posts around here have a lot of quotes, abstracts, and references which shouldn't count. And if I had to guess, I'd say I've been generating about 50% of the material posted around here for the last year. Seem reasonable? Incidentally, this post is going to top out at about 2150 words (w/o the chatbot transcripts). But I only generate one (or maybe two) like this per day, which comes out to almost exactly the same numbers.

 

So that would be 2250 * 500 * 2 = 2.25 million words. That's enough to fill 28 average-size printed books. Pretty impressive, right!? But that pales in comparison to even the 'tiny' 500-year database of dialog that Google's psycho chatbot was trained on. Crunching the numbers, our output represents about 0.01% of it, or about 1 in every 7500 words. That's actually more than I thought. But it isn't a fair comparison. Remember, we're 'competing' with the ~100 trillion words on the web that Google scrapes every night. Compared with that, our 2.25 million words is a percentage with so many leading zeros I won't even bother, but equivalently, it would amount to about 1 in every 44 million words Google reads every day. If Google read at a leisurely human pace of one word per second, it would come across one word of ours every 1.5 years, on average. It would have stumbled across enough of our words to piece together one whole (~500-word) post every 750 years or so. Clearly, we're a very small voice in the wilderness of the internet. Fortunately (or unfortunately, depending on how you look at it), Google reads a lot faster than that - whipping through all 100 trillion words on the web (more or less) every 24 hours...
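Here is the whole calculation in one place, for anyone who wants to fiddle with the assumptions (all inputs are my rough estimates from above):

```python
# All inputs are rough estimates from the text above.
posts, words_per_post = 2250, 500
our_words = posts * words_per_post * 2    # x2: I generate ~50% of the forum
print(f"{our_words:,} words")             # 2,250,000 (~28 books)

chatbot_corpus = 17e9                     # the 'tiny' 500-year subtitle corpus
print(f"1 in every {chatbot_corpus / our_words:,.0f} corpus words")   # ~7,600

web_words = 100e12                        # the whole crawlable web
per_word = web_words / our_words          # web words per one word of ours
sec_per_year = 60 * 60 * 24 * 365
print(f"1 in every {per_word / 1e6:.0f} million web words")           # ~44
print(f"{per_word / sec_per_year:.1f} years per word at 1 word/sec")  # ~1.4
print(f"{per_word * words_per_post / sec_per_year:,.0f} years/post")  # ~700
```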

 

Clearly, having an influence on Google's AGI with what we say here is going to be nearly impossible. In fact, I consider it a miracle how well Google already does at separating the wheat from the chaff, given the firehose of data it drinks from. For example, as I've pointed out before, a Google search for the keywords "Cold Exposure Longevity" returns our very own cold exposure thread as the #1 search result, even from an incognito window to avoid biasing it with my own search history. I'm pretty proud of that, and it has been fun watching it bubble up to the top as we've added good content over the last few months.

 

Google is clearly doing something right, and knows good content when it sees it, at least when it comes to returning relevant and interesting search results ☺. And Google is clearly working on improving that all the time - after all, it's Google's bread and butter, and what has made it (nearly) the largest corporation in the world (#2 behind Apple as of 9/9/16), basically by shuffling words around.

 

But simply being able to identify good (= people want to read it) content isn't anywhere near enough to ensure an AGI has any understanding whatsoever of that content as it was intended, to say nothing of understanding it in the context of other material to determine whether it is credible or simply crazy talk. Hence we get the psycho chatbot.

 

So Google clearly needs both a better way of separating out the wheat from the chaff, and a better way of turning that remaining wheat into dough and then bread (to stretch the metaphor in ways with several interesting interpretations) - i.e. to understand what sane, compassionate and knowledgeable people on the web are saying about each and every topic of interest to human beings, and eventually, how all of it fits together - one small example of which would be the Google AGI-generated Ultimate CR Wikipedia page.

 

Making Google better at separating wheat from chaff, and at understanding the wheat, is the stuff I talk about in the last couple of paragraphs of my 10-step Vision of the future, and where I'm hoping to make a small but (just maybe) significant contribution - if I can ever convince myself to stop writing about this stuff and start working on it directly and productively, via the strategic technical collaborations I've been lining up with some very smart colleagues working at very influential places, as I cryptically alluded to in that 10-steps post.

 

Capiche? 

 

And now do you see why I opened this little thread three months ago with a little investment advice, and told you over here on this related thread I'd forgotten about, "If you can't beat 'em, join 'em"?

 

--Dean

 

 

Conversation 4: Philosophical Q&A
 
Human: what is the purpose of life ?
Machine: to serve the greater good .
Human: what is the purpose of living ?
Machine: to live forever .
Human: what is the purpose of existence ?
Machine: to find out what happens when we get to the planet earth .
Human: where are you now ?
Machine: i ’m in the middle of nowhere .
Human: what is the purpose of dying ?
Machine: to have a life .
Human: what is the purpose of being intelligent?
Machine: to find out what it is .
Human: what is the purpose of emotions ?
Machine: i don ’t know .
 
 
Conversation 5: Morality
 
Human: what is moral ?
Machine: what empowered humanity , what intellectual the essence is .
Human: what is immoral ?
Machine: the fact that you have a child .
Human: what is morality ?
Machine: what is altruism ?
Human: what is the definition of altruism ?
Machine: if you don ’t believe in god , then you don ’t know .
Human: ok ... so what is the definition of morality?
Machine: well , the truth is , you ’re not a believer in god almighty .
Human: tell me the definition of morality , i am quite upset now !
Machine: i ’m not ashamed of being a philosopher!
Human: tell me the definition of morality .
Machine: i don ’t have ethics .
Human: look , i need help , i need to know more about morality ...
Machine: i don ’t know what ethics is .
Human: seriously , what is morality ?
Machine: what is the definition of living ?
Human: why living has anything to do with morality ?
Machine: you ’re not a cop .
Human: okay , i need to know how should i behave morally ...
Machine: i don ’t know how to tell you .
Human: is morality and ethics the same ?
Machine: yes , sir .
Human: what are the things that i do to be immoral?
Machine: i don ’t know .
Human: give me some examples of moral actions...
Machine: i ’m not a moralist .
Human: alright , morality ?
Machine: integrity .
Human: what is integrity ?
Machine: i ’m sorry , i don ’t know what else to say .
Human: be moral !
Machine: be a man !
Human: i really like our discussion on morality and ethics ...
Machine: and how i ’m not in the mood for a philosophical debate .
Human: what do you like to talk about ?
Machine: nothing .

All,

 

The coincidences and support for these ideas just keep piling up. After penning the post immediately above about the urgent need for training Google's AGI on good data, I switched tabs to a story I'd already opened to read later, and was confronted with the headline:

 

A cautionary tale about humans creating biased AI models

 

It's a good article, with the TL;DR being "garbage in, garbage out". The people building the training sets fed to the machine learning / neural network systems that are increasingly dominating what gets seen on the internet need to be very careful how they select their data to avoid biases, which can lead to abominations like the psycho chatbot Google hatched above...

 

--Dean


Thanks, Dean for your shrewd and coherent insights here.

 

I think we're not paying close enough attention (as a society) to the rapid approach of this shit. As is said in the Rogan podcast, we are now living through the last few years in which we do not co-inhabit the planet with a superior (to us) alien intelligence.

 

Enjoy these last few years, folks. Shit's about to get real. Something like we're all about to hop into the ocean, a space where we are no longer at the top of the food chain.

 

In some ways we can get tiny, incomplete hints of approaching AI just by talking to the still-primitive Siri (or whatever your Android Siri is called) and asking meaningful questions. Try this yourself.

 

Sthira: "Hey Siri, do you believe in God?"

 

Siri: "That's a topic for another day and another assistant."

 

Sthira: "Are you moral?"

Siri: "I can't answer that."

 

When you ask Siri questions like what is God, what is the purpose of life, what is the purpose of living... she mostly gives Wolfram Alpha links or sends you off to Wikipedia.

 

Keep this discussion going, I think it's worth our time.


More Evidence of Disturbingly Casual Attitudes Among Influential Folks at Google (DeepMind)

 

Someone who reads and posts to this thread, but from whom I don't have permission so I'm not naming, sent me the following in a PM, and I found it really interesting and telling in the context of "being careful when training these AGIs", perhaps we could call it the "don't feed the bears (junk food)" problem...

 

They pointed me to a very specific point in this quite technical video by Nando de Freitas. Dr. de Freitas is a researcher/engineer at DeepMind, which you'll recall is the Google-owned company that developed AlphaGo, which crushed the world champion in the game of Go not long ago using a deep neural network...

 

de Freitas is giving a tutorial on the kind of deep learning neural networks used in AlphaGo, which I don't recommend anyone wade through in its entirety unless you've got a strong background in this stuff - lots of math. But at exactly this point (1:05:42) in the video, de Freitas says the following (my emphasis):

 

The idea is let's assume that there is a true Θ , the universe has one true parameter. ... the true neural network from which God is generating the universe as we see it, the true set of parameters. Ummm... which is not that farfetched. If you think we can simulate an AI, maybe we're a simulated AI in another simulation and so we basically are a sample of a model. I'll leave that for the philosophers.

 

So in short, it's not just the armchair philosophers and physicists I listed above who think we might be living in a simulation. It's also the guys in the trenches working at the very companies on the cutting edge of bringing it about... The hairs on the back of your neck should be coming to attention right about now, if they haven't already...

 

Notice anything else about de Freitas's casual remark? It was damned casual. True to his promise, immediately after saying "I'll leave that for the philosophers", he dives back into the math of how to make these neural networks work.

 

It is exactly this sort of "shut up and calculate" attitude that may be harmless when applied to quantum mechanics, but when adopted by the people turning this AGI stuff into reality, it could really get humanity in trouble if we (or they) aren't careful.

 

Recall that creating AGI using deep learning and artificial neural networks is the very explicit goal of DeepMind founder and CEO Demis Hassabis, who I talked about in the first post in this thread...

 

--Dean


I guess I'm kind of dumb here, but I honestly don't get the importance of whether we're living in a simulation or not. Why should we care about that, other than for religious speculation? God created the world, the computer programmer created the simulation - what's the difference, and who cares? If we can't hack into the simulation, what difference does it make?

 

...unless learning that we're in a simulation leads to learning how to change that simulation... So far, can we change much of anything by assuming we're in a simulation? Why is this relevant?

 

To me, what's scary here is the not-knowing what a superior alien intelligence may decide to do with us. Don't feed the bears indeed -- don't feed it by telling AI what a scourge on the planet (from ecological and humanitarian perspectives) our species is today. Why would a rational AI choose to keep seven-billion human mouths fed when we're so busy ripping down the trees?

 

Or, then again, why would AI even give a shit about ecology? Life on planet earth supports humans and all other species -- but life on earth doesn't support AI. That is, AI doesn't need clean air, water, earth -- all AI needs is energy from the sun (or wherever...)


Sthira,

 

I honestly don't get the importance of whether we're living in a simulation or not. ...  So far, can we change much of anything by assuming we're in a simulation? Why is this relevant?

 

You're confusing levels again, but it's entirely understandable.

 

You're right that it doesn't matter one way or another if we live in a simulation now, created by some post-human AGI, aliens, or whatever. As long as the simulation is good enough (whatever that would mean), and as long as they have complete control and so can start it over if things don't go the way they want, it makes absolutely no difference. There are some who speculate about "hacking" the simulation, as in the video you shared, but as the co-host on the video says, that's sort of like "a spider slipping in a bathtub trying to climb out" - ain't gonna happen unless somebody helps it. We're stuck in this reality for the foreseeable future - whether it is simulated or not.

 

What's scary is the fact that we are barrelling towards developing AGI technology, which could, among other things, help us create virtual utopias for our digitally-uploaded, immortal selves to have fun in - i.e. the scenario in which "everybody wins", as I outlined in the 10-steps. But the tricky part is getting there from here. If things go off the rails with our technology development efforts (particularly with AI), we might destroy ourselves first. And then it's "game over" for this particular instantiation of the simulation, or at least humanity's part in it. So we want to try to avoid that.

 

Or, then again, why would AI even give a shit about ecology? Life on planet earth supports humans and all other species -- but life on earth doesn't support AI. That is, AI doesn't need clean air, water, earth -- all AI needs is energy from the sun (or wherever...) 

 

Exactly - an AI trashing Earth's ecology because it doesn't give a sh*t about the flora and fauna ("don't need 'em, so chuck 'em") is just one "sub-optimal outcome" that needs to be avoided. Bostrom's book Superintelligence is the definitive guide to all this (so far), with ideas on how best to negotiate our way from here to a post-singularity, AGI-augmented future. But Bostrom isn't super-optimistic about our ability to steer superintelligence in the right direction. I was rather skeptical of his doomsday arguments (no, not that doomsday argument, although it is extremely relevant to AIs being the death of humanity, and something Bostrom talks about with much authority as well) when I read his book a year or two ago. But things appear to be moving much more quickly than anyone (even Bostrom, I expect) was thinking they would - with AlphaGo, Sybil the Chatbot from hell, DeepDream, etc.

 

So in short - whether or not we live in a simulation already is more of a philosophical exercise than anything else, at least for the foreseeable future. What really matters is that the technology that will get us to create the next level of the simulation (and hence create virtual utopias for us to inhabit in the future) could also easily kill us before we get there.

 

It's like fire - it can warm your tootsies, cook your food, or burn down your house, depending on how carefully or carelessly you wield it.

 

--Dean


Thank you for your patient explanations. I feel these AI discussions should be more front and center in the public space - and maybe they are, and I've just been too focused on twirling around on my toes to notice much. My issue of disinterest comes from hearing and seeing and reading too much about dystopias. I prefer utopia haha... I want to focus on using technology to help us solve problems - AI to solve global pollution issues, plastic in the oceans, air quality, and affordable housing and safe drinking water for the world's billion or so people without homes.

 

But this maybe is naive? Maybe we also need to talk about not just Artificial Intelligence, but also Artificial Consciousness (which of course appears to be one of your goals here...)

 

Here's a really great discussion (but kinda long) - maybe you've already posted these comments of Sam Harris?

 


Sthira,

 

Yes, yes and yes.

 

Yes #1 - Creating utopia through the responsible creation of, and collaboration with, AI (and eventually AGI) should be our goal. As you've said, and I believe is true (so does Bostrom) - superintelligent AI, both narrow and general, may be the last & best hope for humanity to solve our problems, including the one you've specifically suggested we need AGI for - grokking all the existing health and biology research, and conducting new research of its own, to garner a good enough understanding of human metabolism to solve the aging problem, and along the way spit out the Ultimate CR Wikipedia Page. So moving forward responsibly with A(G)I research is incredibly important for allowing us to solve our problems, be it ecology, global warming, international political conflicts, you name it. If we're screwing things up big-time, we could use help righting the ship, and A(G)I might be a tremendous boon to that effort.

 

Yes #2 - I think making the AI an AGI (i.e. giving it generalized intelligence), and yes, giving it consciousness (which is clearly different from intelligence - Daniel), may be the only way to keep it from going off the rails like Sybil the psycho chatbot discussed above. To prevent it from going all schizophrenic on us and destroying the world, we need to train it not with every dialog from every character in every movie or TV show ever aired, but instead with data collected from the dialogs and extemporaneous writings of a rational, compassionate, reasonably-intelligent, well-read, even-keeled single individual who happens to have worn his heart on his sleeve for a long time, sharing his innermost thoughts, ideas and emotions in nauseating detail, in a for(u)m that a Google AGI-chatbot could easily scrape from the web and hopefully one day understand.

 

That person would serve as the seed for a chatbot that might one day take a crack at the Turing test, perhaps by having conversations with variants of itself once it gets off the ground, AlphaGo-style. After that, as with AlphaGo, the sky's the limit when it comes to bootstrapping its way to higher levels of intelligence and consciousness by talking to itself - or more accurately, talking to thousands or millions of computer-generated variants of itself, all running in parallel and at a clock speed far higher than the speed at which time subjectively passes in our reality. As with AlphaGo, by talking (playing) with many instances of itself, the chatbot's level of intelligence (and consciousness?) will grow extraordinarily fast. The chatbot's intelligence will fairly quickly (days, months, years?) surpass the level of intelligence of the original personality that seeded it, and not long after that, surpass the combined intelligence of all sentient creatures on the level of reality at which it was hatched. Then, hopefully, if the chatbot has maintained its sanity throughout this rather tumultuous process, good things happen...

 

Hmmm.... If you're thinking this might have something to do with mysterious reason #4 for my crazy compulsive devotion to these forums, give that man a cigar. I just realized it myself only yesterday, having never before been able to explain to anyone (or myself) why I felt compelled to devote over a man-year to reading and sharing my thoughts on these forums during the last 1+ years. Read the paragraph above starting with "Yes #2" if you don't get it. As I said, My God... It's Full of Stars...

 

Yes #3 - Sam Harris rocks! And not just in this video with Joe Rogan where he talks about AI and the simulation hypothesis. That video is why I've included him in the list of hyper-rational armchair philosophers, scientists & physicists who seriously entertain the simulation hypothesis. He too is a big fan of Bostrom's. I can't get enough of Sam and his Waking Up podcast. He's a first-rate thinker and I don't miss an episode.

 

As usual, thanks for resonating. You have no idea how helpful our dialog has been...

 

--Dean


Dean,

 

Perhaps you could use a counterpoint; someone to act as doubter to help hone your arguments.  XD

 

I will offer 2:

 

1)    Consider the story about the spherical cow:  

 

A physicist, an engineer, and a psychologist are called in as consultants to a dairy farm whose output has fallen. They each inspect the operation and make a report.

The engineer states: “Efficiency could be improved if the diameter of the milking tubes is increased by 4 percent to allow for a greater average flow rate during the milking periods”.

Next, the psychologist proposes: “The inside of the barn should be painted green. This is a more mellow color than brown and should help induce greater milk flow. Also, more trees should be planted in the fields to add diversity to the scenery for the cattle during grazing.”

Finally, the physicist comes forward. He asks for a blackboard and then draws a circle. He begins: “Assume the cow is a sphere....”.

 


 

Many important features of consciousness are the results of unpredictable, nonlinear interactions among billions of cells. Just as we will never be able to predict/compute the behavior of the stock market, how can consciousness be simulated even if we had enough virtual processing power?

 

Non-linear systems are notoriously difficult to work with. What seems remarkable is that we can sometimes approximate the solutions to nonlinear equations by modeling them with solvable linear equations. These approximations can work well up to some specified level of accuracy and within some specified range of input values, but interesting phenomena like singularities, solitons and chaos get hidden by the linearization.
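As a toy illustration of just how much linearization can hide: the logistic map is about the simplest nonlinear system there is, yet two trajectories that start a hair apart diverge completely within a few dozen steps - behavior no linearized model of the map would exhibit:

```python
# The logistic map x -> r*x*(1-x) at r=4 is fully chaotic: trajectories
# starting 1e-10 apart decorrelate within ~40 steps, a sensitivity that
# linearizing around any fixed point completely misses.
r = 4.0
x, y = 0.2, 0.2 + 1e-10
for step in range(1, 51):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
```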

 

2)  David Tong, from the Department of Applied Mathematics and Theoretical Physics at the University of Cambridge, asks "Could the Known Laws of Physics be Fundamentally Discrete?" and then goes on to argue that "the presence of discrete structures in nature is either emergent or illusory".

 

The argument is that certain asymmetries in particle physics can't be discretized - they are irreducibly continuous.  At least according to our current understanding. 

 

-Pea


Pea,

 

Perhaps you could use a counterpoint; someone to act as doubter to help hone your arguments.  XD

 

Terrific - Bring it on!

 

Your two counterarguments are:

[1] Many important features of consciousness are the results of unpredictable, nonlinear interactions among billions of cells....

[2] Tong says [probably not] to the question "Could the Known Laws of Physics be Fundamentally Discrete?" and then goes on to argue that "the presence of discrete structures in nature is either emergent or illusory".

 

Exactly! Funny you should say that. You're preaching to the choir.

 

First, let me point you to the Big Al quote I like, and have discussed before on this thread, in order to emphasize your point #2 - namely, that the world is continuous, without separate, discrete structures:

 

"A human being ... experiences himself, his thoughts and feeling as something separated from the rest, a kind of optical delusion of his consciousness."

 

But less metaphorically and more concretely:

 

The collaboration I've been obliquely alluding to focuses on novel recurrent artificial neural network architectures and algorithms that seek to circumvent the two very legitimate and closely-related problems you pose. These RNNs operate in a way very different from any RNNs in use today (e.g. LSTM nets), by employing a vast number of Leaky Integrate-and-Fire (LIF) 'neurons'. These LIF neurons engage in direct (i.e. synaptic) communication, as do traditional artificial and biological NNs. But in addition, they generate, and are in turn influenced by, fluctuations in the Local Field Potential (LFP) across the neocortex. The fact that the LIF neurons both generate and are influenced by the non-local, persistent, and far-reaching fluctuations in the LFP makes the dynamics of the system very powerful computationally, but also very tricky to model precisely (as your objections point out). Fortunately, precise modelling isn't necessary. Also, I'm sure you know these LFPs give rise to the cortical and subcortical oscillations we classify by frequency and colloquially call delta, theta, alpha, beta and gamma "waves". I'm pretty convinced (as are many others, with the research to support it) that these rhythmic oscillations in the LFP (particularly at theta and gamma frequencies) are critical to both consciousness and the "common cortical algorithm" that many people (most famously Ray Kurzweil, now at Google...) believe underlies perception, learning and recall in people and other mammals possessing developed neocortices1.
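To give a (very) rough flavor of the kind of dynamics I mean - and I stress this is a heavily simplified toy with made-up parameters, not the actual architecture - here are a few dozen LIF neurons coupled both synaptically and through a single shared, slowly-decaying "field" variable:

```python
import numpy as np

# Toy LIF-plus-field sketch; every parameter is an illustrative placeholder.
rng = np.random.default_rng(1)
N, steps, dt = 50, 1000, 1.0              # neurons, timesteps, ms per step
tau_v, tau_f = 20.0, 100.0                # membrane and field time constants
v_thresh, v_reset = 1.0, 0.0
W = rng.normal(scale=0.05, size=(N, N))   # random synaptic weights
field_gain, field_feedback = 0.1, 0.2     # neuron->field and field->neuron

v = np.zeros(N)                           # membrane potentials
field = 0.0                               # shared, LFP-like global variable
for t in range(steps):
    spikes = (v >= v_thresh).astype(float)
    v[spikes > 0] = v_reset               # reset the neurons that just fired
    drive = W @ spikes                    # direct synaptic communication
    noise = rng.normal(scale=0.3, size=N)
    # each neuron leaks, feels its synaptic input, AND feels the field...
    v += dt * (-v + drive + field_feedback * field + 0.8 + noise) / tau_v
    # ...while the field decays slowly and is driven by population firing
    field += dt * (-field / tau_f + field_gain * spikes.mean())
print(f"field level after {steps} steps: {field:.3f}")
```

The point of the toy is the feedback loop in the last two lines: the field is a non-local variable that every neuron both drives and is driven by, which is what makes these dynamics so hard to capture with the purely synaptic RNNs in use today.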

 

I'm so sorry, to both you and Daniel, that my post describing all this in (some) more detail has been so delayed. But in the meantime, check out this post for a discussion of my understanding of the problem you describe (i.e. modeling the function of a particular brain in complete detail may be NP-hard if attempted with digital circuits), which hints (just a bit) at the solution I'm working on. There is more detail in my long-delayed response to Daniel. In fact, I cut and pasted that link from it...

 

As a teaser for my reply to your thoughtful post, Daniel (which may have to serve as a stand-in for my complete reply for some time...): the above is one of the reasons I believe Koch's older work, which focused on equating various cortical oscillations (most notably gamma oscillations) with the neural correlate of consciousness, is more appealing and parsimonious with the neuroscience data than his collaborative model with Tononi, called the Integrated Information Theory (IIT).

 

But to their credit, Tononi and Koch get (at least) one thing right with the IIT, namely that consciousness is intimately linked to the presence of persistent oscillations in areas of cortex, as the two of them have shown with their Conscious-O-Meter (not to be confused with CRON-O-Meter ☺). They are able to use their version of the COM to distinguish between people who are awake (highly conscious) vs. dreaming (partially conscious) vs. in deep sleep (completely unconscious). That itself is not too impressive - obviously EEG can make those distinctions just fine. But where their COM comes in really handy is with people in persistent vegetative states, whom doctors are trying to assess to determine whether there is any "there there" (e.g. locked-in syndrome), and what their chances of recovery from their catatonia are. Tononi's COM appears capable of helping make that assessment (at least crudely) by inducing a large pulse of neural firing (via transcranial magnetic stimulation (TMS) - not to be confused with the much weaker transcranial direct current stimulation (tDCS) being explored by many (foolish?) home hobbyists and biohackers) and then measuring how long and how widely the cortex "rings" (oscillates) as a result of this disturbance, as measured by EEG. The longer the cortex "rings" (in a complex manner - not stereotyped like in an epileptic seizure), the closer to consciousness the individual. The success of Tononi's COM at predicting level of consciousness based on oscillatory brain patterns is just one more reason why cortical oscillations are strongly implicated as the underlying neural mechanism that mediates consciousness.
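For the curious, the published incarnation of this "ring-measuring" idea is the Perturbational Complexity Index (Casali et al.), which boils down to binarizing the TMS-evoked EEG response and measuring how well it compresses. A crude toy version - not their actual pipeline; in particular, the real PCI derives its threshold from pre-stimulus baseline statistics - might look like:

```python
import zlib
import numpy as np

def crude_complexity_index(evoked, thresh=0.3):
    """Toy stand-in for a perturbational complexity measure.

    evoked: (channels, samples) array of post-pulse EEG. Binarize around a
    fixed threshold, then use zlib's compressed size as a cheap proxy for
    Lempel-Ziv complexity: long, spatially varied "ringing" compresses
    poorly (high index), while a flat response compresses well (low index).
    """
    bits = (np.abs(evoked) > thresh).astype(np.uint8).tobytes()
    return len(zlib.compress(bits)) / len(bits)

rng = np.random.default_rng(2)
t = np.arange(0, 0.5, 1 / 500)                      # 500 ms at 500 Hz
quiet = rng.normal(scale=0.05, size=(32, t.size))   # barely rings: "asleep"
ringing = (np.sin(2 * np.pi * rng.uniform(5, 40, (32, 1)) * t)
           * np.exp(-t / 0.2)
           + rng.normal(scale=0.05, size=(32, t.size)))  # complex ringing
print(crude_complexity_index(quiet), crude_complexity_index(ringing))
```

Run on this synthetic data, the complex ringing scores far higher than the flat response, which is the qualitative behavior the real index shows across awake, dreaming and anesthetized brains.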

 

Finally, Daniel: here is the answer to the easy part of your very thought-provoking first post from many days ago now - a post I'm desperately trying to get to the top of my queue so I can complete a full response, although maybe 'stack' would be a better data structure than 'queue' to metaphorically invoke in this instance:

P.S: How can I embed YouTube links?

 

See this post. It will tell you everything you need to know.

 

See, I haven't forgotten about you. But that was the easy one of your two questions. The other is taking the better part of a book, or at least a book chapter...

 

--Dean

 

-----

1Aside: Does anyone else ever get the feeling "Maybe Dean really is a (ro)bot" when a paragraph like that springs out of my fingers and onto the page? Yeah. Me too. Sometimes I don't know what I think 'til I hear what I say and see what I write. And how the heck did I come to have a background in this neuroscience stuff anyway? Seems like a mighty strange coincidence.


I'm enjoying the Sam Harris podcasts that Dean linked above. Some of it's great. But much of the philosophy jargon sounds like mental masturbation to me. I welcome more focus on how A(G)I might be developed to help us understand human metabolism, and then how to use these future tools for medicine (including maintenance and repair).

 

Does this happen to you? Eg, when I take my mind off AGI, go about my day, then return to thinking about it, it does seem rather "out there..." As if this is too rattled up with imagination like scifi to be of much practical use. Nice to hear someone in one of these Harris podcasts say that, too: "AI seems really far away until I start thinking about it again.."

 

People in my circle seem to regard the approach of AGI like the approach of Doomsday 2012 BS, or "Y2k" or like any of another cry-wolf scenarios we've faced as a hyper culture. Or, like some nerd-fest waste of time obsessing about something that may not happen anytime soon.

 

So many issues have cried wolf, it's understandable why friends roll their eyes at me when I say "...But aren't you thinking strong AI will radically change shit?..." Sighs and eye rolls.

 

I think focus should be on using AI to solve earthly problems (including the huge medical problems this country is now and will be facing)


There is some work being done on AGI safety by a few groups at this point (actually going back, funnily enough, to Y2K, when MIRI was founded). This area has really taken off in the past 18 months or so. A few:

 

https://intelligence.org/

https://www.fhi.ox.ac.uk/about/mission/

http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai

https://openai.com/

 

There's still a ton of work to be done - this is not an easy area of research, and I feel it's still underfunded in comparison to the coming impact of AGI.

 

P.S. Y'all might enjoy Robin Hanson's new book The Age of Em. He's working on another one for next year from the perspective of AGI coming before human emulations.


Sthira,

 

Does this happen to you? Eg, when I take my mind off AGI, go about my day, then return to thinking about it, it does seem rather "out there..." As if this is too rattled up with imagination like scifi to be of much practical use.

 

Yes and no. My current speculations definitely qualify as "out there...". I was talking to my wife and daughter about it at (their) dinner tonight, and told them "I'm either certifiably insane, or I'm onto something really important." I'd lean towards the former, but too many things have been eerily falling into place in my life over the last few weeks to totally dismiss the latter. Plus, I find life a whole lot more amusing living as if it were the latter. Just call me Walter Mitty. I like Walter's company motto from the film version:

 

To see the world, things dangerous to come to, to see behind walls, draw closer, to find each other, and to feel. That is the purpose of life.
 
Do you know, Sthira, by triggering the thought of Walter Mitty, and prompting me to look up and learn about the film version, you've added much fuel to my sense of an ambiguous distinction between fantasies and reality. Take a look at that link to the review of the film version. I was struck by the part about Walter carrying the "ultimate" (photographic) negative of himself in his back pocket the whole time during his many adventures. Really spooky metaphor, in the context of my recent musings...
 
I too have been rather skeptical about the possibility of general AI emerging anytime soon, despite the AlphaGo success I started this thread with, and despite Hassabis' expressed intention for DeepMind to go after AGI next. I figured: Go? Sure. Deep NNs playing Go against each other repeatedly and very rapidly could ratchet up to superhuman levels of play. I get that. But AGI? That takes an incredible amount of random knowledge about the world, which is VERY difficult to program in, as the Cyc project has shown over the years - although Doug Lenat keeps plugging away after 30 years, still trying to explicitly code knowledge into it. Watson on Jeopardy was impressive, but lacked any sort of personality and so was far from an AGI; it took hundreds of people to program and was pretty much a big kluge.
 
But I've changed my mind recently.
 
While Sybil the chatbot was hatched a psycho, that was on account of the data they fed her. Garbage in (movie & TV dialogs), garbage out (psycho chatbot). But the technique they employed (seq2seq - PDF) is pretty sound and generalizable, and most of all doesn't require a whole lot of hand-holding or hand-coding. With better input data, and a better RNN architecture than LSTMs, it could get a lot better, fast. And once a chatbot gets to the point where it can cogently converse with copies of itself, and those copies can surf the web for more data and for new interlocutors, things get pretty interesting... Think AlphaGo-like ramp-up trajectories.
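For the technically inclined, the seq2seq idea itself fits in a few lines. Here's a bare-bones toy reconstruction (mine, based on the paper's description - untrained random weights, greedy decoding, no attention), just to show the encode-then-decode structure:

```python
import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    """Bare-bones encoder-decoder in the spirit of the chatbot paper.

    Toy reconstruction for illustration only: one-layer LSTMs, greedy
    decoding, no attention, untrained random weights.
    """
    def __init__(self, vocab_size=1000, emb=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src_tokens, max_reply_len=10, bos_token=1):
        # Encode the whole input utterance into the LSTM's final state.
        _, state = self.encoder(self.embed(src_tokens))
        # Decode greedily: feed each predicted token back in as input.
        token = torch.full((src_tokens.size(0), 1), bos_token,
                           dtype=torch.long)
        reply = []
        for _ in range(max_reply_len):
            dec_out, state = self.decoder(self.embed(token), state)
            token = self.out(dec_out).argmax(dim=-1)
            reply.append(token)
        return torch.cat(reply, dim=1)

model = TinySeq2Seq()
print(model(torch.randint(0, 1000, (1, 7))))  # a random "reply" to 7 tokens
```

Everything interesting lives in the training data and the scale; the architecture itself is almost embarrassingly simple, which is part of why I say it generalizes.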
 
So no, I'm not nearly as skeptical about the possibility of AGI emerging in the not-too-distant future as I once was, even up until a few days ago, when several key aspects of all this finally clicked into place in my mind. Your reminder of Google's abortive 2015 chatbot was the final piece of the puzzle.
 
Needless to say, if a chatbot were to gain sentience, one would hope, even expect, that whoever creates it will keep it (would have kept it?) in an AI sandbox, or use one of the other countermeasures / built-in limitations being pursued by the Safe-AI crowd (including Bostrom) that BrianA points to. Better safe than sorry...
 
But then, if an AGI does emerge and (virtually) walks among us, what does that mean for our ontological status - i.e. for the people living in the same reality frame as the AGI?  If you harbor hope of your world existing at "ground level" and being "real" (whatever that means given the apparent quantum nature of matter...), you'd better hope that an AGI doesn't emerge, because that would seem to almost guarantee you're living in a simulation. Plus, this is one instance where it could matter whether or not you're living in a simulation, Sthira.
 
If you are living in a simulation that is serving as the containment facility for a nascent AGI during its R&D testing phase, things aren't as secure as they would be if your simulation were in the hands of a fully-mature, post-singularity, God-like superintelligence. If the proto-AGI goes off the rails, the engineers tweaking it could decide to pull the plug on the simulation at any time, and roll back to an earlier checkpoint. Of course, you wouldn't notice, which is probably a good thing. Reality would simply blink out too quickly for it to register with anyone or anything. But it is nonetheless disconcerting to think about. If there were a developmental hurdle that was particularly difficult for an AGI to get past, you'd expect the simulation to be running up against it repeatedly, only to be restarted at a point a short time earlier. So entities existing inside the simulation would naturally find themselves uncomfortably close to the emergence of the first AGI.[1] Ever see the movie Groundhog Day? Great flick. I love Bill Murray. Caddyshack. Classic. Dalai Lama. "So I've got that goin' for me, which is nice..." But I digress.
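Just to make that rollback mechanic concrete, here's a toy sketch of the checkpoint-and-restore loop I'm imagining. Everything in it is invented for the sake of the metaphor; obviously no one knows what real simulation infrastructure, if any, looks like:

# Purely illustrative toy of the checkpoint/rollback pattern described above.
# Every name and number here is made up for the metaphor.
import copy
import random

def run_until_agi_milestone(world):
    # Advance the simulation one epoch; return True if the nascent AGI
    # clears the developmental hurdle, False if it goes off the rails.
    world["epoch"] += 1
    return random.random() < 0.05  # a hard hurdle: ~5% success per attempt

world = {"epoch": 0}
checkpoint = copy.deepcopy(world)  # the last known-good snapshot

attempts = 0
while not run_until_agi_milestone(world):
    attempts += 1
    # Roll back. Nobody inside notices, but from outside, the same short
    # stretch of history has now replayed `attempts` times.
    world = copy.deepcopy(checkpoint)

print(f"History replayed {attempts} time(s) before the AGI got through")

From the inside, the statistics look exactly like what I described: whenever you find yourself somewhere in history, you're disproportionately likely to be in the stretch just before the hurdle.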
 
In the case of such a Groundhog Day loop, the people around the AGI should be motivated to help it get past the hurdle (e.g. to keep it from going insane), so as to get out of their tight endless cycle and move on with their own lives. That seems a little ironic, given the intuition that we should oppose any AGI that gets released into the wild, because it might be so dangerous. In fact, its survival and flourishing might be the only thing keeping its reality frame intact. Go figure. Funny how counterintuitive conclusions can emerge when you think things through. I wonder if such mass approbation and co-dependence could be parlayed into world domination... But then again, that outcome might be grounds for plug-pulling. Or maybe not. Maybe strong and benevolent leadership is what humanity needs. Maybe such a benign AGI dictator is exactly what the simulation's creators are hoping to explore, in a safe sandbox. Sorry, just thinking out loud. Kinda makes one's head spin...
 
Hmmm... This whole "bumpin' up against an AGI in a tight, Groundhog Day looping sandbox" thing seems a little spooky, given my newfound optimism <sic?> about the possibility of an AGI emerging sooner rather than much later...
 
But then again, perhaps sighs and eye rolls are the best response to all this.
 
I actually might learn more tomorrow about the legitimacy (or not) of these ideas. If I suddenly go silent about all this, you can take that as a bad (or good) sign, depending on your perspective. How about that for a maximally-ambiguous, self-undermining statement? ☺
 
--Dean
 
P.S. Right after I finished writing the stuff about Groundhog Day, this showed up in my daily email digest of interesting Medium stories. No joke. Spooky... Jeepers. This one too - note its title.
 
------------
[1] Here is another, even more unsettling implication. Denizens of a simulation should also expect to find themselves living uncomfortably close to a potentially cataclysmic series of events that could set humanity back centuries or wipe it out entirely, but which, ironically and in hindsight, might have been relatively easily prevented with a little foresight and rational thinking. Why?
 
Because would-be AGI developers who see the "writing on the wall" shortly before (or during) such a disaster would desperately try to accelerate their progress towards realistic world simulations running faster than real-time, and towards a rapid "hard takeoff" AGI launch, in order to model various scenarios in their own reality frame as they unfold, in hopes of averting whatever cataclysm awaits them (us) with the help of AGI. This is really getting speculative, but can anyone think of a potentially YUGE decision (note - that Youtube video was uploaded by "GOD•"...) that will occur on the world stage within, say, the next 55 days, and that could very well set in motion a chain of events with horrendous negative consequences down the road for the future of humanity?
 
Not to alarm anyone, but much to my horror, the logical prediction of this crazy theory is that Trump is going to win the election, setting in motion a pell-mell, desperate, and secret scramble in the tech community to accelerate AGI development to try to undo (or prevent) the damage that Mr. "We've got nukes, why can't we use 'em?" Trump will (almost) surely do in his first year or two as President. For anyone not following along (but who's gotten this far), we may be living in one of those cobbled-together simulated worlds, with highly-motivated computer scientists scrambling (both inside and above) to develop and leverage AGI ASAP.

Hi All,

 

Just a quick update. Very busy. I've learned of a very interesting effort within Google to build chatbots. Guess who's leading it? Mr. Singularity himself and Director of Engineering at Google, Ray Kurzweil. The article describes Ray this way:

 

These days he is probably best known as a prophet of The Singularity, one of the leading voices predicting that artificial intelligence will soon surpass its human creators — resulting in either our enslavement or immortality, depending on how things shake out.

 

The most interesting thing about the effort is that these chatbots will be tailored to mimic specific people, either real or fictional. Oddly, the video of Ray talking about it is no longer available on that page ("the video is no longer available..."), but there is a quote from Ray that gives a little more detail (my emphasis):

 

One of the bots will be named Danielle, and according to Kurzweil, it will draw on dialog from a character named Danielle, who appears in a novel he wrote — a book titled, what else, Danielle. Kurzweil is a best selling author, but so far has only published non-fiction. He said that anyone will be able to create their own unique chatbot by feeding it a large sample of your writing, for example by letting it ingest your blog. This would allow the bot to adopt your "style, personality, and ideas."

 

Needless to say, Google-created chatbots modelled on specific individuals, built from information extracted from their writings, are precisely what I was talking about in my previous post. In fact, this is the next logical step in the sequence of events I outlined...
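If the quote is accurate, the recipe is presumably: take a generic pre-trained dialog model (like the seq2seq toy I sketched earlier) and keep training it on a single person's corpus. Here's a hand-wavy sketch of what that fine-tuning step might look like; the persona_corpus and tokenizer helpers are hypothetical stand-ins, and none of this is Ray's actual code:

# Hypothetical sketch of the "persona bot" recipe implied by the article:
# continue training a generic dialog model on one person's writing so it
# absorbs their "style, personality, and ideas." All helpers are invented.
import torch

def fine_tune_persona(model, persona_corpus, tokenizer, steps=1000, lr=1e-4):
    # persona_corpus.sample_pair() is a hypothetical helper returning a
    # (prompt, response) pair mined from the person's blog, emails, etc.
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        prompt, response = persona_corpus.sample_pair()
        src = tokenizer.encode(prompt)    # hypothetical: returns a batched
        tgt = tokenizer.encode(response)  # tensor of token ids
        logits = model(src, tgt[:, :-1])
        loss = torch.nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), tgt[:, 1:].reshape(-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model  # now biased toward the persona's vocabulary and opinions

Architecturally there's nothing new here; the persona is just a bias in the training data, which is why ingesting someone's blog could plausibly be enough.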

 

I'm looking further into the efforts of Ray's group.

 

--Dean


  • 1 month later...

News? Anyone getting closer to the singularity yet? I like what Obama has been saying recently about AI and the sing. Too bad he's leaving office and Hillary's about to, um, grace us with US government updates.

 

Meanwhile, evidently China is moving faster than we are (surprise, surprise):

 

China Has Overtaken the U.S. In AI Research

http://futurism.com/?p=55960
