
Sthira,

 

Thanks for the pointer to the assessment of the Chinese AI efforts, and their heavy investment in deep learning. I'm not at all surprised. The field is moving incredibly fast. I've been particularly focused on image understanding. Deep neural networks with recurrence are really starting to be able to understand and answer tough questions about photos and videos. Here are a few examples from a new paper by folks at Stanford showing what state of the art AI systems based on deep neural networks can do when it comes to answering questions about what's going on in images. The answers in the bottom line labeled 'M' are those generated by their AI system in response to the question at the top about the picture in each column:

 

[image: CM10tEi.png]

 

Here is a cute example of image-based Q&A:

 

[image: f9EkiyJ.png]
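Under the hood, systems like these fuse image features with a question encoding and classify over an answer vocabulary. Here's a toy sketch of that pipeline (not the Stanford group's actual architecture - the features, vocabulary, and dimensions are all invented stand-ins):

```python
import numpy as np

# Toy sketch of image-based Q&A: fuse an image-feature vector with a
# bag-of-words question encoding, then classify over a small answer
# vocabulary. (Real systems use a CNN for the image and a recurrent net
# for the question; both are stand-ins here, as are the feature
# dimensions and the tiny vocabulary.)

rng = np.random.default_rng(0)

ANSWERS = ["red", "blue", "two", "three"]
VOCAB = ["what", "color", "how", "many", "objects"]

def encode_question(words):
    """Bag-of-words question encoding (a stand-in for an RNN)."""
    v = np.zeros(len(VOCAB))
    for w in words:
        v[VOCAB.index(w)] += 1.0
    return v

def make_example():
    """Fake 8-dim 'CNN features' plus a question, with the right answer."""
    img = rng.normal(size=8)
    if rng.random() < 0.5:
        q, ans = encode_question(["what", "color"]), "red" if img[0] > 0 else "blue"
    else:
        q, ans = encode_question(["how", "many", "objects"]), "two" if img[1] > 0 else "three"
    return np.concatenate([img, q]), ANSWERS.index(ans)

data = [make_example() for _ in range(2000)]
X = np.array([d[0] for d in data])
y = np.array([d[1] for d in data])
Xtr, ytr, Xte, yte = X[:1500], y[:1500], X[1500:], y[1500:]

# Train a softmax classifier on (image features + question) -> answer.
W = np.zeros((X.shape[1], len(ANSWERS)))
for _ in range(500):
    logits = Xtr @ W
    logits -= logits.max(axis=1, keepdims=True)
    P = np.exp(logits)
    P /= P.sum(axis=1, keepdims=True)
    P[np.arange(len(ytr)), ytr] -= 1.0          # softmax cross-entropy grad
    W -= 0.1 * Xtr.T @ P / len(Xtr)

acc = float(np.mean((Xte @ W).argmax(axis=1) == yte))
print(f"held-out answer accuracy: {acc:.2f}")
```

The point isn't the tiny classifier - it's that the answer depends jointly on the image features and the question, which is exactly what makes the task harder than plain object recognition.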

 

 

Perhaps even more impressive are these results from researchers at CMU, Stanford, and the Allen Institute for AI. They are basically building on DeepMind's success in applying deep reinforcement learning to master video games and Go, applying it to the real-world task of indoor navigation (e.g. "go to the fridge and fetch me a beer") by training a neural network in incredibly realistic simulated environments. Here is a video of their work:

 

 

After training on literally hundreds of millions of trial-and-error time steps in this simulated world, the deep neural network is able to learn how to navigate to new places in the environment, including ones it wasn't trained on (e.g. "go to the microwave oven" even though "microwave oven" wasn't a target it had navigated to before). It even generalizes to navigating an actual robot to targets in the "real" world.
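For anyone curious what all those trial-and-error time steps amount to, here is the core reinforcement-learning loop in miniature (the paper's agent uses deep networks and photorealistic indoor scenes; the grid, rewards, and goal below are invented stand-ins):

```python
import numpy as np

# Tabular Q-learning on a toy gridworld "navigation" task: the agent
# learns purely by trial and error to reach a target cell.

rng = np.random.default_rng(1)
N = 5                                          # 5x5 grid of "rooms"
GOAL = (4, 4)                                  # e.g. "the fridge"
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

def step(state, a):
    r, c = state
    dr, dc = ACTIONS[a]
    s2 = (min(max(r + dr, 0), N - 1), min(max(c + dc, 0), N - 1))
    return s2, (1.0 if s2 == GOAL else -0.01), s2 == GOAL

Q = np.zeros((N, N, len(ACTIONS)))
for episode in range(2000):
    s = (int(rng.integers(N)), int(rng.integers(N)))   # random start
    for _ in range(50):
        # epsilon-greedy: mostly exploit, occasionally explore
        a = int(rng.integers(4)) if rng.random() < 0.1 else int(Q[s].argmax())
        s2, reward, done = step(s, a)
        Q[s][a] += 0.5 * (reward + 0.9 * Q[s2].max() * (not done) - Q[s][a])
        s = s2
        if done:
            break

# Greedy rollout from the far corner should now reach the goal.
s, steps = (0, 0), 0
while s != GOAL and steps < 20:
    s, _, _ = step(s, int(Q[s].argmax()))
    steps += 1
print(f"greedy policy reached the goal in {steps} steps")  # shortest path is 8
```

The generalization to unseen targets in the paper comes from replacing the lookup table with a deep network that takes the target as an input - but the underlying reward-driven loop is the same.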

 

This idea of training AIs in realistic simulated worlds is exactly the path I've been predicting we'll follow towards AGI and the Singularity. Here is what I wrote yesterday to a skeptical colleague at one of the large tech companies doing this research:

 

In a comment yesterday [my colleague] said:

 

BTW I don't think it's [running multiple agents in the same simulated environment so they can learn from interactions with one another] as obvious an idea as you say; the DeepMind folks have said that they are working on [reinforcement learning] for agents but it seems more like for inter-agent communication.

 

Seriously? This seems pretty naive. For example, the simulated worlds created inside video games like Grand Theft Auto 5 keep getting more realistic, and more disturbing in the way both human player-driven and AI agent-driven interactions unfold… Here is one such example (Motherboard article) using Grand Theft Auto 5:

 

In this new video, RZED tries to find out just how large a group of poor San Andreans it will take to bring the game's 747 airliner analog to a screeching halt.

 

 

Do you really think it will be very long before AIs start interacting with other AIs, and simulated humans and even other real humans (in virtual worlds) in serious AI, ML, and ANN research?

 

In fact, it’s already happening, in a serious military context no less, as this article on a fighter pilot AI discusses:

 

A.I. DOWNS EXPERT HUMAN FIGHTER PILOT IN DOGFIGHT SIMULATION - SUPERHUMAN REFLEXES

 

Obviously that’s just one unclassified example. Who knows what’s happening in secret.

 

The final paragraph on this blog post on deep reinforcement learning and A3C by Adrian Colyer is clearly pointing in this direction too:

 

And on the topic of computers learning to play games, since Go has now fallen, when will we see a reinforcement learning system beat the (human) champions in esports games too? That would make a great theatre for a battle ☺.

 

In short, it appears to me that we’re on the verge of an explosion of research into agent-to-agent and agent-to-human interactions inside simulated worlds. Using a [technical ideas deleted], it seems to me progress could be made very quickly, with potentially scary consequences, especially when these agents start to build and iteratively improve models of both themselves, other agents and people…

 

In this article, Yoshua Bengio [top neural network researcher at the University of Montreal] seems to have a pretty cavalier attitude towards getting a handle on such scary developments:

 

Bengio argues that trusting a computer is no different, or more dangerous, than trusting another person. “You don’t understand, in fine detail, the person in front of you, but you trust them,” Bengio continued. “It’s the same for complicated human organizations because it’s all interacting in ways nobody has full control over. How do we trust these things? Sometimes we never trust them completely and guard against them.”


The difference that Bengio doesn’t seem to get is that humans have millions of years of shared biological evolution and thousands of years of shared cultural evolution that have finely tuned us to think in ways similar to other people, hold them in regard, and interact with them (relatively) successfully. Who knows what crazy ideas self-improved ANN-based AGIs will come up with - their equivalent of trying to stop a 747 by lining people up on the runway…

 

In short, from my (semi-)insider perspective, it appears the ability of AIs based on deep reinforcement learning to understand and interact with the world is accelerating rapidly, thanks to extremely realistic simulations and real-world data. This sort of "large 3D, time-extended virtual experience" is also exactly what Stuart Russell (a leader in the field of AI, and author of the #1 AI textbook, Artificial Intelligence: A Modern Approach) calls for, starting at 59 minutes into this talk he gave at MIRI (the Machine Intelligence Research Institute), one of the leading "Safe AI" research groups:

 

 

Russell says:

 

Imagine a large library of decision-making scenarios, represented by a large 3D virtual reality experience that would go on for some time, and the robot would have to be deciding how to behave in this scenario, kinda like a driving test. So at least you know you've got 100,000 scenarios, and you are sure that across all these it is behaving well. That would be a good test for a domestic robot, and a self-driving car as well.
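Russell's "driving test" idea is easy to sketch: generate a large battery of randomized scenarios and require the agent to behave well in every single one. A minimal mock-up (the scenario parameters and the toy policy are invented):

```python
import random

# Sketch of scenario-battery testing: before trusting an agent, run it
# through a large set of randomized scenarios and count failures.
# The scenario ("pedestrian distance, speed limit") and the toy policy
# below are stand-ins, not anyone's real test suite.

def make_scenario(rng):
    """A scenario is just a pedestrian distance (m) and a speed limit."""
    return {"pedestrian_dist": rng.uniform(1, 100),
            "speed_limit": rng.uniform(20, 60)}

def agent_policy(scn):
    """Toy policy: slow down near pedestrians, obey the limit."""
    return min(scn["speed_limit"], scn["pedestrian_dist"] * 2)

def behaves_well(scn, speed):
    """Safety spec: never speed, and crawl when a pedestrian is close."""
    return speed <= scn["speed_limit"] and (scn["pedestrian_dist"] > 10 or speed < 25)

rng = random.Random(0)
scenarios = [make_scenario(rng) for _ in range(100_000)]
failures = sum(not behaves_well(s, agent_policy(s)) for s in scenarios)
print(f"failures in {len(scenarios)} scenarios: {failures}")
```

The hard parts in practice are generating scenarios that actually cover the long tail, and writing a `behaves_well` predicate you'd stake lives on - but the harness itself really is this simple.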

 

In fact, it makes me more convinced than ever that Elon Musk is right, that we are already living in one such simulation ourselves right now, and in fact we are exactly the kinds of AGI agents that we AI researchers are striving to build. In fact, Elon Musk and his co-sponsor of OpenAI (Sam Altman) appear to be so convinced of this scenario that they are sponsoring research to help us "break through" the simulation and communicate with whoever/whatever is "running" it (from this BusinessWeek summary of the full story in the New Yorker):

 

In the New Yorker story, Altman discusses his theories about being controlled by technology and delves into the simulation theory, which is the idea that human beings are unwittingly just the characters in someone else's computer simulation.

 

The simulation idea has gained popularity among Silicon Valley's elite — it's so popular that even Elon Musk is tired of talking about it. But now, a few of the tech industry's billionaires might be doing something about it.

 

Here's how the New Yorker tells it: 

 

Many people in Silicon Valley have become obsessed with the simulation hypothesis, the argument that what we experience as reality is in fact fabricated in a computer; two tech billionaires have gone so far as to secretly engage scientists to work on breaking us out of the simulation.

 

The piece doesn't give any clue as to who those two billionaires are — although it's easy to hazard a few guesses at who they might be, like Musk himself or Altman's friend Peter Thiel — but it's fascinating to see how seriously people are taking this theory. According to Musk, it's the most popular topic of conversation right now.

 

Obviously if we are living in a simulation, it's a bit risky to try to mess with the simulation, since our status could be rather precarious. So why do it? Partially out of curiosity, no doubt. But also, presumably, because if we could "break out" or "break through" to communicate with whoever/whatever is running our world, we might be able to convince them to modify it in ways that would make things better for humanity. Of course, they could also just decide to pull the plug.

 

As I've said previously, we live in interesting times. I'd even go so far as to say this is a very special time in human history. This itself is more evidence that all this is a giant simulation, since given how many humans have ever lived, it would otherwise be an incredible coincidence that we just happen to be lucky enough to live during a period of such momentous developments...

 

--Dean


Cool stuff. Just finished reading Hanson's The Age of Em - the scenario tackled in it is what might happen if emulated human brains happen before we have "hand coded" AGI. One way he suggests for us soon-to-be-unemployed humans to stay rich (for a while, until we squander it one way or another) is to invest in tech companies. I guess based on this deep learning research, one good area to invest in might be GPU manufacturers. High-powered GPUs would also be in demand in an "Em"-style economy.


BrianA,

 

I like Robin Hanson, and I've got Age of Em on order through my library. Just watched this interview he did with Nikola (aka Socrates) on Singularity1-on-1:

 

 

I used to agree with him that we were likely to get to AGI via whole brain emulation (Ems) faster than via other routes. But having seen the rapid progress being made in machine learning and self-improving systems in the style of AlphaGo (e.g. deep reinforcement learning and generative adversarial networks), I now think he's probably wrong. That is, I think we'll achieve AGI long before we have the technology to non-destructively scan and simulate the human brain. But his thought experiment about what a world of cheap, easily copied, and hyper-fast simulated human minds would be like is fascinating, and well worth thinking about even if the future is unlikely to unfold the way he suggests.

 

--Dean


If we were interested in learning more about the singularity, and specifically how it relates to our lives if we're living in a simulation, what currently available games would you recommend?

 

I like this one -- although it's not a game, and I'm not sure it's available yet -- it appears to be a virtual reality tour of the human body, which may help us learn more creatively how the body works and (hopefully) find ways to slow, stop, or reverse its aging (I'm optimistic), or better ways to visualize repair of damage:

 

Take A Tour Of The Human Body With The Help Of Virtual Reality

http://futurism.com/?p=56159

 

Also, maybe old news to many of you, but here's Bruce Lipton and a physicist discussing the idea that we're living in virtual reality:

 

https://youtu.be/dV95anuFDQU


Sthira,

 

I'm not a big gamer myself, but the three games I think best reflect important aspects of where we are headed are:

 

No Man's Sky - Discussed earlier in this thread. The gameplay isn't supposed to be great, but the idea of procedurally generated worlds will become more and more important as time goes on. In fact, I believe our entire world, and the underlying laws of physics, are being procedurally generated, perhaps by something analogous to a deep neural network. That's why Max Tegmark [1] and others have observed the uncanny ability of deep neural networks to model physical reality.

 

Eve Online - Shows how virtual worlds, combined with human imagination, can make for compelling (i.e. addictive) experiences, as illustrated by this recent article about the problem of gambling addiction inside Eve Online's virtual casinos. World of Warcraft is another example of the way people can build their entire social life around game worlds. Unfortunately this is starting to have pretty serious real-world consequences for a generation of young men.

 

Grand Theft Auto V - Finally, if you want to be blown away by what's possible now in the way of rich and realistic virtual worlds, GTA is about the best example out there. The runway video experiment I posted a couple posts back is one scary and whimsical example. But serious scientists are using GTA's realistic graphics to train neural network based vision systems - showing a stunning convergence of real and virtual worlds on the path towards AGI.

 

Just like DeepMind was able to teach computers to play Atari games at a superhuman level, these same reinforcement learning and deep neural network techniques are starting to be used to train AGIs to interact with people in realistic environments like GTA and Eve Online.

 

--Dean


Thanks, Dean. And as always thank you for your wonderful sharing, I'm grateful.

 

So it looks like it's still too early in VR world for what I'm looking to find. I'm not a gamer, either, and have no interest in violence. If given the choice between violent but educational vr and nothing, I'll choose nothing. We need uplifting and basically good vr games that appeal to what's best in people -- this world needs no more senseless violence no matter how wowsie it displays.

 

Are there any women in the AI and VR fields? Steven Pinker was mocked when he suggested our upcoming AGI robots might be nicer beings if more women were involved in their designs, and I think he's exactly 100% correct.

 

VR games about gangsta car wars and killing everything that moves in outer-space dramas are completely worthless and destructive to life.


Hanson is flexible enough to consider arguments from the other side; in fact, he was recently awarded a grant to work on a follow-up book to Em that will focus on "what if AGI comes before Ems". Perhaps that book will be out around 2018.

 

He does lay out some explanation near the beginning of Em why he thinks WBE will come before made-from-scratch AGI. Curious what you think once you get hold of a copy.


Over on the New Leaf thread, TomB wrote about his skepticism regarding the prospects for strong AI anytime soon. I'm responding here since it seems like the more appropriate venue.

 

Tom,

 

I too have been watching (and modestly contributing to) AI research since the mid-80s, and share some of your skepticism. But recent (e.g. last 6-12 months) rapid technical developments in machine learning (e.g. deep convolutional NNs, deep reinforcement learning, Generative Adversarial Networks (GANs), Variational Autoencoders, recurrent LSTMs, etc.) have changed my mind, since they have made iterative self-improvement a reality for many previously unsolved (some have said nearly "unsolvable") problems like Go.

 

These techniques are now being refined, improved, and applied to real-world tasks like vision, navigation, and natural language understanding. Here is just one example of the impressive image-based Q&A now possible with these techniques:

 

[image: g7Z6Ez9.png]

 

The combination of GANs, deep reinforcement learning, and realistic physical simulations (like AI2-THOR, shown in the video below) is making it possible for machines to learn some amazing things and self-improve at a rapid pace. These techniques are beginning to make possible deep understanding of video, physical environments, and natural language. The fact that they are unsupervised / self-supervised learning techniques makes it possible that researchers will find ways to rapidly bootstrap these AI systems to levels of competence that will surprise many people, just as happened with AlphaGo.
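The "self-supervised" part deserves a concrete illustration: the simplest such objective is reconstruction, where the data itself supplies the training signal and no labels are needed. A linear autoencoder sketch (GANs and VAEs are much richer versions of the same label-free idea; the data here is synthetic):

```python
import numpy as np

# Self-supervised learning in miniature: a linear autoencoder learns to
# compress and reconstruct its inputs with no labels at all -- the data
# itself is the training signal.

rng = np.random.default_rng(0)

# 10-dim data with hidden 2-dim structure.
Z = rng.normal(size=(500, 2))
A = rng.normal(size=(2, 10))
X = Z @ A

enc = rng.normal(size=(10, 2)) * 0.1    # encoder: 10 dims -> 2-dim code
dec = rng.normal(size=(2, 10)) * 0.1    # decoder: code -> reconstruction

def mse(X, enc, dec):
    return float(np.mean((X @ enc @ dec - X) ** 2))

before = mse(X, enc, dec)
lr = 0.05
for _ in range(1000):
    H = X @ enc                    # codes
    R = H @ dec                    # reconstructions
    G = 2 * (R - X) / X.size       # gradient of the mean-squared error
    g_dec = H.T @ G
    g_enc = X.T @ (G @ dec.T)
    dec -= lr * g_dec
    enc -= lr * g_enc
after = mse(X, enc, dec)
print(f"reconstruction MSE: {before:.3f} -> {after:.3f}")
```

Nobody ever told the network what the two hidden factors were; squeezing the data through the 2-dim bottleneck forces it to discover them.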

 

Such rapid progress may still be decades off, but Google/DeepMind, Facebook, Baidu, Amazon, Microsoft, and OpenAI are literally pouring billions of dollars into these machine learning techniques. Undoubtedly DARPA and the NSA are doing the same. So despite my initial skepticism (up until about a year ago), I'm much less sanguine about the idea that we've got decades to worry about how to make sure the AIs we are on the road to creating turn out to be "friendly".

 

Note - AI systems don't have to have completely general intelligence or human-like drives/motives to be quite troublesome / disruptive. Just look at the Dyn distributed denial of service (DDoS) attack earlier this week that nearly shut down the internet in the US for a day, orchestrated by a small group of people controlling hundreds of thousands of IoT devices (e.g. security cameras) via a botnet. It is a good example of what's possible without too much in the way of "smarts".

 

Your analogy with fusion is a poor one. Machine learning is more like fission. Once the right algorithm for iterative self-improvement is discovered, things could get out of hand quite quickly, given all the data out there for such systems to learn from. I now see several pretty viable approaches to achieving this sort of self-improvement. This has started to get me both interested and worried. Hence my new/renewed focus.

 

--Dean

 


Again, I'm coming at it from a more conceptual point of view. I'm questioning the very meaning of the "Intelligence" part of AI. As I said before, data processing and everything associated with it, including deep learning, is just one - albeit important - part of intelligence. I don't doubt for a second that there is remarkable progress, indeed breakthroughs, in the data-processing part of what constitutes intelligence. But again - that's irrelevant to my objection to AI research as it's been conceived, which makes the research way, WAY, too narrowly focused.

 

To use an analogy. Imagine that we are trying to build a car that can take passengers and travel along roads. Our intrepid researchers built an interesting engine. But just the engine, and nothing else that constitutes a "car", no chassis, no wheels, no steering and no driver. And they claim they are very close to building a car. When someone asks why do they feel they are close, they point to the engine becoming ever more powerful. That's not the point. The engine is just part of it. No matter how powerful, it will always remain only a part of what makes a car. 

 

Or to bring things closer to computer science. Imagine someone has built a calculator and says: "well, here's AI!". Sure, being able to make mathematical calculations is a characteristic of human intelligence. But it is not the entirety of it - it's barely a small part of it. Telling us that the calculations are now so many petaflops and so much faster than any human still doesn't get us one iota closer to what constitutes AI, it's just a faster calculator.

 

I see current AI research - indeed as outlined from the beginning, even in the days of Minsky (RIP) and even earlier von Neumann up till today - as one giant calculator; qualitatively a galaxy away from what makes "intelligence". Yes, data processing, deep learning, pattern recognition and on the fly algorithm creation will result in amazing expert systems (indeed it already has). But that has only a glancing relationship with creating motivated intelligence. Data collection and manipulation without self-articulating goals is nothing but a smarter pocket calculator (which is what your smart phone essentially is). 

 

We are as far from true AI as medieval monks were from creating artificial life in the form of homunculi composed of straw and flies. Yes, it is organic matter, but that's all it is. We are at the stage of straw and flies. Give it another 300 years and then we'll talk AI.


 

Why is there rope?

w/o image:  Machine, "To hang people."

with image: Machine, "To hang people."

 

Dean, you are right that AI is making progress at a frighteningly quick pace. Perhaps Stephen Hawking is smarter than we know...


All,

 

Sthira wrote:

 Why do you think it's still decades away and not more rapidly approaching? 

 

I guess I didn't express myself clearly enough. I've been trying to say that strong, self-improving AI might still be decades away, but I now believe it might be closer than many people (even experts) think - hence the title for this thread.

 

TomB wrote skeptically but quite cogently:

 I'm questioning the very meaning of the "Intelligence" part of AI...


To use an analogy. Imagine that we are trying to build a car that can take passengers and travel along roads. Our intrepid researchers built an interesting engine. But just the engine, and nothing else that constitutes a "car", no chassis, no wheels, no steering and no driver. And they claim they are very close to building a car. When someone asks why do they feel they are close, they point to the engine becoming ever more powerful. That's not the point. The engine is just part of it. No matter how powerful, it will always remain only a part of what makes a car. 

 

Tom, I don't think there is any "secret sauce" to apparently intelligent behavior. What we consider intelligence is just a bunch of clever heuristics partly hard-coded into us by evolution, and running on a fairly general purpose learning algorithm / architecture that allows us to (modestly) adjust our behavior to new circumstances and learn from experience.

 

Complicated, yes, but perhaps some of the complexity of how mother nature implemented intelligence in humans is simply an artifact of our convoluted evolution, rather than a result of the intrinsic complexity of the problem, or how difficult it is to solve. In short, I'm beginning to suspect there may be an easier, more direct path to general, self-improving intelligence than the path humanity has taken. So (unlike the "whack-a-mole" problem of human metabolism), there may be a way to crack the nut of AGI without recapitulating each and every quirk of human cognition and psychology in detail or resorting to whole brain emulation. Just like we solved powered flight without recapitulating feathers or flapping wings tugged by contracting muscles.

 

AlphaGo is one example, but you are right - despite its complexity, the game of Go is quite a narrow domain. What is impressive about AlphaGo is not that Go was conquered by a machine, but how it was conquered. In short, it was learned: bootstrapped using lots of data and a self-improving learning algorithm/architecture, rather than hand-programmed and hand-tuned. Similarly for DeepMind's conquering of all the Atari games.
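The self-play bootstrapping idea can be shown in miniature. To be clear, this is not AlphaGo's algorithm (no deep nets, no tree search) - just tabular Monte-Carlo learning playing against itself on one-pile Nim (take 1-3 stones per turn; whoever takes the last stone wins; optimal play leaves your opponent a multiple of 4):

```python
import random

# Self-play learning in toy form: one shared policy plays both sides of
# one-pile Nim and improves from nothing but its own game outcomes.

rng = random.Random(0)
Q = {}   # Q[(pile, move)] -> estimated value for the player to move

def best_move(pile):
    moves = range(1, min(3, pile) + 1)
    return max(moves, key=lambda m: Q.get((pile, m), 0.0))

for game in range(20000):
    pile, history = rng.randint(1, 15), []
    while pile > 0:
        # both "players" share one policy: mostly greedy, 20% exploration
        m = rng.randint(1, min(3, pile)) if rng.random() < 0.2 else best_move(pile)
        history.append((pile, m))
        pile -= m
    # Whoever moved last took the last stone and won; walk the game
    # backwards, flipping the reward's sign between the two players.
    reward = 1.0
    for (p, m) in reversed(history):
        old = Q.get((p, m), 0.0)
        Q[(p, m)] = old + 0.1 * (reward - old)
        reward = -reward

print(best_move(5))   # optimal play takes 1, leaving a pile of 4
```

No game knowledge was programmed in beyond the rules; as the shared policy improves, it generates harder opponents for itself - the same virtuous cycle that drove AlphaGo, minus several orders of magnitude of sophistication.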

 

One key advance we haven't seen before in AI research is that progress is really starting to become cumulative, both in terms of techniques (i.e. new algorithms built as improvements on prior algorithms) but perhaps as importantly, in terms of solutions. As a concrete example, consider the recent rapid progress towards self-driving cars. Watch and listen to this video by the CEO and an engineer at NVIDIA talking about their self-driving car development effort starting at 27:45 and going to about 32:00:

 

https://www.youtube.com/watch?v=KkpxA5rXjmA

 

They did what is very common now in machine vision research. They started (seeded) their deep neural net-based vision system for self-driving cars with the best visual feature detectors available, derived through many man-years of effort by other researchers solving the object recognition problem. For anyone familiar with the field, NVIDIA started with the feature detectors developed for the NN that won the most recent iteration of the ImageNet Challenge - a competition to see whose vision system can best classify the objects contained in 1.2 million images into 1000 different categories.
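The "seeding" trick boils down to freezing a feature extractor trained on one task and training only a small new head for the next task. A toy version (the "pretrained" extractor below is a stand-in random projection, not real ImageNet features):

```python
import numpy as np

# Transfer learning in miniature: freeze a feature extractor trained
# elsewhere and train only a small new head on top of it for a new task.

rng = np.random.default_rng(0)

pretrained = rng.normal(size=(20, 5))     # frozen "pretrained" weights

def features(x):
    """Fixed feature extractor -- never updated during the new task."""
    return np.tanh(x @ pretrained)

# New task: labels defined by an (unknown) rule in feature space.
w_true = rng.normal(size=5)
X = rng.normal(size=(1000, 20))
F = features(X)
y = (F @ w_true > 0).astype(float)

# Train only the head: plain logistic regression on the frozen features.
w = np.zeros(5)
for _ in range(200):
    p = 1 / (1 + np.exp(-F @ w))
    w -= 0.1 * F.T @ (p - y) / len(F)

acc = float(np.mean((F @ w > 0).astype(float) == y))
print(f"new-task accuracy with a frozen front end: {acc:.2f}")
```

Because only the tiny head is trained, the new task needs far less data and compute than training from scratch - which is exactly why NVIDIA could move so fast.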

 

By building on the prior work of others, both in terms of algorithms and feature detectors, NVIDIA was able to develop car detectors, pedestrian detectors, and pixel-level NN classifiers (to distinguish road signs, buildings, pavement, other cars, bicyclists, etc.) in a very short amount of time, enabling them to develop fairly competent self-driving car technology in a matter of months, starting from scratch.

 

As another closely-related example, I'm pretty critical of Elon Musk's deployment of so-called self-driving cars (as my posts to Twitter attest), but that's only because of his deployment strategy, which IMO is way too aggressive and risks poisoning the water for more competent systems down the road. But he has done one thing almost perfectly - namely, he has instrumented all Tesla cars with a suite of sensors and sent tens (now hundreds) of thousands of them out into the world to collect training data which they send back to Tesla headquarters. His team's machine learning (likely NN-based) system has digested that driving data (sensory inputs and how the driver responded) to create surprisingly competent self-driving cars, a feat which is especially amazing given the impoverished sensor data the system has to work with.
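What Tesla is doing with that fleet data is, at its core, behavioral cloning: supervised regression from sensor readings to the human driver's control outputs. A toy version (the "sensors" and the hidden human steering policy are invented for the sketch):

```python
import numpy as np

# Behavioral cloning in toy form: record (sensor reading, driver action)
# pairs from a simulated fleet, then fit a model that imitates the driver.

rng = np.random.default_rng(0)

# Hidden human policy: steer back toward lane center, correct heading.
K_true = np.array([-0.8, -2.0])     # gains on [lane offset, heading error]

def drive_log(n):
    """Simulated fleet data: (sensor readings, driver's noisy steering)."""
    sensors = rng.normal(size=(n, 2))                 # [offset, heading]
    steering = sensors @ K_true + rng.normal(scale=0.05, size=n)
    return sensors, steering

S, u = drive_log(10_000)

# Cloning the driver = ordinary least-squares regression on the logs.
K_learned, *_ = np.linalg.lstsq(S, u, rcond=None)
print("true gains:   ", K_true)
print("learned gains:", np.round(K_learned, 3))
```

With enough logged miles, the noise averages out and the learned controller converges on the drivers' collective behavior - no hand-written driving rules required.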

 

Tesla's (and NVIDIA's) successes illustrate how the combination of lots of data, clever learning algorithms, leveraging of previously trained models, and huge computational resources can rapidly create quite disruptive solutions to previously intractable, real-world problems. You can call it AI if you want, or refrain from doing so if you think it doesn't qualify since it lacks the kind of general-purpose knowledge that the term AI (or especially AGI) deserves/requires.

 

But whatever you call it, these successes show that rapid, iterative improvement can result in systems with the potential to disrupt life as we know it in the relatively near future.

 

It remains to be seen just how well such techniques for iterative self-improvement can be adapted to work for natural language processing and deep understanding of text and arbitrary video. But early results are looking quite promising. I wouldn't be at all surprised if in the next few years we see technology coming out of Google, Apple, Microsoft or Amazon that can digest and understand huge amounts of on-line content (text, images and video), and leverage it to iteratively self-improve its ability to comprehend and interact with people and the world, like we've seen happen with AlphaGo and self-driving cars. These advances will likely be first seen in the form of digital assistants (like Siri, Google Now, Cortana and Alexa) that progress rapidly towards the level of competence exhibited by Samantha, the AGI in the movie Her.  Then all bets are off regarding what happens next...

 

Is it definitely going to happen soon? No, not necessarily. We could enter another "AI winter" where progress grinds to a halt (or slows to a trickle) for a decade or more. But we're in the midst of a veritable Cambrian explosion of progress in machine learning right now, and I see it only accelerating from here, as researchers home in more and more on ML algorithms that scale and that solve big-$$$$ problems.

 

My reasonably-informed $0.02,

 

--Dean


[...]

Tom, I don't think there is any "secret sauce" to apparently intelligent behavior.[...]

 

I agree, there is no "secret sauce" insofar as our wetware does not defy the laws of nature, we are not in possession of supernatural "souls", "spirit" etc. - it's all entirely explicable, and therefore intelligent behavior is not magic. 

 

[..]What we consider intelligence is just a bunch of clever heuristics partly hard-coded into us by evolution, and running on a fairly general purpose learning algorithm / architecture that allows us to (modestly) adjust our behavior to new circumstances and learn from experience.[...]

 

And we have our first disagreement. What you described in this sentence - and I mean each component of that sentence - is just part of what constitutes "human intelligence". Because what is conspicuously missing is Motivation. No amount of learning and behavior adjustment impacts the question of motivation. And yes, we are the result of evolution. The question that arises is how to program a module for goal-creation. It has been suggested that all motivation and goal-generation is ultimately about survival and propagation at the end of a long evolutionary process. That is true, but it still needs to be characterized.

 

[...]Complicated yes, but perhaps some of the complexity of how mother nature implementing intelligence in humans is simply an artifact of our convoluted evolution, rather than a result of the intrinsic complexity of the problem, or how difficult it is to solve. In short, I'm beginning to suspect there may be an easier, more direct path to general, self-improving intelligence than the path humanity has taken. So (unlike the "whack-a-mole" problem of human metabolism), there may be a way to crack the nut of AGI without recapitulating each and every quirk of human cognition and psychology in detail or resorting to whole brain emulation. Just like we solved powered flight without recapitulating feathers or flapping wings tugged by contracting muscles.[...]

 

Certainly. After all, calculators don't calculate like humans do, and calculators do a massively better job of it, even though the process they use is infinitely simpler. I have no doubt that many problems are not intrinsically complicated, and there is no need to take the path humans took to achieve intelligent behavior. But at some point you run up against the very definition of what constitutes intelligence. Some researchers assume that intelligence equals really advanced expert systems - a definition too narrow to be accepted by most people. Some problems are more complicated - though not without solutions - for example self-awareness, or consciousness: is it an emergent property of sufficiently integrated data systems, a critical mass? Are animals an example of intelligence that's intermediate in this respect? After all, a dog is capable of "intelligent" behavior, but it is still in a categorically distinct class of self-awareness, which limits what it can do. These are the more knotty problems. Consciousness is closely linked to intelligence as we humans understand it, and is critical to higher-level goal-setting and motivation - and these (goal-setting and motivation) are what you never mention in your discussion of intelligence. They are the crux of our disagreement. See below:

 

[...]AlphaGo is one example, but you are right - despite it's complexity, the game of Go is quite a narrow domain. What is impressive about AlphaGo is not that Go was conquered by a machine, but how it was conquered. In short, it was learned; bootstrapped using lots of data and a self-improving learning algorithms/architecture, rather than hand-programmed and hand-tuned. Similarly for DeepMind's conquering of all the Atari games. 

 

One key advance we haven't seen before in AI research is that progress is really starting to become cumulative, both in terms of techniques (i.e. new algorithms built as improvements on prior algorithms) but perhaps as importantly, in terms of solutions. As a concrete example, consider the recent rapid progress towards self-driving cars. Watch and listen to this video by the CEO and an engineer at NVIDIA talking about their self-driving car development effort starting at 27:45 and going to about 32:00:

 

https://www.youtube.com/watch?v=KkpxA5rXjmA

 

They did what is very common now in machine vision research. They started (seeded) their deep neural net-based vision system for self-driving car with the best visual feature detectors available, derived through many man-years of effort by other researchers solving the object recognition problem. For anyone familiar with the field, NVIDIA started with the feature detectors developed for the NN that has won the most recent iteration of the ImageNet Challenge - a competition to see whose vision system can best classify the objects contained in 1.2 million images into 1000 different categories.

 

By building on the prior work of others, both in terms of algorithms and feature detectors, NVIDIA was able to develop car detectors, pedestrian detectors, and pixel-level NN classifiers (to distinguish road signs, buildings, pavement, other cars, bicyclists, etc.) in a very short amount of time, enabling them to develop fairly competent self-driving car technology in a matter of months, starting from scratch.
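In code, this "reuse pretrained features, train only a new head" recipe is tiny. Here's a toy sketch (my own illustration in plain numpy, not NVIDIA's actual system): a frozen random projection stands in for the ImageNet-pretrained layers, and only a new logistic-regression head is trained for the new task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for pretrained feature layers: a frozen projection whose weights,
# in a real system, would come from a network trained on ImageNet's images.
W_frozen = rng.normal(size=(64, 16))

def features(x):
    """Frozen 'pretrained' feature extractor -- never updated during training."""
    return np.tanh(x @ W_frozen)

# Toy dataset whose label is linearly separable in the frozen feature space.
X = rng.normal(size=(500, 64))
y = (features(X)[:, 0] > 0).astype(float)

# Transfer learning: train ONLY a new task head (logistic regression) on top.
w_head = np.zeros(16)
b_head = 0.0
lr = 0.5
F = features(X)                            # frozen features, computed once
for _ in range(300):
    p = 1 / (1 + np.exp(-(F @ w_head + b_head)))
    grad = p - y                           # cross-entropy gradient
    w_head -= lr * F.T @ grad / len(X)
    b_head -= lr * grad.mean()

acc = ((1 / (1 + np.exp(-(F @ w_head + b_head))) > 0.5) == y).mean()
print(f"training accuracy with frozen features: {acc:.2f}")
```

The point is that the expensive part (the feature extractor) is inherited, so only a small head needs training data and compute for the new task.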

 

As another closely-related example, I'm pretty critical of Elon Musk's deployment of so-called self-driving cars (as my posts to Twitter attest), but that's only because of his deployment strategy, which IMO is way too aggressive and risks poisoning the water for more competent systems down the road. But he has done one thing almost perfectly - namely, he has instrumented all Tesla cars with a suite of sensors and sent tens (now hundreds) of thousands of them out into the world to collect training data which they send back to Tesla headquarters. His team's machine learning (likely NN-based) system has digested that driving data (sensory inputs and how the driver responded) to create surprisingly competent self-driving cars, a feat which is especially amazing given the impoverished sensor data the system has to work with.
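The fleet-learning loop described here - log (sensor input, driver response) pairs, then fit a model to imitate the driver - is what the literature calls behavioral cloning. A minimal sketch with made-up data (my own toy; Tesla's actual system and sensor suite are unknown to me):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated fleet logs: each row is a sensor snapshot (lane offset, heading
# error, curvature, etc.), paired with the steering command the human driver
# actually applied at that moment. All values here are made up.
n_logs, n_sensors = 10_000, 8
sensors = rng.normal(size=(n_logs, n_sensors))

# Hypothetical 'true' human driving policy the logs implicitly encode,
# plus a little noise standing in for imperfect human driving.
true_policy = rng.normal(size=n_sensors)
steering = sensors @ true_policy + rng.normal(scale=0.05, size=n_logs)

# Behavioral cloning: plain supervised regression from sensors to action.
learned_policy, *_ = np.linalg.lstsq(sensors, steering, rcond=None)

# With enough logged miles, the cloned policy matches the demonstrations.
err = np.abs(learned_policy - true_policy).max()
print(f"max coefficient error: {err:.4f}")
```

The real systems use deep networks rather than a linear fit, but the supervised structure - imitate logged human behavior at scale - is the same.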

 

Tesla's (and NVIDIA's) successes illustrate how the combination of lots of data, clever learning algorithms, leveraging of previously trained models, and huge computational resources can rapidly create quite disruptive solutions to previously intractable, real-world problems. You can call it AI if you want, or refrain from doing so if you think it doesn't qualify since it lacks the kind of general-purpose knowledge that the term AI (or especially AGI) deserves/requires.

 

But whatever you call it, these successes show that rapid, iterative improvement can result in systems with the potential to disrupt life as we know it in the relatively near future.

 

It remains to be seen just how well such techniques for iterative self-improvement can be adapted to work for natural language processing and deep understanding of text and arbitrary video. But early results are looking quite promising. I wouldn't be at all surprised if in the next few years we see technology coming out of Google, Apple, Microsoft or Amazon that can digest and understand huge amounts of on-line content (text, images and video), and leverage it to iteratively self-improve its ability to comprehend and interact with people and the world, like we've seen happen with AlphaGo and self-driving cars. These advances will likely be first seen in the form of digital assistants (like Siri, Google Now, Cortana and Alexa) that progress rapidly towards the level of competence exhibited by Samantha, the AGI in the movie Her.  Then all bets are off regarding what happens next...

 

Is it definitely going to happen soon? No, not necessarily. We could enter another "AI winter" where progress grinds to a halt (or to a trickle) for a decade or more. But we're in the midst of a veritable Cambrian explosion of progress in machine learning right now, and I see it only accelerating from here, as researchers home in more and more on ML algorithms that scale and that solve big-$$$$ problems.[...]

 

Again, all about data processing (acquisition, manipulation, synthesis etc.) - which again, is only part of what constitutes intelligence as we humans commonly understand the term.

 

Anyhow, this is way too complicated to discuss on a message board, so I'm going to drop it, but there is no doubt exciting developments are happening all around us, and participating in them is a fantastic privilege!


TomB wrote (my emphasis):

 

What you described in this sentence - and I mean each component of that sentence - describes just part of what constitutes "human intelligence". What is conspicuously missing is motivation. No amount of learning and behavior adjustment addresses the question of motivation. And yes, we are the result of evolution. The question that arises is how to program a module for goal-creation...


Consciousness is closely linked to intelligence as we humans understand it, and is critical to higher-level goal-setting and motivation - and these (goal-setting and motivation) are what you never mention in your discussion of intelligence, and they are the crux of our disagreement...

Again, all about data processing (acquisition, manipulation, synthesis etc.) - which again, is only part of what constitutes intelligence as we humans commonly understand the term.

 

Tom,

 

You are right that we may still be a long way off from understanding many aspects of human cognition, including our intrinsic motivations and (certainly) consciousness. But what I don't think you get is that these aren't required for the development and deployment of strong AI (notice I'm not saying AGI) with the potential to seriously mess up (or improve!) life as we know it.

 

I contend that raw intelligence doesn't require intrinsic motivations or consciousness. It simply requires the ability to achieve goals given a set of inputs and affordances. A more intelligent system will be able to attain more challenging goals in a wider range of circumstances. AlphaGo is superintelligent (compared with humans) in the limited domain of Go. It doesn't have intrinsic motivation. Its developers at DeepMind simply gave it the rules of Go and the goal of "figure out how to win Go games against humans". It learned its own strategies, including many subgoals, within the limited domain of Go. Then the DeepMind folks pointed AlphaGo at Lee Sedol, and it summarily crushed him. No intrinsic motivation required.
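That "externally supplied goal" point is visible in even the simplest reinforcement-learning setup: the designers write down the reward function, and everything else - the strategy, the subgoals - is discovered by trial and error. A toy tabular Q-learning agent on a six-state corridor (my own minimal sketch; AlphaGo's actual architecture is vastly more sophisticated):

```python
import random

random.seed(0)

N = 6                                   # corridor states 0..5; state 5 is the goal
Q = [[0.0, 0.0] for _ in range(N)]      # Q[state][action]; actions: 0=left, 1=right
alpha, gamma = 0.5, 0.9

def step(s, a):
    """Environment dynamics plus the *externally supplied* reward signal."""
    s2 = max(0, min(N - 1, s + (1 if a == 1 else -1)))
    reward = 1.0 if s2 == N - 1 else 0.0    # the designers chose this goal
    return s2, reward

# Trial-and-error: the agent behaves randomly while off-policy Q-learning
# distills those experiences into action values.
for _ in range(500):
    s = 0
    while s != N - 1:
        a = random.randrange(2)
        s2, r = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy heads right from every state -- a strategy the
# agent found itself, given only the reward its designers wrote down.
policy = [1 if Q[s][1] > Q[s][0] else 0 for s in range(N - 1)]
print("greedy policy (1 = right):", policy)
```

Nowhere in this loop is there anything resembling intrinsic motivation: the reward line is the whole "goal", and it comes from outside the agent.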

 

Now, assume the areas of AI & machine learning currently experiencing explosive growth (i.e. reinforcement learning, deep neural networks, natural language understanding, and machine vision) continue to improve at their current rate. It isn't too hard to imagine applying this same AlphaGo model in domains other than Go, which have much greater real-world consequences.

 

Some examples could include:

  • Watch thousands of people drive, learn how to map sensor inputs to the appropriate vehicle control commands, then put millions of truck drivers (the most common occupation in 27 of 50 states) out of work. 
  • Read all these Quora Q&As and develop a model for answering similar questions, just like a person would. In the process, build a model of human psychology to predict human behavior.
  • Read all these research papers, build a model of the causal chain for Alzheimer's disease, then cure it.
  • Do the same for cancer. Or human aging...
  • Learn about the factors that influence the stock price of company X. Then figure out a way to predict it to make money. Or make the stock tank...
  • Learn about the vulnerabilities of websites, botnets etc, then figure out a way to shut down the internet across all of the US.
  • Learn about how the US power grid works, then figure out a way to shut it down.
  • Learn about human psychology, and figure out how to influence the US presidential election so candidate X wins.

Notice I'm not attributing intrinsic motivation (to say nothing of nefarious intent) to the AI in any of these scenarios. These are all externally-supplied, human-generated goals, just like the goal DeepMind tasked AlphaGo with - "figure out how to play Go and beat Lee Sedol". Sure, we could go further, like Nick Bostrom does, and speculate that a superintelligent AI might realize that to achieve its high-level goal it should garner more resources for itself (perhaps harming humans in the process) and figure out a way to prevent humans from shutting it off. But these moves on the part of the AI would require quite a sophisticated model of self and a drive for self-preservation - which you rightfully observe might be difficult to instill in an AI that hasn't evolved via natural selection. Plus presumably its handlers would be watching for such nefarious behavior and would pull the plug before it got very far.

 

But I hope you can see that even without intrinsic motivations or a sense of self, a truly intelligent (i.e. strong) AI could wreak untold havoc, or do tremendous good, depending on the goal given to it by its human handlers. In fact, an AI with raw intelligence but no sense of right/wrong is likely to be much more dangerous than one that is carefully crafted (or, more likely, trained) with built-in ethics. This is especially concerning if an organization like Elon Musk's OpenAI succeeds, thereby making AI technology capable of accomplishing the above goals available on github to anyone with an internet connection.

 

As I said, it's not entirely clear that the methods that worked so well for AlphaGo and self-driving cars can be generalized to goals like those listed above anytime soon. I suspect they can be, and it seems many of the big tech companies do as well, given the amount of money they are pouring into R&D in this area. But we could be wrong.

 

But one thing is clear - intrinsic motivation and human-like consciousness are red herrings, and neither is necessary for a strong AI to change the course of human history, for good or bad.

 

--Dean


Well, OK, so it was about definitions all along (as it so often is, especially in philosophy!). To me, what you describe is simply expert systems (maybe on steroids). If you call expert systems "strong AI", then we have no quarrel. I define intelligence as a class above mere expert systems, where very complex goal-setting and motivation are intimately connected with high-level consciousness. If we are limiting "strong AI" to expert systems (or expert systems on steroids), then yes, the potential is enormous - for both good and bad.

Being the optimistic sort, I tend to underplay the "bad" scenarios. It's really a tool - much further along on the scale of power, but still a tool. A knife is a tool. It's value-neutral. Given its power, you may apply special care, as in the case of nuclear power, but it's still a tool. For that matter, one could have spun all sorts of dire scenarios when ARPANET was first proposed, and with the recent bot attacks capable of almost taking down large parts of the internet, we see its power when used for ill - this can be extended to attacks on the power grid, banking, and really the whole economy. Sure. And yet the internet has had a gigantic effect on the world economy and indeed civilization (I'd argue 99.9% to the good!).

Expert systems - or strong AI, if you insist on calling it that - are just more of the same, though maybe amplified by a factor or two in the reach of their effects. This is by no means the same thing as positing the sort of dark, malevolent artificial superintelligence bent on wiping out humanity that the likes of Hawking propagate - that is straight out of science fiction. As for expert systems, I'm as optimistic and positive as I was and still am about the internet: it's a giant opportunity for good and a great leap forward for civilization.


For anyone interested in the ethics of AI, here are two days' worth of videos from a conference held last week on the topic at NYU. Speakers include many leaders in the field: David Chalmers, Nick Bostrom, Daniel Kahneman, Yann LeCun, Stuart Russell, Stephen Wolfram, etc. Here is an amusing graphic from Nick Bostrom's talk in the first session:

 

NxWq4lE.png

 

--Dean


Good article in Nature making the same point I made to TomB above - namely, that an AI (or "expert system", if that's what Tom wants to call it) need not have consciousness (or intrinsic "motivation") to be quite dangerous, even an existential threat.

 

--Dean


I read the Nature article. I am happy that the issue of consciousness/self-awareness has been addressed, even if very poorly. But the argument itself for the "danger" of AI (as given in this article) is a lot of hot air. I guess it's a certain cast of mind that catastrophizes from linear extrapolations - a tendency my brain does not happen to express (just as I lack the religious impulse). The "but what if" in this article is the kind of linear extrapolation that led people in the late 19th century, projecting the growth of traffic and human density, to calculate that New York would end up drowning in 6 feet of horse excrement from all the horse-and-buggy traffic growth. In other words: nonsense based on ignoring all factors except the ones we happen to pick - like the "danger" outlined in the Nature article that an AI machine might harness all the world's resources to make paper clips.

It is the same catastrophizing projection, with no consideration for the rich matrix of variables that constitutes the real world, that led people to the widespread panic over the Y2K bug - remember that? There were tons of engineers as well as lay people panicking, books written, mass hysteria - I particularly remember a computer scientist who uprooted his family for a bunker in the wilderness of Alaska in anticipation of civilization ending in a big Y2K flameout.

I know a woman who is particularly prone to this nonsense - she falls for each and every one of these ginned-up "the world will end" dangers and tries to get me excited, and I always tell her "remember when you were worried about X, Y, Z, etc.? Same thing - a big nothingburger." But she doesn't learn, because it's a cast of mind, not an argument. She too is worried about AI and intelligent hostile aliens, and before that she worried that the world would end because the supercollider in Europe would spawn a black hole - there were even articles written for and against that panic.
 

 

The truth is that any sufficiently powerful technology can be dangerous if things go very, very wrong. Nothing new about that - as I've said before, it's true of nuclear power. It could end the world, theoretically. In practice, not so much - there are too many variables for this unique scenario to have any practically large probability that we need to excessively worry about. A bigger danger is biological agents (something mentioned obliquely in the Nature article) - today we have the ability to edit genes and to modify and perhaps even create bacteria, viruses and the like, and should things go wrong, that too could end at least human life on earth - although again, it's complicated and not so simple, so no need to clutch pearls just yet. In other words, same as it ever was - or, as the ancients knew, "nihil sub sole novum". The AI panic is just another iteration of the "OMG the world is ending" cycle that humans love and the press capitalizes on. This cycle with AI will end once we've become more familiar with the strengths and limitations of expert systems, and then the panic mongers and their eager audiences will move on to the next target.

 

Relax folks. The big bad AI ain't gonna git ya.


Tom,

 

No one, not even scaremongers like Nick Bostrom, says any of these AI doomsday scenarios is definitely going to happen - just that they could. Yes, you are right that there are many variables we don't know about, and lots of moving parts interacting simultaneously that make the future impossible to predict. The technium (the sum of human and machine systems and institutions) is getting more complex, but arguably not more robust - in fact it may be getting more fragile, as shown when the recent DDoS attack on a company nobody had heard of, likely by a bunch of kids or novice hackers, shut down much of the internet in the US for most of a day.

 

The combination of a fragile technium with increasingly powerful "smart systems" for manipulating it could be a recipe for disaster, especially if such systems are wielded by people with nefarious motivations. You are right that the same goes for modified biological agents.

 

BTW, your Y2K analogy is dubious at best. Here is a good article from Slate arguing that the money spent to solve Y2K bugs was money well spent:

 

Was it all worth it? All these years later, it's difficult to determine if every single cent spent on Y2K was really necessary, especially considering that the government has been so reluctant to go back and look. "Sure, things were replaced that didn't need to be replaced, and probably some people used it as an opportunity to upgrade systems when the better option would have been repair what they had," Kappelman says. Still, he estimates that 80 percent to 90 percent of the money spent on Y2K was right on target.

 

American utility companies spent hundreds of millions working on the problem, and in testimony before the Senate, several reported that the measures they put in place prevented widespread failures [which would have resulted in widespread hardship and death given it was January in the northern hemisphere - DP].

 

I'm curious Tom - how do you (as a non-expert) differentiate between real threats and false alarms when it comes to all kinds of issues like this? I hate to suggest it, but a statement like this:

 

Relax folks. The big bad AI ain't gonna git ya.

 

makes me think you might have your own issue with the Dunning-Kruger effect - pot calling the kettle black... Personally, I'm always dubious of anyone (expert or non-expert) who thinks they know what will or will not happen in the future.

 

The Slate article has what I consider to be a good perspective on this (my emphasis):

 

What's more, it's the only recent example of something exceedingly rare in America—an occasion when we spent massive amounts of time and money to improve national infrastructure to prevent a disaster. Typically, we write checks and make plans after a catastrophe has taken place, as we did for 9/11 and Hurricane Katrina. Y2K, by contrast, was a heroic feat of logistical planning; within just a couple years, small and large companies were able to completely review and fix computer code that had been kicking around in their systems for decades. Some experts argue that the systems built in preparation for Y2K helped New York's communication infrastructure stay up during the terrorist attacks a year and a half later. The 9/11 Commission Report says that the Y2K threat spurred a round of information sharing within the government unlike any other in recent times. The last few weeks of December 1999 were "the one period in which the government as a whole seemed to be acting in concert," the commission reported...

 

Indeed, looking back at the record, this remains one of the most interesting facts about Y2K—the whole world worked together to prevent an expensive problem. When people first became aware of the computer bug in the early 1990s, Y2K was easy to dismiss—it was a far-off threat whose importance was a matter of dispute, and which would clearly cost a lot to fix. Many of our thorniest problems share these features: global warming, health care policy, the federal budget, disaster preparedness. So what made Y2K different? How did we manage to do something about it, and can we replicate that success for other potential catastrophes?

 

In short, we made a big effort to deal with Y2K, and nothing bad happened. Knowledgeable people say our efforts made a substantive, positive difference to the outcome. And in the process of dealing with the problem, our computing infrastructure got better and more robust. Do you really consider that evidence that we should ignore the potential dangers of AI and not take them seriously?

 

Do you consider global climate change a non-problem that doesn't need to be taken seriously either? Would you make the same assessment of global warming as you do about the potential dangers of AI, namely: 

It could end the world, theoretically. In practice, not so much - there are too many variables for this unique scenario to have any practically large probability that we have to excessively worry about.

 

I'm not suggesting everyone panic about the potential downsides of AI, bioengineering or global warming, but all three seem to me to be serious issues that people with the appropriate expertise need to spend a lot of time solving. Non-experts should be supportive of such efforts, not dismissive.

 

--Dean


I particularly remember a computer scientist who uprooted his family for a bunker in the wilderness of Alaska in anticipation of civilization ending in a big Y2K flameout.

 

You mean you fell for Norman Feller? "...Last week The CBC ran a story about Norman Feller, who emerged from his Y2K bunker in September, almost 15 years after going into hiding. His reason for leaving a year early? He was sad and lonely..."

 

https://www.google.com/amp/www.deathandtaxesmag.com/211409/that-news-story-about-the-canadian-man-who-just-left-his-y2k-bunker-after-15-years-is-totally-fake%3Famp%3D1?client=safari


Here is a really insightful analysis by Eliezer Yudkowsky (AI safety researcher) of the lessons we can learn from AlphaGo's win.

 

Here is a quote I thought hit the nail on the head. In reference to top AI researcher Andrew Ng dismissing AI risk as like "worrying about overpopulation on Mars", Eliezer wrote:

 

“Oh, look,” I tweeted, “it only took 5 months to go from landing one person on Mars to Mars being overpopulated.”

 

Basically, Eliezer is saying that astonishingly rapid progress can be made when an "intelligent" machine (scare quotes for TomB's benefit...) can improve on its own. He summarized as follows:

 

We’re not even in the recursive [improvement] regime yet, [but with AlphaGo] we’re starting to enter the jumpy unpredictable phase where people are like “What just happened?”

 

That is exactly what I'm seeing in the research now - new papers coming out literally daily, tackling perception, natural language, and robotics problems previously thought intractable, using new and improved machine learning algorithms and architectures.

 

--Dean


[…]I'm curious Tom - how do you (as a non-expert) differentiate between real threats and false alarms when it comes to all kinds of issues like this? I hate to suggest it, but a statement like this:[…]


 


[…]makes me think you might have your own issue with the Dunning-Kruger effect - pot calling the kettle black... Personally, I'm always dubious of anyone (expert or non-expert) who thinks they know what will or will not happen in the future.[…]


 


I don’t think it’s really as simple as a childish “back at ya” - not all of the future is equally unpredictable. I’d bet pretty heavily that the sun will come up tomorrow, even though one can certainly spin scenarios where our spacetime is just one version in a multiverse, and an unluckily traversing dimension just happens to intersect a white-hole event in our sun, which promptly goes kablooie - and there you go, no sunrise ever again in this corner of the multiverse. Possible, but I’m not losing any sleep over it. Maybe thinking in this case that “I know what will or will not happen in the future” qualifies me as a victim of the Dunning-Kruger effect in the society of some physicists (BTW, Hawking is a big proponent of multiverses), but I try to wear my dunce’s cap with panache.


 


Meanwhile, this “back at ya” has the strange quality of a snake eating its tail. Just the other day I was joined on my jog by an acquaintance who is a big believer in “the election is being stolen”, so we had some friendly banter while jogging. He told me all about how it’s well known that “certain people” (we all know who *those* people are, ahem) get bussed in to polling stations to vote with fake IDs, etc. When I expressed skepticism, he accused me of a “lack of imagination”. Strange, because I thought it was *he* who lacked imagination when he was explaining the mechanics of this supposed fraud.


 


I tried to walk him through the practicalities of such a scheme. To really sway an election you’d need quite a few such fraudulent votes; even if every bus had 50 people on it, think how many buses you’d need to organize and how noticeable such cavalcades would be. You’d need to create fake IDs for all of those tens of thousands of people, and at no point could anything go wrong: not a single person could make a mistake or talk, and no precinct poll worker could exclaim “hey, it says you live on this street - funny, that’s where I live and I don’t recognize you at all”, and a million other things would have to not go wrong in an operation on that scale. Pulling it off sub rosa, with the press getting no wind of it, is about as likely as any other conspiracy theory, fake moon landings included.

I thought he lacked imagination in not accounting for all the variables that go along with such a thing; he thought I lacked imagination in not following his one straight shot that put all the balls into the corner pocket. As he put it: “but it COULD happen!!!UNO!!” We both parted thinking it was the *other* who lacked imagination. But isn’t that the very definition of the Dunning-Kruger effect - maybe everyone is suffering from it, on all sides. Snake eating its tail. Not super productive as a sweeping statement. He thought he was ahead in the imagination department, whereas I thought the only reason he thought so was that my seemingly “behind” position was due to my having lapped him (us being on a running track at the time :)).


 


Like I keep saying, “it’s a cast of mind, not an argument”. Some people are just prone to conspiracy thinking. What all of these have in common is not the dreaming up of a scenario of how “THIS COULD HAPPEN”, but that in generating such scenarios, they only ever look for the ways in which things move toward their scenario, while dismissing or never considering evidence to the contrary - a giant cherry-picking of evidence in favor of a pre-selected conclusion.


 


The other characteristic of such doomsday scenarios is that they assume everyone is going to act with utterly implausible foolishness - like those horror movies that depend on everyone in them acting completely counterintuitively and idiotically (yeah, there’s an axe murderer out there - let’s go out into the darkness one by one instead of calling the cops, etc.).


 


So too with the scenarios - “but what if scientists started serving plutonium sandwiches and everyone dies! See, nuclear research should stop immediately, the risk is too great!”. 


 


Or take Y2K - it was a minor issue with perfectly well-understood solutions, and instead it got hyped into doomsday scenarios like this cover of Time:


 


http://time.com/3645828/y2k-look-back/


 


All one had to do was take ORDINARY CARE, with solutions plain as day and easily characterized - that’s Y2K for you. Same with other doomsday scenarios: the only way they come true is if we assume those plutonium sandwiches are being served. I have no time for such alarmist projections based on utter implausibilities about how the world works.


 


Now, how to tell real threats vs false alarms is a legitimate question:


 


[…]I'm curious Tom - how do you (as a non-expert) differentiate between real threats and false alarms when it comes to all kinds of issues like this?[…]


 


First, a confession: I am never impressed merely because an argument is being made by “an expert”. I guess it’s my pathological lack of respect for the very concept of “authority”. I start from the assumption that the one and only criterion of whether an argument is T or F is whether it is, you know, ahem, True or False - not who or what made it. And I don’t worry about being an amateur when asserting that 1+1=2 in classical arithmetic, even if I’m told that SuperGeniusMathematicalGod says 1+1=3.


 


So you can take all the experts - and shove ‘em… into outer space :) 


 


I’ve seen enough “expert” panels, composed of the highest worthies of the day, be hopelessly wrong - if you doubt that, take a gander at how various panels of the distinguished predicted future technology would develop. It’s almost always comical how they wrongly extrapolate current trends with zero consideration for intervening variables:


 


http://www.forbes.com/sites/robertszczerba/2015/01/05/15-worst-tech-predictions-of-all-time/#51f02b3925c1


 


Not only do I not see flying cars, I don’t see them as even desirable or practical - even though they’ve been “just around the corner” for the past 100 years.


 


I respect the achievements of certified geniuses like Shannon and von Neumann - but even the greatest of all were wrong about technological trends, such as the unwarranted enthusiasm about cybernetics. It was just around the corner from the 1950s on - and it sure fizzled.


 


Bottom line - about the worst argument to advance to me is that “experts” think this or that. I spit on it.


 


How can I, an “amateur”, judge what “experts” tell us? Simple: does it make sense, and are there valid counter-arguments? Period.


 


A couple of examples. I have a good friend who happens to be a fairly highly placed analyst with the DOD. An “expert”. No worries - he never revealed any ‘secrets’ to me, and I am not revealing any either. But we’ve had friendly exchanges, and it was an eye-opener. It was about WMDs. I was utterly skeptical that Saddam had them, and skeptical about the danger of such weapons in any case (with one exception). I claimed - already back before the invasion - that it was pretty much 100% certain that there were no nuclear weapons there. How did I know? My arguments were open and transparent. I knew the moment Powell finished his infamous presentation at the U.N.:


 


1) I claim that it is impossible to keep secrets of this kind - such as work on the acquisition of nuclear weapons. There are too many people involved. Getting to the A-bomb takes too many people - who have friends and relatives, who in turn have friends and relatives. You can keep such secrets when relatively few people are involved, but when you have thousands - as you inevitably do when developing nuclear weapons - it is pretty much mathematically impossible. Coincidentally, someone actually did the math on this question - how quickly a conspiracy unravels, the variables being the number of conspirators and time:


 


https://www.schneier.com/blog/archives/2016/03/the_mathematics.html
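The linked analysis models how quickly a conspiracy unravels as a function of headcount and time. Here’s a simplified Poisson-style version of the idea (my own back-of-the-envelope, not the exact equations from that analysis, and the leak rate p below is an arbitrary placeholder): if each insider independently leaks with probability p per year, the chance of at least one leak after t years is 1 − (1 − p)^(N·t).

```python
def exposure_prob(n_people, p_leak_per_year, years):
    """P(at least one leak) if each insider leaks independently with
    probability p per year. A simplification; p is a placeholder value."""
    return 1 - (1 - p_leak_per_year) ** (n_people * years)

# Toy comparison with an arbitrary illustrative leak rate: a small cell
# vs. a nuclear-program-scale workforce, over the same ten years.
p = 1e-4
small = exposure_prob(20, p, 10)      # ~0.02: plausibly stays secret
large = exposure_prob(5_000, p, 10)   # ~0.99: all but certain to leak
print(f"20 insiders, 10 years:   {small:.3f}")
print(f"5000 insiders, 10 years: {large:.3f}")
```

Whatever the true per-person leak rate is, the exposure probability compounds exponentially in headcount times time - which is the mathematical core of the argument that a thousands-strong weapons program cannot stay hidden for long.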


 


2) From the fact that you can’t keep those SPECIFIC kinds of secrets, I noted that Powell nonetheless supplied ZERO hard evidence of such work being done in Iraq. So I concluded: given the resources the CIA and other intelligence services have, the fact that they had not uncovered one iota of hard evidence meant one thing conclusively: the evidence DID NOT EXIST. Because if enough people were involved to develop nuclear weapons, evidence would be plentiful. Yet zilch. So: Saddam had NO such program. QED.


 


3) Given that Powell - and the U.S. - wanted very badly to make the strongest possible case for Iraq developing such weapons, and that this was their premier opportunity to present the evidence to the world at the U.N., the fact that they delivered such a spectacular NOTHINGBURGER meant one thing, and one thing only: they had NOTHING. This is your strongest case? You’ve got zilch. A zilchpickle on top of a nothingburger - that’s Powell’s presentation.


 


And so I was very secure in my conviction that the whole thing was a fraud - I, an AMATEUR. But I had math on my side, a basic understanding of how the world works, a sense of what’s UTTERLY implausible, and a basic ability to draw inferences. So the fact that “experts” claimed otherwise - that whole intelligence services of other countries (like Great Britain and Germany) as well as the U.S., all stuffed to the gills with “experts and professionals who spent their entire lives on this”, claimed otherwise - meant NOTHING TO ME. I don’t give a sh|t. If the CIA/NSA/GOD/JESUSCHRISTHIMSELF claims that 1+1=3, I’ll call BS, and not be abashed one iota that I’m an “amateur”.


 


And indeed, no nuclear program in the history of the world has managed to be conducted in secrecy for any length of time (each was revealed very quickly) - not Israel's, not North Korea's (as closed a society as it gets), not Pakistan's, not Iran's, not anyone's anywhere. The proof is in the pudding. It takes enormous financial, technological and human resources - which you simply cannot hide on such a scale for any length of time.

Are we done yet with the worthless "but experts" argument, or do we need more?

No? Well, my friend expressed great concern about "terrorists" getting their hands on chemical WMDs. I laughed dismissively. Because the only reason they are called WMD - the "mass destruction" part - is if they are in the hands of state actors, not terrorists. Only state actors have the *means of delivery on a scale that's worth noting*. How are you dispersing those poisons or gases or whatnot? Terrorists don't have access to fleets of helicopters or masses of intercontinental rockets - and never will. That's the bailiwick of state actors, not non-state organizations. The means of delivery available to terrorists immediately and vastly limit the possible damage: see how few such chemical attacks have happened in the West, and how few victims they claimed (f.ex. the Japanese subway attacks). My point to him: I am more worried about dying in a car accident than about terrorists with chemical WMD. It's a non-issue, and the hysteria created by the term "mass destruction" only serves to obscure the broader context of threats.

So here you have another example: an expert worried about chemical or nuclear WMD, and me, an "amateur", calling BS.

On the other hand, I do think that biological weapons are a legit worry. The investment necessary to equip a small underground lab is not huge, and within the resources of non-state groups. The level of skill needed is not inordinate - graduate students can do it (and a group can finance the university studies of a few individuals). The level of financial commitment is within reach of non-state orgs. The number of individuals needed is not great. You can hide a small lab in a way you can't hide a nuclear facility. You can maintain secrecy for sufficiently long, because the number of people needed is small. Means of delivery - f.ex. spreading airborne infectious agents at key airports - are eminently doable. All this spells danger. Perhaps not today, but as biotechnology marches forward, it could happen pretty soon. It's worrisome.

So those examples right there illustrate how I - an "amateur" - decide which threats are legitimate and which are fear-mongering. Nuclear and chemical WMD - fear-mongering. Biological weapons - legitimate threat. See?

And I'd like to stress how the addition of "experts" into the mix brings absolutely NOTHING to the argument. There were plenty of "experts" who fear-mongered Saddam's WMDs - and those experts were the ones who prevailed, and on whose advice we invaded, with incalculable, history-changing consequences. That's an example where the experts are to be dismissed - not because they're experts, but because their arguments never held water, and amateurs such as me knew it.

So I spit on "experts". Give me *arguments*, and don't bother with the "expert" *opinion*.

See, I told you, I'm allergic to arguments from authority :)

Which threats are legitimate and which are nonsense hype must be answered on a case-by-case basis. Global warming - yes, legitimate, and here is how I decided: because there are plenty of convincing arguments on the side of it being true, and a decided lack of convincing counter-arguments. Nothing to do with "experts". Just the facts, ma'am.

And so on. The big bad AI threat is, in my estimation, complete hokum and a giant nothingburger. I am not impressed by your experts and their "arguments".

In fact, to be perfectly candid, far from being impressed by all the examples of "achievement" by AI systems given here, I am shocked at how shabby and UNIMPRESSIVE they are. Way to score an own goal. If the aim was to impress, it achieved the opposite effect: I'm shocked and disappointed at how dumb, UNintelligent, and primitive all these systems are. I really was hoping that in the year 2016 we'd be much further along in AI expert systems. I am shocked and dismayed at how backward and primitive it all still is.

The more examples like the achievement of beating a human master at Go I am given, the more convinced I become that AI is as huge a bubble of irrational expectation as the "cybernetics" bubble of 70 years ago - which was supposedly just around the corner, yet decades later can't even deliver a robot that can reliably run up a flight of stairs to compete with a cat that has a brain the size of a walnut. Pffft.

For the millionth time: creating a nifty expert system that can defeat a human in an extremely constricted environment like Chess or Go or Checkers tells us just about ZERO with respect to the supposed "threat" to humanity.

And I see the same silly extrapolation arguments here as well - the "self-learning" demonstrated in the Go example supposedly accelerating the rate of AI development and the level of intelligence so fast that it'll soon launch into the orbit of humanly unattainable, god-like omniscience. LOL.

Here's something to consider. No amount of intelligence is going to transcend the laws of physics and mathematics. The consequence of that goes in BOTH directions:

If I use rudimentary "intelligence" to calculate that 1+1=2 (on a primitive pocket calculator), then exponentially expanding the intelligence - using the top supercomputer raised to the billionth power - still only yields that 1+1=2. It will not improve the precision of that calculation by one bit. No amount of intelligence will alter that conclusion, in the same way that no matter how many experts you add, you will not improve upon the "amateur's" conclusion that 1+1=2. Saying "but I'm increasing the petaflops of calculation speed on this supercomputer" DOES NOTHING. It's irrelevant. You are not changing the laws of physics and mathematics with intelligence, no matter how advanced. And I'm still not impressed by experts.

On the reverse: just because you increase the intelligence parameters - memory, speed of calculation, levels of integration, learning algorithms, data acquisition, etc. - doesn't mean you can continue the process forever toward ever-increasing intelligence. In fact, you will bump against limits SHOCKINGLY QUICKLY. Speed - not likely to exceed the speed of light (leaving speculation about tachyons aside). And it is a fundamental reality that many - MANY - M-A-N-Y - problems, even purely mathematical problems, are inherently INSOLUBLE. It is not just that no algorithm is possible; any asymptotic approach will eventually call upon more resources than the available energy in the universe - a very practical limitation (ignoring for the moment multiverses and calculations harnessed across multiple dimensions). In this scenario, a solution delivered by a primitive calculator may not be much worse - if at all - than one from the most theoretically advanced supercomputer. So your superintelligence may not give you much - if any - advantage over a "dumb" terminal on these types of problems, which, by the way, are abundant in real life.

To illustrate these limitations, consider them from a purely physical point of view. There is no supercomputer possible - even theoretically - that can calculate with precision where a specific leaf is going to be blown in a hurricane, or where a water molecule or atom will end up in an ocean wave. That's not even theoretically possible - due to the uncertainty principle, for one. If your superintelligent being needs to control the environment, it cannot possibly do so to a degree of precision that significantly outperforms a much more primitive intelligence - a much lower intelligence can calculate an approximate probabilistic distribution just as well as your higher intelligence, and both will reach the exact same conclusion (see my earlier example of 1+1=2), because greater precision is simply not available under the laws of physics of this universe. Both the lower intelligence and the superintelligence will tell us the same: the particle has an x% chance of being in region a x b x c. The idea of a determinism that can be calculated in the real world is essentially a Newtonian idea from the dawn of science - and falsified by modern physics and mathematics. That idea died in mathematics when Russell and Whitehead of Principia Mathematica came up against Kurt Gödel. And it died in philosophy when the entire school of logical positivism, as exemplified by Rudolf Carnap, collapsed - the school born in early 20th century Vienna (I studied the foundations of mathematics while pursuing a degree in philosophy at university, and always admired Cantor and the whole ambition behind logical positivism, but… alas). You see, you simply cannot calculate or describe reality comprehensively enough, due to purely conceptual limitations - for more on why, follow the links in the Carnap entry above.

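The leaf-in-a-hurricane point does not even need quantum mechanics - classical chaos alone makes the case. A toy numerical sketch (my own illustration, not from the post): the logistic map is fully deterministic, yet a starting error of one part in a trillion destroys all predictive power within a few dozen steps.

```python
def logistic_map(x0: float, r: float = 4.0, steps: int = 50) -> float:
    """Iterate the chaotic logistic map x -> r * x * (1 - x)."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = logistic_map(0.3)
b = logistic_map(0.3 + 1e-12)  # perturb the start by one part in a trillion
print(abs(a - b))  # the two trajectories have completely decorrelated
```

No amount of extra computing power fixes this: the initial measurement error, however tiny, is amplified exponentially at every step.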
Now imagine that you are operating in an environment where there are literally thousands of incalculable variables - your odds of success are almost independent of intelligence. This depends on the nature of the "game", in the game-theory sense. There are games - such as chess - where greater intelligence confers huge advantages. But there are also games where genuine random chance plays so big a role that intelligence is only of marginal utility. I don't have space here to go into game theory - but the basic classification of games is a starting point.

Reality, and operating in the real world, is not a game like Go. It is an open-ended game with massive unpredictable variables. No intelligence can account for them all, or even a small fraction of them. Therefore, it is a significant - and frankly insane - distortion to imply that facility at constricted-variable games like Go somehow translates into facility at ALL classes of games. In other words, I am not worried that my future theoretical self-driving car is going to sneak into my kitchen and poison my tacos with plutonium. I am confident it'll be able to drive me to work and back though!

What is more fundamental yet, and not explored in the breathless hype about super-advanced AI, is the fact that intelligence - just like the speed of light, or the critical mass needed to collapse a star into a black hole - HAS AN UPPER LIMIT. I repeat: intelligence has a HARD upper limit. All the elements of intelligence - memory, speed of calculation, optimizing algorithms, data acquisition - face both physical and mathematical (insoluble problems) limitations.

These limitations are significantly overwhelmed by the world around us - the universe. That's why omniscience - just as omnipotence - is the realm of science fiction, religion and bad movies. Reality is much more complex and interesting.

The greatest possible intelligence (hitting the hard limit) may not translate into sufficient advantage over present human intelligence to assure that humans would be wiped out in a contest for survival - the element of chance alone might be decisive.

And what is the upper hard limit of intelligence? Obviously, intelligence is multifactorial, so you can increase some aspects (f.ex. memory storage) for a long time while hitting computational speed limits very quickly, and so on. But to generalize, the hard limit on IQ may be reached pretty fast. The highest human IQs range around 200-250. I wouldn't be shocked if the hard limit on IQ weren't all that much higher - perhaps as little as 350-400. Whatever the number is - maybe a lot higher, or only somewhat higher - one thing is 100% certain: it does not increase without limit. There is a hard limit, due to the laws of physics and mathematics. So enough with the god-like intelligence we humans can never comprehend. Remember, what we know, we know regardless of intelligence - if we humans find a mathematical proof, it is valid no matter how high an intelligence examines it. It still stands.

It would seem that even if that hard limit is as low as an IQ of 400, it would be a crushing advantage over humans at a mere 100 on average. Interestingly, not necessarily so, and not in all scenarios.

That is because we must also consider the concept of threshold intelligence. For example - I'm just throwing random numbers around - perhaps it is the case in our universe (given its laws) that an IQ below 30 (say: dogs and similar animals) matched against an IQ of 100 (the human average) will easily result in the dogs getting wiped out in a contest between the two in the wild. But move that IQ to a threshold of 140, and now a race of beings with IQs of 180 - or indeed even the hard limit of 400 - would not in fact always prevail over the beings at 140. Think of it this way: imagine that instead of IQ we are discussing muscular strength. Someone as weak and uncoordinated as a 1-month-old baby cannot tie its shoes - a grownup would crush the baby in a shoe-tying contest. But have that baby increase its strength and coordination to a mere 5-year-old THRESHOLD, and now it can hold its own at the task of tying shoes against a human being or robot that is superfast and supercoordinated to the very limits of physics. There is no further advantage to intelligence. In other words, an IQ of 140 (human) may be entirely enough to hold our own and not be wiped out by the most intelligent creature possible at an IQ of 400.

A concrete example, from the very topic of wars of annihilation and survival - WWII. The German armed forces in WWII had *massive* advantages in multiple respects over Soviet forces: quality of weaponry, quality of soldier training, and crucially for our example, a vastly superior officer corps and more intelligent top commanders. The Soviets, by contrast, had just undergone massive purges that wiped out almost all their experienced officers, and had pitiful training and weaponry. The quality of the Soviet top command was pitiful. To put it in our terms, you had a corps of, say (I'm throwing out random numbers as examples), IQs of 140-160 competing against an army of IQs of 100-120. And yet that 40-point IQ advantage, and the resultant superiority in training, tactics, weaponry and the rest, still lost massively to the Soviets. And it wasn't down to numbers. It was down to other factors - primarily terrain, supply lines and weather. In other words, the IQ was not an advantage, because the Soviets were above a threshold of minimum IQ where other factors prevailed. Had the Soviets had, say, chimp IQs of 60, the exact same terrain, weather and supply lines would not have allowed them to prevail, and the German IQ advantage would have resulted in annihilation. Conversely, you could have kept the Soviets at their 100-120 IQs and increased the German IQ advantage to 180, or indeed 400, and the Germans would still have lost - the baby can tie its shoes, and no further IQ advantage results in victory. Let me give an example of that too, also from WWII: the Finnish-Soviet war.

The Finns had massively - MASSIVELY - superior tactics, strategic thinking and leadership, quality of soldiers and officers, and training, plus the edge in other factors that count, such as knowledge of terrain, weather tolerance and supply lines (i.e. they had advantages where the Germans had disadvantages vs. the Soviets). Gustav Mannerheim, the Finnish commander-in-chief, was LEAGUES superior to any Soviet commander - you could say he had a class-higher intelligence. And yet the Soviets prevailed (even though they took vastly disproportionate casualties compared to the Finns). Simply put, the Soviets had such numerical superiority - literally more soldiers than the Finns had bullets - that all of the Finnish advantages, INCLUDING the IQ disparity between the chief commanders, were of no help. The Soviets had passed the threshold intelligence - and then executed their "stupid" but effective strategy to its grim conclusion, despite the Finns' infinite gallantry, bravery and intelligence. You could have increased the Finnish intelligence to 400, and they would still have lost. At a certain level, once you pass a threshold of IQ, factors other than IQ become the deciding ones.

Scenario: what if human intelligence is above the threshold at which we could simply be wiped out, like chimps, by a superior intelligence - in a world where many factors are indeed incalculable? This is not a game of chess or Go, where there is little in the way of decisive random factors. This is life, where the factors of chance are overwhelming - and here, ONCE YOU PASS A THRESHOLD of intelligence, your chances are unpredictable enough that they can be equal no matter how high the intelligence of your opponent goes (up to the hard limit). That's the reality of dynamic systems with unpredictable variables.

Imagine that you can push 100 pounds with your arms. The next guy can push 400 pounds. He seems to have an advantage: if a weight of 200 pounds falls on him, he pushes it off, while you are crushed. But what happens when the weight is 200,000 pounds? Then you and he are equally helpless in that situation, despite his being 4 times stronger. My point: the world is complex enough, with enough incalculable factors, that once you are over a certain threshold, an IQ of 140 is competitive with an IQ of 400 in both planned and unplanned outcomes. You are both overwhelmed by the incalculable nature of the universe.

To bring the example from strength back to games: imagine that you are playing a game of tic-tac-toe. You are of average intelligence - and your opponent is at the upper limit of IQ 400. You will still tie him, every time, because with competent play from both sides tic-tac-toe is always a draw.

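The tic-tac-toe claim is checkable by brute force - the game tree is small enough to search completely. A minimal sketch (my own illustration; `value` and `WIN_LINES` are names I chose):

```python
from functools import lru_cache

# The eight winning lines of the 3x3 board, as index triples into a 9-char string
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board: str):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board: str, player: str) -> int:
    """Game value (+1 win / 0 draw / -1 loss) for `player` to move, perfect play."""
    w = winner(board)
    if w:
        return 1 if w == player else -1
    if '.' not in board:
        return 0  # board full: draw
    opponent = 'O' if player == 'X' else 'X'
    # Negamax: our best outcome is the negation of the opponent's best reply
    return max(-value(board[:i] + player + board[i + 1:], opponent)
               for i, cell in enumerate(board) if cell == '.')

print(value('.' * 9, 'X'))  # 0: neither side can force a win from the start
```

Once both players are above the modest "threshold" of playing this game competently, the stronger player's surplus intelligence buys nothing - which is the point of the example.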
Now let's bring it closer to life - imagine that you are playing chess against a supercomputer that's vastly superior to every human at chess. Except we now modify the game: with every move, a die is thrown, and if a certain number comes up, you immediately get 2 queens and your opponent loses their queen and a bishop. What happens? It becomes essentially a game of chance, despite the vast differential in chess-playing ability - as long, of course, as you are intelligent enough (have crossed the threshold) to take advantage of 2 extra queens, which takes very little skill! That is what I believe the world looks like: massive throws of dice, where IQ differences ABOVE A CERTAIN THRESHOLD provide little *predictable* advantage - the role of chance is too great. That is how the world is different from a constricted game like chess or Go.

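The modified-chess thought experiment can be put to numbers with a quick Monte Carlo sketch. This is my own toy model with assumed parameters: the engine wins any game in which the die never fires, and the weaker player always converts the two-queen windfall.

```python
import random

def win_rate_vs_engine(p_dice: float, moves: int = 40, games: int = 20_000,
                       seed: int = 0) -> float:
    """Fraction of games the weaker player wins when, on each of `moves`
    moves, a die roll (probability p_dice) hands them a decisive swing."""
    rng = random.Random(seed)  # seeded for reproducibility
    wins = 0
    for _ in range(games):
        # The weaker player wins iff the dice event fires at least once
        if any(rng.random() < p_dice for _ in range(moves)):
            wins += 1
    return wins / games

print(win_rate_vs_engine(0.0))   # pure chess: the engine always wins -> 0.0
print(win_rate_vs_engine(0.05))  # a 5% per-move dice rule: ~0.87, chance dominates
```

With no dice rule the skill gap decides everything; a modest per-move chance event flips the great majority of games - the point being made about chance swamping intelligence above a threshold.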
Now, understand that this is all a question of action in aggregate. We are talking about intelligent agents, in aggregate, acting upon a society that is also composed of people in aggregate. Here, the permutations of possible outcomes of calculated actions are infinitely greater.

The crux of my objection to "humans wiped out by AI" comes down to the following nodes, which circumscribe the topology of possible outcomes:

1) The intersection of game classes that describe the world

2) Hard limits to intelligence

3) Threshold intelligence effects

4) Random chance factors

5) Resilience to countermeasures

I believe that between these - which I cannot go into in detail in a space such as this - you capture the universe of plausible outcomes. Those outcomes do not include a "super AI" wiping out humans. I can be as confident about this as I was about the threat of Saddam's WMDs, no matter what "experts" claim. In the case of the WMDs, it came down to the nodes of: secrecy maintained by threshold numbers of conspirators, the mathematical implausibility of such, time and resources. Here the nodes are different, but the conclusion is just as firm.

The hype about out of control AI wiping out humanity is just an absurdity. It's a nothingburger of massive proportions.

Oh, but Tom, you have not seen this video, read this article, seen this presentation, etc., etc., etc.! My answer: I don't need to. No amount of "secret intelligence" seen by "experts" would have changed my views of Saddam's WMDs, because it contradicted basic math and verifiable facts about the world. I don't need to read every treatise by every kook who has written a million pages about their perpetual motion machine - because as soon as I know it contradicts the second law of thermodynamics, it's really irrelevant how exactly the kook has reached their conclusion, and I don't need to read their text.

Same here. Yes, I have not read Mr. X, or seen video Y. But if they maintain anything that steps outside of the nodes I outlined, it's like saying "fear my perpetual motion machine" - it's nonsense. I don't need to know the shape of the nonsense to know it's nonsense. If a 5,000-page book of mathematical calculations purports to show that 1+1=3, I don't need to read every page to find the exact spot where the mistake was made. I already know it's nonsense, because it breaks fundamental axioms.

To be honest, I think AI is at about a Newtonian level of development. It has a long - LONG - way to go before it becomes anything close to what is claimed for it. Remember the long AI winter? Before its onset there was exuberance, with strong AI just around the corner as predicted by the top AI scientists of the day - brilliant people. And yet the winter came. I believe it is exactly the same froth now. How do I know? Because I see that the fundamental problems of AI have still not even been identified, let alone addressed - that's why I call it Newtonian: a long way to go.

For now: much ado about very little.

I don't care if an "expert" labels me an ignoramus laboring in the shadow of the Dunning-Kruger effect... because I'm quite confident that I in turn have considered factors the expert has not, so round and round we go (except odds are - I've lapped 'em - say I:)).

And with this, I bow out of this dispute, as time limitations won't allow for a full exploration of the subject anyway.

Tom,

 

First, a meta-question. Why are you averse to using the quote facilities provided by the forum? It would make your posts a lot easier for the rest of us to read than your use of the cryptic [...]'s to quote others. Seems a bit lazy on your part, and out of character given the obviously large amount of time you spent composing your post. Just sayin...

 

Maybe thinking in this case that “I know what will or will not happen in the future” qualifies me as a victim of Dunning-Kruger effect ... I try to wear my dunce’s cap with panache.

 

That is obviously your right. Thanks for clarifying and putting your level of knowledge of AI in perspective.

 

Regarding your suggested analogy with your narrow-minded, conspiracy-believing acquaintance, I don't think it is a good analogy at all. It doesn't seem to me that either you or I lack imagination, or the ability to think rationally about the plausibility of scenarios on either side of the "AI apocalypse" ledger. And I appreciate your taking the time to elaborate on the reasons for your skepticism.

 

All one has to do is take ORDINARY CARE, with solutions plain as day and easily characterized

 

The problem, Tom, is that people don't take ordinary care, and in many instances are disincentivized to do so. We've already had several flash crashes as a result of rogue high-frequency-trading bots on Wall Street, developed and deployed by hedge funds and private equity firms willing to go to great lengths to make a profit, putting our financial institutions at risk. The financial crisis we are still digging ourselves out of was an example of greedy people taking risks they didn't understand. Are you suggesting all we need to do is take ordinary care and we'll be sure to avoid any and all events like these in the future? Seems Pollyannaish to me...

 

As our world gets more connected, and algorithms gain more influence over the physical world, such scenarios will become more and more common. Add humans with nefarious intent into the mix, and things get even worse, as the recent botnet attack on major parts of the US internet infrastructure illustrates. The world could be seriously messed up by events like these (not to mention bioweapons, nanotech etc), without any sort of "AI uprising", which I agree is implausible, at least anytime soon. 

 

So you can take all the experts - and shove ‘em… into outer space :) ...

Bottom line - about the worst argument to advance to me is that “experts” think this or that. I spit on it...

And I’d like to stress how the addition of “experts” into the mix brings absolutely NOTHING to the argument...
 

So I spit on “experts”. Give me *arguments*, and don’t bother with the “expert” *opinion*.

 

See, I told you, I’m allergic to arguments from authority :) 

How can I, an “amateur” judge what “experts” tell us? Simple - does it make sense, and are there valid counter-arguments. Period

 

I agree it's good to be skeptical, and arguments based simply on authority are not to be trusted. But you've entirely ducked my question. What do you do in scenarios where your sense of what "makes sense" breaks down or is insufficient to answer the question?

 

Take the question of global climate change, which you seem to have conveniently ignored despite the length of your response.

 

Neither you nor I have the expertise to know whether the climate really is warming, or whether it is human-induced or simply a result of natural fluctuations in temperature. There are plenty of counterarguments put forth by those who deny climate change is happening, or is manmade. So who do you believe, and what do you do? Do you go with the consensus of experts, or your own intuition and logical reasoning?

 

I say in such instances it's best to go with the experts, particularly when the stakes are high. Similarly, when you say:

 

The big bad AI threat, is in my estimation complete hokum and a giant nothingburger. I am not impressed by your experts and their “arguments”. 

 

I tend to strongly discount your perspective, since as far as I can tell you have little if any insights or expertise into the progress being made in AI, except what you read in the popular press. To wit:

 

And I see the same silly extrapolation arguments here as well - the “self-learning” based on the Go example supposedly accelerating the rate of AI development and level of intelligence so fast it’ll soon launch into the orbit of humanly unattainable god-like omniscience. LOL.


... No amount of intelligence is going to transcend the laws of physics and mathematics. 

 

Obviously you've been reading far too many popular media accounts (where "if it bleeds, it leads") and not enough (any?) original research in the area of AI, Tom. If you had really done your homework, you'd know that very few (if any) real AI experts are making such silly predictions.

 

To illustrate these limitations, consider them from a purely physics limitation point of view. There is no supercomputer possible - even theoretically - that can calculate with precision, where a specific leaf is going to be blown in a hurricane, or a water molecule or atom will end up in an ocean wave. 

 

Do you realize just how silly these strawman arguments look, Tom? "A can never accomplish B" says nothing about the truth of the statement "A can accomplish C" where B ≠ C.

 

Are you really suggesting that a machine intelligence would need to be able to calculate where every water molecule in an ocean wave will end up in order to create a "very bad day" for humanity? I'm just sorry you wasted so much time spinning such elaborate examples.

 

Reality, and operating in the real world is not a game like Go. It is an open-ended game with massive unpredictable variables. No intelligence can account for them all, or even a small fraction.

 

Obviously no intelligence can predict all variables with perfect accuracy in the real world. But it isn't necessary to predict all variables, or predict any of them with perfect accuracy to create dystopian outcomes. In fact, well-intended entities (human or otherwise) with substantial influence but limited ability to predict the consequences of their actions are likely to be more of a threat than omniscient ones, for obvious reasons. Same goes for entities (particularly humans), with nefarious intentions - sometimes ignorance of likely outcomes and side-effects results in real tragedies.

 

This is life, where the factors of chance are overwhelming - and here, ONCE YOU PASS A THRESHOLD of intelligence, your chances are unpredictable enough that it can be equal no matter how high the intelligence of you opponent goes (up to the hard limit). That’s the reality of dynamic systems with unpredictable variables.

 

Are you sure about your "the playing field is equalized once a threshold is crossed" notion? Are you sure enough to bet the future of the human race on it?

 

I think I've probably passed the so-called threshold of intelligence you seem to think exists that puts everyone (including future machine intelligences) on a (nearly) equal footing. But I for one am humble enough to realize that I just don't have the cognitive capacity to solve many problems - for example understanding string theory or coming up with my own theory to unify gravity and quantum mechanics. I wouldn't be surprised if other humans can/will accomplish those things if given enough time and assistance from technology (computer modeling, CERN supercollider etc). The same goes for machines with intelligence superior to our own - they might be able to figure it out, while I never would, at least without serious augmentation.

 

So I would argue there are gradations of intelligence beyond normal human IQs, and these differences can (and will) make a difference. After all, scientists much smarter than either of us invented the nuclear bomb, and its existence has dramatically changed the course of history and the balance of power in the real world. Given the close calls we had during the cold war, it seems quite conceivable that the planet might have been wrecked by the brainchild of a few really smart scientists.

 

...I believe between these - which I cannot go into detail in such a space as here - you capture the universe of plausible outcomes. Those outcomes do not include a “super AI” wiping out humans. I can be as confident about this, as I was about the threat of Saddam’s WMDs, no matter what “experts” claim.

I don't care if an "expert" labels me an ignoramus laboring in the shadow of the Dunning-Kruger effect... because I'm quite confident that I in turn have considered factors the expert has not, so round and round we go (except odds are - I've lapped 'em - say I:)).   

 

Well, all I can say is I'm glad the future of the human race doesn't depend on your beliefs Tom, despite your confidence in them.

 

You and I agree on (at least) two things - the world is a complicated place, and the future is therefore hard to predict.

 

That's why rather than going with my intuitions, or listening to ill-informed outsiders, I'll instead listen to experts in their respective fields, who know much more about the subject as a result of years of study.

 

These experts say that now is the time to start thinking and acting to counter the potential serious downsides of both climate change and superintelligent AI, despite their full impact being uncertain and still many years in the future. Ignoring the seriousness and plausibility of these threats as assessed by the experts seems like ostrich head-burying.

 

But it seems here we must agree to disagree. Thanks for engaging with me on this.

 

--Dean
