
How are you planning to navigate your own actions/thoughtspace with ever-shortening AGI timelines? (AGI=>LEV/doom) How are you getting AI to interpret papers you do not understand for you?


Alex K Chen


Tesla stats are totally bogus and have been for years. Waymo's may be legit, but the vast majority of their miles have been in very benign conditions - relatively slow-speed driving on wide, beautiful Arizona roads in good weather - so it's not an apples-to-apples comparison.

The joke is that Waymo (and Cruise) pitch their technology based on all the lives they'll save. But relatively few deaths occur on the kind of city streets, and in the kind of good weather, in which they operate.

If they ever take off, watch for more 'coning' and acts of vandalism against empty Waymo and Cruise vehicles deadheading between fares, like we recently saw in SF (if Cruise ever gets back on the road...) - vandalism from out-of-work Uber drivers and from people pissed off by the traffic snarls and emergency-vehicle interference these cars are causing.

Plus the business case for self-driving taxis is just ludicrous. First off, you've got the ongoing millions of dollars in expenses for mapping and continuously updating the maps of each new city. The vehicle plus equipment costs several hundred thousand dollars each, and the systems won't operate in bad weather. And you need a "mission control center" with one remote human "driver" for every 2-4 "autonomous" cars to rescue them every 10-20 miles when they screw up and cause a traffic jam.
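Just to make the math concrete, here's a back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption on my part (vehicle cost, lifetime, salaries, fleet size), not real fleet data; the point is only how the fixed overheads stack up per mile:

# Back-of-the-envelope robotaxi cost model. ALL numbers below are
# illustrative assumptions for the sake of argument, not fleet data.

VEHICLE_COST = 250_000          # car + sensor suite, dollars
VEHICLE_LIFE_MILES = 300_000    # miles before the vehicle is retired
REMOTE_OP_COST = 80_000         # loaded annual cost of one remote "driver"
CARS_PER_OPERATOR = 3           # one supervisor per 2-4 cars (midpoint)
MILES_PER_CAR_YEAR = 50_000     # utilization per robotaxi per year
CITY_MAPPING_COST = 5_000_000   # annual mapping/updating budget per city
FLEET_PER_CITY = 300            # robotaxis sharing that mapping cost

vehicle = VEHICLE_COST / VEHICLE_LIFE_MILES
supervision = REMOTE_OP_COST / (CARS_PER_OPERATOR * MILES_PER_CAR_YEAR)
mapping = CITY_MAPPING_COST / (FLEET_PER_CITY * MILES_PER_CAR_YEAR)

print(f"vehicle:     ${vehicle:.2f} per mile")
print(f"supervision: ${supervision:.2f} per mile")
print(f"mapping:     ${mapping:.2f} per mile")
print(f"total:       ${vehicle + supervision + mapping:.2f} per mile "
      "(before energy, insurance, cleaning, depots...)")

Under those made-up but not crazy numbers you're near $1.70 per mile in fixed overhead before the car burns a single watt - and Uber carries none of those costs on its own books.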

In contrast, Uber drivers can use crude GPS maps and go anywhere the passenger wants, including outside of the carefully mapped downtown area; they don't need a remote driver ready to take over, and they supply and maintain their own vehicles at no expense to Uber.

Link to comment
Share on other sites


That's what I'm talking about. The fact that you need the continuous presence of remote human supervisors to rescue these "self-driving" vehicles tells you all you need to know - the whole concept is wild hype. Same with AI - there was a scandal a while ago where an AI startup was showcasing its capabilities, and then it transpired that the customer requests supposedly fulfilled by AI actually had remote humans supervising the whole process to such a degree that, in essence, it was the humans doing the work all along. That is the definition of HYPE - wild exaggeration of pedestrian capabilities.

Again, it's often at best small steps forward and no revolution. These technologies have their place, but they are the quotidian progress of technology by small or big steps, and very, very rarely the revolution the claims always center on. The AI hype is getting out of hand. In the end, it is the humans who have to do the heavy lifting, and AI is at best a more or less useful automation of some processes, just as the calculator helped folks with math instead of doing it by hand. Useful, yes; revolutionary, as the hype claims, no.

And as Dean points out, you need to carefully assess the economic case for the technology. To take an exaggerated example: imagine that I invented a "self-driving" car that drove "automatically" but required an engineer onboard at all times to make sure regular screwups don't turn dangerous - like when the car's system mistakes a child at the side of the road for a mailbox because sunlight hit it at a certain angle, and the kid steps into the road with disastrous results. You can go back and tweak the programming, but you will need to do that in perpetuity, because the system is simply incapable of understanding the broader context of an input, and you'll never run out of novel situations. Well, in that case, I'd say my supposed "self-driving" car is a complete economic failure, because I can either employ a driver at a fraction of the cost, or drive the damn car myself. That's what I mean when I say that managing the system introduces such inefficiencies that you may as well do the work yourself. If I have to go over an AI-generated text with a fine-tooth comb for possibly disastrous flaws, I may as well write it myself from scratch.

To me, the kind of hype you see around AI recalls the Mechanical Turk chess-playing "machine" fraud from back in the day, where a big box with a chessboard was claimed to be a chess-playing machine - only it transpired that a human chess master hidden inside was actually the one playing. If you need human supervisors and constant re-checking of the results because you can't trust the "intelligence" part of the AI, the whole thing needs to come down from the clouds and back to where it really belongs: incremental progress in specialized settings. Let us leave the talk of general artificial intelligence to sci-fi for now, and revisit the issue when actual rather than hyped demonstrations are available (maybe in another 200 years?).

Link to comment
Share on other sites

This crossover is coming for every single thing humans think they're the best at. The only question is the timing for each task. There is no going back, no putting AI back in the bag, and there is no technical reason why it would ever plateau or stop at merely human levels of competence on any given task. There is nothing magical about human meat brains, so eventually you throw enough artificial neurons and training at AI and it will exceed us. The economics go from terrible -> breakeven -> humans are no longer economically competitive. And the transition, when it comes, happens fast (a few years).

 

BTW, a potential post-digital-transistor method for running AIs significantly more cheaply was disclosed today:

 

https://www.extropic.ai/future

 

That startup is headed by two ex-Google quantum computing nerds, so it isn't pure vaporware and may actually get to market.

Link to comment
Share on other sites

 

Tesla Autopilot and similar automated driving systems get ‘poor’ rating from prominent safety group

https://www.cnn.com/2024/03/11/cars/insurance-group-rates-tesla-autopilot-safety/index.html

The Insurance Institute for Highway Safety, which rates cars and SUVs for safety, examined so-called advanced driver assistance systems such as Tesla Autopilot and found them wanting.

These systems combine different sensors and technologies to help a driver keep their vehicle in its lane and avoid hitting vehicles in front and to the sides. Usually, these systems work only on highways. Some can even allow drivers to remove their hands from the steering wheel, but all require drivers to pay attention to the road and vehicles around them at all times.

Of the 14 systems tested by the group, 11 earned a "poor" rating, including Tesla's Autopilot and so-called Full Self Driving systems. (Full Self Driving is not actually fully self-driving but, unlike Autopilot and almost all other such systems, it is designed to work on city and suburban streets.)

Link to comment
Share on other sites

Automated software engineering just took a big jump today. It's getting to the point where it's just starting to take some Upwork jobs away from humans. The crossover of the capability lines is here, now. Watch how much farther above human level it goes in the next few years.

 

"Today we're excited to introduce Devin, the first AI software engineer.

Devin is the new state-of-the-art on the SWE-Bench coding benchmark, has successfully passed practical engineering interviews from leading AI companies, and has even completed real jobs on Upwork.

Devin is an autonomous agent that solves engineering tasks through the use of its own shell, code editor, and web browser.

When evaluated on the SWE-Bench benchmark, which asks an AI to resolve GitHub issues found in real-world open-source projects, Devin correctly resolves 13.86% of the issues unassisted, far exceeding the previous state-of-the-art model performance of 1.96% unassisted and 4.80% assisted."

 

https://twitter.com/cognition_labs/status/1767548763134964000

 

article: https://www.bloomberg.com/news/articles/2024-03-12/cognition-ai-is-a-peter-thiel-backed-coding-assistant?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTcxMDI0ODc3NCwiZXhwIjoxNzEwODUzNTc0LCJhcnRpY2xlSWQiOiJTQThLNFFUMEcxS1cwMCIsImJjb25uZWN0SWQiOiI5MTM4NzMzNDcyQkY0QjlGQTg0OTI3QTVBRjY1QzBCRiJ9.DZvx9NvMMQF0p-rA6xO3KKH0DxcVdAOWKaHXtW-3R6c
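For context on what that benchmark measures: a SWE-Bench-style evaluation hands the agent a real repository plus a GitHub issue, applies whatever patch the agent produces, and counts the issue as resolved only if the project's own tests then pass. Here's a minimal sketch of that loop - to be clear, this is NOT the actual SWE-Bench harness; run_agent() and the pytest command are hypothetical placeholders:

# Sketch of a SWE-Bench-STYLE scoring loop. NOT the real harness;
# run_agent() and the test command are placeholders for illustration.
import subprocess

def issue_resolved(repo_dir: str, issue_text: str, run_agent) -> bool:
    """Return True if the agent's patch applies and the repo's tests pass."""
    patch = run_agent(repo_dir, issue_text)      # agent emits a unified diff
    applied = subprocess.run(["git", "apply", "-"], input=patch,
                             text=True, cwd=repo_dir)
    if applied.returncode != 0:
        return False                             # unappliable patch = failure
    tests = subprocess.run(["python", "-m", "pytest", "-q"], cwd=repo_dir)
    return tests.returncode == 0                 # resolved only if tests pass

def score(tasks, run_agent) -> float:
    """Fraction of (repo_dir, issue_text) tasks resolved.
    A score of 0.1386 corresponds to Devin's reported 13.86%."""
    return sum(issue_resolved(r, i, run_agent) for r, i in tasks) / len(tasks)

The bar is unforgiving: a patch that almost works scores zero, which is why single-digit percentages were state of the art until now.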

 

Edited by BrianA
Link to comment
Share on other sites

2 hours ago, BrianA said:

Old news, old software, stop looking in the rear view mirror Dean 😄

Call me when you're brave enough to hop in a Tesla with FSD, punch in a destination 30 miles away, and then close your eyes until you arrive. Come on. If you're so impressed, prove it by putting your life on the line. 

I hope you didn't buy a Tesla with FSD based on Musk's promise a couple of years ago that Teslas would soon become an "appreciating asset" because you'd be able to rent them out as part of his fleet of one million self-driving robotaxis and make oodles of money. How's that working out for Tesla owners now? How has the price of used Teslas held up?

Two of my former students from CMU who worked on self-driving cars with me in the 90s both own Teslas. Neither of them trusts Autopilot anywhere but on limited-access highways, and even then they say supervising it is more stressful than actually driving - with phantom braking still a very annoying issue, particularly in traffic. Both are very wealthy tech geeks who worked on self-driving cars for a living, but neither has purchased FSD. Too dangerous, and they don't want to be one of Musk's guinea pigs.

I worked on self-driving cars for my entire career, and in 2018 I did the technical due diligence for SoftBank when they invested $2B in Cruise. I recommended against investing, saying it would be many years before Cruise could hope to deploy at commercial scale even in fair-weather cities. I also said that the amount of remote human supervision/intervention that would be required even when they do eventually deploy would be cost-prohibitive. And I pointed out that Cruise was playing fast and loose with the totally bogus "disengagement reports" they were showing investors and the California DMV, and that their then-CEO Kyle Vogt (now disgraced and no longer with the company) was acting irresponsibly. Sound familiar?

SoftBank ignored my advice and went ahead with their investment in Cruise - saying they had a "long time horizon." It was at the peak of the hype cycle for self-driving cars. But SoftBank was smart. They sold their shares back to GM after two years, breaking even before the sh*t hit the fan.

 

Link to comment
Share on other sites

I'm fully aware of your background, Dean (there was also a small mention of your work in a book I listened to recently: A Brief History of Intelligence by Max Bennett), and I do value your input greatly. And you totally called the Cruise failure in your description above - nice work!

 

However, I also believe, as I tried to show in that chess chart, that AI capabilities advance over time relative to essentially static human capability levels - from "useless", to "equivalent", to "superhuman". As far as I can see, this is happening with self-driving car software right now, along with pretty much every other human task, such as today's software engineering breakthrough. The only question is the trajectory of the self-driving capabilities line, and when it crosses over and exceeds the average human driver. My timeline takes into account the recent, apparently accelerating improvements in AI (hardware, algorithms, money, people, etc.), and I'm assuming that acceleration is going to continue - leading to a timeline that is a lot earlier than yours seems to be. Or are you predicting that AI capabilities will never cross over the human level of driving?

Link to comment
Share on other sites

14 minutes ago, BrianA said:

My timeline takes into account the recent, apparently accelerating improvements in AI (hardware, algorithms, money, people, etc.), and I'm assuming that acceleration is going to continue - leading to a timeline that is a lot earlier than yours seems to be. Or are you predicting that AI capabilities will never cross over the human level of driving?

I'm not saying never. Funnily enough, in 1997, during the $100M USDOT-sponsored "Automated Highway System" demo I was one of the leads for, part of our pitch was my prediction that it would be about 20 years until self-driving cars became widely available. That was optimistic. These days I'm saying 10 years for cars you can buy, and at least five before robotaxis become reasonably common in cities - though that was before the Cruise implosion, so I may have to push those ETAs out further. And I still think there is a good chance robotaxis may never be viable, for financial and sociopolitical reasons.

Same goes for AGI. I think it could eventually happen, after a few more major breakthroughs, and it could be very disruptive/dangerous when it does. In the meantime, the crappy LLM-based "AI" systems we're stuck with now may do as much or more damage by poisoning our information ecology and destabilizing society.

 

Link to comment
Share on other sites

I'm not a huge fan of Marcus; he seems to be a bit of a "goalpost mover", always finding new things the current AI can't quite do - but every time a new AI comes out that can do some of what he said it never would, he conveniently forgets about it.

Goalpost moving is a classic "cope" maneuver used by humans who'd rather not mentally deal with the full effects of what's just around the corner, or who have staked so much of their social capital on a particular viewpoint that they can't admit when it's not going their way. A related "AGI never" person seems to be Yann LeCun, who is deservedly getting dunked on on X today as things he claimed were far in the future happen right now: a robot cleaning up dishes, and (after he claimed RL was useless) Google rolling out a game-generalizing AI that uses RL.

My take: when I see people like this make predictions over and over that turn out to be wrong, with timelines way off, I start discounting their takes.

Link to comment
Share on other sites

Again, nothing new in AI as far as the prevailing paradigm goes. The more tightly you can define a domain, the easier, quicker and better AI will perform. Chess is a classic example: a limited number of elements and a limited number of rules. AI should eat it for breakfast, and it did. Trivial. Self-driving cars already involve many domains, and AI is having considerable difficulties. But that's just the start of the challenges. When you move into truly multi-domain problems, AI collapses. This doesn't mean AI will never overmatch human capabilities in every area. It just means that the time horizon is considerably longer than the tech optimists imagine or the hypesters claim. Which is why I say, "fine, but in 200 years". Achieving general AI is an incredibly hard task, and will demand many very big breakthroughs. Heck, look how optimistic everyone was when the war on cancer was declared back in the day - it was assumed we'd have a cure for cancer within five years at most, because look, we landed on the moon. Yet here we are, over half a century later, nowhere near conquering cancer. Because note: in many ways, landing humans on the moon was a trivial problem (like chess!), as the domain was tightly defined - physics, materials science and engineering. That's it, so it was easily solvable in principle. Cancer is a lot murkier, with a ton of biochemistry yet to be discovered.

But cancer too, in principle, is a fairly defined domain. It should be solvable in principle, and I expect it will be. If anything, I'm shocked at how poor AI performance has been to date. Just one example: sustainable, economical fusion energy. A very well defined domain, in principle not that different from landing on the moon: physics, materials science and engineering. AI should be able to chart out a research and development program to give us a workable solution within a handful of years. And yet, nada - again, I'm shocked at how very, very poor and unimpressive today's AI is. I expected a lot better. It seems it can't even hack trivial problems (trivial meaning "tightly defined domain").

My main objection to today's AI is the vast chasm between the grandiose claims of the AI hypesters and the abysmally sh|tty actual current AI capabilities. It's insulting.

Link to comment
Share on other sites

I listened to a podcast a couple of weeks back with the CEO of Helion Energy, one of the more promising fusion startups, and IIRC he did mention they are using some in-house AI tools to help speed up their development.

 

Also, there has been news in the past couple of weeks of some bio companies using AI to accelerate their work - check the news from Insilico.

 

So things are speeding up via AI; some of it is a bit behind the scenes and not always advertised. I think this trend will continue over the next few years, to the point where it will just be assumed that all companies and research groups are using AI in some way, just like we all use the internet today. Anyone not using it will begin to appear a bit backwards.

 

Here's today's AI hype, the Figure robot demo; I apologize if this is too triggering:

 

 

Link to comment
Share on other sites

I'm excited and optimistic about AI and humanoid robots. That Figure demo posted above is pretty impressive, assuming it's not a fraud/hoax. Given how quickly things seem to be progressing right now, it's not hard to imagine very capable humanoid robots taking over many jobs within 10 years, and eventually taking over most jobs. I don't fear this; I think it will lead to a new era of prosperity where traditional work becomes optional or obsolete. I agree, though, that right now the hype cycle is amped up to 11, and investors are likely to be disappointed, as in all past hype cycles. But then again, there's something about this that genuinely does seem different: when have we ever had the computing power to simulate or surpass a human brain 🧠? Never - but now, all of a sudden, we seem to be pretty close. If anything, I'd guess the average person is underestimating how much things are going to change over the next 30 years. But maybe I'm also overestimating how fast it can happen... perhaps the real hardcore AI will stay stuck in massive power-sucking data centers, unable to make it out into the world via robotics, and we'll just end up with a lot of expensive novelty bots that can clean our houses, do dishes and laundry, and answer questions like ChatGPT does 🤔

Link to comment
Share on other sites

8 hours ago, Gordo said:

I think it will lead to a new era of prosperity where traditional work becomes optional or obsolete.

Putting aside questions of the viability and timeline of the technology that would make this possible, how do you foresee such a transition to a "work optional" society coming about, Gordo?

Do you imagine the emergence of a real populist movement (not the fake one we're seeing now on the Right) that would gain sociopolitical power and then tax the rich beneficiaries of such a technological revolution to fund universal basic income for all the people who no longer have jobs?

Do you think billionaires like Elon Musk or Sam Altman really have the best interests of everyday people at heart, and would be happy to equitably distribute the vast wealth they are accumulating to everyone on earth? Color me skeptical. I can't see how we get there from here, given the polarization our society is already experiencing around cultural issues that are trivial by comparison.

When you combine societal strife and dysfunction with growing geopolitical tensions, destabilizing technology, climate change, and the illusory promise of the "energy transition", I'm not optimistic about humanity's or the planet's prospects over the next few decades.

Link to comment
Share on other sites

4 hours ago, Dean Pomerleau said:

how do you foresee such a transition to a "work optional" society coming about, Gordo?

Ignoring the timeline, because it's going to take quite a lot of time: I DO in fact think the "universal basic income" idea will become the norm, though "basic" is not the right word. I'm no Elon Musk fanboy and don't own a Tesla, but I kind of agree with him that eventually "universal high incomes" are coming: https://finance.yahoo.com/news/elon-musk-predicts-universal-high-160015532.html I don't see how this DOESN'T happen - I mean, for certain, bots will be providing everything humans want, including food and shelter and everything else you can think of. In some respects we already have a sort of UBI in place right now, with millions getting Social Security, Medicare/Medicaid, SNAP (food stamps), etc.

But yeah, I know: with $34 trillion in debt, and that number already growing by a couple trillion more each year, I don't know how this really plays out - at some point we are going to have insane inflation like a banana republic. I think some in the investor class see technology as our "way out" of the debt-bomb situation we are already in; i.e., a productivity miracle should in theory solve a debt crisis, if in fact everything essentially can be produced for nearly free. I do believe that at some distant point in the future money won't even exist (see: The Economic Lessons of Star Trek’s Money-Free Society).

Link to comment
Share on other sites

10 minutes ago, Gordo said:

In some respects we already have a sort of UBI in place right now, with millions getting Social Security, Medicare/Medicaid, SNAP (food stamps), etc.

If by "we" you mean rich western societies.

Are you saying that, once the robots and AI have replaced everyone's jobs and are making their owners oodles of money, those rich Westerners are going to be generous enough to extend all these services not only to everyone in their own country (including undocumented immigrants) but also to everyone in the global south - while climate change is wreaking havoc within our own country and causing a surge of climate refugees to cross our borders?

Yeah - good luck with that.

Link to comment
Share on other sites

An alternative is the idea that creating your own high-earning business becomes easier and easier with the help of AI. There is already an established trend showing this - and that was before AI recently became actually useful, which will only accelerate it. See the chart at this post:

 

 

 

You might say: OK, but historically, being a business owner isn't a mainstream thing for the "everyday person". But statistically, more new businesses than ever have been created since the pandemic:

 

https://gusto.com/company-news/2022-new-business-surge-census-data

 

Now imagine if all the little annoyances and hurdles involved in starting a business (particularly a software-based business) just... fade away and get waaay easier with AI employees. Everyone will be doing it.

Edited by BrianA
Link to comment
Share on other sites

It's instructive to read old articles going back decades (and at this point, centuries) in which prominent scientists, leaders in their fields, made predictions about what the future would look like, technology and so on. You then compare them with the actual outcomes.

Same with the various doom (or glorious-future-nirvana) predictors. All have been consistently wrong.

What all those wrong predictions have in common - the fundamental flaw underlying them, the consistent mistake - is the straight trend-line extension. A crude example: it was once predicted that, given the increasing density of New York, the increasing need for horse carriages for transportation, and how much waste each horse generated, New York would inevitably drown in sh|t. It was a straight-line extension of trends: number of people --> more, horse carriages --> more, density --> increasing, ergo amount of horse waste --> drown. What it never factored in was a disruption of the trend line - horse-drawn carriages giving way to cars - and so today the horse waste is limited to Central Park.

You see the same linear extension over and over again. The Malthusian gloom prediction? Population growth, arable land, agricultural efficiency --> trend lines extended = mass die-off in famine. A straight trend-line extension. And wrong.

Social trends and technology stimulate adaptations and responses in countless unpredictable ways. A trend doesn't just continue unimpeded - almost immediately it generates a response that breaks the trend line, so any prediction that rests on a simple straight-line extension is destined to be wrong.
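To make that concrete, here's a toy sketch (all numbers invented for illustration): fit a straight line to the steep middle stretch of an S-shaped (logistic) growth curve - the phase the forecaster happens to be observing - and extrapolate. The line overshoots badly once the real curve bends toward saturation:

# Toy demo of why straight trend-line extension fails. All numbers
# are invented; the "actual" trend is a logistic curve that saturates.
import numpy as np

years = np.arange(30)
actual = 100 / (1 + np.exp(-(years - 15) / 3))   # S-curve, ceiling at 100

observed = slice(10, 20)                          # the steep decade we "see"
slope, intercept = np.polyfit(years[observed], actual[observed], deg=1)
trend_line = slope * years + intercept            # straight-line extension

for y in (19, 24, 29):
    print(f"year {y}: actual {actual[y]:5.1f}, trend line {trend_line[y]:6.1f}")
# The extrapolated line keeps climbing forever; the real curve flattens.

The horse-manure and Malthus forecasts are exactly this: a line fit to the steep phase, extended past the point where cars, or the Green Revolution, bent the curve.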

We see this mistake repeated so often, it's just comical. There's this guy, Peter Zeihan, a lecturer who is very popular on the corporate speech circuit and on social media these days, whose shtick is making grandiose predictions about the near future - "China will starve and collapse", "the Russian war will cause worldwide famine, the collapse of the German economy, and Europe will freeze", etc. - and because his time horizons are so short, we get to see how comically wrong his predictions are. His stock in trade is always the same technique: grab some statistic or fact, draw a straight-line extension of a trend, and announce a grand, dramatic conclusion. So: Russia/Belarus produce the majority of some key fertilizer ingredients, war disrupts access --> trend line extended, worldwide famine. Russia supplies the majority of natural gas to Europe, supplies stop --> line extended, Europe freezes, German industry collapses, Germany is de-industrialized. Of course, fertilizer from Russia was cut off, new sources were found, and today we have excess grain production and record-low grain prices; new gas was sourced, and natural gas prices are dramatically lower than even before the war.

Zeihan is of course super popular; everyone listens to him, and nobody is fazed by how ludicrously and consistently wrong he is. Meanwhile, he is not in the least chastened by how his predictions turn out, and he keeps churning out dramatic forecasts which people love to hear (drama!). The straight trend-line extension shtick is all he uses - at this point he can't be unaware of this basic flaw, but the fees he generates are very nice, so why change... Basically, a grifter.

And really, how surprising is it? Straight trend-line extension is the easiest thing in the world. Any idiot can grab some fact and extend a trend line, and presto, a prediction! How can Johnny reach the moon from his back yard? Why, he puts one brick on another (that's already higher than the first), then another brick, then another; extend the trend line, and there you are, he reaches the moon! Just extending a trend line and calling it a day is so very much simpler than trying to anticipate how this trend will trigger another phenomenon, and in turn yet another, in so many complicated and unanticipated ways that predicting what that whole complex matrix will look like far into the future is a thankless task indeed. As Yogi Berra said, making predictions is hard, especially about the future. It's so much simpler to just grab a trend line and draw a straight pencil line, blissfully free of having to think through all the interactions and complications, all those troublesome realities of the actual world, rather than the cozy fake model we've built inside our prediction bubble. And so convincing to the rubes out there! "Yep, true, definite fact that Europe runs on Russian gas... holy moly, right, doom ahead, wow, WOW!" The work is both easy (just straight-line extension!) and rewarding - perfect for all the hypesters and grifters out there, with so many delicious dreams to sell for the rubes to salivate over.

We've seen the same brain-dead straight trend-line extension in all the technology hype of decades past, and it's no different with AI. Take some data mining and pattern recognition, give it a sexy name ("AI"), stretch the trend lines to absurd lengths with zero reference to any complicating factors that might interrupt the cozy model, and there you have it: a cornucopia of absurd and laughable predictions about the brave new world of AI.

Edited by TomBAvoider
Link to comment
Share on other sites

So much so that they got an AI model tailored directly to them... The more out there you are, the more likely you are to get an AI model tailored so well to you that you influence the thinking of EVERYONE (they also retweeted one of my tweets saying this).

Link to comment
Share on other sites
