
How are you planning to navigate your own actions/thoughtspace with ever-shortening AGI timelines? (AGI=>LEV/doom) How are you getting AI to interpret papers you do not understand for you?


Alex K Chen

Recommended Posts

If anything is certain, it's that time is more valuable than ever, and investing in $SMH/$NVDA/$AMD will more than pay for itself. The time value of money massively increases - it makes less sense to hold savings for >10-year timescales. Use your money to treat yourself more. Get the fastest PC possible, save as much of your content on your PC as possible (b/c NVIDIA GPUs allow you to search your entire computer now), and use Uber if transfers on public transport wreck your flow.
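(Re: "search your entire computer" - here's a minimal sketch of what local semantic search over your own files might look like, using sentence-transformers as a stand-in; the model name, folder path, and query are all illustrative, not the actual NVIDIA tooling:)

```python
# Sketch: local semantic search over your own files (illustrative -
# not the specific NVIDIA tool referenced above).
# Assumes: pip install sentence-transformers
from pathlib import Path
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model

# Embed every text file under a folder (path is hypothetical)
files = list(Path("~/notes").expanduser().rglob("*.txt"))
texts = [f.read_text(errors="ignore")[:2000] for f in files]  # truncate long files
doc_emb = model.encode(texts, convert_to_tensor=True)

# Ask a natural-language question against the whole folder
query = model.encode("notes about semaglutide dosing", convert_to_tensor=True)
for hit in util.semantic_search(query, doc_emb, top_k=5)[0]:
    print(files[hit["corpus_id"]], round(hit["score"], 3))
```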

Treating yourself more has made me buy more premium foods (like what I can get from Cambridge Naturals). It's not worth eating worse-quality (or more microplastic-contaminated) food when you can get better options with minimal time investment (even if the alternatives are costlier). Semaglutide is still important - by cutting appetite, it offsets the cost of more premium food (eg fermented vegetables from Cambridge Naturals)

https://01core.substack.com/ (sits at the intersection of all the right spheres)

(also, spend more time in areas where things are being done - eg SF).

Paul Christiano says that interest rates may massively rise as the time value of money goes way up.  https://www.lesswrong.com/posts/ngpC5PFAgxHJMhicM/agi-and-the-emh-markets-are-not-expecting-aligned-or-1
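(To make the time-value-of-money point concrete - the standard present-value formula, with rates I picked purely for illustration, not Christiano's numbers:)

```latex
% Present value of $100 received in 10 years, discounted at rate r:
PV = \frac{FV}{(1+r)^t}
% At r = 2\%:  PV = 100 / 1.02^{10} \approx \$82
% At r = 10\%: PV = 100 / 1.10^{10} \approx \$39
% Higher rates make future dollars worth far less today, hence
% "it makes less sense to hold savings for >10-year timescales."
```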

Spend more time around people who are *living for the age of AI* (eg people AROUND AI), rather than who are still trapped by pre-AGI structures (like school)

Oh, and this makes $5000 for 200 billion exosomes WAY more worth it (see my grg.org thread). 10 billion umbilical exosomes is *nothing*

Past regrets/social messes matter way less and will be more of a rounding error in the long-run/final computation

Many people in EA/AI-adjacent spheres are starting to care way less about longevity b/c "we all die as one or we all live as one" [either aligned AGI is achieved or unaligned AGI kills us all - all within ~20 years]. It's still important, however, to reduce aging rate (b/c total human compute depends on minimizing aging/pollution rate for a maximally faithful computation *and* for properly integrating with BCI). BCI/implants work *best* in a minimally-aged body (side effects/pain/inflammation are *all* worse in more-aged organisms)

To achieve enlightenment, do whatever it takes to make up for your shitty memory (this is why slowing your aging rate still matters). Making up for your shitty memory is maximally important for achieving alignment. Make all the Pareto-efficient improvements you can (get the cleanest+most searchable diet/input stream that you can - money matters less than ever). Nearcyan knows ALL the longevity hacks and has possibly THE most important Twitter ever.

Retro.bio people are cooler than all the other longevity people - spend more time with them. I was a reprogramming skeptic too, but then a recent tweet showed that OSKM *can* improve mid-life longevity.

Get someone like Anton Kulaga/Newton Winter to stash all rapamycin.news + crsociety.org content into their longevity LLM
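(A hedged sketch of what "stashing forum content into a longevity LLM" could look like - scrape threads, embed them, retrieve the top hits as LLM context. The thread URL and CSS selector are placeholders, and this is one plausible pipeline, not Kulaga's actual one:)

```python
# Sketch: index longevity-forum threads for retrieval-augmented generation.
# Assumes: pip install requests beautifulsoup4 sentence-transformers
# The thread URL and CSS selector below are placeholders.
import requests
from bs4 import BeautifulSoup
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def fetch_posts(thread_url: str) -> list[str]:
    """Pull the text of each post in a thread (selector is hypothetical)."""
    html = requests.get(thread_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return [div.get_text(" ", strip=True) for div in soup.select("div.post-content")]

corpus = fetch_posts("https://crsociety.org/...")  # placeholder thread URL
emb = model.encode(corpus, convert_to_tensor=True)

# At question time: retrieve the most relevant posts and hand them
# to whatever LLM you use as grounding context.
q = model.encode("evidence on rapamycin and lifespan", convert_to_tensor=True)
hits = util.semantic_search(q, emb, top_k=3)[0]
context = "\n---\n".join(corpus[h["corpus_id"]] for h in hits)
```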

Context/taste uniquely matter, esp as more gets automated by AI. AI still cannot automate all of human context. In this weird world, context/taste still more than make up for +2SD deficits in 20-year-old-normalized-fluid-intelligence/conscientiousness. Charles Liang of SMCI gets special moat privilege just for his early friendship with Jensen Huang, even if Charles Liang might not be **the** most innovative CEO at this stage.

Develop predictive analytics of what is "timeless" rather than the intermediate computations/pre-AGI "high scores" that don't matter.

Due to uncertainties around AGI alignment, everything must be treated with an even more extreme sense of urgency (this does not need to stress you out - b/c of this you can also afford to spend *more* on the near-term)

If you constantly feel misunderstood and hate putting any time into formatting, this matters less b/c AI can rephrase/format all your ideas for you. If you are quantity-over-quality, AI may rearrange all the good stuff into forms other people can take in

And pls always remember that Vincent Weisser is on *everyone's good side* and is the most outsider-friendly and most lovable person in longevity. If it becomes necessary for people in longevity who hate each other to un-hate each other, it will probably be through him... In any case, Sam Altman said AI will be "capable of superhuman persuasion well before it is superhuman at general intelligence", so AI may finally persuade the haters to unhate. If anyone in longevity has the kindness of Christ, it's Vincent Weisser.

x-posted in Zuzalu and Vitalia Telegram channels - we need a commonlounge for longevity X AI discussion and one does not yet exist. The Longevity Biotech Fellowship Slack does not work for long-form content that's searchable/accessible by all.



Posted (edited)

here's my #DiningGuide

It is super-easy to eat maximally healthy even if you are maximally lazy/want to put zero time into food preparation. All alignment outcomes (that maximize human value) are instrumentally convergent on minimizing humanity's rate of aging + microplastics exposure, and there are many Pareto improvements one can make.


Posted (edited)

^this is why u store as much as you can AND get proper backups (HDDs for long-term cold storage, but SSDs for your most frequently used data)

AI will automate much of the low-level stuff (including pipetting and a lot of experimental drudgery). See Jim Fan/Keerthana on how 2024 will be the year of robotics. It makes less sense to devote your vital brainpower to much of this stuff when it will be done better in a few years.

https://keerthanapg.com/tech/embodiment-agi/


Posted (edited)

It also helps to pay extra to be more secure (b/c AGI will mean more breaches), and esp to minimize your crypto's chances of getting hacked. There are SOME Pareto-efficient options that improve security without compromising convenience (tho they may cost some money)

Unconventional actions (esp those taken by @repligate) matter more than ever.

Assume that everything is findable/matchable (once your entire HD is scannable).

Also if you're upset b/c some people hate you or you fucked up your past, just know that AGI will make all this shit irrelevant. AGI may even help correct/align your imperfect integrity for you

 


Posted (edited)

The relative importance of AI/CS vs longevity has been inverted. AI/CS now matters more, b/c it seems that **the majority** of the probability mass on "solving longevity within our lifetimes" now runs through AGI rather than longevity research itself (though that will still be important). I know Peter Fedichev has a recent bearish paper on the ability to reverse PC2 of aging, but tissue/cell replacement may solve it, and a plurality of surveyed people in longevity believe that tissue replacement [way easier to do in humans than in mice, b/c human organs have way more time to be replaced] is the way to go (https://www.longbiofellowship.org/bottlenecks).

So if you aren't competent enough for direct longevity research (or were cursed with the wrong genetics/early life influence), don't worry. You can still do things on the margin (and making everyone healthier still counts).

Oh, also https://www.lesswrong.com/posts/JEhW3HDMKzekDShva/significantly-enhancing-adult-intelligence-with-gene-editing

My guess is that this won't come quickly enough to matter, but has maybe a 20% chance of making a difference on the margin. In any case, timelines for human-AI symbiosis are more aggressive than ever (a Neuralink patient can now control a mouse cursor by thought). If you're constantly frustrated at being "too dumb" to be as quick as the fastest learners, you'll eventually find the answers on SOME timescale, b/c of AI-generated explanations/videos *and* eventually human intelligence enhancement [on longer timescales, but potentially in less time than the number of years you've lived so far]

IDK if it will make up for poor coding skills (bioinformaticians are often notorious for them), and timelines for fully refactoring poor code may be longer than timelines to "do interesting things" [even if they're full of errors] - this is also why some think AGI might come even earlier than self-driving cars [which must be error-free]

Faking understanding/giving the illusion of understanding is now DOUBLY not worth it, esp b/c explanations will only get better over time

Be as interesting as possible in whatever way you can. Uniqueness doubly matters when most human labor will be automated away (though MAYBE more programming jobs will be created than ever before in the strangest of human in the loop ways)

===

an analogue to https://www.crystalknows.com/ will get better really quickly

you should have online handles unlinked to your real identity (after falling out with Facebook, the latter half of Gen Z [Discord culture] is rediscovering the joy of anonymous handles)

Greg Brockman says therapy will get really good really soon [it may love you like no one else can]


Posted (edited)

AI x biotech content:

https://arcinstitute.org/news/blog/evo

https://api.together.xyz/playground/chat/allenai/OLMo-7B-Instruct

WikiCrow | Future House

310.ai

https://amulyagarimella.substack.com/p/techbio-startups-build-your-bio-edge?

DeepOrigin (500 credits of free compute this month)

(Tho the real gamma is in AI)


Posted (edited)

Oh and get tFUS ASAP, through whatever means. Be aggressive at reducing all discomfort and sources of irritability 

And DOUBLE READ gwern on stress 

AI will help craft the ideal self-love/metta meditation for you

somehow, posting here gives the clearest thinking b/c it's easiest to just give the fewest fucks here (relative to twitter). I have zero expectation of anyone here replying


Posted (edited)

Figure out how to consistently anticipate big changes so you're not overly shocked by them. Have a stake in uncorrelated assets/friend groups (though I know AGI seems to be turning everything into one global feed, esp as OpenAI still has local dominance)


Posted (edited)

It's possible personalized AI will make people believe more in their self-serving biases. There may be one shared feed among AI people, but more ppl are skeptical of the Holocaust than ever before, and so many believe Trump's lies.

But there still seems to be a shared reality among educated ppl, even if they split into e/acc and pause-AI camps

 


Posted (edited)

Stakes are higher than ever, and there are more *secret groups* [that are super-super-tight on infosec/information control] than ever [this is b/c the politics of e/acc *and* decelerationism are more polarizing than ever]

"pivotal act"

[also b/c information can be copied/replicated/stored like *crazy* - as storage becomes easier - *and* get into people's personal GPTs]. That said, fake information about people will proliferate like crazy too, and a good portion of people will become increasingly unable to distinguish between real and unreal...

The illusion of animism - intelligence in everything - will permeate across the entire cosmos.

==

I know a lot of people are freaking out over near-term AGI timelines (some say it's a few years away) and that it may destroy human work w/o destroying human ownership. I'm still not convinced by the doomers that I should be too concerned yet (Nora Belrose and Quintin Pope have some of the best counterarguments)


Posted (edited)

I recently removed some of my edgier questions on Manifold just to be safe - a Pareto-efficient improvement, because these questions (while funny to a few) are not necessary. I can't afford to make life harder than it already is, esp b/c I *uniquely* have to rely on people's goodwill.

[decreasing timelines mean that you have to be *extra-careful* about safety]. This means I have to really rid myself of all unnecessary behavior/"side effects"

It seems ever more likely that I do not have to worry about money *at all* [or spend time on it] for the next few years (or basically until AGI) so I have to focus on what matters.

Why am I posting here? I get some visibility here but not too much. I've stopped posting on FB and Twitter now mostly b/c I get nervous about the replies/people's reactions. I can get addicted to attention/validation, but I get none of it here, so I can post more here.

https://bzolang.blog/p/the-lattice-topology-correspondence?utm_source=substack&utm_campaign=post_embed&utm_medium=web


Posted (edited)

https://console.anthropic.com/workbench/9bcfe1a8-73f8-40fb-a9e9-c678033df8fd

anyways, given my uniqueness (the world is so strange), there is a considerable chance that alignment IS more likely when I am as untraumatized as possible [+ am given every possible device to think as clearly as possible] (and whatever it takes to untraumatize me and bring me out of all the pain I'm in)


I got selected as a user on propheticAI!

I hadn't tweeted for several months; this ended it. Two of my friends feel it might be safe for me to prophesy again, and I sense they sensed the second derivative, but I don't think it's 100% safe yet.


Posted (edited)

For so so so long I wished I'd been born later. But now that AGI timelines are near, it seems that won't matter for people of my generation (and I have more years of positive experiences to live for it). I do have an extreme need to minimize calories for the years of youth I have left (however many there are), but it seems possible that I have a shot at staying young through AGI, and that is all that matters


It's amazing how little things change - like, not at all - when it comes to human nature. What never ceases to amaze me is how little people learn from history. Every hype and bubble seems new to people when it's happening, and when it's pointed out, at best you get "this time it's different".

Now we're in full swing of the AI hype, and a lot of people are just losing their minds, and you get these sorts of hysterical, well-nigh incoherent ramblings, as the posts above illustrate.

As if we have not gone through this exact play hundreds of times, with the same result: "the roar of a mountain that gives birth to a mouse". And that's at best. I'm still waiting for the endless energy from fusion that's been perpetually promised just around the corner, always the same few years from now (just recently there's been another flurry of promises after some minor experimental result).

We went through this with "cybernetics" in the middle of the last century, where humanoid robots were supposed to alter civilization like sci-fi replicants, and at the end of this we got, decades and decades later, some factory robots and the Roomba. In other words: extravagant, hysterical, googly-eyed pronouncements and paltry down-to-earth results. Biotech in the 80's was going to cure all disease starting with cancer; we got a handful of drugs for a few often obscure conditions. Quantum computers are going to calculate us into nirvana, or else a nightmare of broken passwords, and so far we don't have a single quantum computer that can even come close to what a mediocre consumer desktop can crunch, but every few years the quantum computer hype bamboozles the next batch of gullible rubes before dying down again.

And AI itself - gee, how many waves of hype have we been through? Already back in the 50's we were promised artificial general intelligence in just a few years, and got literally nothing other than a few tortured attempts to pass a Turing test by the "clever" method of grabbing onto a human interlocutor's statement, turning it into a question, and letting the human babble on, with the obligatory interpolation of "and how did that make you feel?" to prolong the "conversation" - fooling absolutely nobody except the above-mentioned gullible rubes. In the 80's it was time for another AI hype wave, with massive database mining, and the result was so-called "expert systems"; but at least there was a small mouse at the end of that mountain roar, and you got voicemail option trees and some banking applications.

Now this current AI hype wave is even more hysterical than before. No doubt a few mice will be birthed by this mountain of BS, with deep pattern recognition in imaging already being implemented in medicine among others, some process automation in a few non-critical areas (too fault-prone to trust with mission-critical stuff), and the like. But it will inevitably crash against the same devastating limitation that has always broken previous AI hype - the merciless and rather pedestrian principle of GIGO (Garbage In, Garbage Out), that cruel mistress of all mechanical analysis. No current or foreseeable future AI is able to employ knowledge-domain switching on a deep level, and that makes it nothing more than a glorified calculator (or, going back in time for a better insult, an abacus). We will need great breakthroughs to make that possible - optimistically, maybe in another 200 years?

Yawn. The countdown for the hype to die down has started, as we contemplate the pedestrian results, and there will be quiet waiting until the next wave of hype explodes with a new generation of gullible rubes. Nihil novi sub sole (nothing new under the sun).


LOL, Dean, I thought about self-driving cars, but the OP did mention them in a bizarre aside wherein somehow those are going to come about later because they're a tougher problem than gen AI... in other words, too absurd for words. I can't fathom why people are still getting killed taking Musk's hype claims about Tesla's self-driving capabilities at face value. But at least in the case of self-driving cars a few technological benefits will come about as a result of all the billions poured in, so it's not a complete waste even if 100% autonomy is not going to be a reality anytime soon.

By the way, I like Sabine Hossenfelder's various hype takedowns, going back to her original ones where she takes on a bunch of sacred cows in both theoretical and experimental physics. She tackled quantum computers and AI recently too. A rare voice of sanity in the deluge of hype BS.


Hmm, last I heard Waymo does now have fully autonomous service in an increasing number of service areas? It was just approved to expand to more areas, for example in California.

 

As for Tesla, their latest v12.2 software, which completes a rewrite of their system into an end-to-end neural network, can now do 45-minute drives in the rain with 0 human interventions, so things seem to be getting pretty close to what was promised?

 

 


Regarding Waymo - California municipalities are being forced to accept Waymo expansion while kicking and screaming that they don't want it. An empty Waymo vehicle was recently torched by an unruly crowd. Not one but TWO Waymo vehicles plowed into the back of a truck being towed by a tow truck recently, within the span of a few minutes.

GM Cruise - LOL.

Tesla - Occasionally Full Self-Driving can pull off a long drive without human intervention, and that's when Musk fanboys post videos. Meanwhile people keep dying in fiery crashes with Autopilot or FSD engaged, including Tesla employees.

Here is a more realistic review (with video) of the latest FSD version, including several safety critical disengagements:

https://insideevs.com/news/711227/tesla-full-self-driving-v12-toughest-test/

Here is a good overview of the recent hype cycle for self-driving cars:

https://www.theverge.com/24065447/self-driving-car-autonomous-tesla-gm-baidu


Dean covered it pretty well, so. Anyhow, look, optimism is important, and even excessive optimism (after all, how else are you going to get early investors), but I draw a line when human lives are at stake, which is why I'm so disgusted by Musk's hype and outright lies. There are always going to be teething problems and lives lost (see: airplane development and test pilots), but when you're talking about the public rather than professionals who knowingly take the risk, you need to be realistic and ask hard questions about risk/reward.

The problem with these is the same as with AI: risk of catastrophic failure due to poor knowledge-domain handling. You cannot foresee each and every scenario that might come up, and while a sensor might recognize every signal in its domain, it fails catastrophically when you need to employ signals from a completely different domain - something a human does automatically based on vast cultural experience which is impossible to duplicate in a database, at least not today. These systems therefore fail in ways humans call "lacking in common sense", and teaching systems such knowledge-domain management is extremely difficult; what we translate into "common sense" might be trivial to us, but astonishingly complex for AI. Think of the extreme risks associated with running a nuclear power plant, flying a jumbo jet, safely transporting a bus full of schoolchildren, or even putting our military nukes under AI command. How relaxed are you going to be with AI in charge of any of these, while lacking in that hard-to-define "common sense"? And if you say, "well, for those we'll always need humans in the decision making loop", think about what you just admitted... you can't leave AI in charge of mission-critical situations. At that point we're down to it really, in the final analysis, being a glorified calculator. Or as the Churchill anecdote went, when a lady at dinner protested "who do you think I am??", Winston replied: "we already know who you are, now we're just haggling over the price". Well, it's the same as with the wh*re - we already know AI is just a glorified calculator ("we know what you are", hence why we will always need a human in the loop), we're just haggling over the price - a very, very glorious, marvellous, expensive calculator.

I understand that there is a lot of amusement with asking AI to write this or that - but it's the same as it was when Siri first came out - on some level a novelty, and it was fun to ask questions. But the limitations are very apparent, and for mission-critical stuff it fails dramatically. Write an essay for your high school paper, fine. Do a lawyer's job, write a petition to a judge or opposing counsel - we've seen the dramatic and embarrassing failures there. Low-priority stuff, OK. Mission-critical, yeah, no.

Yes, it has its place: automate non-critical processes, datamine, pattern recognition, image analysis, etc. So not useless - just, like the majority of the time, some good tools and technological progress, just not revolutionary. Revolutionary is rare by definition.

The thing that I find a drag in all such situations is the overly broad applications, which end up being more of an inefficiency than a help. Imagine needing to write something of importance (we've seen how well that goes when legal documents were involved!), and then having to check every sentence for some subtle or not-so-subtle flaw. It's more work than just doing it yourself from scratch. Using AI is like doing the work twice. I'd just as soon do it once myself.


The statistics I've seen for both Waymo and Tesla (from insurance data, so you could say it's 3rd-party verified) indicate significantly fewer crashes and deaths per mile driven compared to human drivers. And that was with the older Tesla software.

 

Isn't that the ultimate goal here, to reduce all the unnecessary human death and disability from terrible human driving? Y'all seem to be wanting to hold the AIs to higher-than-human "perfect" standards.

 

Do you want me to go dig up on youtube all manner of human-driven car crashes and accidents? Cause there's tons more than the AI ones.

