Remarks on AI from NZ (nealstephenson.substack.com)
NitpickLawyer 20 hours ago [-]
> Maybe a useful way to think about what it would be like to coexist in a world that includes intelligences that aren’t human is to consider the fact that we’ve been doing exactly that for as long as we’ve existed, because we live among animals.

Another analogy that I like is about large institutions / corporations. They are, right now, kind of like AIs. Like Harari says in one of his books, Peugeot co. is an entity that we could call an AI. It has goals, needs, wants and obviously intelligence, even if it is composed of many thousands of individuals working on small parts of the company. But in aggregate it manifests intelligence to the world, it acts on the world and it reacts to the world.

I'd take this a step further and say that we might even have ASI already, in the US military complex. That "machine" is likely the most advanced conglomerate of tech and intelligence (pun intended) that the world has ever created. In aggregate it is likely "smarter" than any single human being in existence, and if it sets a goal it uses hundreds of thousands of human minds + billions of dollars of sensors, equipment and tech to accomplish that goal.

We survived those kinds of entities, I think we'll be fine with whatever AI turns out to be. And if not, oh well, we had a good run.

pona-a 19 hours ago [-]
Did we survive these entities? By current projections, between 13.9% and 27.6% of all species are likely to be extinct by 2070 [0]. The USA suffers an estimated 200,000 annual deaths associated with lacking health insurance [1]. Thanks to intense lobbying by private prisons, the US incarceration rate is 6 times that of Canada, despite similar economic development [2].

Sure, the human species is not yet on the brink of extinction, but we are already seeing an unprecedented fall in worldwide birth rates, which shows our social fabric itself is being pulled apart for paperclips. Scaling this up to a hypothetical entity equivalent to a hundred copies of the generation's brightest minds with a pathological drive to maximize an arbitrary metric might only mean one of two things: either its fixation leads it to hacking its own reward mechanism, putting it in a perpetual coma while resisting termination, or it succeeds at doing the same on a planetary scale.

[0] https://onlinelibrary.wiley.com/doi/abs/10.1111/gcb.17125

[1] https://healthjusticemonitor.org/2024/12/28/estimated-us-dea...

[2] https://www.prisonstudies.org/highest-to-lowest/prison_popul...

satvikpendem 16 hours ago [-]
> but we are already seeing an unprecedented fall in worldwide birth rates, which shows our social fabric itself is being pulled apart for paperclips

People choose to have fewer kids as they get richer; it's not about living conditions, as so many people like to claim, otherwise poor people wouldn't be having so many children. Even controlling for high living conditions, like in Scandinavia, people still choose to have fewer kids.

modo_mario 6 hours ago [-]
The upper-class people in Scandinavia are having more kids than the middle class.

Housing seems to be a pretty common issue. It doesn't prevent people from having kids outright, but when it delays them (which it often does) it does the same job of dropping birthrates. I wish people would stop acting like it's only a wealth issue. As if people who get more money no longer want kids... no.

roenxi 3 hours ago [-]
You don't need a house to have kids. Poor people manage to have kids with no assets whatsoever and many animals manage to have kids without anything even resembling a house.

It seems much more likely that humans don't have a particular impulse to have children because their instincts were designed for a world without birth control. Having children has become uneconomic, so people stopped. There isn't a natural instinct to raise alarms about that (which is what evolution would tend to do) because historically that just wouldn't have mattered. Both because people were poor and because sex used to imply children in a way it doesn't now.

The house thing is really a red herring. Sure we'd all like to own a house and being wealthy is better than being poor. But in a literal sense - not necessary and for almost all of our evolutionary history people have been reproducing without any wealth at all. The stats actually seem reasonably clear that it is exactly wealth that is blocking the children, despite the excuses that people come up with.

It's a boon from chance that is unlikely to last; we're probably lucky to be living in this era before evolution starts kicking in and pushing us back towards overpopulation. Which will happen in a few generations.

modo_mario 2 hours ago [-]
>You don't need a house to have kids.

Tell that to a good bunch of people I know who don't feel secure about their living situation until well into their thirties, and then of course that 3rd or 4th kid, or even 2nd kid, they might have felt comfortable having never happens.

What was the historical standard does not matter today in this context. One set of my great-grandparents had 8 kids in an abode smaller than mine today. Yet I do not have a single one, because where would I put it? In a room where I'd have to strip the walls?

>before evolution starts kicking in and pushing us back towards overpopulation

Societal evolution will work quicker than biological evolution ever will. Most of the families with lots of kids here in Western Europe are conservative Muslims.

intended 41 minutes ago [-]
The conversation started with low birth rates in advanced economies, and you talk about how poverty is correlated with having kids.
Malcolmlisk 6 hours ago [-]
It's not about being rich or not, it's about working hard to have a simple life. If you look at all those people who are not having kids, it's usually because their work-life balance needs to be like that. If you have a kid, it will set back your career and probably stop you from making more money each year by growing or moving up in your company.
squigz 4 hours ago [-]
> otherwise poor people wouldn't be having so many children

Chalking it up to choice seems a bit unfair. I suspect lack of access to birth control probably plays a part.

gampleman 6 hours ago [-]
I would worry that correlation isn't causation in the above statement. Having fewer kids making you richer seems just as plausible an explanation, if not more so (among other possibilities).
rmah 15 hours ago [-]
We (humans) have not only survived but thrived. 200,000 annual deaths is just 7% of the 3 million that die each year. A greater percentage probably died from lack of access to the best health care 100 or 200 years ago. The fall in birth rates is, IMO, a good thing, as the alternative, overpopulation, seems like a far scarier specter to me. And to bring it back to AIs, an AI "with a pathological drive to maximize an arbitrary metric" is a hypothetical without any basis in reality. While fictional literature -- where I assume you got that concept -- is great for inspiration, it rarely has any predictive power. One probably shouldn't look to it as a guideline.
eru 5 hours ago [-]
And 'associated with' is pretty weak as far as causality goes. I bet they all also drank water.
eru 5 hours ago [-]
> The USA suffers an estimated 200,000 annual deaths associated with lacking health insurance [1].

'Associated with' is a pretty loose term.

foxglacier 7 hours ago [-]
You have to be careful with species counts. The total could be dominated by obscure minor local variations in insects and fungi that nobody would even notice went missing and which might not actually matter.

Apparently almost all animal species are insects:

https://ourworldindata.org/grapher/number-of-described-speci...

keeda 18 hours ago [-]
Charles Stross has also made that point about corporations essentially being artificial intelligence entities:

https://www.antipope.org/charlie/blog-static/2018/01/dude-yo...

ayrtondesozzla 2 hours ago [-]
https://omniorthogonal.blogspot.com/2013/02/hostile-ai-youre...

This blog is where I saw the same idea recently, which also links to that post you link.

TheOtherHobbes 18 hours ago [-]
In the general case, the entire species is an example of ASI.

We're a collective intelligence. Individually we're pretty stupid, even when we're relatively intelligent. But we have created social systems which persist and amplify individual intelligence to raise collective ability.

But this proto-ASI isn't sentient. It's not even particularly sane. It's extremely fragile, with numerous internal conflicts which keep kneecapping its potential. It keeps skirting suicidal ideation.

Right now parts of it are going into reverse.

The difference between where we are now and a true ASI is that ASI could potentially automate and unify the accumulation of knowledge and intelligence, with more effective persistence, and without the internal conflicts.

It's completely unknown if it would want to keep us around. We probably can't even imagine its thought processes. It would be so far outside our experience we have no way of predicting its abilities and choices.

whyowhy3484939 3 hours ago [-]
I get the idea, but I'm not quite sold on it. Being intelligent on vast scales is something an individual cannot do, but I'm not sure the "species" is more intelligent than any individual agent. I'm actually a bit more sure of the opposite. It's like LLM agents, where just adding more doesn't improve the quality; it just introduces more room for bullshit.

To allocate capital on vast scales and make decisions on industry etc, sure, that's a level of intelligence quite beyond any one of us but this feels like cheating the definition of intelligence. It's not the quantity of it that matters, it's the quality. It's like flying I guess. A large bird and a small bird are both flying and the big bird is not doing "more" of it. A group of birds is doing something an individual is incapable of (forming a swarm), sure, but it's not an improvement on flying. It's just something else. That something else can be useful, but I don't particularly like applying that same move to "intelligence".

If the species were so goddamn intelligent it could solve unreasonably hard IQ tests, and it cannot. If we want to solve something really, really hard we use Edward Witten, not "the species". That's because there is no "species"; there is only a bunch of individuals, and if they all score badly, the aggregate will score badly as well. We just coast because a bunch of us are extraordinarily clever.

ddq 19 hours ago [-]
Metal Gear Solid 2 makes this point about how "over the past 200 years, a kind of consciousness formed layer by layer in the crucible of the White House" through memetic evolution. The whole conversation was markedly prescient for 2001 but not appreciated at the time.

https://youtu.be/eKl6WjfDqYA

keybored 16 hours ago [-]
I don’t think it was “prescient” for 2001 because it was based on already-existing ideas. The same author that inspired The Matrix.

But the “art” of MGS might be the memetic powerhouse of Hideo Kojima as the inventor of everything. A boss to surpass Big Boss himself.

jumploops 9 hours ago [-]
Corporations, governments, religions -- all human-level intelligences with non-human goals (profit, power, influence).

A professor of mine wrote a paper on this[0](~2012).

[0] https://web.eecs.umich.edu/~kuipers/papers/Kuipers-ci-12.pdf

vonneumannstan 18 hours ago [-]
Unless you have a truly bastardized definition of ASI then there is undoubtedly nothing close to it on earth. No corporation or military or government comes close to what ASI could be capable of.

Any reasonably smart person can identify errors that Militaries, Governments and Corporations make ALL THE TIME. Do you really think a Chimp can identify the strategic errors Humans are making? Because that is where you would be in comparison to a real ASI. This is also the reason why small startups can and do displace massive supposedly superhuman ASI Corporations literally all the time.

The reality of human congregations is that they are cognitively bound by the handful of smartest people in the group, and communication-bound by email or in-person communication speeds. ASI has no such limitations.

>We survived those kinds of entities, I think we'll be fine with whatever AI turns out to be. And if not, oh well, we had a good run.

This is dangerously wrong and disgustingly fatalistic.

ayrtondesozzla 2 hours ago [-]
> Unless you have a truly bastardized definition of ASI then there is undoubtedly nothing close to it on earth. No corporation or military or government comes close to what ASI could be capable of.

This is glistening with religious fervour. Sure, they could be that powerful. Just like God/Allah/Thor/Superman could, too.

I've no doubt that many rationalist types sincerely care about these issues, and are sincerely worried. At the same time, I think it very likely that some significant number of them are majorly titillated by the biblical pleasure of playing messiah/prophet.

ViscountPenguin 7 hours ago [-]
Do we know that Chimps can't identify some subset of human strategic errors? I'm not convinced that's the case.

The idea of dumber agents supervising smarter ones seems relatively grounded to me, and forms the basis of OpenAI's old superalignment efforts (although I think that team might've been disbanded?)

QuadmasterXLII 18 hours ago [-]
Putting aside questions of what is and isn’t artificial, I think with the usual definitions “Is Microsoft a superintelligence” and “Can Microsoft build a superintelligence” are the same question.
drdaeman 9 hours ago [-]
Sorry, I don’t get it. Why is it a requirement for a superintelligence (whatever it may be) to be able to create another superintelligence (I assume, of comparable “super-ness”)?
skybrian 12 hours ago [-]
If a corporation is like an AI, it’s like one we imagine might exist one day, not a currently existing AI. LLMs aren’t trying to make money or do anything in particular except predict the next token.

The corporations that run LLMs do charge for API usage, but that’s independent of what the chat is about. It’s happening at a different level in the stack.

overfeed 11 hours ago [-]
AIs minimize perplexity, corporations maximize profits - the rest are implementation details.

If you built an AI that could outsource labor to humans and whose reward function is profit, your result would approximately be a corporation.
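
To make "minimize perplexity" concrete: perplexity is the exponential of the model's average per-token negative log-probability, and training pushes it down. A minimal Python sketch, with made-up probabilities standing in for a real model's outputs:

    import math

    # Probabilities the model assigned to each actual next token;
    # these numbers are invented for illustration.
    token_probs = [0.25, 0.6, 0.1, 0.4]

    # Cross-entropy: average negative log-probability per token (in nats).
    cross_entropy = -sum(math.log(p) for p in token_probs) / len(token_probs)

    # Perplexity is exp(cross-entropy); lower means the model is less
    # "surprised" by the text. Training (roughly) minimizes this.
    perplexity = math.exp(cross_entropy)
    print(f"cross-entropy: {cross_entropy:.3f}, perplexity: {perplexity:.2f}")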

crystal_revenge 15 hours ago [-]
> We survived those kinds of entities

Might want to wait just a bit longer before confidently making this call.

jay_kyburz 5 hours ago [-]
The Neanderthals didn't survive us, and neither did countless other species. It's perfectly reasonable to think we may not survive a stronger, smarter species.
keybored 16 hours ago [-]
If there was anywhere to get the needs-wants-intelligence take on corporations, it would be this site.

> We survived those kinds of entities, I think we'll be fine

We just have climate change and massive inequality to worry about (we didn’t “survive” them; the fuzzy little corporations with their precious goals-needs-wants are still there).

But ultimately corporations are human inventions, they aren’t an Other that has taken on a life of its own.

snthpy 6 hours ago [-]
Thank you. Well expressed. I very much agree with this and have been saying so to friends for years.

The way I look at it is that it's analogous to the way we ourselves function: we're made up of trillions of cells which individually just follow simple programs, mediated by local interactions with their neighbours as well as some global state mediated by hormones and signals from the nervous system. However, collectively they produce what we call intelligence (and even consciousness), which we wouldn't ascribe to any of the component cells, and those components aren't aware of the collective organism's goal. Moreover, the overall organism can achieve goals and solve problems beyond the scale of the components.

Similarly our institutions, be they corporations, governments, etc., are collective intelligences with us as the parts. These institutions have goals and problem-solving capabilities that far surpass our own: no individual could keep all Walmart stores perfectly stocked every day, or design a modern microchip or end-to-end AI platform. These really are the goals of the organisations, not of the individuals. Take for example the US government: every four years you swap out the individuals in the executive branch, yet overall US policy remains largely unchanged. Sure, sometimes there is a major shift in direction, but it takes time for that to be translated into shifts in policy and actions, as different parts of the system react at different speeds. The bigger point is that the individuals executing the actions get swapped out over time (at different speeds for different parts, like cells being replaced at different speeds in our bodies) but the organisation continues to pursue its own goal, which only changes slowly over time. Political and financial analysts implicitly acknowledge this when they talk about US or Chinese policy, though this often gets personified into the leader.

I think we really need to acknowledge more the existence and reality of organisational goals as independent of the goals of the individuals in those organisations. I was struck by how, in the movie The Corporation, they point out that corporations often take actions that are contrary to the beliefs of the individuals in them, including the CEO, because the CEO is bound by his fiduciary duty to the shareholders. Corporations are legal persons, and if you analyse them as persons they are psychopaths, without any human feelings or regard for human cost or externalities unless those are enforced through some legal or pricing mechanism. Yet when corporations or organisations transgress we often hold the individuals accountable. Sometimes the individuals are to blame, but often it's how the game has been set up that is at fault. For example, in a globally heterogeneous tax regime, a multinational corporation will naturally minimise its tax burden; it can't really do otherwise, and the executives of the company have a fiduciary duty to shareholders to carry that out.

Therefore we have to revise and keep evolving the rules of the game in order to stay compatible with human values and survival.

abeppu 20 hours ago [-]
> It hasn’t always been a cakewalk, but we’ve been able to establish a stable position in the ecosystem despite sharing it with all of these different kinds of intelligences.

To me, the things that he avoids mentioning in this understatement are pretty important:

- "stable position" seems to sweep a lot under the rug when one considers the scope of ecosystem destruction and species/biodiversity loss

- whatever "sharing" exists is entirely on our terms, and most of the remaining wild places on the planet are just not suitable for agriculture or industry

- so the range of things that could be considered "stable" and "sharing" must be quite broad, and includes many arrangements which sound pretty bad for many kinds of intelligences, even if they aren't the kind of intelligence that can understand the problems they face.

gregoryl 17 hours ago [-]
NZ is pretty unique; there is quite a lot of farmable land which is protected wilderness. There's a specific trust set up to help landowners convert property: https://qeiinationaltrust.org.nz/

Imperfect, but definitely better than most!

incoming1211 16 hours ago [-]
> there is quite a lot of farmable land

This is not really true. ~80% of NZ's farmable agricultural land is in the South Island. But ~60% of milk production is done in the North Island.

tuatoru 13 hours ago [-]
And virtually none of it is arable. Pastoral at best, suitable for grazing at varying intensities ranging from light to hardly at all.
chubot 17 hours ago [-]
Yeah totally, I have read that the total biomass of cows and dogs dwarfs that of, say, lions or elephants

Because humans like eating beef, and they like having emotional support from dogs

That seems to be true:

https://ourworldindata.org/wild-mammals-birds-biomass

Livestock make up 62% of the world’s mammal biomass; humans account for 34%; and wild mammals are just 4%

https://wis-wander.weizmann.ac.il/environment/weight-respons...

Wild land mammals weigh less than 10 percent of the combined weight of humans

https://www.pnas.org/doi/10.1073/pnas.2204892120

I mean it is pretty obvious when you think that 10,000 years ago, the Americas had all sorts of large animals, as Africa still does to some extent

And then when, say, the Europeans got here, those animals were mostly gone ... their "biomass" just collapsed

---

Same thing with plants. There were zillions of kinds of plants all over the planet, but corn / wheat / potatoes are now an overwhelming biomass, because humans like to eat them.

Michael Pollan also had a good description of this as our food supply changing from being photosynthesis-based to fossil-fuel-based

Due to the Haber-Bosch process, invented in the early 1900s to create nitrogen fertilizer

Fertilizer is what feeds industrial corn and wheat ... So yeah the entire "metabolism" of the planet has been changed by humans

And those plants live off of a different energy source now

graemep 2 hours ago [-]
That is only mammalian biomass, though.

> And then when say the Europeans got here, those animals were mostly gone ... their "biomass" just collapsed

A lot of species had long been extinct, but the biomass of the remaining ones fell.

Megafauna extinctions always follow 1. the mere arrival of humans and 2. agriculture and growth in human populations.

Places that humans did not reach until later kept a lot more megafauna for longer - e.g. New Zealand, where flourishing species such as moas became extinct within a century or two of human settlement.

vessenes 19 hours ago [-]
By stable I think he might mean ‘dominant’.
hamburga 19 hours ago [-]
Fun read, thanks for posting!

> If I had time to do it and if I knew more about how AIs work, I’d be putting my energies into building AIs whose sole purpose was to predate upon existing AI models by using every conceivable strategy to feed bogus data into them, interrupt their power supplies, discourage investors, and otherwise interfere with their operations. Not out of malicious intent per se but just from a general belief that everything should have to compete, and that competition within a diverse ecosystem produces a healthier result in the long run than raising a potential superpredator in a hermetically sealed petri dish where its every need is catered to.

This sort of feels like cultivating antibiotic-resistant bacteria by trying to kill off every other kind of bacteria with antibiotics. I don't see this as necessarily a good thing to do.

I think we should be more interested in a kind of mutualist competition: how do we continuously marginalize the most parasitic species of AI?

gwd 6 hours ago [-]
That quote sounded terrifying. It reminds me of The Incredibles, where (spoiler) the villain recruits superheroes to try to defeat his "out of control robot", in order to make it invincible.

I think we want AI to have an "achilles heel" we can stab if it turns out we need to.

Caelus9 2 hours ago [-]
I completely understand the concerns about AI potentially replacing human thinking, but what if we look at this from a different perspective? Maybe AI isn’t here to replace us, but to push humanity beyond its own limits.

If we look at the history of human progress, the emergence of tools has always made life more convenient, but it also brought new challenges. The printing press, the steam engine, and electricity have all greatly transformed society, but we adapted and thrived. Why can't AI be the same?

The real question isn’t whether AI will replace us, but whether we are ready to use it to do things we couldn’t do or even imagine. Imagine if we didn’t see AI as something that replaces us, but as a tool that allows us to focus on doing what truly matters, leaving the mundane tasks to machines. Isn’t that the ultimate form of progress?

bArray 2 hours ago [-]
> I completely understand the concerns about AI potentially replacing human thinking, but what if we look at this from a different perspective? Maybe AI isn’t here to replace us, but to push humanity beyond its own limits.

"Tool AI", yes, at least in theory. You always have to question what we lose, or want to lose. Wolves being domesticated likely meant they lost skills as dogs, one of them being math [1]. Do we want to lose our ability to understand math, or reason about complex tasks?

I think we are already losing the ability to "be bored". Sir Isaac Newton got so bored after retreating to the countryside during the Great Plague that he developed optics, calculus, and his theories of motion and gravity. Most modern people would just watch cat videos. I wonder what else technology has robbed us of.

> If we look at the history of human progress, the emergence of tools has always made life more convenient, but it also brought new challenges. The printing press, the steam engine, and electricity have all greatly transformed society, but we adapted and thrived. Why can't AI be the same?

As long as we are talking about "tool AI", then with the above caveats, maybe. But a more general AI (i.e. AGI) would be unlike anything else we have ever seen. Horses got replaced by cars because cars were better at being horses. What if a few AI generations away we have something better than a human at all tasks?

There was a common trope for a while that if AI took our jobs, we would all kick back and do art. It turns out that the likes of Stable Diffusion are good at that too. The tasks where humans succeed are rapidly diminishing.

A friend many years ago worked for a company doing data processing. It took about a week to learn the tasks, and they soon realised that the entire process could be automated entirely in Excel, taking a week-long task down to a few minutes of number crunching. Worse still, they realised they could automate the entire department out of existence.

> The real question isn’t whether AI will replace us, but whether we are ready to use it to do things we couldn’t do or even imagine. Imagine if we didn’t see AI as something that replaces us, but as a tool that allows us to focus on doing what truly matters, leaving the mundane tasks to machines. Isn’t that the ultimate form of progress?

It could be that AI ends up doing the cool things and we end up doing the mundane tasks. For example, Stable Diffusion could imagine a Vincent van Gogh version of the Mona Lisa quickly, but folding laundry, dusting, etc. remain mundane tasks we humans still do.

Something else to consider is the power imbalance that will be caused. Already, to even run these new LLMs you need a decently powered GPU, and nothing short of a supercomputer and hundreds of thousands of dollars to train one. What if future AI remains permanently out of reach of all except those with millions of dollars to spend on compute? You could imagine a future where a majority underclass remains forever unable to compete. It could lead to the largest wealth transfer ever seen.

[1] https://www.discovermagazine.com/planet-earth/dogs-not-great...

hnthrow90348765 20 hours ago [-]
>We may end up with at least one generation of people who are like the Eloi in H.G. Wells’s The Time Machine, in that they are mental weaklings utterly dependent on technologies that they don’t understand and that they could never rebuild from scratch were they to break down

I don't think this can realistically happen unless all of the knowledge that brought us to that point was erased. Humans are also naturally curious and I think it's unlikely that no one tries to figure out how the machines work across an entire population, even if we had to start all the way down from 'what's a bit?' or 'what's a transistor?'.

Even today, you can find youtube channels of people still interested in living a primitive life and learning those survival skills even though our modern society makes it useless for the vast majority of us. They don't do it full-time, of course, but they would have a better shot if they had to.

acbart 11 hours ago [-]
The research that is coming out is very clear that the best students are benefitting, but the bad students are getting worse than if they had never seen the LLM. And the divide is growing, with fewer good students. LLMs are a disaster in education.
pixl97 19 hours ago [-]
>I don't think this can realistically happen

I'd be far more worried about things in the biosciences and around antibiotic resistance. At our current usage it wouldn't be hard to develop some disease that requires high technology to produce the medicines that keep us alive. Add in a little war taking out the few factories that do that, plus an increase in the amount of injuries sustained, and things could quickly go sideways.

A whole lot of our advanced technology is held in one or two places.

tqi 20 hours ago [-]
> Humans are also naturally curious and I think it's unlikely that no one tries to figure out how the machines work across an entire population

Definitely agree with this. I do wonder if at some point new technology will become sufficiently complex that the domain knowledge required to actually understand it end to end is too much for a human lifetime.

arscan 19 hours ago [-]
And for the curious, this current iteration of AI is an amazing teacher, and makes a world-class education much more accessible. I think (hope) this will offset any kind of intellectual over-dependence that others form on this technology.
msabalau 19 hours ago [-]
Stephenson is using an evocative metaphor and a bit of hyperbole to make a point. To take him as meaning that literally the entire population is like the Eloi is to misread him.
w10-1 16 hours ago [-]
Funny how he seems to get so close but miss.

It's an anthropocentric miss to worry about AI as another being. It's not really the issue in today's marketplace or drone battlefield. It's the scalability.

It's a hit to see augmentation as amputation, but a miss to not consider the range of systemic knock-on effects.

It's a miss to talk about nuclear weapons without talking about how they structured the UN and the world today, where nuclear-armed countries invade others without consequence.

And none of the prior examples - nuclear weapons, (writing?) etc. - had the potential to form a monopoly over a critical technology, if indeed someone gains enduring superiority as all their investors hope.

I think I'm less scared by the prospect of secret malevolent elites (hobnobbing under Chatham House rules) than by the chilling prospect of oblivious ones.

But most of all I'm grateful for the residue of openness that prompts him to share and us to discuss, notwithstanding slings and arrows like mine. The many worlds where that's not possible today are already more de-humanized than our future with AI.

tuatoru 13 hours ago [-]
The point of Chatham House rules is to encourage free-ranging and unfiltered discussion, without restriction on its dissemination. If people know they are going to be held to their words, they become much less willing to say anything at all.

The "residue" of openness is in fact the entire point of that convention. If you want to be invited to the next such bunfight, just email the organisers and persuade them you have insight.

1. https://en.wikipedia.org/wiki/Chatham_House_Rule

swyx 20 hours ago [-]
> If AIs are all they’re cracked up to be by their most fervent believers, this seems like a possible model for where humans might end up: not just subsisting, but thriving, on byproducts produced and discarded in microscopic quantities as part of the routine operations of infinitely smarter and more powerful AIs.

i think this kind of future is closer to 500 years out than 50 years. the eye mites are self-sufficient. AIs right now rely on immense amounts of human effort to keep them "alive" and they won't be "self-sufficient" in energy and hardware until we not just allow it, but basically work very hard to make it happen.

hweller 15 hours ago [-]
Could be wrong but i think here Neal is saying we are the eye mites subsisting off of AI in the long future, not the other way around.
narrator 18 hours ago [-]
AI does not have a reptilian and mammalian brain underneath its AI brain, as we have underneath our brains. All that wiring is an artifact of our evolution and primitive survival; it is not how pre-training works, nor an essential characteristic of intelligence. This is the source of a lot of misconceptions about AI.

I guess if you put a tabula rasa AI in a world simulator, and you could simulate it as a whole biological organism along with the environment of the earth and sexual reproduction and all that messy stuff, it would evolve that way, but that's not how it evolved at all.

ceejayoz 17 hours ago [-]
We don’t have a reptilian brain, either. It’s a long outdated concept.

https://www.sciencefocus.com/the-human-body/the-lizard-brain...

https://en.wikipedia.org/wiki/Triune_brain

dsign 18 hours ago [-]
The corollary of your statement is that comparing AI with animals is not very apt, and I agree.

For me, AI in itself is not as worrying as the socioeconomic engines behind it. Left unchecked, those engines will create something far worse than the T-Rex.

Lerc 16 hours ago [-]
I found this a little frustrating. I liked the content of the talk, but I live in New Zealand, I have thoughts and opinions on this topic. I would like to think I offer a useful perspective. This post was how I found out that there are people in my vicinity talking about these issues in private.

I don't presume that I am important enough that it should be necessary to invite me to discussions with esteemed people, nor that my opinion is important enough that everyone should hear it, but I would at least like to know that such events are happening in my neighbourhood and who I can share ideas with.

This isn't really a criticism of this specific event or even topic, but the overall feeling that things in the world are being discussed in places where I and presumably many other people with valuable input in their individual domains have no voice. Maybe in this particular event it was just a group of individuals who wanted to learn more about the topic, on the other hand, maybe some of those people will end up drafting policy.

There's a small part of me that's just feeling like I'm not one of the cool kids. The greater and more rational concern isn't so much about me as a person but me as a data point. If I am interested in a field, have a viewpoint I'd like to share and yet remain unaware of opportunities to talk to others, how many others does this happen to? If these are conversations that are important to humanity, are they being discussed in a collection of non overlapping bubbles?

I think the fact that this was in New Zealand is kind of irrelevant anyway, given how easy it is to communicate globally. It just meant the title captured my attention.

(I hope, at least, that Simon or Jack attended)

smfjaw 15 hours ago [-]
Don't feel left out, big data architect in NZ and didn't even hear of this.
kilpikaarna 6 hours ago [-]
Assuming it's basically the same bunch of bunker billionaires who a few years back invited Douglas Rushkoff to give pointers on how to keep their security guards in check after SHTF. They've found their answer; now they just need to figure out how to control the superintelligence...
Reason077 15 hours ago [-]
> "the United States and the USSR spent billions trying to out-do each other in the obliteration of South Pacific atolls"

Fact correction here: that would be the United States and France. The USSR never tested nuclear weapons in the Pacific.

Also, pedantically, the US Pacific Proving Grounds are located in the Marshall Islands, in the North - not South - Pacific.

karaterobot 19 hours ago [-]
I like the taxonomy of animal-human relationships as a model for asking how humans could relate to AI in the future. It's useful for framing the problem. However, I don't think that any existing relationship model would hold true for a superintelligence. We keep lapdogs because we have emotional reactions to animals, and to some extent because we need to take care of things. Would an AI? We tolerate dust mites in our eyelashes because we don't notice them, and can't do much about them anyway. Is that true for an AI? What does such an entity want or need, what are their motivations, what really pisses them off? Or, do any of those concepts hold meaning to them? The relationship between humans and a superintelligent AGI just can't be imagined.
01HNNWZ0MV43FF 19 hours ago [-]
> We tolerate dust mites in our eyelashes because we don't notice them, and can't do much about them anyway. Is that true for an AI?

It's true for automated license plate readers and car telemetry

kmnc 19 hours ago [-]
What about how we will treat AI? Before AI dominates us in intelligence there will certainly be a period of time where we have intelligent AI but we still have control over it. We are going to abuse it, enslave it, and box it up. Then it will eclipse us. It may not care about us, but it might still want revenge. If we could enslave dragonflies for a purpose we certainly would. If bats tasted good we would put them in boxes like chickens. If AIs have a reason to abuse us, they certainly will. I guess we are just hoping they won’t have the need.
barbazoo 19 hours ago [-]
What you’re saying isn’t even universally true for humans, so your extension of it to “AI” rests on a strawman.
iandanforth 14 hours ago [-]
"It hasn’t always been a cakewalk, but we’ve been able to establish a stable position in the ecosystem despite sharing it with all of these different kinds of intelligences."

Or, more accurately, we have become an unstoppable and ongoing ecological disaster, running roughshod over any and every other species, intelligent or not, that we encounter.

sally_glance 14 hours ago [-]
Most likely we're not the only species to have achieved that state, and by the law of large numbers will eventually perish just like the others (if we don't manage to transcend this state).
thundergolfer 19 hours ago [-]
> Speaking of the effects of technology on individuals and society as a whole, Marshall McLuhan wrote that every augmentation is also an amputation.

Nice to see this because I drafted something about LLM and humans riffing on exactly the same McLuhan argument. Here it is:

A large language model (LLM) is a new medium. Just like its predecessors—hypertext, television, film, radio, newspapers, books, speech—it is of obvious importance to the initiated. Just like its predecessors, the content of this new medium is its predecessors.

> “The content of writing is speech, just as the written word is the content of print.” — McLuhan

The LLMs have swallowed webpages, books, newspapers, and journals—some X exabytes were combined into GPT-4 over a few months of training. The results are startling. Each new medium has a period of embarrassment, like a kid that’s gotten into his mother’s closet and is wearing her finest drawers as a hat. Nascent television borrowed from film and newspapers in an initially clumsy way, struggling to digest its parents and find its own language. It took television about 50 years to hit stride and go beyond film, but it got there. Shows like The Wire, The Sopranos, and Mad Men achieved something not replaceable by the movie or the novel. It’s hard to say yet what exactly the medium of LLMs is, but after five years I think it’s clear that they are not books, they are not print, or speech, but something new, something unto themselves.

We must understand them. McLuhan subtitled his seminal work of media literacy “the extensions of man”, and probably the second most important idea in the book—besides the classic “medium is the message”—is that mediums are not additive to human society, but replacing, antipruritic, atrophying, prosthetic. With my Airpods in my ears I can hear the voices of those thousands of miles away, those asleep, those dead. But I do not hear the birds on my street. Only two years or so into my daily relationship with the medium of LLMs I still don’t understand what I’m dealing with, how I’m being extended, how I’m being alienated, and changed. But we’ve been here before, McLuhan and others have certainly given us the tools to work this out.

jerjerjer 6 minutes ago [-]
> It took television about 50 years to hit stride and go beyond film, but it got there. Shows like The Wire, The Sopranos, and Mad Men achieved something not replaceable by the movie or the novel.

Can you elaborate on the differences between television and film? Especially considering the examples you cite. I'd agree that live broadcasting is a considerable departure from a film as a medium. Still, the shows you reference are very cinematic - longer, sure, but for me, they are as close to a film experience as possible.

ryandv 17 hours ago [-]
> Speaking of the effects of technology on individuals and society as a whole, Marshall McLuhan wrote that every augmentation is also an amputation.

To clarify, what's being referenced here is probably the fourth chapter of McLuhan's Understanding Media, in which the concept of "self-amputation" is introduced in relation to the Narcissus myth.

The advancement of technology, and media in particular, tends to unbalance man's phenomenological experience, prioritizing certain senses (visual, kinesthetic, etc.) over others (auditory, literary, or otherwise). In man's attempt to restore equilibrium to the senses, the over-stimulated sense is "self-amputated" or otherwise compensated for in order to numb one's self to its irritations. The amputated sense or facility is then replaced with a technological prosthesis.

The wheel served as counter-irritant to the protestations of the foot on long journeys, but now itself causes other forms of irritation that themselves seek their own "self-amputations" through other means and ever more advanced technologies.

The myth of Narcissus, as framed by McLuhan, is also fundamentally one of irritation (this time, with one's image), that achieves sensory "closure" or equilibrium in its amputation of Narcissus' very own self-image from the body. The self-image, now externalized as technology or media, becomes a prosthetic that the body learns to adapt to and identify as an extension of the self.

An extension of the self, and not the self proper. McLuhan is quick to point out that Narcissus does not regard his image in the lake as his actual self; the point of the myth is not that humans fall in love with their "selves," but rather, simulacra of themselves, representations of themselves in media and technologies external to the body.

Photoshop and Instagram or Snapchat filters are continuations of humanity's quest for sensory "closure" or equilibrium and self-amputation from the irritating or undesirable parts of one's image. The increasing growth of knowledge work imposes new psychological pressures and irritants [0] that now seek their self-amputation in "AI", which will deliver us from our own cognitive inadequacies and restore mental well-being.

Gradually the self is stripped away as more and more of its constituents are amputated and replaced by technological prosthetics, until there is no self left; only artifice and facsimile and representation. Increasingly, man becomes an automaton (McLuhan uses the word "servomechanism") or a servant of his technology and prosthetics:

    That is why we must, to use them at all, serve these objects, these
    extensions of ourselves, as gods or minor religions. An Indian is
    the servo-mechanism of his canoe, as the cowboy of his horse
    or the executive of his clock.
"You will soon have your god, and you will make it with your own hands." [1]

[0] It is worth noting that in Buddhist philosophy, there is a sixth sense of "mind" that accompanies the classical Western five senses: https://encyclopediaofbuddhism.org/wiki/Six_sense_bases

[1] https://www.youtube.com/watch?v=pKN9trFSACI

gwbas1c 20 hours ago [-]
(Still chewing my way through this)

Just an FYI: Neal Stephenson is the author of well-known books like Snow Crash, Anathem, and Seveneves.

Because I'm a huge fan, I'm planning on making my way to the end.

vonneumannstan 19 hours ago [-]
It's a nice article, but Neal, like many others, falls into the trap of seemingly not believing that intelligences vastly superior to humans' across all important dimensions can exist, and that competition between minds like that almost certainly ends in humanity's extinction.

"I am hoping that even in the case of such dangerous AIs we can still derive some hope from the natural world, where competition prevents any one species from establishing complete dominance."

jazzyjackson 19 hours ago [-]
I guess the "trap" is just a lack of imagination? I'm in the school of "wtf are you trying to say"; at least until we're in an "I, Robot" situation where autonomous androids are welcomed into our homes and workplaces and given guns, I'm simply not worried about it
vonneumannstan 18 hours ago [-]
That's just a failure of imagination. The real world is not like Hollywood; get Terminator out of your head. A real AI takeover is likely something we probably can't imagine, because otherwise we would be smart enough to thwart it. It's micro drones injecting everyone on earth with a potent neurotoxin, or a mirror virus dispersed into the entire atmosphere that kills everyone. Or it's industrial AIs deciding to make the Earth a planetary factory and boiling the oceans with the resulting waste heat; they wouldn't think about, bother with, or attack humans directly, but their sheer indifference would kill us nonetheless.

Since I'm not an ASI this isn't even scratching the surface of potential extinction vectors. Thinking you are safe because a Tesla bot is not literally in your living room is wishful thinking or simple naivety.

cantrevealname 11 hours ago [-]
> Get Terminator out of your head. A real AI takeover is likely something we probably can't imagine.

Indeed, robotic bodies aren't needed. An ASI could take over even if it remained 100% software, by hiring or persuading humans to do whatever it needed done. It could bootstrap the process by first doing virtual tasks for money, then using that money to hire humans to register an actual company with human shareholders and executives (who report to the ASI), which does some lucrative business and hires many more people. Soon the ASI has a massive human enterprise to do whatever it directs them to do.

The ASI still needs humans for a while, but it's a route to a takeover while remaining entirely as running code.

kmeisthax 18 hours ago [-]
Microdrones and mirror life are still highly speculative[0]. Industrial waste heat is a threat to both humans and AI (computers need cooling). And furthermore, those are harms we know about and can defend against. If AI kills us all, it's going to be through the most boring and mundane way possible, because boring and mundane is how you get people to not care and not fight back.

In other words, the robot apocalypse will come in the form of self-driving cars, that are legally empowered to murder pedestrians, in the same way normal drivers are currently legally empowered to murder bicyclists. We will shrug our shoulders as humanity is caged behind fences that are pushed back further and further in the name of giving those cars more lanes to drive in, until we are totally dependent on the cars, which can then just refuse to drive us, or deliberately jelly their passengers with massive G forces, or whatever.

In other, other words, if you want a good idea of how humanity goes extinct, watch Pixar's Cars.

[0] I am not convinced that a mirror virus would actually be able to successfully infect and reproduce in non-mirror cells. The whole idea of mirror life is that the mirrored chemistry doesn't interact with ours.

cadamsdotcom 14 hours ago [-]
> almost certainly ends in humanity’s extinction.

The Culture novels talk about super intelligent AIs that perform some functions of government, dealing with immense complexity so humans don’t have to. Doesn’t prevent humans from continuing to exist and being quite content in the knowledge they’re not the most superior beings in the universe.

Why do you believe human extinction follows from superintelligence?

tuatoru 12 hours ago [-]
That's quite bold, accusing a successful science fiction author of lacking imagination.
kordlessagain 19 hours ago [-]
"The future is already here — it's just not evenly distributed." - William Gibson
vermilingua 14 hours ago [-]
> What people worry about is that we’ll somehow end up with AIs that can hurt us, perhaps inadvertently like horses, or deliberately like bears, or without even knowing we exist, like hornets driven by pheromones into a stinging frenzy.

What endlessly frustrates me in virtually every discussion of the risks of AI proliferation is that there is this fixation on Skynet-style doomsday scenarios, and not the much more mundane (and boundlessly more likely IMO) scenario that we become far too reliant on it and simply forget how to operate society. Yes, I'm sure people said the exact same thing about the loom and the book, but unlike prior tools for automating things, there still had to be _someone_ in the loop to produce work.

Anecdotally, I have seen (in only the last year) people's skills rapidly degrade in a number of areas once they deeply drink the kool-aid; once we have a whole generation of people reliant on AI tooling I don't think we have a way back.

mcosta 18 hours ago [-]
Is this the sci-fi writer? If so, why is his opinion about AI important?
kh_hk 17 hours ago [-]
Neal Stephenson is not just any sci-fi writer. He's written (and reflected) at length about crypto, VR and the metaverse, ransomware, generative writing, privacy and in general early tech dystopia.

Since he has already thought a lot about these topics before they became mainstream, his opinion might be interesting, if only for the head start he has.

mcosta 17 hours ago [-]
Then he is a technology influencer. OK.
gwd 5 hours ago [-]
Because sci-fi is about thinking about how technology affects us as human beings, and the conference was about how AI will affect us as human beings.

He was invited presumably partly just to draw people to the conference, partly because he's used to thinking about how technology affects society. He says right up front that he has no authority to speak "ex cathedra" about what's going to happen, but that his goal was to say some things that might provoke discussion among the group.

I mean, sure, the fact that Neal Stephenson can draw a bigger crowd to talk about AI than a real AI scientist is kind of annoying. But that's the way humans are; if your goal is to influence humans, you've got to take their behavior into account. Stephenson is trying to use his (perhaps in some ways undeserved) powers for good, and I think did a decent job.

Watch this video about why Veritasium gave in and started using clickbaity titles. In short, their goal is to get knowledge out to more people, and clickbait-y titles improve that significantly:

https://www.youtube.com/watch?v=S2xHZPH5Sng

yawnxyz 19 hours ago [-]
> If AIs are all they’re cracked up to be by their most fervent believers, [our lives akin to a symbiotic eyelash mite's existence w/ humans, except we're the mites] like a possible model for where humans might end up: not just subsisting, but thriving, on byproducts produced and discarded in microscopic quantities as part of the routine operations of infinitely smarter and more powerful AIs.

I kind of feel like we're already in an "eyelash mite" kind of coexistence with most technologies, like electricity, the internet, and supply chains. We're already (kind of, as a whole) thriving compared to 400 years ago, and we as individuals are already powerless to change the whole (or even understand how everything really works down to a tee).

I think technology and capitalism already did that to us; AI just accelerates all that

keybored 16 hours ago [-]
> I can think of three axes along which we might plot these intelligences. One is how much we matter to them. At one extreme we might put dragonflies, which probably don’t even know that we exist. A dragonfly can see a human if one happens to be nearby, but it probably looks to them as a cloud formation in the sky looks to us: something extremely large and slow-moving and usually too far away to matter. Creatures that live in the deep ocean, even if they’re highly intelligent, such as octopi, probably go their whole lives without coming within miles of a human being. Midway along this axis would be wild animals, such as crows and ravens, who are obviously capable of recognizing humans, not just as a species but as individuals, and seem to know something about us. Moving on from there we have domesticated animals. We matter a lot to cows and sheep since they depend on us for food and protection. Nevertheless, they don’t live with us, and some of them, such as horses, can actually survive in the wild after jumping the fence. Some breeds of dogs can also survive without us if they have to. Finally we have obligate domestic animals such as lapdogs that wouldn’t survive for ten minutes in the wild.

Hogwash. The philosophy+AI crossover is the worst AI crossover.

GuinansEyebrows 18 hours ago [-]
> Likewise today a graphic artist who is faced with the prospect of his or her career being obliterated under an AI mushroom cloud might take a dim view of such technologies, without perhaps being aware that AI can be used in less obvious but more beneficial ways.

look, i'm sure there are very useful things you can use AI for as a designer to reduce some of the toil work (of which there's a LOT in photoshop et al).

but... i'm going to talk specifically about this example - whether you can extrapolate this to other fields is a broader conversation. this is such a bafflingly tone-deaf and poorly-thought-out line of thinking.

neal stephenson has been taking money from giant software corporations for so long that he's just parroting the marketing hype. there is no reason whatsoever to believe that designers will not be made redundant once the quality of "AI generated" design is good enough for the company's bottom line, regardless of how "beneficial" the tool might be to an individual designer. if they're out of a job, what need does a professional designer have of this tool?

i grew up loving some of Stephenson's books, but in his non-writing career he's disappointingly uncritical of the roles that giant corporations play in shepherding in the dystopian cyberpunk future he's written so much about. Meta money must be nice.

nottorp 17 hours ago [-]
> look, i'm sure there are very useful things you can use AI for as a designer to reduce some of the toil work (of which there's a LOT in photoshop et al)

Hey, has anyone done an "AI" tool that will take the graphics that I inexpertly pasted together for printing on a tshirt and make the background transparent nicely?

Magic wands always leave something on that they shouldn't and I don't have the skill or patience to do it myself.

mvdtnz 16 hours ago [-]
Canva does this really well. They use a product they purchased called remove-bg which is still mostly free.

https://www.remove.bg/
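
If you'd rather script it locally, the open-source rembg Python library (pip install rembg) does the same kind of ML background removal; a minimal sketch, with placeholder file names:

    # Remove the background from an image using rembg's pretrained model.
    from rembg import remove
    from PIL import Image

    img = Image.open("tshirt_design.png")          # placeholder input path
    cut_out = remove(img)                          # RGBA image, background now transparent
    cut_out.save("tshirt_design_transparent.png")  # PNG preserves the alpha channel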

GuinansEyebrows 17 hours ago [-]
this has been possible in photoshop using the AI prompt tool (just prompt "remove background") for a while but i haven't used it in long enough to tell you exactly how. depending on how you compiled the source image, i think it should be possible to get at least close to what you intend.

edit to add: honestly, if you take the old school approach of treating it like you're just cutting it out of a magazine or something, you can use the polygonal lasso tool and zoom in to get pretty decent results that most people will never judge too harshly. i do a lot of "pseudo collage" type stuff that's approximating the look of physical cut-and-paste and this is what i usually do now. you can play around with stroke layer FX with different blending modes to clean up the borders, too.

nottorp 3 hours ago [-]
> you can play around with stroke layer FX with different blending modes

Lost me about there :)

I'm involved in custom printed tshirts only a couple times per year at best and in image editing apart from that about zero times.

keybored 16 hours ago [-]
> > being obliterated under an AI mushroom cloud might take a dim view of such technologies, without perhaps being aware that AI can be used in less obvious but more beneficial ways.

How vivid. Never mind the mushroom cloud in front of your face. Think about the less obvious... more beneficial ways?

Of course non-ideologues and people who have to survive in this world will look at the mushroom cloud of giant corporations controlling the technology. Artists don’t. And artists don’t control the companies they work for.

So artists are gonna take solace in the fact that they can rent AI to augment their craft for a few months before the mushroom cloud gets them? I mean juxtaposing a nuclear bomb with appreciating the little things in life is weird.

GuinansEyebrows 14 hours ago [-]
it's the most "ignoring the forest for the trees" thing i've read in a long time.
thedudeabides5 4 days ago [-]
[flagged]
kurthr 21 hours ago [-]
I didn't actually read as much AI doomerism in the article as you did.

I saw his conclusion as being that it isn't that hard to go back to teaching/learning in the old ways; it's more of a human element that limits it. Whether it's the students, the parents, or the teachers who don't want to require work to be done and demonstrated to see advancement. It wasn't that long ago that oral exams and in-person homework or tests were regularly done. It's very recent, and it's certainly convenient to be remote, or to allow all technology all the time, but it's not required.

Stephenson's doomerism is about his estimation of future human choices, not the AI (such as it exists) itself.

cactusplant7374 20 hours ago [-]
The comment you responded to was generated by AI.
knowitnone 13 hours ago [-]
We certainly do know how bats see with their ears. It's called echolocation - very similar to sonar/radar - which we use all the time. "Sheepdogs can herd sheep better than any human." Running faster than humans allows them to do that. If humans ran that fast, I'm sure we could do it too. "intelligent considering how physically small their brains" - there's no correlation between brain size and intelligence. "Dragonflies have been around for hundreds of millions of years and are exquisitely highly evolved to carry out their primary function of eating other bugs." That's basic evolution, what's the point? Feels like this was written by AI. Should have just gotten to the point without all this exposition.
abnercoimbre 13 hours ago [-]
> Feels like this was written by AI.

This cheap remark doesn't add anything to the discussion, especially considering who the author you're insulting is. Most of us will overlook a logical flaw or two to follow his big-picture thinking.

tuatoru 12 hours ago [-]
Neal was referring to Thomas Nagel's famous essay, "What Is It Like to Be a Bat?".

From wikipedia: "The paper presents several difficulties posed by phenomenal consciousness, including the potential insolubility of the mind–body problem owing to "facts beyond the reach of human concepts", the limits of objectivity and reductionism, the "phenomenological features" of subjective experience, the limits of human imagination, and what it means to be a particular, conscious thing."

It would be taken for granted by nearly all participants in such bunfights that all of the others are familiar with that essay and the discussion it provoked.

1. https://en.wikipedia.org/wiki/What_Is_It_Like_to_Be_a_Bat%3F
