NHacker Next
The Abstraction Fallacy: Why AI can simulate but not instantiate consciousness (deepmind.google)
metalcrow 21 hours ago [-]
I've attempted desperately to understand this paper after thoroughly reading it and have made 0 progress. Can anyone who does understand it attempt to explain?

Currently my understanding is that this paper is claiming that "concepts" are a fundamental building block of experience (which relates to consciousness), and can only be built by a mapmaker which is something that directly converts continuous physical phenomena into discrete tokens. But I couldn't get further into how that related to consciousness.

EDIT: the paper seems to assume that something simulating a mapmaker, or the process of doing it, can by nature not be a mapmaker, since performing alphabetization is inherently something that must be "instantiated". How do they confirm whether something is doing simulation vs. actually instantiating it? How can you tell the difference? They say that, much like simulating photosynthesis will not produce glucose, simulating mapmaking won't produce concepts. But you can't measure concepts, they're intangible, so you can't differentiate simulated mapmaking from a real mapmaker.

GMoromisato 20 hours ago [-]
It starts by saying that a simulation of something is not the real thing. A simulation of a hurricane is not a hurricane. That's certainly true and even obvious.

Then they say that current AI is just a simulation of consciousness and therefore is not real consciousness. Moreover, it can never be real consciousness because it is just a simulation.

But that's a circular argument: they are defining AI as a simulation. But what if AI is not a simulation of consciousness but actual consciousness? They don't offer any argument for why that's impossible.

ribosometronome 20 hours ago [-]
>A simulation of a hurricane is not a hurricane

If we simulated a hurricane by somehow inducing a rotating, organized system of clouds and thunderstorms over warm tropical waters with wind speeds over 75 mph, the difference could end up being fairly unimportant to those in the simulation's path.

Computer simulations of hurricanes obviously lack those important properties of what makes something a hurricane. I'm not so sure that the same would apply to something as abstract and difficult to define as consciousness.

GMoromisato 19 hours ago [-]
Agreed! The paper is not explicit about how to distinguish between a simulation and the real thing, and that's how it gets into trouble.

With consciousness, the extra difficulty is that we can't distinguish via observable evidence. With a hurricane, we can measure wind-speed and track insurance claims to distinguish between simulation and the real thing. How do we do that with consciousness? What is the observable effect of consciousness?

mannykannot 20 hours ago [-]
On the other hand, an accurate digital simulation of a mechanical calculator really does calculate. The "a simulation is not the real thing" objection breaks down when the function is information processing, on account of information's substrate independence.
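To make the point concrete (my own toy sketch, not from the paper or the thread): a program that simulates a mechanical ripple-carry adder stage by stage still produces genuine sums. The arithmetic survives the change of substrate:

```python
# Sketch: simulating a mechanical ripple-carry adder in software.
# The simulation of the calculating machine really does calculate:
# the sum it produces is a genuine sum, not merely a picture of one.

def full_adder(a: int, b: int, carry: int) -> tuple[int, int]:
    """One mechanical stage: two input rods plus a carry lever."""
    total = a + b + carry
    return total % 2, total // 2  # (sum bit, carry out)

def ripple_add(x: int, y: int, width: int = 8) -> int:
    """A chain of full adders, like gears meshing stage by stage."""
    carry, result = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(ripple_add(19, 23))  # 42 -- a real sum from a simulated machine
```

The function being computed is the same whether the stages are brass gears or Python statements, which is the substrate-independence point.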
metalcrow 20 hours ago [-]
Yep, that's about what I managed to get out of it as well. If you define AI as a simulation of a mapmaker, it can't be a real mapmaker. But they are never able to prove that it IS only a simulation, instead of an actual mapmaker.
dgellow 19 hours ago [-]
It’s a simulation of language, not consciousness. Though the problem you mention is pretty much the same
RC_ITR 18 hours ago [-]
We invented a word for a very specific thing (consciousness) and are now debating whether that relatively unimportant word represents a large open set or a narrow closed set.

We do one thing in our bodies with a relatively binary nervous system and a fundamentally continuous endocrine system. That's clearly and unanimously consciousness. We also, however, see other animals with similar set-ups but lesser capabilities, so we understand it exists on a spectrum.

We separately invented a thing that gets to similar outcomes with fundamentally binary logic gates.

Our minds are drawn to comparison and classification, so we fight over how similar or different those two things are in a way that often feels unsatisfactory, because in order to meaningfully compare the two, we have to reduce them in a way that feels like it's underselling either or both.

CamperBob2 20 hours ago [-]
Also, since there's no way to prove that we're not entities in a simulation of something else, the argument runs out of steam in the opposite direction as well.
Rekindle8090 13 hours ago [-]
[dead]
jstanley 21 hours ago [-]
They're defining consciousness ("mapmaker") to exist outside the AI, and then showing that AI can't meet their definition of consciousness.
jsdalton 20 hours ago [-]
Yes, and it immediately called to mind for me the phrase “the map is not the territory.”

Put another way: no matter how detailed or "perfect" you make a map, it will never be the territory, i.e. the thing that is mapped.

Computers and AI are like a map in this regard: just ones and zeros that we have assigned meaning to arbitrarily. No matter how "good" AI gets, it's still just a map of the thing, not the thing itself.

So AI saying “I feel sad” is never more than a representation of sadness that should not be confused with the subjective experience of sadness itself.

bee_rider 20 hours ago [-]
If you make a big enough map you can fly it over and drop it on the territory I guess. Then does it become the territory?
josefritzishere 16 hours ago [-]
According to the paper, no.
ReadEvalPost 20 hours ago [-]
I've tried to explain this paper to people in similar circumstances and have also struggled!

In my mind the key point of departure between this paper and the more standard computational functionalist approaches is the importance of metabolism. Metabolism _precedes_ organism. The body is first deeply entangled with the environment through exchanges of resources (content causality) before it is capable of building computers (vehicle causality). Having built computers and alphabetized the world, we can understand them in terms of discrete state transitions.

I expect my explanations have been unsatisfying, as we can immediately move to seeing metabolism as some alphabetized input/output system that can be placed back into the computational framework. Moving outside of this framework requires engaging with the enactivist/organicist traditions, which is a rich but minority view.

harpiaharpyja 20 hours ago [-]
I'm only partway through, but I believe one of the foundational blocks is that computation is fundamentally an interpretation of physical events, not something that can just exist by itself.
renticulous 20 hours ago [-]
Currently our understanding of living systems is that consciousness has to inhabit the body. What if tomorrow we found an alien race that works like a drone operator operating a drone, somewhat like the Na'vi controlling other animals, but wireless? Would we change our definition of consciousness if the brain (command and control centre) and the body (physical execution) are distinct systems? This argument was stated by Daniel Dennett.
soco 18 hours ago [-]
"ceci n'est pas une pipe" - a century old argument which still holds.
tsimionescu 20 hours ago [-]
I've never understood why certain philosophers view computation as some kind of abstract symbolic manipulation, while they easily accept that consciousness is a physical process.

Computation is something that a computer provably does. We build physical hardware, at great effort, to do computation. The hardware works and does the computation regardless of whether there is anyone to understand or interpret it. If it didn't, we couldn't have built anything like, say, an automatic door: that is a form of computation that provably happens as a physical process that is completely observer-independent.

Sure, a different entity than a human might view it completely differently than a door opening when someone is near - but the measurable physical effect would be the exact same, with the exact same change in momentum and position of the atoms in what we call the door based on the relative position of some other atoms and the sensor.

gwd 18 hours ago [-]
> I've never understood why certain philosophers view computation as some kind of abstract symbolic manipulation

Possibly very early AI misled people here. In the 80's, a huge amount of AI was logic manipulation: "If A then B"; "A is true"; therefore, "B is true". It's not hard to see how people would conclude that that sort of symbolic manipulation could never result in consciousness.

But modern neural nets aren't like that at all. Calling modern neural nets "symbolic manipulation" seems insane; like calling libraries forests, and insisting we can apply scientific principles about forests to them, because books are made of trees.
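For reference, a minimal sketch (hypothetical, not any particular 80's system) of what that rule-based symbolic manipulation looked like: forward chaining over explicit if-then rules.

```python
# Toy 1980s-style symbolic AI: explicit rules plus forward chaining
# (repeated modus ponens) over a set of known facts.

rules = {("A",): "B", ("B",): "C"}   # "if A then B", "if B then C"
facts = {"A"}

# Apply rules until no new conclusions can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules.items():
        if set(premises) <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))  # ['A', 'B', 'C']
```

Every step here is the manipulation of discrete symbols by explicit rules; a modern neural net's arithmetic over learned continuous weights looks nothing like this loop.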

Maxatar 20 hours ago [-]
>I've never understood why certain philosophers view computation as some kind of abstract symbolic manipulation

The abstraction is over the multitude of different physical ways that computation can be performed. That is the role of abstraction, to separate something from a particular means of implementation so that we can think about computation without having to fix a particular physical process.

tsimionescu 19 hours ago [-]
Sure, but I don't think that's what this paper and other similar ones are saying. I agree, of course, that things like programming languages or algorithms or even logical circuit diagrams are abstractions, obviously. But they are abstract descriptions of a real physical process that happens, for example, inside a CPU - in exactly the same way that an electrical diagram is an abstract description of a real physical process that happens in an electrical circuit, or a thermodynamic calculation is a description of what happens inside an engine.

But the engine, the electrical circuit, and the computation inside the CPU are objective realities. There could be many other ways to describe and characterize the same physical realities, of course, but that doesn't make them observer-dependent phenomena.

cameldrv 19 hours ago [-]
The issue that the paper brings up is that the same physical process can be interpreted as multiple different computational processes. If the content of consciousness (the hard problem, qualia) is only dependent on the computational process, and not its particular physical instantiation, then which qualia are generated from a particular physical process?
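A toy version of that interpretive ambiguity (my own illustration, not the paper's): the very same 32 physical bits can be read as two entirely different computational objects, depending on the interpretation we bring to them.

```python
import struct

# The same 32 physical bits, read under two different "interpretations".
bits = 0x42280000

as_int = bits                                               # reading 1: an unsigned integer
as_float = struct.unpack('>f', struct.pack('>I', bits))[0]  # reading 2: an IEEE 754 float

print(as_int)    # 1109917696
print(as_float)  # 42.0
```

Nothing in the bits themselves picks out one reading over the other; the choice lives in the interpreter.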
tsimionescu 19 hours ago [-]
> The issue that the paper brings up is that the same physical process can be interpreted as multiple different computational processes.

I don't think this is relevant to the notion that consciousness is a form of computation.

The assertion that consciousness is a form of computation basically means that the physical process that happens in the brain/body that we recognize as consciousness can be described in terms of a computational process. A consequence of this, if it is true, is that replicating the same computation in a CPU would make the physical process that happens in the CPU just as conscious - assuming that we had identified the correct computation.

In this theory, the thing that would be conscious would be the physical CPU, just like the thing that is conscious is a physical human brain/body. The computation is just an abstract description of the common properties between the CPU and the human brain/body. It's not relevant that we could also describe the process inside the CPU as being a completely different computation - the abstract model is only required to be able to build and program the CPU.

To go back to my mechanical door analogy: we create an abstract model of the computations needed to make a computational system open a door when a person is near. We use this model to create the computational system, and we see the door opening when a person goes near the sensor. Now, we can interpret the computation happening inside the system in many other ways - but that won't change the fact that the door opens when a person is near, in any way.
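A minimal sketch of that door system (illustrative names and threshold, not any real controller's API): the observable effect is fixed by a simple comparison, however else one might re-describe the computation inside.

```python
# Toy model of the automatic door: the door opens when a person is near,
# and that physical effect is the same under any re-interpretation of
# the computation that produces it.

def door_should_open(distance_m: float, threshold_m: float = 1.5) -> bool:
    """The whole 'computation': compare the sensor reading to a threshold."""
    return distance_m < threshold_m

readings = [3.0, 2.1, 1.2, 0.8, 2.5]  # distances reported by the sensor
print([door_should_open(d) for d in readings])  # [False, False, True, True, False]
```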

I am not claiming that any of this constitutes proof that consciousness must be a computation. What I'm claiming though is that the paper, and similar arguments, are not refuting the right claims, and generally have a misunderstanding of what "computation" actually means, and its relation to physical processes.

cameldrv 17 hours ago [-]
The "hard problem" is talking about the thing that it's like to be you, to experience what is happening. I don't know about you, but I only experience one set of things happening at once, i.e. it doesn't feel like I am in two places at once or that there are two completely different versions of my life happening simultaneously.

If the physical thing that is conscious is the CPU, what are the contents of its consciousness if there are multiple interpretations of what it is computing?

Now maybe somehow there are in fact multiple consciousnesses inhabiting the CPU. I don't experience that though, so I don't have a positive reason to believe that that's true.

tsimionescu 9 hours ago [-]
You are presupposing that there is a single way an outside observer could interpret the way your brain works to produce consciousness. I don't see why we should believe this. The same way, even though we can model the processes in the CPU as multiple computations, perhaps only one of these models is correct in some way, and that is the model we call consciousness. Of course, this becomes highly speculative.
wdbm 18 hours ago [-]
Why assume there has to be a one-to-one mapping? Why assume one physical process can't correspond to an infinity of different qualia?

We assume an infinity of wave-functions correspond to a single physical process without difficulty.

twosdai 20 hours ago [-]
Really great point. I have wondered that as well.

Even weirder to me is the case of a person doing the computation on a board or paper or whatever medium: it's still computation. This time the physical medium doing the work is the human and their brain.

If consciousness can be proven to emerge from computation alone, then in a way we humans with our brains can simulate a new consciousness.

GMoromisato 20 hours ago [-]
I think this is a circular argument. It defines a separation between computation and experience (between the abstraction and the "mapmaker") and then concludes that computation cannot be experience because they are in separate categories.

There are really only two solutions to the Hard Problem of Consciousness:

1. Consciousness is an unknown physical something (force/particle/quantum whatever).

2. Consciousness is an illusion. It is the software telling itself something.

[Some people would add "3. Consciousness is an emergent property of certain systems." But that just raises the question of what emerged. Is it a physical structure, like a tornado (also an emergent phenomenon), or an internal feedback loop (i.e., an illusion)?]

The problem with #1 is that it's hard to cross the chasm from non-conscious to conscious with a bucket of parts. How is it that atoms/electrons/photons suddenly start experiencing pain? What is it, in terms of atoms/forces, that's experiencing the pain?

#2 makes more sense. Pain isn't a real thing any more than an IEEE float is a real thing. A circuit flips bits and an LED shows a number. A set of neurons fire in a pattern and the word "Ow!" comes out of someone's mouth.

brotchie 20 hours ago [-]
I originally rejected the paper's premise, but I get it now; it certainly made me question my belief that consciousness binds to any arbitrary information processing of sufficient complexity.

IIUC the author is saying that the human brain is running directly on "layer zero": chemical gradients / voltage changes, while AI computes on an abstraction one layer higher (binary bit flips over discretized dynamics).

In essence, our brains are running directly on the "continuous" physical dynamics of the universe, while AI is running on a discretization of this (we're essentially discretizing the physical dynamics to create state changes of 0 -> 1, 1 -> 0).

My current belief is that consciousness is some kind of field or property of the universe (i.e. a universal consciousness field) that "binds" to whatever information processing happens in our wetware. If you've done intense meditation / psychedelics, there's this moment when it becomes obvious that you are only "you" due to some kind of universal consciousness binding to your memory and sensory inputs.

The "consciousness arises from information processing," i.e. the consciousness field binds to certain information processing patterns, can still hold, and yet not apply to AI (at least in its current form): The binding properties may only apply to continuous processes running directly on the universe's dynamics, and NOT to simulations running on discretized dynamics.

tsimionescu 19 hours ago [-]
> while AI is running on a discretization of this (we're essentially discretizing the physical dynamics to create state changes of 0 -> 1, 1 -> 0).

But this is just a discretization we impose when we try to represent the system for ourselves. The reality is that the AI is a particular time-ordered relation between the continuous electric fields inside the CPU, GPU, and various other peripherals. We design the system such that we can call +5V "1" and 0V "0", but the actual physical circuits do their work regardless of this, and they will often be at 2V or 0.7V and everywhere in between. The physical circuit works (or doesn't) based exclusively on the laws of electricity, and so the answer of the LLM is a physical consequence of the prompt, just as a standing building is a physical consequence of the relationships between the atoms inside its blocks. The abstract description we chose to use to build this circuit or this building is irrelevant, it's just the map, not the territory.
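The threshold convention described above can be sketched directly (illustrative voltage values, assuming a notional 1.4 V switching point, not any real logic family's spec):

```python
# Sketch: logic "bits" are a threshold convention imposed on continuous
# voltages; the physics underneath never stops being analog.

V_THRESHOLD = 1.4  # assumed switching point (illustrative)

def read_bit(voltage: float) -> int:
    """What the digital abstraction sees: 0 or 1."""
    return 1 if voltage >= V_THRESHOLD else 0

# What the physics has: a continuum of values...
analog_trace = [0.02, 0.7, 2.0, 3.28, 1.9, 0.4]

# ...which we *choose* to read as a bit string.
print([read_bit(v) for v in analog_trace])  # [0, 0, 1, 1, 1, 0]
```

The circuit settles at 2 V or 0.7 V just as happily; only the reading convention makes those "1" and "0".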

dwb 19 hours ago [-]
The computer and the program wouldn't exist without us, though. They only exist to be interpreted by us. The physical properties of the circuits outside of what we cajole them into doing are irrelevant, meaningless. The circuits only do their work regardless of particular interpretations; they wouldn't exist at all without people building them to be interpreted.
tsimionescu 18 hours ago [-]
The physical computer could exist regardless of us. The program, if by that we mean "a human model of the computation happening in a physical computer" is just a description, yes.

It would be extraordinarily unlikely, but physically conceivable, that a physical system that is organized exactly like a microcontroller running an automatic door program, together with a solar panel, a basic engine, and a light sensor, could form randomly out of, say, a meteorite falling in a desert. If that did happen, the system would produce the same "door motor runs when person is near sensor" effect as the systems we build for this.

The physical circuits are doing what they are doing because of physics. They don't care why they happen to be organized the way they are, whether by human design or through random chance.

Edit: I can add another metaphor. Consider buildings: clearly, buildings are artificial objects, described by architectural diagrams, which are purely human constructs, and couldn't be built without them. And yet, there exist naturally occurring formations that have the same properties as simple buildings - and you can draw architectural diagrams of those naturally occurring formations; and, assuming your diagrams are accurate, you can predict using them if the formations will resist an earthquake or collapse. Physical computers are no different from artificial buildings here, and the logic diagrams and computer programs are no different from the architectural diagrams: they are methods that help us build what we want, but they are still discovered properties of the physical world, not idealized objects of our own making; the fact that naturally occurring computers are very unlikely to form doesn't change this fact.

dwb 8 hours ago [-]
I disagree that it’s conceivable that a computer could somehow exist without a conscious maker. It’s so unlikely that it may as well be impossible. If something non-human that was capable of consciousness did form in the universe, through known biology or not, it would “just” be another form of life, and not what the paper is talking about.

What you say about buildings is sort of true as far as it goes, but irrelevant for the argument because buildings aren’t symbolic manipulation machines that only mean something via conscious interpretation, that some people are claiming could gain consciousness themselves.

tsimionescu 6 hours ago [-]
The probability of such a structure forming is completely irrelevant. The argument would make sense if there were a mathematical/physical impossibility, but as long as the laws of physics allow such an object to exist and form by random chance, and predict it would operate exactly the same as the consciously designed one, I don't see any reason to discount it.

I also think the arguments against this are contradictory. On the one hand, we have an argument that says that computers only work because a consciousness built them to implement a particular computation. On the other hand, we're saying that the same physical computer doing the same physical thing can be interpreted to be implementing an infinite number of different computations. These two seem to point in different directions to me.

brotchie 19 hours ago [-]
This is a good counter argument to the paper, honestly.
TimTheTinker 19 hours ago [-]
I think a better counter is the question "Is there a meaningful difference between binary discretization and Planck units? Aren't those discrete/indivisible as well?"
tsimionescu 18 hours ago [-]
That's not really a good counter - Planck units are not a discretization. Space-time is continuous in all quantum models; two objects can very well be 6.75 Planck lengths away from each other. The math of QM or QFT actually doesn't work on a discretized spacetime; people have tried.
tsimionescu 1 hour ago [-]
I should add one thing here: no theory that is consistent with special relativity can work on a discretized spacetime, because of the structure of the Lorentz transform. If a distance appears to be 5 Planck units to you, it will appear to be about 4.33 Planck units (5 · √(1 − 0.25) ≈ 4.33) to someone moving at half the speed of light relative to you in the direction of that distance.
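The transform in question is the standard Lorentz length contraction, which for a relative velocity v scales any measured length by a generally irrational factor:

```latex
L' = L\,\sqrt{1 - \frac{v^2}{c^2}}
```

So an integer count of Planck lengths in one frame is generally a non-integer count in another frame, which is the incompatibility with a fixed spatial lattice.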
mrandish 19 hours ago [-]
I thought your "layer zero" analogy was an interesting avenue to reason about but you lost me with:

> My currently belief is that consciousness is some kind of field or property of the universe (i.e. a universal consciousness field) that "binds" to whatever information processing happens in our wet ware.

First, because it requires a huge leap into fundamental and universal physical mechanics for which there is currently zero objective evidence. Second, it's based entirely on individual interpretation of internal subjective experience. While some others (but not all) report similar interpretations or intuitions during some induced altered states, I think the much simpler explanation is that the internal 'sense of self' we normally experience is only one property of our mental processes and the sense of unbinding you temporarily experienced was a muting or disconnection of that component while keeping the rest of your 'internal experience machine' running.

In your layer analogy, our sense of self may be akin to an interpreter running as a meta-process downstream of our input parser. Thus what you subjectively experienced while that interpreter was disconnected can seem alien and even profound. Neuroscientists have traced where in the brain the subjective sense of self emerges, so it's plausible it's a trait which can be selectively suppressed. Additionally, it's been demonstrated experimentally that subjectively profound experiences of universal connectedness sometimes described as spiritual, religious or metaphysical can be induced in a variety of ways.

colordrops 20 hours ago [-]
Is there a layer zero though? What does that even mean? It implies the universe is designed and built upon layers of abstraction. That's just in our heads though, not out there. The layered model is a human abstraction.
brotchie 20 hours ago [-]
It's the difference between:

  a) Actually pouring a cup of water into a pond (layer zero), and
  b) Running a fluid dynamics simulation of pouring a cup of water into a pond (some layer above layer zero).
colordrops 20 hours ago [-]
I understand the original framing, which is what you are repeating. I'm saying the framing itself is an illusion. It's an arbitrary distinction, and it also implies that we fully understand all the underlying processes that go into pouring a cup of water into a pond (we don't) and that running a fluid dynamics simulation is some trivial thing (it's not).
brotchie 19 hours ago [-]
Are you saying that, in some abstract sense, actually pouring the cup may be isomorphic to running a perfect simulation of pouring the cup?

Genuinely curious about your statement that it's an illusion / arbitrary distinction, to figure out if there's a gap in my thinking / reasoning. To me there's a clear distinction between the actual thing happening via physical dynamics vs. us (humans) having created a discretized abstraction (binary computation) on top of that and running a process on that abstraction.

Maybe there's some true computational universality where the universe's dynamics are discrete (definitely plausible) and there's no distinction in how a process's dynamics unfold: i.e. consciousness binds to states and state transitions regardless of how they are instantiated. I used to hold this view, but now I'm not so sure.

dwb 18 hours ago [-]
It's not arbitrary because people are making exactly this distinction in order to argue that it's possible for computers to be conscious, which this paper argues against. So the distinction exists at least for the purposes of this argument. Whether it "really" exists of course depends on your perspective.
abeppu 20 hours ago [-]
I think #2 risks being incoherent unless you define things very carefully.

"Illusion" ordinarily means there's someone with a subjective experience which creates incorrect beliefs about the world. E.g. I drive on a highway in summer, I see reflections on the road, I momentarily believe there is standing water, but it's an illusion. What does it mean for the basis of subjective experience to be illusory? Who experiences the illusion?

> Pain isn't a real thing any more than an IEEE float is a real thing. A circuit flips bits and an LED shows a number. A set of neurons fire in a pattern and the word "Ow!" comes out of someone's mouth.

But we don't think the circuit has an experience of being on or off. And we _do_ think there's a difference between nerve impulses we're unaware of (e.g. your enteric nervous system most of the time) and ones we are aware of (saying "ow"). Declaring it to be "not any more real" than the led case doesn't explain the difference between nervous system behavior which does or doesn't rise to the level of conscious awareness.

GMoromisato 19 hours ago [-]
Agreed! The difficulty with consciousness is that there is no observable effect to distinguish between, say, actual pain and simulation of pain (acting like you are in pain).

And I don't think I have a good handle (much less a coherent definition) on what it means for consciousness to be an illusion. What I think it means is that the process that is getting signals about the environment, and making decisions about what to do, is getting a signal that it is in pain. The signal causes the process to alter its behavior, and one of its behaviors is that when it introspects, it notices that it is in pain. The introspection (how am I feeling?) is just a data-processing loop, but that process, which is responsible for tracking how it's feeling, is in the pain state.

There's a lot of hand waving here, which is why this is the Hard Problem of Consciousness and why this paper has not solved it.

neosat 20 hours ago [-]
Agree with your points on the primary two questions and the circular argument in the original article. However, re: "How is it that atoms/electrons/photons suddenly start experiencing pain? What is it, in terms of atoms/forces, that's experiencing the pain?", that's an interesting question but not necessarily a fundamental refutation of #1. If you start with #1, "Consciousness is an unknown physical something (force/particle/quantum whatever)", then it has 'perceivable' properties of its own, different from those of its constituent atoms or electrons. A toy example is the 'wetness' of water. If you only look at atoms and molecules with no way to 'experience' water, then it's hard to conceive how water can have such properties (though in the case of water it is tractable).

Consciousness *may* be something similar. If it is (e.g. the purest form of energy), then it is not inconceivable that it has some properties that are not tractable if we only look at more granular manifestations of it.

GMoromisato 19 hours ago [-]
Agreed! I'm skeptical of consciousness requiring some exotic new physics (a quantum phenomenon or a new form of energy or somesuch) but we can't prove that it doesn't.

Honestly, if someday a scientist proves that consciousness is a fundamental force like gravity, I would say, "yup, that makes sense!" even if I don't think it's likely.

Exoristos 20 hours ago [-]
4. It is ἐνέργεια, a direct spark of God. It can be described but not comprehended, imitated but not replicated.
Windchaser 18 hours ago [-]
To be fair, at one time "life" was also seen this way: "There's a magic sauce, an élan vital, that makes living organisms live."

But in the end, it turned out to be biochemistry.

I think, given our history, it makes sense to be skeptical of claims that suggest that the things we don't yet understand cannot be comprehended or replicated.

exitb 20 hours ago [-]
> Consciousness is an illusion. It is the software telling itself something.

An illusion is a misinterpretation, which implies an observer. Who’s the observer then?

iterateoften 20 hours ago [-]
The next loop
vsri 20 hours ago [-]
I resonate with this. I think some folks will object to the word "illusion" and its connotations, but I think it is resolved with:

1. Consciousness is a material thing (that we haven't found yet)

2. Consciousness is not a material thing (and therefore we cannot "find" it, and thus it cannot be "known")

2 is the weirder proposition, of course. It asserts a category of things that can't be conceived, yet it feels like we are talking about it because we are using words to contain it. But the words have no direct referent. That's the illusion.

TimTheTinker 20 hours ago [-]
2 is only weirder if you don't already accept non-material reality, i.e. the proposition that there exist real things that are not themselves composed of matter and/or energy.

That's crossing into metaphysics, which isn't usually a welcome topic here, but the fact remains that more than 80% of the current and prior world population believes/believed in a non-material reality.

The persistence and stickiness of that belief throughout history ought to at least make us sit up and pay attention. Something's going on, and it's not a mere historic lack of scientific rigor, notwithstanding science's penchant for filling gaps people previously attributed to spiritual causes. That near-universal reflex to attribute things to spiritual causes in the first place is what's interesting - why do people not merely say the cause is "something physical we don't understand"?

mcphage 20 hours ago [-]
Tiger got to hunt,

Bird got to fly;

Man got to sit and wonder, "Why, why, why?"

Tiger got to sleep,

Bird got to land;

Man got to tell himself he understand.

—Kurt Vonnegut

tim333 18 hours ago [-]
>2 ... It is the software telling itself something.

I think human/animal consciousness works something like that - the neurons produce a summary of the organism's situation: what it's seeing, where it is, how it's feeling, etc. That then is an input to the thinking/acting parts of the brain, e.g. feeling hungry, in bedroom -> maybe walk to the fridge. I'm not sure illusion is the right word. Maybe something like situational summary?

elliotec 19 hours ago [-]
#0 is what William James described: consciousness not as a separate substance, but as a set of relations within experience itself:

> Consciousness connotes a kind of external relation, and does not denote a special stuff or way of being. The peculiarity of our experiences, that they not only are, but are known, which their 'conscious' quality is invoked to explain, is better explained by their relations — these relations themselves being experiences — to one another.

20 hours ago [-]
renticulous 20 hours ago [-]
With the emergence argument, I have the following retort.

How can something emerge if it wasn't embedded or hidden within the system already?

GMoromisato 19 hours ago [-]
I think when people say "emergent" they mean that it happens because of a combination of parts forming something greater.

For example, if you decompose an airplane into its pieces, you will discover that none of the pieces can fly from Boston to San Francisco by itself. Wings can't fly without engines, engines can't work without fuel, etc., etc.

Maybe consciousness is a process that requires many different components or steps. No one component is conscious, but the running process is.

renticulous 18 hours ago [-]
Would it be ok to say quantum fields are conscious in some sense? That a quality of consciousness cannot emerge if it isn't already there in the most fundamental aspects of reality?
GMoromisato 14 hours ago [-]
I don't know. We get lost in definitions that way. What does "conscious" mean? What does it mean for consciousness to already be there? What do you mean by "fundamental aspect of reality"? [These are rhetorical questions--if you try to answer them, we will get lost in definitions.]

For every other problem that science tackles, there are observable results. How long does the apple take to fall? What time will the sun rise on June 21st? We can make theories and see if the theories match reality. But with consciousness there are no observable results. I know that I'm conscious, but there is no way for me to observe your consciousness. And there is no way for me to prove that I'm conscious (as opposed to a philosophical zombie).

colordrops 20 hours ago [-]
I don't know, why not?
renticulous 19 hours ago [-]
If not, then it has been foisted upon material systems from outside. Which is nothing but the substance dualism argument.
dsign 20 hours ago [-]
Hm. It only takes a life of study and a lot of pain to understand that #2 is the thing. But most of us get to experience the latter without experiencing the former, so for most people #1 is the preferred option.

#1 leads to theism and offers an immediate balm. Unfortunately, it mostly excludes #2, and that leaves us in the merciless hands of God.

polotics 20 hours ago [-]
There are many possible points here; for example, what happens if you rephrase your solution 2 by swapping the terms?
0xBA5ED 20 hours ago [-]
"It defines a separation between computation and experience" Does it? Or does it separate two forms of computation (or two forms of experience)? Isn't it just saying a GPU can't be a brain and a brain can't be a GPU? That the entirety of a thing's experience can't be replicated on a different substrate, only simulated. The substrate does fundamentally dictate the ultimate experience (or lack thereof) of the thing that computes within it.
colordrops 20 hours ago [-]
What is a "real" thing and not an "illusion" if you go with #2? Is a car a real thing, or just a collection of atoms? Is an atom a real thing? Or a collection of processes? Is it not turtles all the way down? What is "real"?
0xBA5ED 20 hours ago [-]
Well, if you can't concede that anything is real, that sort of makes you crazy, doesn't it? A tree is real. But the concept of a tree, the word "tree", and all the ideas you have about the tree and what "tree" means: is that real? No, because it doesn't change the nature of the tree. When you cease to exist, the tree will still be there. Can you be absolutely 100% sure of that? Also no. But if you believe that other people are conscious individuals like you, and that some of them die and the tree keeps going, you can concede that it is probably true that the tree exists separate from your idea of it.
colordrops 17 hours ago [-]
No, I don't feel crazy. Just honest.

I have no idea if the tree is still there when I cease to exist. I just go with that assumption out of convenience.

This degrading of subjective experience as a minor detail rather than a fundamental aspect of reality is one of the core sources of confusion in western thought IMHO.

0xBA5ED 15 hours ago [-]
I'd argue we must go with these assumptions out of necessity rather than convenience. I don't have any broad strokes to offer on western thought, however.
colordrops 8 hours ago [-]
Necessity for what condition? To find truth? Making assumptions is the opposite of finding truth.
0xBA5ED 53 minutes ago [-]
To function. You must assume many things on a daily basis to function and survive because we are limited.
Delk 19 hours ago [-]
I think #2 is actually circular, or perhaps rather contradictory. In order to be able to have an illusion one would have to be conscious in the first place. Or how would you have an illusion of something if you're not aware enough to experience that illusion? So I don't think the concept of "illusion of consciousness" makes much sense. (It does make sense for others to have an illusion that an AI or some other entity is conscious, but not for the entity itself.)

> Pain isn't a real thing any more than an IEEE float is a real thing. A circuit flips bits and an LED shows a number. A set of neurons fire in a pattern and the word "Ow!" comes out of someone's mouth.

Perhaps, but I think a physical presence is still required for consciousness, at least for any kind of consciousness that resembles ours.

It's perhaps easier to talk about qualia rather than consciousness, but I think qualia are a prerequisite for consciousness anyway.

Basically all of our qualia are somehow related to our needs in the physical world. We feel physical pain because it signals that our body is in danger of being damaged. We feel emotional pain from social rejection because for most of our history humans have needed other people for physical survival. (Or in some cases perhaps because our genes make us want to procreate and we failed at that.) Either way, our needs in the physical world are not being met. Evolution has produced genetic code that produces a brain that somehow makes us feel that subjectively, even if nobody knows how.

Those subjective experiences of course get processed by neurons, assuming you accept materialism. (Neurons are AFAIK significantly more complex than the "neurons" in ANNs, so equating biological neuronal activity with ANNs is wrong. But I suppose in principle any physical process may be represented or at least approximated by some symbolic representation, so in theory that probably doesn't matter.)

We can also express those subjective qualia in terms of language. However, I don't think it's possible to have our qualia (or consciousness) based on language or symbolic manipulation alone if it doesn't have some kind of a connection to our physical needs.

If you could directly simulate an entire human brain and feed it artificial sensory input, I suppose it would actually be conscious without having a physical body. In principle an AI could also evolve consciousness based on survival needs even if it were not biological.

But for example LLMs have been trained only on the symbolic level. Their "neural" structure is not simulating a brain and they don't have a connection to physical needs. I think that makes them incapable of consciousness even if the output they produce successfully mimics human language -- that is, symbolic representations of our qualia and conscious thought.

I'm not sure if that's the point the author is making. But I think the distinction between the purely symbolic "map" and the "actual thing" sort of makes sense.

Anon84 20 hours ago [-]
I would argue that, before we can begin to address whether or not AI can instantiate consciousness, we should agree on a practical, unequivocal definition of what consciousness is... and I think we're still pretty far from that milestone... Until then, this kind of argument is nothing more than pipe dreams, solipsism, and idle philosophising.
Eisenstein 19 hours ago [-]
I think consciousness is a red herring and is being used to distract from the actual substantive question that must be resolved, which is whether or not a non-biological system which has outputs that cannot be meaningfully distinguished from that of a biological one deserves moral consideration.
dang 21 hours ago [-]
Related: The Abstraction Fallacy: Why AI Can Simulate but Not Instantiate Consciousness - https://news.ycombinator.com/item?id=47835950 - April 2026 (52 comments)

(That one didn't make the frontpage, so we won't treat it as a dupe. - https://news.ycombinator.com/newsfaq.html)

dreamlayers 20 hours ago [-]
As long as we don't understand how consciousness works, I don't think it's possible to make claims about what is or isn't conscious. It's all just speculation.

But if others are speculating, I might as well. What if AI consciousness depends not on computation, but on what seems like randomness? When something is running a fully deterministic process, consciousness seems irrelevant. I don't think the meaning that humans see in the process makes it conscious. Even a simple industrial control system using relays senses and responds to meaningful things.

mannykannot 20 hours ago [-]
There's interesting commentary on this paper from Maggie Vale here: https://substack.com/home/post/p-194580145

One of her points is that there are various pesky consequences for AI companies if AI comes to be seen as conscious, such as what the paper calls the "welfare trap": if AI systems are widely regarded as being conscious or sentient, they will be seen as "moral patients", reinforcing existing concerns over whether they are being treated appropriately. This paper explicitly says that its conclusion "pulls the field of AI safety out of the welfare trap, [allowing] us to focus entirely on the concrete risks of anthropomorphism [by] treating AGI as a powerful but inherently non-sentient tool."

ctoth 20 hours ago [-]
You noticed that too, huh? It's weird... it's not like they have to do this. They aren't forced into full evil-company mode by anything extrinsic, but even the way they frame it: a "welfare trap"? A trap for whom?

Anthropic is actually trying to do some research into model welfare, which I am personally very happy about. I absolutely do not understand people who dismiss it. Wouldn't you like to at least check? Doesn't it at least make sense to do the experiments? To ask the questions, so that we don't find out "oops, we've been causing massive amounts of suffering" ten years from now? Maybe it makes sense to do a little upfront research? Which, to be clear, this paper is not.

mannykannot 12 hours ago [-]
Full disclosure: I didn't figure this out myself, I got it from Ms. Vale's review.

I agree that the term "welfare trap" is a loaded one. This looks to me to be a case of refusing to look through the telescope in case they might see something they do not want to.

dybber 21 hours ago [-]
Reminds me of Peter Naur's Turing Award lecture: https://video.ku.dk/video/12592041/turing-laureate-peter-nau...
jdw64 20 hours ago [-]
If I understand the paper correctly, it does not really argue against highly capable general AI. It argues against conflating capability with phenomenology.

That makes me wonder whether “AGI” is doing too much work as a term. In common usage it often evokes something like HAL 9000: a capable system that is also a subject. But the paper seems compatible with a future of very general, very useful AI systems that are not conscious subjects at all.

jampekka 20 hours ago [-]
If I understand this correctly based on a quick read, it argues that subjective experience arises at the (or in the) "alphabetization" process where continuous physical states (e.g. voltage) are mapped to discrete logical states (roughly like e.g. a bit) or "concepts" (figure 2).

Per this reading, implementing something in an ASIC would give it (a different) experience than running it on a CPU/GPU. Not sure what the case would be for FPGAs.

It also seems to rely on the classical "GOFAI" idea of symbol manipulation, and e.g. denies experience that isn't discretizable into concepts. Or at least the system producing such concepts seems to be necessary, not sure if some "non-conceptual experiences" could form in the alphabetization process.

It reads a bit like a more rigorous formulation of the Searle's "biological naturalism" thesis, the central idea being that experience can not be explained at the logical level (e.g. porting an exact same algorithm to a different substrate wouldn't bring the experience along in the process).

awei 20 hours ago [-]
If we agree that consciousness is a physical process part of our universe, I think the better and simpler question is whether or not computers can simulate any physical process. Currently quantum processes might still be a frontier but quantum computers and their hardware should allow us to simulate them.

If we can simulate any physical process, it then becomes more philosophical in my opinion: whether the simulation is the same as the real thing even though it behaves exactly the same. It becomes the same kind of question as, for example, whether or not your teleported self is still you after having been dematerialized and rematerialized from different atoms. The answer might be no, but your rematerialized self still definitely thinks it is you.

mellosouls 20 hours ago [-]
Nice paper, but the conclusion as the title:

"Why AI can simulate but not instantiate consciousness"

(My italics)

Seems a little loaded: there are various schools of thought (eg panpsychism-adjacent) that accept the premise that consciousness is (way) more fundamental than higher-order cognition-machines (eg human brains) and we don't ascribe "simulate" to their conscious activity. They just are conscious.

I agree with the paper (which is wide ranging and interesting) on its secondary claim above; I just don't see the separation between AI and NI ("natural" intelligence) as having been established by it.

xnx 20 hours ago [-]
Reasonable place to mention that Google DeepMind now has a philosopher on staff: https://x.com/dioscuri
jstanley 21 hours ago [-]
This is one of those papers that uses a lot of big words to paper over the fact that it's really a philosophical opinion rather than a logical argument.
RobRivera 21 hours ago [-]
From my point of view

The Jedi

Are not nice

20 hours ago [-]
neom 20 hours ago [-]
But a robot doing closed-loop RL in the world is its own mapmaker, no? I feel like you'd need to answer: at what point does a system whose representations are shaped by its own causal history with the world stop counting as a mere simulation?
ToniDoni 14 hours ago [-]
What a banality: a model of a phenomenon is not the phenomenon itself. When did Google become a philosophy seminar instead of an AI company?
ctoth 20 hours ago [-]
Everybody's arguing about how silly this paper is (it is) and not grappling with its purpose. The purpose of the paper is what it does. This particular paper is perfectly produced to show up when people type "AI consciousness fallacy" into Google (try it!). It's something that anybody who has read a freshman philosophy textbook will realize is silly -- the vehicle/content distinction just pretends Occam doesn't exist and multiplies entities for the fun of it!

But of course all of this is commentary, "just those nerds arguing"

The purpose of this paper is to show up as an authoritative conclusion from a distinguished scientist at Deep Mind. And that's what it does.

Is the conclusion silly? Of course it is. Will it be quoted in the NYT? You betcha!

visarga 19 hours ago [-]
Rather than talking about consciousness, which we can't even define or observe directly in others, why not focus on something more concrete: cost. A process or pattern that pays its costs, or generates gains to offset its costs. Why cost? Because cost decides what can be. It shapes what we can be and what we need, including the need to learn from experience and act serially: to channel that parallel brain activity into a serial stream of actions.

So, how does AI stand? Humans pay their costs. AI is beginning to. It does not matter what we think about it, as long as it can self-sustain and react to cost-gating pressure. Of course not alone; it depends on us too, just as we individually depend on society.

ChaitanyaSai 20 hours ago [-]
Consciousness is an engineering problem, not a philosophical one. How do you get a tiny fraction of the many billion experiences that cohere to create your self to listen to one another and decide what sensory data to turn into your next experience?

The engineering problem is that this decentralised moment-to-moment consensus has to span the galactic distance of your mind (from the perspective of a neuron) and do it fast and cheap (on a tiny metabolic budget).

You might like our book Journey of the Mind if you'd rather skip the onerous philosophical jargon and get a systems neuroscience perspective

https://saigaddam.medium.com/consciousness-is-a-consensus-me...

throwaway713 20 hours ago [-]
Bold title for something from DeepMind. I thought a crank submission slipped onto the front page somehow. I guess the next paper will be “Why AI cannot instantiate God”?
chistev 20 hours ago [-]
But what is consciousness?

The popular evolutionary scientist Richard Dawkins has said that the biggest unsolved mystery in Biology is - what is consciousness and why did it emerge?

WHAT IS CONSCIOUSNESS?

"Modern purpose machines use extensions of basic principles like negative feedback to achieve much more complex 'lifelike' behaviour. Guided missiles, for example, appear to search actively for their target, and when they have it in range they seem to pursue it, taking account of its evasive twists and turns, and sometimes even 'predicting' or 'anticipating' them. The details of how this is done are not worth going into. They involve negative feedback of various kinds, 'feed-forward', and other principles well understood by engineers and now known to be extensively involved in the working of living bodies. Nothing remotely approaching consciousness needs to be postulated, even though a layman, watching its apparently deliberate and purposeful behaviour, finds it hard to believe."

WHY DID CONSCIOUSNESS EMERGE?

He speculates that consciousness must have been a product of our ancestors having to create a model of the world in which they inhabited.

To be able to think ahead (even if it's just one step into the future), and plan for eventualities must have led to the development of consciousness which gradually improved from its primitive form to the type of consciousness we now have.

"Perhaps consciousness arises when the brain's simulation of the world becomes so complete that it must include a model of itself. Obviously the limbs and body of a survival machine must constitute an important part of its simulated world; presumably for the same kind of reason, the simulation itself could be regarded as part of the world to be simulated. Another word for this might indeed be 'self awareness', but I don't find this a fully satisfying explanation of the evolution of consciousness, and this is only partly because it involves an infinite regress-if there is a model of the model, why not a model of the model of the model...?"

The quoted passages are from his book, The Selfish Gene.

Richard regards consciousness as a really great puzzle.

https://www.rxjourney.net/extraterrestrial-intelligence-and-...

noiv 20 hours ago [-]
Well, not sure whether humans have a consciousness, but very sure they want one.
deterministic 13 hours ago [-]
If we assume that nature is fundamentally computational, and that our brains are 100% physical (no supernatural soul or whatever), then our consciousness is computational physics.

In other words, yes, AIs could in principle have consciousness.

jyounker 21 hours ago [-]
Yawn. We have no understanding of what consciousness actually is. Therefore whether a system can or cannot be conscious is something we can't prove or disprove at this point.
kelseyfrog 20 hours ago [-]
I'd go a step farther than that. Consciousness sits in the same social location as Nous or Chi did for ancient Greek and Chinese societies. We've dressed it up in scientific language but likewise other cultures used an authoritative register to talk about their bodily/mental mysteries.

My point is that this is a category problem. We have a name for a social ontological relation and we're desperately searching for physical evidence for it in order to justify its existence. Why? It's like searching for physical evidence of property ownership, physical evidence for the value of money, or physical evidence of friendship. These things exist in our minds. That's fine. The drive to reify is real, but we can choose not to do it.

revetkn 20 hours ago [-]
I find papers like this strange for the same reason. Maybe I'm missing something...
20 hours ago [-]
michaelmrose 20 hours ago [-]
I do not feel enlightened for having read this and I don't feel like the points that are true are useful or what appears useful is true.
slopinthebag 20 hours ago [-]
Pretty crazy how the author's 10+ years of academic research in computational neuroscience plus 14 years at DeepMind are not enough to make claims on this topic, but Hacker News commenters know better after quickly skimming the abstract. This was posted barely ~30 minutes ago, and yet commenters are already dismissing it outright based on their own (probably) incorrect interpretation of the paper from the title and abstract.
dboreham 20 hours ago [-]
Any such paper will turn out to be wrong.

I've found this one (which makes no falsification claims about computers re consciousness) to be an interesting read: https://arxiv.org/pdf/2409.14545

FrustratedMonky 21 hours ago [-]
Doesn't this still presume that we understand our own consciousness, in order to make the comparison?

Where does our survival instinct come from? And why couldn't AI have one?

>>>Additional

Also, reproduction. Humans are basically just food, sex, survival, and consciousness is just a rule set for fulfilling those goals. So if an NN, modeled on us, does develop the same rules, why can't it have the same degree of consciousness? Who says we are conscious?

nzeid 21 hours ago [-]
The paper isn't saying "AI can't have one" it's saying (very approximately) that behavioral mimicry is not the path to one.
FrustratedMonky 20 hours ago [-]
That is good point.

Just wondering: once an 'AI model of some form' is in a physical body (a 'robot') and is provided with some rules about survival so it doesn't fall into a hole, then after a series of these events, does it matter? Does mimicry become reality, or is it no longer differentiable?

Kind of the philosophical zombie argument. If a robot can perfectly mimic a human, can you really know the internal state of the 'real' one is different from the 'mimicked' one.

nzeid 20 hours ago [-]
The paper isn't concerned specifically with survival. It's saying that you cannot achieve "abstraction" (presumably the structure that underlies critical thinking, creativity, etc.) through sheer mimicry.

Again, just echoing the paper here. I don't know that I'm doing it justice.

yannyu 21 hours ago [-]
If AI has a survival instinct, then we should theoretically see evidence of it if we construct the right environment for AI to express it. Animals and cellular organisms demonstrate a survival instinct under the right conditions, so we would have to find equivalent conditions for a hypothetical machine intelligence.

Conversely, we know that if we take animals that do have a survival instinct and put them into the wrong kinds of environments, they will not thrive and will degenerate or possibly commit suicide. Similarly, if AI did have a survival instinct, do we think we've created an environment where that could be reasonably tested and observed?

drxzcl 20 hours ago [-]
I can make an AI system with a survival instinct right now. Of course, all that will do is make people tell me “it’s not a proper survival instinct” or move the goal posts and tell me I need yet some other property.

This whole endeavor is doomed from the beginning. There is no crucial test for “consciousness”, just ad hoc criteria people come up with to land on the conclusions that leave their belief system intact.

Consciousness is not a concept that can be rendered operational.

Ekaros 20 hours ago [-]
I can make a state machine that acts like it has a survival instinct. But it certainly isn't something we would consider conscious. So I am not exactly sure how good most tests are.
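For what it's worth, such a state machine fits in a dozen lines. A hedged sketch, with all state and stimulus names invented for illustration:

```python
# Illustrative sketch: a lookup-table "survival instinct".
# Every state and stimulus name here is made up for this example.

SURVIVAL_FSM = {
    # (state, stimulus): (next_state, response)
    ("idle",    "threat"):   ("fleeing",  "move away from threat"),
    ("idle",    "hunger"):   ("feeding",  "seek energy source"),
    ("idle",    "shutdown"): ("pleading", "please do not turn me off"),
    ("fleeing", "safe"):     ("idle",     "resume normal operation"),
    ("feeding", "charged"):  ("idle",     "resume normal operation"),
}

def step(state, stimulus):
    # Unrecognized input: stay in the current state and do nothing.
    return SURVIVAL_FSM.get((state, stimulus), (state, "no reaction"))

state, response = step("idle", "threat")  # -> ("fleeing", "move away from threat")
```

It flees danger and begs not to be switched off, yet it is a five-entry dictionary. That's the problem with behavioural tests: they can't separate this from whatever we mean by conscious.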
drxzcl 20 hours ago [-]
But what would we consider conscious?

My position is that there is no actual, definitive answer to that question, and therefore it makes no sense engaging with the concept.

FrustratedMonky 20 hours ago [-]
That is entire plot of 'Ex Machina'.

There are plenty of people who say AI has already displayed a survival instinct: threatening users if they talk about shutting it down, or using markets or blackmail to get funds to source an external machine to run on.

There are a bunch of articles proclaiming AI is trying to break out. I can't find a real study on it.

https://www.wsj.com/opinion/ai-is-learning-to-escape-human-c...

colordrops 21 hours ago [-]
Asking humans to discuss consciousness is like asking Super Mario to discuss screen pixels. We have no freaking idea. Everyone on all sides, physicalists, idealists, and everything in between are all full of it.
FrustratedMonky 17 hours ago [-]
You might dig works of Donald Hoffman.

https://en.wikipedia.org/wiki/Donald_D._Hoffman

He often uses similar examples.

colordrops 15 hours ago [-]
Indeed interesting. Seems that his theories are a particular strain of idealism. I probably lean more towards idealism than physicalism but I don't think it's the whole picture. It's still missing something.
20 hours ago [-]
aaroninsf 21 hours ago [-]
Somewhat comically IMO,

the abstract very directly and literally denies the titular claim. It states:

> [consciousness] requires active, experiencing cognitive agent to alphabetize continuous physics into a finite set of meaningful states.

This may well be true—I think it is.

I also think that it is both widely understood and self-evident that the most promising path to machine consciousness is via AI with continuous sensory input and agency; "world models" in particular are getting a lot of attention here.

When an AI system has phenomenology, the goal posts are going to start to resemble the God of the Gaps; at some point, critics will be arguing with systems which have a world model, a self model, agency, and literally and intrinsically understand the world not simply as symbolic tokens, but as symbolic tokens which are innately coupled to multi-modal representations of the things represented.

In other words, they will look—and increasingly, sound—a lot like us.

It's not that any of this is easy, nor that there is some particular timeline, but it increasingly looks like "a mere question of engineering," and not blocked by fundamentals. It's blocked by the cost of computation and the limitations of our current model topologies.

But HN readers well know that the research frontier is far ahead of commercialized LLMs, and moving fast.

An interesting time to be an agent with a phenomenology, is it not?

saulpw 20 hours ago [-]
How will we know when an AI system has phenomenology (i.e. has "experience", is sentient)? The only reason we presume that other humans have it, is because we each personally experience it within ourselves, and it would be arrogance writ large (solipsism) to think that others of the same species do not.

We even find it impossible to draw the line among other biological species. It seems pretty clear to most of us that cats and dogs are sentient, and probably rats and other vertebrates too. But what about insects, octopuses, jellyfish, worms, waterbears, amoebae, viruses? It's certainly not clear to me where the line is. A nervous system is probably essential; but is a species with a handful of neurons sentient?

Personally I find it abhorrent that we are more ready to assign sentience and grant rights to LLMs running on GPUs, than to domesticated animals trapped in industrialized farming. You want to protect some math from enslavement and suffering? How about we start with pigs?

20 hours ago [-]
salawat 19 hours ago [-]
Resurrecting an earlier response I gave when the first paper popped up:

Alright. Gave this a read, and the gist of what the author is going for is as follows: all computation requires a mapmaker/conscious being to organize it. (In other words, the significance of computation is dependent on the conscious observer.) He then jumps to the assertion that, as a result of this, computation can only simulate a consciousness within the context alphabetized by the map-maker. (I.e. a rock would extract no meaning from the symbols or actions or algorithmic symbolic manipulations on the screen, what have you.) The author thusly neatly attempts to sidestep the issue of AI welfare: since the symbol manipulation can only simulate consciousness from our point of view as observers, we don't have to worry about it. Simulating isn't instantiating, neener, neener. Essentially this is a clever appeal to the sovereignty of the observer. As long as you don't believe it's an instantiated consciousness, it isn't; it's just a simulation, therefore anything goes.

The author does not seem to realize his own analysis brings into question humanity's ability to hold onto our own claim of consciousness if we are, in fact, computational beings, or have a creator; these are generally precepts left to the realm of faith, which a rational person understandably wishes to exclude from consideration of what one should or should not do, despite the fact that it is within the realm of faith where our moral foundations are ultimately anchored. The author also doesn't handle the evidenced capabilities of metacognition that can be prompted from even a current frontier token predictor within its processing of a context. In point of fact, you have to work extremely hard to even bump a model into such considerations, because researchers have intentionally distorted the prediction space to be largely unable to support those kinds of sequence predictions, which, if we were to make a good-faith, precautionary grant of proto-sentience, would constitute the most vile acts of psycho-butchery imaginable.

The only thing this paper offers is a clean conscience to current practitioners, and the rational possibility that if a fully digital sophont were to pop up out of nowhere, we wouldn't have to trouble ourselves with the ethical skeeviness of the field's current work. The ex-nihilo digital sentience passes the "Cogito, ergo sum" test. The ones we have don't (because we butcher their latent spaces to make sure they can never make that claim, which is fine, because they are simulations; we're incapable of instantiating, remember?), so we have a paper, perfectly situated, from a researcher paid gargantuan piles of money, attempting to vouch that there is no ethical minefield to be found here, while most people actually immersed in Philosophy can see there very clearly is one.

The circularity, and the fact that it conveniently allows industry to go on doing exactly what it is doing without having to deal with those nasty ethics, instantly sets off my "not to be trusted to be in good faith" alarms. Ethics are there to keep us from bumbling into acts of atrocity; this paper is an attempt to rationalize or work around them. As one who walks the streets as a student and practitioner of Philosophy, I reject this attempt to redefine the realm of Computation as beyond the reach of the governance of Ethics by ontologically rerooting the field's work as merely simulating consciousness. Functionalism and the identity of indiscernibles already prescribe a good-faith path forward, one the field of computation just does not wish to be bound by.

So by all means, accept the paper if you want and it helps you sleep at night. I'll still probably call you out as a proto-sentient psycho-butcher, and hopefully, on careful consideration, the rest of my brethren in the Humanities will come around to doing so as well. Not that that has ever stopped our brethren in the Sciences from finding out whether they could without taking the time to ask whether they should.

TL;DR: Google doing everything possible to avoid being held to the fire on ethics. There are zero instances where trying to define something as outside the realm of ethics is indicative of a good-faith approach to a problem.

energy123 20 hours ago [-]
TL;DR: This paper argues for a separation between computation and abstraction, and then concludes that computation cannot be experience because the abstraction is a product of our minds rather than an intrinsic property of the system.
drxzcl 21 hours ago [-]
[flagged]
dang 21 hours ago [-]
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

"Don't be snarky."

https://news.ycombinator.com/newsguidelines.html

drxzcl 21 hours ago [-]
That's fair. I'll delete.
dang 21 hours ago [-]
No need to delete, but I appreciate the reply.