1. consciousness, freewill, AI
1.1. my thoughts on consciousness
I look at consciousness as the thing that gives us a point of view, or
point of experience, which is more fundamental than feeling pain or joy.
We may call it the feeling of feeling at all, including the simple feeling
of observing things.
To demonstrate this further: if we imagine a robot with sensors, we can
programme it to read sensory data, process it, and react according to the
processed data in a seemingly sophisticated way that resembles the
behaviour of what we call an animal, or a form of life.
However, the question would remain: does the robot really have its sensory
input connected to a point of experience? Or is it simply a blind
action-reaction machine similar to the domino effect (but more complex)?
Why would the increased complexity suddenly grant the robot a point of
experience? I call an action-reaction blind if I want to claim that it
is not attached to any point of view/experience/feeling.
Based on the fundamental particles in the standard physics model, I cannot
find any way that complexity could suddenly give rise to the feeling of
feeling, or the feeling of having inputs attached to a point of view. I
can only find a collection of particles that explain the blind
action-reaction.
Since it seems that we cannot explain the existence of the point of
experience from the standard physics model, I'm led to the following
hypothesis:
- If the point of experience is not explainable by the existing particles
in the standard physics model, then we have to introduce a new
fundamental particle. Let's call this new fundamental particle $x$.
- $x$ might be randomly roaming in space, in an attempt to find a
complex-enough action-reaction machine, that is likely blind, in order
to attach to it.
- Once $x$ attaches to an action-reaction machine, it ends up connecting
the machine's blind input to $x$'s input.
In this view, the reason we (e.g. humans) have a point of experience (or
a feeling of feeling, at all) is because each of us is an $x$ that is
attached to a complex-enough action-reaction machine, which is our
physical/material human body. In a sense, we are an $x$ that sees through
the sensors of this body, which is why we feel that we have a point of
experience.
1.2. my thoughts on freewill
If we look at how machines react based on observations, from the
perspective of the standard model of physics, we could say that there is
no source of unpredictability (i.e. no choice) unless quantum randomness
is real. So, I think, for an observer in this universe, our freewill (if
it exists at all) comes from quantum randomness.
If there is no such thing as quantum randomness, then we are effectively
saying that everything is explainable as a reaction to a previous action,
with no new decisions to be made, as everything is already decided by
previous actions. Therefore, freewill necessarily requires quantum
randomness to be true.
If we analyse randomness, fundamentally, it is when we fail to explain an
outcome because we lack information. Such a lack of information could be
due to our ignorance of the laws of the universe, or due to its source
lying outside of what we can observe in this material universe.
1.3. how might consciousness relate to freewill
Once $x$ attaches to the particles that make up our body, $x$ is able to
read our sensory data and feel it from its view (hence we get the point of
view).
Then, $x$ could manipulate the particles in our body to behave in some way,
to cause us to react to some events in ways that $x$ wants. Here, we can
think that $x$ is the source of our freewill.
Since $x$ is not observable by our sensors, all we can see is some
unexplained randomness at the quantum level.
This way, we can see that the ideas of $x$ and freewill can be consistent,
and offer an explanation of why we might observe seemingly absolute
randomness at the quantum scale.
In other words, introducing $x$ as a fundamental particle can explain:
- Why we experience having a point of experience (consciousness).
- Why we might have freewill.
- Why we might be unable to explain at least part of the quantum randomness
(because it is the freewill of $x$ manipulating our particles to make our
material body react to events that $x$ has observed by getting attached
to our material sensors).
1.4. does AI have consciousness?
It might if it gets an $x$ attached to it. But since we are unable to
observe $x$ by our material sensors, we might never know, as all the output
behaviour that we observe from $x$ is what appears to us as quantum
randomness.
Can we design a physical experiment to set $x$'s freewill responses apart from
the other unexplained quantum randomness? I don't know.
A related concept to this test might be: does $x$ behave in a way that
relates to what it observed in the past? I.e. does $x$ even have memory?
We know that we feel that we have a memory, and that this memory lives
entirely in our material brain. E.g. if we remove bits of someone's brain,
the person starts to lose their memory as more of the brain is removed
(this is what happens to people who suffer brain strokes).
However, does this mean that $x$ also lacks a memory? It is possible that
$x$ lacks a memory of its own, and that it used our material organs (mainly
the brain's neurons) to store all of its memory.
1.5. a possible relation to simulated universe
$x$ could be a normal life form with consciousness and freewill in a
different parent universe that got attached to this simulation (our
universe) somehow, and was given control of the particles that make up our
body.
Then, in order to, say, test $x$'s behaviour in this simulation, it got its
native memory (in the real universe) erased (or disconnected temporarily).
Instead, it was given the ability to build a new memory in the simulation
(the material brain of the body that $x$ got attached to in this universe).
Why was $x$ attached to this simulation in this way? There could be
several reasons, one of which could be that such simulations can be cheaper
and more accurate than the legal system. For example, maybe $x$ committed
a crime, and in order to find out whether it is guilty, it got attached to
a realistic-enough simulation of a life similar to the crime scene, in
order to test its response. For the simulation to work as a fair test,
$x$'s memory had to be erased.
2. what is the universe optimising?
We are on the living branch of evolution, billions of years deep since the
Big Bang. As such, we have received billions of years worth of parameter
tuning to make us like to live more and more.
But who is the us? Is it me? You? Humans? All animals? Plants?
2.1. maximising survival
To find who the us is, let's look at natural selection's survival
maximisation:
$$\sum_{i=0}^n s_i$$
Where, for any living organism $i \in \{0, 1, \ldots, n\}$, $s_i$ is the total number of
seconds $i$ lives.
That's all survival maximisation does. The universe doesn't care who the
us is. The us could be anything. Humans may not be the end goal, but
might be mere tools that lead to yet another living organism, such as
inter-galactic robots, that can maximise $\sum_{i=0}^n s_i$ really well.
There are two commonly discussed schools of thought on maximising
$\sum_{i=0}^n s_i$:
- Maximise quantity only — make $n$ grow really large, even if $s_i$,
for any $i$, is small. Rabbits and cockroaches are often used as
examples for this.
- Maximise quality only — make $s_i$, for any $i$, grow really large,
even if $n$ is small. Humans in first world nations are often used as
examples for this.
But this is a false dichotomy, as quality is not necessarily independent
of quantity. In fact, it is rather common that quality is achieved by
having the right quantity of each ingredient at the right place.
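To make the trade-off concrete, here is a minimal sketch (the function name and all numbers are hypothetical illustrations, not from the text) computing $\sum_{i=0}^n s_i$ under the two schools. Both strategies can reach the same total, supporting the point that quantity and quality are not independent axes:

```python
# Total survival is the sum of lifespans (in seconds) over all organisms.
def total_survival(lifespans):
    return sum(lifespans)

# Quantity school: many organisms, each short-lived.
quantity = [10_000] * 1_000_000      # 1e6 organisms, ~2.8 hours each

# Quality school: few organisms, each long-lived.
quality = [2_500_000_000] * 4        # 4 organisms, ~80 years each

print(total_survival(quantity))      # 10_000_000_000
print(total_survival(quality))       # 10_000_000_000
```

Under this (toy) accounting the universe's objective is indifferent between the two schools; only the total matters.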
2.2. maximising exploration
If all that each lifeform $i$, for any $i \in \{0, 1, \ldots, n\}$, did
was sit and blink, then the quantity $\sum_{i=0}^n s_i$ would have an
upper bound that it cannot exceed, because such a lame life cannot
explore the environment to enhance its survival against upcoming
challenges.
This is why almost all living beings perform some kind of research. E.g.
ants look around to find better sources of food. Non-human apes invent
some tools, although not as well as the bipedal apes (humans). Etc.
In other words, based on the laws of physics of this universe, it seems
that in order to maximise $\sum_{i=0}^n s_i$, we must not simply live,
but actually explore new things.
I.e. it seems that, under the laws of physics of this universe, when
$\sum_{i=0}^n s_i$ is maximised, the following is also necessarily
maximised:
$$ \sum_{i=0}^n \sum_{j=0}^m \Pr\!\left(\text{$e_{i,j}$ will maximise
$\sum_{k=0}^n s_k$}\right)$$
Where $e_{i,j}$ is the $j^{th}$ idea that the $i^{th}$ lifeform explored.
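The exploration objective above can be read as an expected count of survival-maximising ideas across all lifeforms. A minimal sketch (the function name and the probability values are hypothetical, chosen only for illustration):

```python
# p[i][j] is the probability that idea j of lifeform i will help
# maximise total survival; the objective sums these over all lifeforms
# and all of their explored ideas.
def exploration_score(p):
    return sum(sum(row) for row in p)

p = [
    [0.5, 0.25],   # lifeform 0: one middling idea, one weak one
    [0.75, 0.5],   # lifeform 1: one promising idea (e.g. physics), one middling
]
print(exploration_score(p))  # 2.0
```

Raising any single probability (exploring a better idea) raises the score, which matches the claim that exploring survival-maximising ideas is preferable.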
Exploring good ideas (i.e. survival-maximising ideas) is better than
exploring bad ones. This is why it's good to explore physics, while it's
bad to explore, say, drunkenness.
Theorem 1. Studying physics is good.
Theorem 2. Drunkenness is bad.
2.3. maximising freedom
Definition 1. Freedom is the ability to do things.
Definition 2. More freedom is the ability to do more things per second.
Conjecture 1. The living seem to be able to do more things per second than
the dead. E.g. a living being, such as a human, could fly all over the
world, play football, and land rovers on Mars within $300,000$ years. On
the other hand, a dead being, such as a rock, spends millions of years
just sitting there.
Conjecture 1 implies that in order for us to have more freedom, we will
have to be living. I.e. the more we live, the more things we can do (i.e.
the more freedom we have).
We already know from Section 2.2 that survival maximisation necessarily
implies exploring good science.
Therefore, we now know that these three have to be maximised together
(i.e. one cannot be maximised without maximising the others), as they are
all essentially synonyms of a kind:
- Survival maximisation.
- Exploration maximisation.
- Freedom maximisation.
Definition 3. Slavery is that which reduces freedom.
By Theorem 2, Definition 1 and Definition 3, Theorem 3 is deduced.
Theorem 3. Promoting drunkenness is promoting slavery.
So, next time someone tells you that he/she/xe is just being free by
getting drunk, you can tell them: “Nope! Theorem 3 has actually
proven that you're rather choosing slavery!”.