"The young Austrian mathematician Kurt Gödel tackled the problem and found that Hilbert’s expectations where [sic] fulfilled for first-order logic."
This (i.e., "Hilbert's expectations were fulfilled for first-order logic") is a common misconception, but it is not the case. For instance, ZFC is an example of a first-order theory that does not satisfy Hilbert's expectations mentioned in the article (consistency, completeness and decidability). Notably, if ZFC is consistent, then it is incomplete. Another example of a first-order theory that is essentially incomplete and not decidable is Robinson arithmetic:
https://en.wikipedia.org/wiki/Robinson_arithmetic
The confusion likely stems from the fact that Gödel proved first-order logic to be semantically complete (Gödel's completeness theorem), i.e., all its valid formulae can be derived as theorems. This does not mean that all first-order theories are syntactically complete, i.e., for every statement S, either S or its negation ¬S is in the theory. This kind of incompleteness is what Gödel's incompleteness results are about, and this phenomenon already arises in first-order logic. The common independence results of ZFC are well-known examples of this kind of incompleteness: ZFC, if consistent, does not prove them, and also does not prove their negation. Gödel's first incompleteness theorem shows that every sufficiently powerful formal system exhibits this kind of incompleteness, and first-order theories are no exception: they exhibit it whenever they are sufficiently powerful.
With today's terminology and models of computation, this is easy to show, because first-order logic is sufficient to express a Turing machine as a theory. Let S be the (first-order) statement "The machine reaches a halting state". If we had a mechanism that always terminates and always correctly decides whether S or its negation ¬S is in the theory, then we would have an algorithm that solves the halting problem, which, however, is easily shown to be only semi-decidable, not decidable. Hence, not all first-order theories are consistent, complete and decidable.
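To make the semi-decidability point concrete, here is a minimal Python sketch of my own (the generator-based "machine" model and all names are illustrative, not a real Turing-machine encoding):

```python
def semi_decide_halts(program, arg, max_steps):
    """Semi-decision sketch: run `program(arg)` one step at a time.
    Returns True if it halts within max_steps; returns None when the
    budget runs out, which means "no verdict yet", NOT "does not halt"."""
    machine = program(arg)          # model a machine as a generator: one yield per step
    for _ in range(max_steps):
        try:
            next(machine)
        except StopIteration:       # the machine reached a halting state
            return True
    return None                     # inconclusive, no matter how large max_steps is

def countdown(n):                   # halts after n steps
    while n > 0:
        yield
        n -= 1

def loop_forever(n):                # never halts
    while True:
        yield

print(semi_decide_halts(countdown, 3, 100))     # True
print(semi_decide_halts(loop_forever, 3, 100))  # None
```

A full decider would have to return False for `loop_forever`; Turing's argument shows no such total procedure exists, and that is exactly the gap between semi-decidable and decidable.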
Not a logician, but it’s the first time I’ve heard Gödel incompleteness being about trying to say something about syntactically invalid formulas... Do you have more pointers on that? It makes me curious.
Every statement S has a negation ¬S, read as "not S". If S is well-formed, so is its negation. For example, if we take the continuum hypothesis, i.e.:
"There is no set whose cardinality is strictly between that of the integers and the real numbers."
and construct S to express this statement in the language of our theory (say ZFC), then the negated statement ¬S would correspond to:
"There is a set whose cardinality is strictly between that of the integers and the real numbers."
Famously, the continuum hypothesis is independent of ZFC. This means that (assuming ZFC is consistent) ZFC does not prove S, and also does not prove ¬S. From a consistent and syntactically complete theory, we expect that it contains exactly one of these statements. Which of the two is it? The fact that ZFC does not "tell" us which it is means that it is incomplete in this sense. Clearly, a question that is important to us is left unanswered by this theory, since neither the statement nor its negation is provable with the given axioms. A syntactically complete theory is also called deductively complete, or negation complete.
An example of a first-order theory that is consistent, complete, and also decidable is Presburger arithmetic:
https://en.wikipedia.org/wiki/Presburger_arithmetic
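Presburger's actual decision procedure (quantifier elimination) is too long to sketch here, but just to show what a Presburger sentence looks like, here is a brute-force spot-check of my own in Python (emphatically an illustration, not the decision procedure):

```python
# A Presburger sentence: "forall x, exists y, (x = y + y) or (x = y + y + 1)",
# i.e. every natural number is even or odd. Presburger's procedure decides
# this for ALL x by eliminating the quantifiers; below we merely test a
# finite prefix of the universal quantifier.

def witness_exists(x):
    # search for the existential witness y; y <= x always suffices here
    return any(x == y + y or x == y + y + 1 for y in range(x + 1))

print(all(witness_exists(x) for x in range(1000)))  # True
```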
> We have already noted that quantum physics offers a world view in which the future is not entirely determined. This indeterminacy is not practical but fundamental. It permits us to think that our experience of free will may be real instead of merely subjective. If the outcome of a quantum process is undetermined, there is no fundamental reason to deny that possibility in some neural events which may be a macroscopic amplification of microscopic processes where quantum indeterminacy plays an essential role.
I don't see how free will follows from indeterminacy. Just because you can't predict future outcomes from initial values doesn't mean that there is "choice" involved in the evolution of a system.
Right, this is no different than having a random die roll in our brain have a chance of influencing every decision we make. Randomness washes away agency; it can’t add it. I'm not denying there's any randomness at all; quantum mechanics is a thing.
For me free will is about agency. The important thing about a decision I freely make is that I as a free agent am the cause of the decision, not external influences or random effects. That actually demands a high degree of determinism, because it means that I, my state, my memories, skills, preferences and biases determine the outcome. Otherwise it’s not _my_ choice.
The catch-22 on that is that I don’t get to decide my state. I can’t choose my genetics, the way my neurons connected when I was a foetus, etc. We are all stuck with who we are. It’s this line of thinking that, for example, converted me from a retributionist to a reformist on the issue of criminal rehabilitation. I just can’t stand firm against the implacability of the logic. I have no choice.
Indeterminacy here isn't "a roll of the die" in some significant sense.
All these things, "genetics, neurones" etc. are chaotic systems. Thus there are only /necessarily/ probabilistic "conditions" on who you are, even at the classical level.
It's classical physics which is necessarily indeterminate here /because/ we cannot measure precisely enough.
I think this is significant enough to somehow fit it into the picture on free will. Not least, because as you say elsewhere, free will actually requires a level of determinism which may actually be absent from the world.
The meaning of the future being "open" here isn't reducible to "a roll of the die". Almost all significant macroscopic systems have a "deterministic gap" between their state today and their state tomorrow.
In some sense chaotic systems must be able to "forget their history", as at some point, their historical state sequences will be insufficient to determine their present state. There may be something in this which enables a certain sense of freedom, and a certain lack of it.
I don't see how these are all chaotic systems. There are plenty of systems in the universe that are not chaotic. In fact living things, and brains in particular expend a lot of energy expelling entropy. These systems are complex for sure, but that's not at all the same thing. They are very highly ordered, and could not produce consistent repeatable outputs if they weren't.
We can't predict very much about the state of a hurricane a week in advance, but my calendar at work is full of events weeks and months in advance that experience tells me have a very high likelihood of happening much as expected. Our projects are extremely complex and can take years to complete. All of that doesn't look much like a chaotic system to me.
I need to do more research on the topic, but I think this is a little bit of an illusion.
A chaotic system, K, is one in which the probability of measuring a "high order digit" (e.g., 10,000) in the output is sensitive to a "low order digit" in the input (e.g., 0.000001).
Something like: P(log_o K(state) | log_i state) < 1 where o >> i
It seems true that a non-chaotic deterministic function, e.g., f(t), can be created such that "K(state) is bounded by f(approximation)".
E.g., K(state) is the weather, and f(approximation) is the climate model. Or K(state) is the behaviour of gas particles, and f(approximation) is the gas's temperature model. Or K(state) is the changing matter distribution of the universe, and f(approximation) is the stable gravitational field model in our solar system.
f() is long-run predictable -- but K() isn't. But we know f() fails in the long run -- K() does eventually change against the predictions of our model f().
E.g., over long periods of time, the climate changes; the non-ideal gases may spontaneously condense; the sun "explodes".
f() is only ever a locally/apparently deterministic approximation.
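The kind of sensitivity described above (a low-order digit of the input flipping high-order digits of the output) is easy to exhibit with the logistic map, a standard toy chaotic system. This Python sketch and its names are my own, separate from the K/f notation above:

```python
def logistic_step(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x); chaotic at r = 4."""
    return r * x * (1.0 - x)

def trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic_step(xs[-1]))
    return xs

a = trajectory(0.400000, 50)
b = trajectory(0.400001, 50)   # perturb a "low order digit" of the input

print(abs(a[1] - b[1]))        # still tiny after one step
print(max(abs(x - y) for x, y in zip(a[30:], b[30:])))  # order-1 divergence later
```

The tiny initial error roughly doubles each step, so after a few dozen iterations the two trajectories are unrelated, even though each is fully deterministic. Long-run statistics (the f()-style description) can remain stable while the trajectory itself is unpredictable.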
If you have not already done so, you might want to look into the compatibilism stance in philosophy.
Note that I am not suggesting that this will answer your questions, or that it does so for me. I have a strong suspicion that the most it does is provide some sort of explanation for why we tend to feel we have free will (or, equivalently I think, to define it in a non-intuitive way.)
Free will is the real 'hard problem' for materialists who think they have it. Dualists will not even get to that question until they can offer some affirmative hypotheses about whatever is going on in minds.
Interesting, and as the authors have given reason to doubt the "free-will-no-matter-what" explanation of the responses, I withdraw my intuition about what counts as intuitive.
I doubt, however, that this will be the last word on the issue. Indeed, in "Questions for a Science of Moral Responsibility", Fischborn writes, about this paper,
"In fact, people might even consider that free will is compatible with fatalism, a stronger version of determinism, as long as fatalism does not preclude an agent's actions to derive from their desires and values (Andow & Cova, 2016)" - my emphasis, as that seems to be begging the question in a way.
Yes, I think this just confirms the Frankfurt cases. Suppose the Holocaust was fated to happen. It seems reasonable to say that Hitler carried some responsibility for it regardless, while Germans who fought the Nazi party did not.
But, I'm perhaps not a typical Compatibilist, as I think in a meaningful sense that we are more blameworthy the more deterministic we are [1].
I'm not sure that's true anymore. Both truly unpredictable and "truly random" phenomena (whatever that means) are incompressible, and thus equally unpredictable. I think they are largely interchangeable, formally.
> The digits of pi are incompressible and unpredictable, yielding only to computation, but definitely not random
The BBP formula can be used to calculate the Nth hexadecimal digit of pi without computing all preceding digits. I'd say that's pretty successful compression.
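For the curious, a small Python sketch of BBP-style digit extraction (my own illustrative code; it uses three-argument pow for modular exponentiation, so digit N needs no earlier digits):

```python
def pi_hex_digit(n):
    """n-th hexadecimal digit of pi after the point (n >= 1), via the
    Bailey-Borwein-Plouffe formula; no earlier digits are computed."""
    def frac_series(j):
        # fractional part of sum over k of 16^(n-1-k) / (8k + j)
        s = 0.0
        for k in range(n):                     # "head": modular exponentiation
            s = (s + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = n                                  # "tail": terms already below 1
        while 16.0 ** (n - 1 - k) / (8 * k + j) > 1e-17:
            s += 16.0 ** (n - 1 - k) / (8 * k + j)
            k += 1
        return s
    x = (4 * frac_series(1) - 2 * frac_series(4)
         - frac_series(5) - frac_series(6)) % 1.0
    return "0123456789ABCDEF"[int(16 * x)]

print("".join(pi_hex_digit(i) for i in range(1, 7)))  # 243F6A (pi = 3.243F6A...)
```

The head sums only fractional parts (hence the mod operations), which is what keeps each digit cheap: roughly O(N) small-integer operations for digit N instead of computing the whole expansion.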
> The word random implies a lack of order that we can't rightly claim knowledge of.
That we have no knowledge of it does not entail that no such knowledge exists. That's the point I'm trying to make: we have no evidence of such a distinction, we only have evidence of our ignorance of the sequence's cause. The universe could very well be completely deterministic, in which case the set of "random" sequences is empty.
I'm not sure if there is a rigorous definition of what "choice" means, but couldn't we define it as "the microscopic quantum processes that occur inside a person's head (and/or body) which lead to particular outcomes which the person has some expectation of and desire for"?
It has an ordinary language definition. I would frame it thus: The selection or indication of one class of alternatives as distinct from another class.
I am not convinced of their compatibility, as I explained in my sibling post to yours. I think free will of an agent means the choice is determined by the agent's state, because as a materialist I think your state and 'you' are the same thing. I am my state. Any choice that does not principally depend on my state is not my choice, because I am not the cause of it.
How can one know when a choice does or does not principally depend on one’s state?
If someone elects to have wheat toast rather than sourdough toast at breakfast, can they really know why they made that choice? It seems like an infinite regress to me.
I don’t follow. Do you think a material thing outside of the person is plausibly the main cause for having the one kind of toast over the other?
The motion of the limbs appears to be a result of signals sent from the brain.
If one believes that one is one’s brain (which, I’m not a materialist, but if I was, it seems to me like the two plausible options would be “a person is their brain(+ body + perhaps their papers and the parts of their environment which they maintain in order to regulate their own actions)” or “people don’t exist”),
then it seems straightforward that, like, one’s actions, stemming from one’s brain activity, therefore stem from oneself.
And if that’s what one means by “free will”, that seems reasonably coherent.
(If one’s view of what people are is that there are no people, then it also follows that all people have free will. However, this conflicts with my observation of having an internal experience, so I reject the “there are no people” position. Though, seeing as I am not a materialist in part for similar reasons, perhaps people committed to materialism wouldn’t see this as that much of a reason not to conclude that there are no people/selves, only things (like oneself) which appear to behave like people.)
> I don’t follow. Do you think a material thing outside of the person is plausibly the main cause for having the one kind of toast over the other?
Is there a main cause? What makes one cause a main cause and some other cause a secondary cause?
Can we somehow separate people from the environment in which they exist?
To my knowledge, reality is one big quantum field, or something similar. I don't know what's underneath quantum mechanics, assuming quantum mechanics is currently the best explanation we have for sweeps arm all of this.
> The motion of the limbs appears to be a result of signals sent from the brain.
I agree. So how did the brain that sends the signals come into existence? Is the brain that sends the signals able to send the signals, because the body that supports it provides what it needs to operate?
> If one’s view of what people are is that there are no people, then it also follows that all people have free will.
Re “main cause” : I admit I didn’t have a particular precise idea of what I meant by this. I suppose I meant, some kind of combination of proximate cause and cause most useful to consider as “the cause”? So, not facts that are so complicated that people cannot comprehend them, and also things that are, in terms of causes people can understand, relatively direct ones? But yes, this is a rather imprecise idea, which could stand to be improved.
As for quantum mechanics and drawing boundaries between things, I do think quantum mechanics allows boundaries to be drawn. I don’t understand quantum field theory, and apparently the “Schrödinger picture” doesn’t (seem to?) exist for quantum field theory, which makes it hard for me to think about,
but for quantum mechanics, aiui, you can split up the Hilbert space into tensor products of Hilbert spaces for different parts of the system (e.g. “all the stuff going on in this region of space” vs “all the stuff going on in that other region of space” , or, “all the stuff with electrons” and “all the stuff with protons”, etc.)
And, while again, I don’t understand quantum field theory as well as I understand quantum mechanics (and, note: my understanding of that is also limited!), my impression is that the observables in quantum field theory, uh, when they are about causally separate regions of spacetime (meaning, neither can influence the other, though they may have common past influences and may both influence the same later thing), that the observables in question commute, and that this is how locality is implemented,
And so, this seems to fit reasonably well with the concept of there being “separate things”.
Then, (getting now into things I understand even less still, possibly even less than QFT), I think there is a quantum version of the idea of a Markov boundary, or Markov blanket, or something, such that the causal connections between two regions on opposite sides of this boundary, can be entirely accounted for by the connections between each of the two sides with the boundary.
So, I think this allows for like, drawing an envelope around an object and a given period of time (so, a cylinder-ish shape, where at the start and end of the time period there is a Cauchy surface covering all the object, and during the duration, the boundary of the region, overall making a kind of cylindrical 3D hypersurface?), and like, splitting up the world into the part inside and outside of this region, and talking about how the two sides evolve over time themselves, and then how the two parts interact?
So, all that to say, my impression is that it is possible within QM and QFT to define, like, objects. Not that there is a definition for “an object”, but that it should be possible to define particular objects.
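The tensor-product splitting mentioned above can be made concrete with a toy two-qubit example (plain-Python Kronecker product; the code and names are mine):

```python
def kron(u, v):
    """Kronecker (tensor) product of two state vectors, as flat lists."""
    return [a * b for a in u for b in v]

# Subsystem A in state |0> = (1, 0); subsystem B in state |1> = (0, 1).
# The joint system lives in the 4-dimensional tensor-product space.
zero, one = [1, 0], [0, 1]
print(kron(zero, one))   # [0, 1, 0, 0] -- the product state |0>|1>

# Not every joint state factors this way: (|00> + |11>)/sqrt(2) is entangled,
# which is exactly why "drawing a boundary" between subsystems is subtle.
```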
> So how did the brain that sends the signals come into existence? Is the brain that sends the signals able to send the signals, because the body that supports it provides what it needs to operate?
I’m not really understanding your point here. The brain came into existence while the fetus/baby was in its mother’s womb. And without the support from the body (with nutrients and oxygen etc. being supplied to the brain through blood and such), the brain would fail to function/die, yes.
But I don’t see how this connects to the question of “is what we are (or, rather, what we are supposing that we are) ‘the’ cause of what it is that we do?”
I guess those questions are both about “why is the thing (or things) that we are supposing to be us (namely, the brain or brain+body or brain+body+papers etc.), the way that it is?”,
but, if “free will” is just “our actions are due to us” / “our actions are due to us being the way we are”, then I don’t think “why are we the way we are?” really poses an issue for that?
I suspect I have misunderstood your point and as such responded to something other than what you meant. If so, could you clarify / correct my misunderstanding?
That last bit I said of “if there are no people” was just addressing part of my earlier parenthetical where I gave “there are no people” as an alternative account of “what are people” under the assumption of physicalism. It was a minor (vacuous) point I was making for sake of completeness, and because I have a fondness for statements that are vacuously true.
I’m not saying I do know what the specific causes of this or that choice are. I’m just saying that if the principal cause is my state then it’s my choice.
However as I have already pointed out on this thread, we don’t get to choose our state. We can’t decide to be other than ourselves.
1. Predicting the long-run evolution of chaotic systems requires more measurement precision in the initial conditions than quantum mechanics allows
2. With sufficient density, a discrete system cannot be simulated with less energy than the original system possesses
3. With sufficient complexity, (as above)
4. The mass required to measure, and the energy required to compute, the structure of ordinary objects (e.g., chairs) from their parts eclipses anything we could build
Is there a formal definition of randomness that is just an information gap (like Shannon's), where the predictability and measurability of complex systems break down because the chaos is just a loss of information in each iteration of its function? If so, where does the information go?
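For what it's worth, Shannon's notion is exactly an information gap: entropy measures our average uncertainty about the next symbol given its distribution, while the incompressibility notion upthread is Kolmogorov complexity, which is uncomputable. A toy Python illustration (names mine):

```python
import math
from collections import Counter

def empirical_entropy(seq):
    """Shannon entropy (bits per symbol) of the empirical symbol
    distribution of `seq`: a pure information-gap measure."""
    n = len(seq)
    probs = [c / n for c in Counter(seq).values()]
    return sum(-p * math.log2(p) for p in probs if p < 1.0)

print(empirical_entropy("00000000"))  # 0 -- fully predictable, no gap
print(empirical_entropy("01010101"))  # 1.0 -- maximal per-symbol uncertainty,
                                      # even though the pattern is obvious
```

The second case shows the gap the question points at: Shannon entropy can be maximal for a fully deterministic sequence because it only sees symbol frequencies, whereas Kolmogorov complexity (incompressibility) would be low. So "randomness as information gap" depends on what the observer is allowed to model.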
I, for one, am unconcerned that we don't live in a clockwork universe. Or, at the least, one so complex that the initial conditions can never be known. Because that leaves open the possibility that all isn't determined, and that we can finally take responsibility for our own actions.
> Because that leaves open the possibility that all isn't determined, and that we can finally take responsibility for our own actions.
I'm not clear why your path in life being determined precludes you from taking responsibility for your actions.
Presumably if the universe is deterministic, then the consequences of your actions literally follow from your choices. Therefore the proximal cause of those consequences can be attributed to you, ie. you are responsible for causing those outcomes.
That you were determined to make those choices doesn't erase this basic fact. That you can learn from mistakes and make different choices in the future given similar circumstances is exactly why holding you responsible for your choices works as a form of moral feedback.
If your choices were not deterministic, then this implies that they were partly random, in which case holding you responsible is no longer a valuable tool for moral feedback, because no amount of responsibility or feedback can influence a random variable. Therefore, the degree to which you should be held responsible is arguably proportional to the degree to which your choices are deterministic; we see this in everyday life, where babies and the clinically insane who act for no apparent reason are not held responsible for their actions, while people who are mentally competent and can operate within systems of rules are held responsible.
So if we really assume determinism, does that mean that we'll have to make some changes to the legal system? For example, getting rid of the insanity defense[0]?
I don't see why. Whether someone is responsible includes a consideration as to whether their behaviour is governed by somewhat coherent and articulable reasons. If they have reasons for why they acted a certain way, that indicates a responsiveness to feedback that changes those reasons, such that they may not make the same choice again in the future. Clearly the insane don't satisfy this requirement.