You've said some things that suggest you think there's a line of justification running from Roger Penrose's argument in "The Emperor's New Mind" to the conclusion that any apparently intelligent, creative, or reflective behavior from a digital computing machine will probably turn out to have been an illusion produced by demons arranging coincidences to affect the machine's behavior.
So my question here is going to be kind of tendentious, but I'm worried that if you write an article this Wednesday about the interface between the subatomic/quantum realm and the hell-realms, and I don't bring it up, you might cause this sort of cultural snowballing effect where AI scientists are judged guilty of exposing us all to demons due to intellectual fecklessness. This would be distressing for me, since I'm aware of a number of considerations pointing *against* the conclusion that machine intelligence necessarily involves the machine being coincidence-puppeteered by demons, and I would feel guilty of intellectual fecklessness if I were to ignore those considerations.
I'm not sure what the appropriate venue would be to raise those considerations, and I don't seem to be able to state any of them in a convincing way without making this comment much longer than it already is.
For what it's worth, I'll at least indicate what the most important consideration here looks like. There's an algorithm called the "logical inductor", published in 2017 (in the proceedings of that year's conference on Theoretical Aspects of Rationality and Knowledge), which gets around the assumptions in Penrose's argument and provably does some stuff that Penrose kind of acts like he's established to be impossible. One way it gets around his assumptions is that it assigns probabilities to logical statements, rather than always assigning a definite yes or no. Another way is that it keeps updating those probabilities over successive rounds of deliberation indefinitely, so there's no final answer, just convergent probability estimates. It deals with most attempts to trip it up with self-contradictions by requiring the contradictions to be stated in terms of probabilities assigned on particular rounds. It then assigns probabilities right at the edge, where it can't be sure in advance whether the probability will fall above or below the threshold named in the attempted contradictory statement, so that in the long run the probabilities it assigns match the fraction of those attempted contradictory statements about its own probability assignments that come out true.
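(In case it helps to see the flavor of that last trick concretely, here's a toy sketch. It is emphatically not the actual logical inductor construction; it's just a made-up frequency-matching update applied to a single self-referential statement. But it shows how assigning a probability right at the threshold makes the long-run truth frequency of the statement match the probability assigned to it.)

```python
# Toy illustration only (not the real logical inductor): consider the
# self-referential statement S_n = "the probability assigned to S_n on
# round n is below 0.5". A naive update that matches the observed truth
# frequency drives the assignment toward the 0.5 threshold, and the
# long-run fraction of rounds on which S_n was true matches it.

def run(rounds=100_000):
    true_count = 0
    p = 0.0  # assignment for the upcoming round
    for n in range(1, rounds + 1):
        was_true = p < 0.5           # S_n's actual truth value this round
        true_count += was_true
        p = true_count / n           # next round's assignment: past truth frequency
    return p, true_count / rounds

final_p, freq = run()
print(f"assigned probability ~ {final_p:.3f}, long-run truth frequency ~ {freq:.3f}")
# Both come out around 0.5: the assignment sits right at the threshold, so the
# statement can't make the assignments systematically wrong in either direction.
```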
Here's a thought experiment, and some questions about it, that sort of express where that and the other considerations are going.
Suppose an experimenter had enough classical digital computing power to run a simulation of all the known physical laws governing the physical quantities in a Standard Model quantum field theory description of a human body and a suitable environment, and of how those quantities change over time. (Most of this is just manipulation of continuous-valued numbers to some degree of precision, which classical computers are fine with.) Suppose also that the simulated human body is in sufficiently flat spacetime that the quantum gravity contributions to those rates of change can be reduced to an empirical effective theory on top of the Standard Model, without having to worry about paradoxes of quantum gravity. Finally, suppose the experimenter had enough computing power to run many such simulations, and that they were comparing the behaviors of the simulated human bodies to the behaviors of physically real humans in the corresponding physical environments, looking for statistical discrepancies that distinguished the two populations.
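(For concreteness about what "looking for statistical discrepancies" would amount to at the analysis stage, setting aside the absurd compute requirements: something like the sketch below, where the behavioral measurement and all the numbers are pure placeholders.)

```python
# A minimal sketch of the comparison step only: take some behavioral
# measurement from each simulated body and each physically real human
# (reaction times, word counts, whatever the protocol specifies; the
# measurement itself is left hypothetical here), and test whether the two
# populations are statistically distinguishable.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Placeholder data standing in for the (wildly impractical) experiment.
simulated_scores = rng.normal(loc=0.0, scale=1.0, size=500)
real_scores = rng.normal(loc=0.0, scale=1.0, size=500)

result = ks_2samp(simulated_scores, real_scores)
print(f"KS statistic = {result.statistic:.3f}, p-value = {result.pvalue:.3f}")
# A persistently small p-value, across many measurements and replications,
# is what "a statistical discrepancy distinguishing the two populations"
# would look like.
```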
1. What kinds of discrepancies might you expect the experimenter to find, if there weren't demons arranging coincidences through the data used to set up the simulations, the deterministic pseudorandom number generators used to select quantum branches within the simulations, or possible nondeterminism in internal signal timing order in the classical computing hardware?
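(To be concrete about the "pseudorandom number generators used to select quantum branches" channel: I mean something like the sketch below, where the branch weights are placeholders. The point is just that, given the seed, the whole sequence of branch selections is fixed in advance.)

```python
# Toy sketch of seeded branch selection: measurement outcomes drawn
# according to placeholder Born-rule weights, fully determined by the seed.
import numpy as np

def select_branches(seed: int, branch_probs, n_measurements: int):
    rng = np.random.default_rng(seed)
    return rng.choice(len(branch_probs), size=n_measurements, p=branch_probs)

# Same seed, same simulated "quantum" history, every time.
print(select_branches(seed=42, branch_probs=[0.5, 0.3, 0.2], n_measurements=10))
```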
Currently, if I had to guess your answer, it would be something along the lines of, "the simulated humans would voice complaints of severe fatigue (because no etheric body) and loss of something about their sense of self (because no astral or mental bodies), and quickly expire for mysterious physiological reasons". Possibly also "and they would demonstrate a mysterious fixity of schemas and habits if their cognitive performance was tested before they expired (because no astral or mental bodies)". Possibly also "you know, Roger Penrose did in fact put a lot of emphasis on quantum gravity as what probably enables consciousness", although that wouldn't really clarify what to expect in question 4 below.
2. Are there things the experimenter could do that would prevent demons from arranging coincidences that came to fruition in or through the workings of the classical computing device? (I mean, realistically that kind of computer wouldn't actually fit in the universe, so this is in some sense a meaningless question, but, like, in principle?) Because if there were, then maybe people could do those things for their own computing devices that purportedly had AI on them, and that might avert some dangers and clear up some questions about causal attribution.
(I still think there are significant analogies between current text-predictor/generator systems and sortilege random-draw divination procedures, particularly bibliomancy. It's quite common for people to perform cleansings of their sortilege apparatus, and separately it's quite common for people to use computers to do the sortilege and oracle lookup, with human behavior as an input to contribute randomness. Trying to protect a text prediction/generation system is not so far from the combination of those two practices, except that the book is replaced with a probability distribution over possible next little bits of text, calculated almost deterministically (apart from roundoff-error effects from nondeterministic summation order) from the past text, and text generation is not usually intended as divination as such.)
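(If it's useful to have the shape of that "replaced book" spelled out, here's a toy sketch. The scoring function is a deterministic stand-in, not any real text predictor, and the randomness for the draw is supplied from outside, the way the human contribution works in the computerized-sortilege setups.)

```python
# Toy sketch of the structural analogy: the "book" is a probability
# distribution over next pieces of text computed deterministically from the
# past text, and the draw is made with externally supplied randomness.
import zlib
import numpy as np

VOCAB = ["yes", "no", "wait", "maybe", "the", "demon", "coincidence"]

def next_token_distribution(past_text: str) -> np.ndarray:
    # Stand-in for a trained model: deterministic scores from the past text.
    scores = np.array(
        [zlib.crc32((past_text + " " + w).encode()) % 1000 for w in VOCAB],
        dtype=float,
    )
    weights = np.exp(scores / 200.0)
    return weights / weights.sum()

def draw_next_token(past_text: str, human_seed: int) -> str:
    rng = np.random.default_rng(human_seed)  # randomness contributed from outside
    probs = next_token_distribution(past_text)
    return str(rng.choice(VOCAB, p=probs))

print(draw_next_token("so my question here is", human_seed=20240321))
```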
3. Would an experimenter be able to look for patterns in the discrepancies, and discover novel physical laws serving as the mechanism of supernatural influences, which have heretofore resisted third-party skeptical replication in our world (at least in the case of the apparently astral- or mental-plane effects investigated by parapsychology)? Like, should I be excited because there might be a refinement of this thought experiment that would actually be practical, and that would persuade everybody about etheric bodies or something? I'm slightly trolling; everything I've seen leads me to expect that the supernatural won't in fact be that epistemically accommodating; but I'd at least like it to be clear what that non-accommodatingness might imply for this situation.
As a candidate example of such a refinement, we already have halfway-reasonable macromolecule-level computer simulations of cells of the tiny bacterium Mycoplasma. If there are etheric effects on the real Mycoplasma, a naive but precise physical simulation would need to add fudge factors in order for the statistical tendencies of what happens to the simulated Mycoplasma to match the statistical tendencies of what happens with the real ones. Depending on the size of those fudge factors, it might not be so long before those simulations are refined enough that the fudge factors would pop out as impossible to explain from the underlying physics. But, admittedly, if the discrepancy is something that only shows up in the context of complex high-order quantum correlations, beyond basic low-order computational-quantum-chemistry electron density functional theory, our simulations aren't likely to pick up on that anytime soon.
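(And here's a minimal sketch of what "the fudge factor pops out" could mean quantitatively; the per-cell numbers are placeholders, not real Mycoplasma data. You estimate the correction the simulation needs and ask whether its uncertainty interval stays away from zero as the simulated physics gets more complete.)

```python
# Placeholder illustration: estimate an additive correction ("fudge factor")
# to some simulated per-cell quantity, say division time in hours, and
# bootstrap a confidence interval for it.
import numpy as np

rng = np.random.default_rng(1)

# Entirely hypothetical data for simulated vs. real cells.
simulated = rng.normal(loc=9.0, scale=1.0, size=200)
real = rng.normal(loc=9.3, scale=1.0, size=200)

fudge = real.mean() - simulated.mean()

# Bootstrap resampling to get a 95% interval for the correction.
boots = [
    rng.choice(real, size=real.size).mean()
    - rng.choice(simulated, size=simulated.size).mean()
    for _ in range(5000)
]
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"estimated fudge factor: {fudge:.2f} h, 95% CI [{lo:.2f}, {hi:.2f}]")
# If an interval like this excludes zero, and keeps excluding it as the
# simulation's physics gets more complete, that's the fudge factor "popping
# out as impossible to explain from the underlying physics".
```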
4. What kinds of statistically different things might the simulated humans vs. the real humans say specifically about the Gödel sentence for Peano arithmetic, or about the axioms of set theory? Because that seems like a super crucial consideration for whatever relevance Penrose's argument has to demonic explanations for AI.
This question #4 especially is meant as a sort of crystallization of a subtle distinction that I think Penrose's argument elides, shown in as dramatic a light as possible. The computer is classical, and in that sense bound by the laws of logic; but is there a clear relation between that reliable, outwardly legible logic and the logic being unreliably cognized, in an obscure, contextual way, by the human body the computer is simulating?