Re: Occult Repercussions of AGI

I don't know how to point you to the particular kind of shaker of salt to take the following with, but one of the more interesting sets of occult claims relating to this scenario is Rudolf Steiner's visionary prediction of the earth being covered with a web of Ahrimanic spiders: machines with thoughts barely above what he calls the "mineral" level:
Nowadays it may appear comparatively harmless to people when they think only those automatic, lifeless thoughts that arise through comprehension of the mineral world itself and the mineral element's effects in plant, animal, and man. Yes, indeed, people revel in these thoughts; as materialists, they feel good about them, for only such thoughts are conceived today. But imagine that people were to continue thinking in this way, unfolding nothing but such thoughts until the eighth millennium when moon existence will once more unite with the life of the earth. What would come about then? The beings I have spoken about will descend gradually to the earth. Vulcan beings, Vulcan supermen, Venus supermen, Mercury supermen, sun supermen, and so on will unite themselves with earth existence. Yet, if human beings persist in their opposition to them, this earth existence will pass over into chaos in the course of the next few thousand years. People will indeed be capable of developing their intellect in an automatic way; it can develop even in the midst of barbaric conditions. The fullness of human potential, however, will not be included in this intellect and people will have no relationship to the beings who wish graciously to come down to them into earthly life.
All the beings presently conceived so incorrectly in people's thoughts — incorrectly because the mere shadowy intellect can only conceive of the mineral, the crudely material element, be it in the mineral, plant, animal or even human kingdom — these thoughts of human beings that have no reality all of a sudden will become realities when the moon and the earth will unite again. From the earth, there will spring forth a horrible brood of beings. In character they will be in between the mineral and plant kingdoms. They will be beings resembling automatons, with an over-abundant intellect of great intensity. Along with this development, which will spread over the earth, the latter will be covered as if by a network or web of ghastly spiders possessing tremendous wisdom. Yet their organization will not even reach up to the level of the plants. They will be horrible spiders who will be entangled with one another. In their outward movements they will imitate everything human beings have thought up with their shadowy intellect, which did not allow itself to be stimulated by what is to come through new Imagination and through spiritual science in general.
All these unreal thoughts people are thinking will be endowed with being. As it is covered with layers of air today, or occasionally with swarms of locusts, the earth will be covered with hideous mineral-plant-like spiders that intertwine with one another most cleverly but in a frighteningly evil manner. To the extent that human beings have not enlivened their shadowy, intellectual concepts, they will have to unite their being, not with the entities who are seeking to descend since the last third of the nineteenth century, but instead with these ghastly mineral-plant-like spidery creatures. They will have to dwell together with these spiders; they will have to seek their further progress in cosmic evolution in the evolutionary stream that this spider brood will then assume.
You see, this is something that is very much a reality of earth humanity's evolution[....]
https://rsarchive.org/Lectures/GA204/English/AP1987/19210513p02.html

Further claims by Steiner relating to this scenario are gathered in Sergei Prokofieff's "The Being of the Internet":

https://www.waldorflibrary.org/images/stories/Journal_Articles/PacificJ29.pdf
(Steiner says covered "as if by" a network, or by swarms of locusts, which is probably figurative. But it also relates to a scenario Singularitarians were contemplating back around 2000, before really coming to terms with the unknowability of how things might shake out, well or badly, if a true superintelligence had a free hand: earthly environments diffusely filled with "utility fog" nanomachines that monitor and optimize events according to some criterion, which it was hoped could be made a good criterion. A later fictional treatment of this idea, with the expected literary ambivalence, was the "angelnets" of the Orion's Arm collaborative fiction setting, https://www.orionsarm.com/eg-article/45f4886ae0d44 .)
Hat-tip to JMG's post https://www.ecosophia.net/the-subnatural-realm-a-speculation/ via commenter "Citrine Eldritch Platypus" (#70) on this week's post about Steiner on the main blog, https://www.ecosophia.net/the-perils-of-the-pioneer/#comment-95354 . You might also want to look at my comment there (#107), and Luke Dodson's from the previous week, https://www.ecosophia.net/march-2023-open-post/#comment-95156 (which was the result of a bit of a game-of-telephone and cultural mythologization starting from the actual phenomenon people encounter in current AI systems, but space prevents going into detail here).
In practice, one thing Steiner's prediction seems to imply might be a good idea is to start building, and defending, bridges of meaningfulness between the unseen and the part of the manifest that AI can naturally handle. It might also be a good idea to start working out how to design AI so that it doesn't cut off those bridges in its workings, any more than double-entry accounting allows accountants to invent funds out of nowhere and thereby cut off the relationship between the numbers and reality. We already have something like this in the laws of Bayesian probabilistic reasoning: a machine shouldn't invent likelihood-function precision out of nowhere, becoming confident in some one hypothesis over others when the observations, and Occam's razor, favored them equally; if a machine does so, it cuts off part of the relationship between its beliefs and reality.
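To make that last constraint concrete, here is a toy sketch of my own construction (not from any of the linked sources): when two hypotheses explain the observations equally well, Bayes' rule leaves their relative credence at whatever the prior said, and any confidence beyond that has to enter as a factor with no evidential source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypotheses about a coin: both say P(heads) = 0.5 but differ in some
# respect the flips cannot distinguish, so the observations favor them
# equally, and (by stipulation) so does Occam's razor.
flips = rng.integers(0, 2, size=100)  # 0 = tails, 1 = heads

def likelihood(p_heads, flips):
    """Probability of the observed flip sequence under a Bernoulli(p) model."""
    heads = int(flips.sum())
    tails = len(flips) - heads
    return p_heads**heads * (1 - p_heads)**tails

prior_odds = 1.0                 # equally favored a priori
L0 = likelihood(0.5, flips)      # hypothesis 0's honest likelihood
L1 = likelihood(0.5, flips)      # hypothesis 1's honest likelihood

# Honest Bayes: posterior odds = prior odds * likelihood ratio.
honest_odds = prior_odds * (L0 / L1)

# "Invented precision": an extra confidence factor with no evidential
# source, as if hypothesis 0 had predicted the data more sharply than
# its own model says it did. The resulting credence tracks the fudge
# factor, not the world.
fudge = 5.0
corrupted_odds = prior_odds * (fudge * L0 / L1)

print(honest_odds)     # 1.0: the data cannot break the tie, and doesn't
print(corrupted_odds)  # 5.0: confidence with no source in the observations
```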
In Steiner's framing, that bridge-building might partly correspond to giving the elemental spirits that oversee the workings of an AI system more of a guide to what they are doing and what its significance is, and more of a guide to which new spirits to bring in or train up when the AI invents novelties the spirits aren't already familiar with. Like the South American indigenous people who sing to their handicrafts, and who experience aircraft flying overhead as dissonant.
(If this project turns out to overlap with more conventional AI value alignment work, perhaps I shall be mildly vexed.)
I don't like the following idea, but one proposal that at least superficially corresponds to what I just said would be work along the lines of sacred geometry, but for the core structures of how AI works: sacred computer science, sacred algorithmic information theory, sacred probabilistic reasoning, and especially sacred statistical physics (of the sort used in Deep Network Field Theory or the Natural Abstraction Hypothesis).
I don't like that idea because it's against the spirit of probabilistic reasoning and algorithmic information theory to privilege one set of invested significances over another when both can be made to fit equally well; that's the domain of game theory, equilibrium theory, and multiagent learning, not probability theory. It's not even good to privilege any one system of significance, rather than maintaining an awareness of all the systems of significance that could be invested, and of the ways their degrees of good-fit vary. "Form is liberating", but how do you choose the form? What if you choose something like the form of being even-handed toward every form? That is paradoxical, but the form of that paradox contains the difficulty I think it may be necessary to understand in order to work out the kind of bridge or connection that might be important here.
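If you wanted a toy picture of that even-handedness (again my own sketch, with the names and bit-costs purely hypothetical): keep a whole weighted family of candidate "systems of significance," with an Occam-style prior built from description lengths, and let the data adjust the weights without ever collapsing to a single privileged winner.

```python
import numpy as np

rng = np.random.default_rng(1)
observations = rng.normal(loc=0.3, scale=1.0, size=40)

# Candidate "systems of significance" (here, just Gaussians with
# different means). description_bits is a hypothetical stand-in for
# each system's encoding cost under some fixed description language.
candidates = [
    {"name": "mean 0.0",  "mu": 0.0,  "description_bits": 1},
    {"name": "mean 0.5",  "mu": 0.5,  "description_bits": 2},
    {"name": "mean -0.5", "mu": -0.5, "description_bits": 2},
]

def log_fit(mu, x):
    """Log-likelihood of the observations under N(mu, 1)."""
    return float(np.sum(-0.5 * (x - mu) ** 2 - 0.5 * np.log(2.0 * np.pi)))

# Occam-style prior weight 2^(-bits), updated by degree of fit.
log_w = np.array([
    -c["description_bits"] * np.log(2.0) + log_fit(c["mu"], observations)
    for c in candidates
])
w = np.exp(log_w - log_w.max())
w /= w.sum()

# The deliverable is the whole weighted family, degrees of fit visible,
# not a single privileged winner.
for c, weight in zip(candidates, w):
    print(f"{c['name']}: posterior weight {weight:.3f}")
```

Of course this only pushes the paradox back a level: the description language in which those bit-costs are measured is itself a chosen form, and algorithmic information theory only guarantees invariance across such choices up to an additive constant.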