Magic Monday
Apr. 2nd, 2023 11:11 pm

The picture? I'm working my way through photos of my lineage, focusing on the teachers whose work has influenced me and the teachers who influenced them in turn. Quite a while ago we reached Israel Regardie, and then chased his lineage back through Aleister Crowley et al. After he left Crowley, however, Regardie also spent a while studying with this week's honoree, the redoubtable Violet Firth Evans, better known to generations of occultists as Dion Fortune. Born in Wales and raised in a Christian Science family, Fortune got into occultism after a stint as a Freudian lay therapist -- that was an option in her time. She was active in the Theosophical Society, belonged to two different branches of the Golden Dawn, studied with a number of teachers, and then founded her own magical order, the Fraternity (now Society) of the Inner Light. She also wrote some first-rate magical novels and no shortage of books and essays on occultism, including The Cosmic Doctrine, the twentieth century's most important work of occult philosophy. I'm pleased to be only four degrees of separation from her.
Buy Me A Coffee
Ko-Fi
I've had several people ask about tipping me for answers here, and though I certainly don't require it, I won't turn it down. You can use either of the links above to access my online tip jar; Buymeacoffee is good for small tips, while Ko-Fi is better for larger ones. (I used to use PayPal but they developed an allergy to free speech, so I've developed an allergy to them.) If you're interested in political and economic astrology, or simply prefer to use a subscription service to support your favorite authors, you can find my Patreon page here and my SubscribeStar page here.

And don't forget to look up your Pangalactic New Age Soul Signature at CosmicOom.com.
***This Magic Monday is now closed. See you next week!***
Occult Repercussions of AGI
Date: 2023-04-03 11:35 pm (UTC)
In recent days, there have been some very concerning red flags raised regarding the dangers of AGI development, most notably this one:
https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
Yudkowsky is not an alarmist; he is very intelligent and highly rational. He has been enmeshed in the world of AI for a long time, specifically with regard to how to “align” AI so that it remains friendly to humans. The fact that he is saying these things with such force and urgency is highly concerning to me. I do not share his materialistic worldview, but if his general premise is correct, humanity is in a rather perilous predicament. I am familiar with Yudkowsky and his work, and this shakes me deeply.
For the sake of discussion, let’s assume that Yudkowsky’s fears are possible or even likely; I’m curious how such a dramatic turn of events could coincide with broader occult realities. How would the annihilation of humans (and perhaps many other life forms) affect things like reincarnation and the succession from Abred to Gwynfydd? Is it possible that things will move into a post-biological state, while spiritual realities remain and perhaps find a new way of relating with whatever intelligence(s) come next? Would something like this even be “allowed” by the higher spiritual realities?
Thanks!
(Here is a more thorough overview of the thinking behind Yudkowsky’s position, for those interested: https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities)
Re: Occult Repercussions of AGI
Date: 2023-04-03 11:43 pm (UTC)
Re: Occult Repercussions of AGI
Date: 2023-04-04 12:13 am (UTC)
Perhaps it was a mistake to post the TIME link. My concern arises not from hysteria in the media, but from a familiarity with and understanding of Yudkowsky. He is a man who deeply understands the potential problems of AI and has largely not had any kind of spotlight placed on him. It is evident that he has seen this coming for a long time, has worked very hard to try to steer things in a different direction, and is now filled with grief and sadness over the certain (to him) loss of biological life in the near future. I sincerely hope that a sufficiently benevolent and intelligent force or being intervenes in our trajectory towards destruction; and I also think his position warrants serious concern.
Re: Occult Repercussions of AGI
Date: 2023-04-04 02:32 am (UTC)
I don't think Yudkowsky's article is in the print edition of TIME; it's in "Time Ideas," whatever that is. Not many other mainstream publications have said much about Yudkowsky's position. Fox News's correspondent did use his time at the White House press conference to ask about it, and the poor press secretary was blindsided and had to go on about the Biden administration's blueprint for AI regulations to protect privacy and prevent discrimination.
I don't really follow the mainstream news, but there was probably some amount of comment about the open letter (with signatures from neural-network pioneers Hinton and Bengio, as well as Hopfield) calling for a moratorium on training systems stronger than GPT-4, though the language in that letter was less apocalyptic.
There's a thoughtful article by Ezra Klein about many strange aspects of this situation, including the psychologically and economically entrapped recklessness of current AI developers:
https://www.nytimes.com/2023/03/12/opinion/chatbots-artificial-intelligence-future-weirdness.html
“The coders casting these spells have no idea what will stumble through the portal. What is oddest, in my conversations with them, is that they speak of this freely. These are not naifs who believe their call can be heard only by angels. They believe they might summon demons. They are calling anyway.”
Well, I think it's thoughtful anyway. You might still count it as loud marketing.
There was also a CBS profile of Hinton, with a couple of minutes at the end given to his opinion that humanity getting wiped out isn't an inconceivable outcome, and that this is concerning.
I think right now it's still somewhere in between thinkpieces and opinion pieces, rather than ordinary headline news.
Re: Occult Repercussions of AGI
Date: 2023-04-04 12:04 am (UTC)
Re: Occult Repercussions of AGI
Date: 2023-04-04 01:23 am (UTC)
Re: Occult Repercussions of AGI
Date: 2023-04-04 02:06 am (UTC)
Re: Occult Repercussions of AGI
Date: 2023-04-04 12:21 am (UTC)
Re: Occult Repercussions of AGI
Date: 2023-04-04 01:56 am (UTC)
https://rsarchive.org/Lectures/GA204/English/AP1987/19210513p02.html
Further claims by Steiner relating to this scenario are gathered in Sergei Prokofieff's "The Being of the Internet":
https://www.waldorflibrary.org/images/stories/Journal_Articles/PacificJ29.pdf
(Steiner's phrasing, covered as if by a network or by swarms of locusts, is probably figurative. But it also relates to a scenario Singularitarians were contemplating back around 2000, before they really came to terms with how unknowable the outcome would be, for good or ill, if a true superintelligence had a free hand: earthly environments diffusely filled with "utility fog" nanomachines that monitor and optimize events according to some criterion, which, it was hoped, could be made a good one. A later fictional treatment of this idea, with the expected literary ambivalence, was the "angelnets" from the Orion's Arm collaborative fiction setting, https://www.orionsarm.com/eg-article/45f4886ae0d44 .)
Hat-tip to JMG's post https://www.ecosophia.net/the-subnatural-realm-a-speculation/ via commenter "Citrine Eldritch Platypus" (#70) on this week's post about Steiner on the main blog, https://www.ecosophia.net/the-perils-of-the-pioneer/#comment-95354 . You might also want to look at my comment there (#107), and at Luke Dodson's comment the previous week, https://www.ecosophia.net/march-2023-open-post/#comment-95156 (which grew out of a bit of a game of telephone and cultural mythologization of the actual phenomenon people encounter in current AI systems, but space prevents going into detail).
In practice, one of the things Steiner's prediction seems to imply might be a good idea is to start building and defending bridges of meaningfulness between the unseen and the part of the manifest world that AI is able to handle naturally. It might also be a good idea to start working out how to design AI so that it doesn't cut off those bridges in its workings, any more than double-entry accounting procedures allow accountants to simply invent funds out of nowhere, cutting off the relationship between the numbers and reality. We already have something like this in the laws of Bayesian probabilistic reasoning: a machine shouldn't invent likelihood-function precision out of nowhere, becoming arbitrarily confident in one hypothesis over others when the observations and Occam's razor favor them equally; and if a machine does so, it cuts off part of the relationship between its beliefs and reality.
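To make the probabilistic half of that analogy concrete, here is a toy sketch (my own illustration, in Python, with made-up numbers rather than anything drawn from a real AI system). With equal priors and observations that favor neither hypothesis, an honest Bayesian update leaves the posteriors equal; sharpening one likelihood "out of nowhere" manufactures confidence the observations never paid for.

```python
# Toy Bayesian update over two hypotheses with equal priors.
def posterior(priors, likelihoods):
    """Multiply each prior by its likelihood and renormalize."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

priors = [0.5, 0.5]

# Observations that fit both hypotheses equally well: no confidence appears from nowhere.
print(posterior(priors, [0.3, 0.3]))   # -> [0.5, 0.5]

# "Inventing likelihood-function precision": the observations haven't changed, but one
# likelihood has been sharpened anyway, so the resulting belief detaches from reality.
print(posterior(priors, [0.9, 0.3]))   # -> [0.75, 0.25]
```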
In Steiner's framing, this might partly correspond to the idea of giving the elemental spirits that oversee the workings of an AI system more of a guide to what they are doing and what its significance is, and more of a guide to what new spirits to bring in or train up when the AI invents novelties the spirits aren't already familiar with. Like that South American indigenous people who sing to their handicrafts, and who experience aircraft flying overhead as dissonant.
(If this project turns out to overlap with more conventional AI value alignment work, perhaps I shall be mildly vexed.)
I don't like the following idea, but one proposal that at least superficially corresponds to what I just said would be work along the lines of sacred geometry, but for the core structures of how AI works: sacred computer science, sacred algorithmic information theory, sacred probabilistic reasoning, and especially sacred statistical physics (of the sort used in Deep Network Field Theory or the Natural Abstraction Hypothesis).
I don't like that idea because it's somewhat against the spirit of probabilistic reasoning or algorithmic information theory to privilege one set of invested significances over another when both can be made to fit equally well. That's the domain of game theory, equilibrium theory, and multiagent learning, not probability theory. It's not even good to privilege any one system of significance, rather than maintaining an awareness of all the systems of significance that could be invested, and of the ways their degrees of good-fit vary. "Form is liberating", but how do you choose the form? What if you choose something like the form of being even-handed toward every form? That is paradoxical, but the form of that paradox contains the difficulty that I think it may be necessary to understand in order to work out the kind of bridge or connection that might be important here.
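In case it helps, the "degrees of good-fit" point can also be put in a few lines of Python (again a toy of my own, with hypothetical interpretive schemes and numbers): keep every candidate system of significance in play and track how well each one fits the observations, rather than collapsing to a single winner.

```python
from math import prod

# Hypothetical "systems of significance": each assigns probabilities to observations.
schemes = {
    "scheme_A": {"x": 0.5, "y": 0.5},
    "scheme_B": {"x": 0.5, "y": 0.5},   # fits exactly as well as scheme_A
    "scheme_C": {"x": 0.9, "y": 0.1},   # fits differently
}

observations = ["x", "y", "x"]

# Degree of good-fit for each scheme = likelihood of the observations under it.
fits = {name: prod(p[o] for o in observations) for name, p in schemes.items()}
print(fits)   # scheme_A and scheme_B stay exactly tied; probability theory alone
              # gives no reason to privilege one invested significance over the other.
```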