Someone wrote in [personal profile] ecosophia 2023-04-04 02:06 am (UTC)

Re: Occult Repercussions of AGI

That you are willing to trust your own future self is a proof of concept that a system can exist which you trust to at least some minimal extent. The good parts of AI alignment research are groping toward an understanding of what is going on there, and of how to extend it to greater knowable trustworthiness and to broader systems than single people. Or, if that turns out to be impossible, toward why it's impossible and what it might be our responsibility to do instead, before someone else less careful and more optimistic about profit goes full Sorcerer's Apprentice.
