That you are willing to trust your own future self is a proof of concept that a system can exist which you trust to at least some minimal extent. The good parts of AI alignment research are groping toward an understanding of what is going on there, and of how to extend it to greater knowable trustworthiness and to broader systems than single people; or, if that's impossible, of why it's impossible and what it might be our responsibility to do instead, before someone else less careful and more optimistic about profit goes all Sorcerer's Apprentice.