2023-04-18

AGI Safety is Important

Update

I feel the need to respond to some criticism: you are right and I am wrong. The statement “nothing else feels important in comparison” is not healthy. I’ll try to be more nuanced in the future, both for my own sake and for others’!

Original Post

Over the last couple of weeks I’ve become totally engulfed in the AI safety space. I’m not sure why it’s taken me so long. I think the capabilities of GPT-3/4 shone a new light on what is possible with current state-of-the-art systems, and it intrigued me enough that when I stumbled on this blog post, I became completely hooked.

The blog post above explains it better than I ever could, but the argument that convinced me most was:

  • There do not seem to be any inherent restrictions on AI becoming smarter than humans.
  • When this happens, we had better make sure it values humans and their wants and needs; if not, why would a super-intelligent AI ever treat us better than we treat chickens in factory farms?

This is definitely out there, I know. But all ideas are out there until they become reality. If you want to debate me on this, I would LOVE to. Please convince me this is not a potential problem.

Another consequence of being convinced by the arguments above is that everything else feels very unimportant in comparison. The only developments that “matter” are the ones that happen before this potential AGI shift, which could very well occur within the coming century.

I’m currently speed-running the AGI Safety Fundamentals course, and I’m thinking of applying to the MATS Program for this summer. I’m not a believer in fate, but I am incredibly happy that I studied applied mathematics! It certainly helps with getting up to speed in the field.

I’m very interested in hearing from others who are interested in or working in the area.