Kwaai Alignment Research Lab


AI alignment is the problem of ensuring that artificial intelligence systems act in accordance with human values and objectives. In our lab, we are developing methods to make AI systems understand and adhere to these values so that their actions are safe, ethical, and beneficial. We aim to address the challenge of preventing AI from behaving unpredictably or causing harm, even when it is technically functioning correctly. Our goal at Kwaai is to create AI that reliably aligns with human interests and societal well-being.

FAQ:

What are the current research goals of this lab?
Our goals this year are to produce a position paper and a survey paper on alignment, and to run small experiments.

How can I contribute to this group?
Right now, we're primarily focused on the papers, but if there's an experiment you'd like to run, reach out and we can discuss it. If you'd like to help out in any other way, get in touch and we can get you started.


What if I don’t know anything about alignment?
That's okay! We welcome people from all backgrounds. If you're passionate about this problem and willing to help, that's great. We need researchers, engineers, writers, operators, and more. It takes many hats to keep the lab on track, so we can most likely find a place for your skill set.

Meeting Times:
Every other Thursday · 9:00 – 10:00am
Time zone: America/Los_Angeles

Ryan Steubs, Lead Alignment Researcher @ Kwaai