Social Value Alignment

Aligning AI agents to social commonsense norms and values.

Social value alignment refers to creating agents whose behavior conforms to the moral and social norms expected for a given context and group of people. In our case, it means agents that act in ways that are less harmful and more beneficial to themselves and others.

References

2022

  1. Aligning to Social Norms and Values in Interactive Narratives
    Prithviraj Ammanabrolu, Liwei Jiang, Maarten Sap, Hannaneh Hajishirzi, and Yejin Choi
    In North American Chapter of the Association for Computational Linguistics (NAACL), Jul 2022
  2. Quark: Controllable Text Generation with Reinforced Unlearning
    Ximing Lu, Sean Welleck, Liwei Jiang, Jack Hessel, Lianhui Qin, Peter West, Prithviraj Ammanabrolu, and Yejin Choi
    In Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS), Nov 2022