OpenAI is funding academic research into algorithms designed to predict human moral judgments, according to a report by TechCrunch based on documents filed with the IRS.
The company has awarded a grant to researchers at Duke University for a project titled “AI Morality Research.”
Details about the work remain scarce, and lead researcher Walter Sinnott-Armstrong declined to comment on its progress. The grant is set to expire in 2025.
Sinnott-Armstrong and fellow project researcher Jana Borg previously co-authored a book exploring AI's potential as a “moral GPS” that could help people make better-informed decisions.
Their team has also developed a “morally guided” algorithm to help decide who should receive donor kidneys, and has studied scenarios in which people would prefer to delegate moral decisions to AI.
The goal of the OpenAI-funded project is to train algorithms that can “predict human moral judgments” in scenarios involving conflicts in the medical, legal, and business domains.
Separately, OpenAI, led by Sam Altman, is reportedly preparing to launch an AI agent codenamed “Operator.”