The alignment problem in artificial intelligence raises important questions about how to ensure that AI systems, including foundation models, align with human values. This panel will examine how evolving safety research can be effectively integrated into AI governance and review the current state of play of the Artificial Intelligence Act (AIA) with regard to foundation models. The panel will discuss how codes of conduct, standards, and oversight can support alignment. What can we learn from other relevant frameworks, such as the GDPR or the novel systemic risk mitigation and co-regulatory approach of the Digital Services Act?