How might we (HMW)
- HMW create aligned AI systems developed within the context of institutions (old and new) and economic and regulatory incentives?
- HMW create good institutional design and economic governance for an era of robust AI?
- HMW create good goals for AI, knowing they will be influenced by established corporations and institutions?
- HMW understand the goals of institutions and companies?
- HMW align optimizers that may become misaligned?
- HMW stop the following dynamic?
- HMW better align optimizers, both present-day and future?
- HMW align markets and other optimizers?
- HMW scale cooperation with AI assistance?
- HMW preserve attention and individual epistemic security?
- HMW understand the goals and externalities of a system (institution, company, country, project)?
- HMW simulate different public personas?
- HMW understand and compare different kinds of optimizers, and transfer tools from aligning one kind to aligning another?
- HMW use AI to build better institutions?
- HMW help governments understand how market corrections work in practice?
- HMW design better feedback mechanisms for institutions to understand their positive and negative effects on others, and whether or not those effects align with their values?
- HMW identify existing successes or fresh ideas to draw on if we want to improve coordination mechanisms for institutions or projects with the help of near-term AI, or invent new ones entirely?
- HMW improve cooperation in ways enabled entirely by advances in AI?
- HMW prevent coordination failures that often occur as the result of cascading knowledge misalignments?
- HMW improve coordination via AI-facilitated conversations (cultural translation, filling knowledge gaps, avoiding biases)?