Deadline: December 31, 2025
Applications are open for the Foresight Institute AI for Safety & Science Nodes 2026. Artificial intelligence is accelerating the pace of discovery across science and technology. But today’s AI ecosystem risks centralizing compute, talent, and decision-making power – concentrating capabilities in ways that could undermine both innovation and safety.
To counter this development, Foresight Institute is building a decentralized network of nodes dedicated to AI-powered science and safety. Each node combines grant funding with office and community space, programming, and in-house compute to accelerate project development. The goal is to empower researchers within a mission-aligned ecosystem where AI-driven progress remains open, secure, and aligned with human flourishing.
They are excited to fund and support work in the following areas:
- AI for Security: Traditional security paradigms, often reactive, piecemeal, and human-driven, cannot keep pace with the speed, scale, and complexity of AI-supported attacks. They seek to support self-improving defense systems where AI autonomously identifies vulnerabilities, generates formal proofs, red-teams, and strengthens the world’s digital infrastructure.
- Private AI: To ensure that AI progress occurs openly without sacrificing privacy, they want to support work that applies AI to enhance confidential compute environments, scale privacy mechanisms for handling data, and design infrastructure that distributes trust.
- Decentralized & Cooperative AI: They fund work that builds decentralized intelligence ecosystems – where AI systems can cooperate, negotiate, and align – so societies remain resilient in a multipolar world. They are especially interested in projects that enable peaceful human–AI co-existence and create new AI-enabled mechanisms for cooperation.
- AI for Science & Epistemics: In addition to applying AI to specific problems, the field needs better platforms, tools, and data infrastructure to accelerate AI-guided scientific progress generally. Similarly, to get society’s sense-making ready for rapid change, they are interested in funding work that applies AI to improve forecasting and general epistemic preparedness.
- AI for Neuro, Brain-Computer Interfaces & Whole Brain Emulation: They are interested in work that uses frontier models to map, simulate, and understand biological intelligence – building the foundations for hybrids between human and artificial cognition, from brain-computer interfaces to whole brain emulation. They care about this domain specifically for its potential to improve humanity’s defensive position as AI advances.
- AI for Longevity Biotechnology: They want to fund work that applies AI to make progress on scientific frontiers in longevity biotechnology – from biostasis and replacement, to gene therapy and exosomes.
- AI for Molecular Nanotechnology: They support work that uses AI to make progress on scientific frontiers in molecular nanotechnology – from design and simulation, to construction and assembly of nanomachines.
Grant
- They award around $3M in total funding annually. Grants typically range from $10,000 to $100,000, with larger amounts awarded to the AI safety-oriented focus areas and smaller amounts to longevity biotech and molecular nanotech projects.
Eligibility
- They accept applications from individuals, teams, and organizations.
- Both non-profit and for-profit organizations are welcome to apply, but for-profits should be prepared to explain why they need grant funding.
Evaluation Criteria
- Impact on reducing existential risks from AI: the extent to which the project can reduce existential risks associated with AI, focusing on achieving significant advancements in AI safety within short timelines.
- Feasibility within short AGI timelines: the project’s ability to achieve meaningful progress within the anticipated short timeframes for AGI development. They prioritize projects that can demonstrate concrete milestones and deliverables in the next 1-3 years.
- Alignment with the focus areas: the degree to which the project addresses one or more of the focus areas outlined above.
- Capability to execute: the qualifications, experience, and resources of the applicant(s) to successfully carry out the proposed work. Strong teams with proven expertise in AI safety or related fields will be prioritized.
- High-risk, high-reward potential: the level of risk involved in the project, balanced with the potential for substantial, transformative impact on the future of AI safety. They encourage speculative, high-risk projects with the potential to drive significant change if successful.
- Preference for open source: They prefer open-source projects unless there are specific reasons preventing it.
Application
Application deadlines are on the last day of every month. They review applications monthly until the nodes are at capacity, so they recommend applying as soon as you are ready.
For more information, visit Foresight Institute AI for Science & Safety Initiative.

