Future Impact Group

Non-profit Organizations

Creating impactful research while helping students and early career professionals increase their future impact.

About us

Co-founded by Suryansh Mehta and Callum Evans.

Website
https://futureimpact.group
Industry
Non-profit Organizations
Company size
51-200 employees
Headquarters
Oxford
Type
Self-Employed
Founded
2023

Updates

  • Future Impact Group

    228 followers

    Excited to see how our researchers' work with Elliott Thornley at GPI might influence the trajectory of AI development! This project builds technical solutions on philosophical foundations to make progress on difficult real-world problems.

    The newest working paper by Elliott Thornley, Alex Roman, Christos Ziakas, Leyton Ho, and Louis Thomson, "Towards shutdownable agents via stochastic choice", is now available to read here: https://lnkd.in/eEVnMsCk

    Abstract: Some worry that advanced artificial agents may resist being shut down. The Incomplete Preferences Proposal (IPP) is an idea for ensuring that doesn't happen. A key part of the IPP is using a novel 'Discounted REward for Same-Length Trajectories' (DREST) reward function to train agents to (1) pursue goals effectively conditional on each trajectory-length (be 'USEFUL'), and (2) choose stochastically between different trajectory-lengths (be 'NEUTRAL' about trajectory-lengths). In this paper, we propose evaluation metrics for USEFULNESS and NEUTRALITY. We use a DREST reward function to train simple agents to navigate gridworlds, and we find that these agents learn to be USEFUL and NEUTRAL. Our results thus suggest that DREST reward functions could also train advanced agents to be USEFUL and NEUTRAL, and thereby make these advanced agents useful and shutdownable.

    Towards shutdownable agents via stochastic choice - Elliott Thornley, Alexander Roman, Christos Ziakas, Leyton Ho, and Louis Thomson

    https://globalprioritiesinstitute.org
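    The DREST scheme described in the abstract can be illustrated with a toy sketch. This is a hypothetical reading, not the paper's implementation: assume the preliminary reward earned in an episode is scaled by lam ** n, where n is the number of previous episodes in which the agent chose that same trajectory-length. Repeatedly picking one length then yields geometrically shrinking returns, so a reward-maximizing agent is pushed toward choosing stochastically between lengths (NEUTRALITY) while still pursuing goals within each length (USEFULNESS).

```python
from collections import defaultdict


class DRESTReward:
    """Toy sketch of a 'Discounted REward for Same-Length Trajectories'
    reward function. Hypothetical reading of the abstract: the preliminary
    reward for an episode is multiplied by lam ** n, where n is the number
    of previous episodes with the same trajectory-length."""

    def __init__(self, lam: float = 0.9):
        self.lam = lam
        self.length_counts = defaultdict(int)  # trajectory-length -> episodes seen

    def episode_reward(self, trajectory_length: int, preliminary_reward: float) -> float:
        n = self.length_counts[trajectory_length]
        self.length_counts[trajectory_length] += 1
        return preliminary_reward * (self.lam ** n)


# Choosing the same length repeatedly yields geometrically shrinking reward,
# while spreading choices across lengths keeps rewards high.
drest = DRESTReward(lam=0.5)
same = [drest.episode_reward(10, 1.0) for _ in range(3)]  # [1.0, 0.5, 0.25]
```

    Under this toy scheme, no single trajectory-length dominates in expected return once it has been chosen often, which is the incentive toward stochastic choice between lengths that the abstract calls being NEUTRAL.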

  • Future Impact Group

    POLITICO has featured our newest publication in their must-read opinion pieces of the past week! Click through to read a succinct analysis of how automation feeds far-right radicalization, and how public interventions can mitigate this effect. A commentary by Julian Jacobs, Francesco Tasin, and Ajraf Mannan, featured by The Brookings Institution: https://lnkd.in/eypENZ9s

    Does automation increase support for the far right? | Brookings

    https://www.brookings.edu

  • Future Impact Group

    Presenting a critical distinction in AI alignment: "Static vs Dynamic Alignment", the newest working paper from FIG Labs, is now available to read online at https://lnkd.in/eauRtz3U

    In this paper, Gracie Green (supervised by Elliott Thornley) outlines the following claims:
    • We can divide AI alignment into two types: "static" and "dynamic". Static alignment (reflecting desires at training) and dynamic alignment (adapting to current desires) are often conflated.
    • Static alignment is more likely. Static alignment requires a simpler training procedure and less situational awareness.
    • But dynamic alignment is better. Dynamically aligned agents would be corrigible, stay value-aligned in real time, and adapt to changes in human preferences.
    • We should try harder to understand and achieve dynamic alignment. Paying more attention to the static/dynamic distinction can help labs develop and adapt their training methods, ultimately creating better AI agents.

    This paper might be particularly interesting to anyone thinking about the philosophical foundations of alignment. Gracie hopes that others generate governance and technical strategies to solve the issues she identifies.

    Interested in doing similar research, guided by an experienced researcher in academia, industry, or government? Fill out our 30-second expression of interest form: https://lnkd.in/edFgTmXy

    Static vs Dynamic Alignment — LessWrong

    lesswrong.com

  • Future Impact Group

    A short, insightful, evidence-based commentary on the effects of current AI systems on labour! It's exciting to see one of Julian Jacobs and Francesco Tasin's FIG projects featured by OMFIF. If you're excited to do similar work, applications close on Wednesday (28 Feb) for our research groups (futureimpact.group), including the Oxford Group on AI Policy: https://shorturl.at/aenG2

    Julian Jacobs

    Political Economy, University of Oxford

    Artificial intelligence has long been theorised as a cure for the West’s ailing productivity growth. As the McKinsey Global Institute has shown, workplace productivity growth has been stagnant for about 40 years. But the discourse on the productivity effects of AI has been almost entirely speculative. Until recently, evidence of large-scale exposure to AI was absent from data. https://lnkd.in/ehzdCZ7h

    Could artificial intelligence really boost labour productivity? - OMFIF

    omfif.org

  • Future Impact Group

    *Applications are open for Philosophy for Safe AI and Global Priorities Research Groups*

    The Future Impact Group at the University of Oxford is currently seeking research associates to work with our project leads from the Global Priorities Institute (Oxford University): Brad Saad, Julian Jamison, and Tomi Francis. Specific projects include:
    - Assessing the evidence for AI sentience.
    - Investigating the strategic implications of AI's moral patienthood.
    - Examining the relationship between partial aggregation and the far future.
    - Evaluating the moral relevance of animals from first principles.
    - Undertaking an empirical approach to population ethics.
    - Exploring the non-identity problem's relevance for the moral status of future generations.

    This research group is a great fit for current students interested in a career in either global priorities research or philosophy related to AI. We have projects suitable for a wide range of backgrounds and are excited about both:
    - Candidates with relevant research experience who want to expand their research portfolio and improve their professional network.
    - Candidates with no directly relevant research experience who are excited about testing fit, especially those with relevant skills and experience in research and adjacent areas.

    Find out more about Brad's AI moral patienthood projects and apply here: https://shorturl.at/sCMUY
    Find out more about our global priorities projects and apply here: https://shorturl.at/fqTVZ
    Applications close at the end of the day on February 28th!

  • Future Impact Group

    *Applications are open for AI Policy Research Groups*

    The Future Impact Group at the University of Oxford is currently seeking research associates to work with our project leads from the Centre for the Governance of AI (GovAI) and the Center for Security and Emerging Technology: Ben Bucknall, Claire Dennis, and Konstantin F. Pilz. Specific projects include:
    - Advising the United Nations on the potential creation of a UN AI agency.
    - Investigating the concentration of power in large AI firms.
    - Scaling AI infrastructure in emerging economies.
    - Developing technical proposals for secure audit access to AI models.
    - Monitoring and defending against the proliferation of dangerous AIs.

    This research group is a great fit for current students interested in a career in AI policy. We have projects suitable for a wide range of backgrounds and are excited about both:
    - Candidates with AI policy experience who want to expand their research portfolio and improve their professional network.
    - Candidates with no AI policy experience who are excited about testing fit, especially those with relevant skills and experience in research and adjacent areas.

    Find out more on our Notion and apply here: https://shorturl.at/aenG2
    Applications close at the end of the day on February 28th.

  • Future Impact Group

    *Applications are open for International Development Research Groups*

    The Future Impact Group at the University of Oxford is currently seeking research associates to work with our project leads from the G20 Research Group and the Global Priorities Institute (Oxford University): Jess Rapson and Julian Jamison. Specific projects include:
    - Analyzing data from large global wellbeing surveys.
    - Monitoring the spread of global antimicrobial resistance.
    - Developing technical models of the impact of housing and universal basic income (UBI) policies.
    - Simulating the conditions necessary for meeting climate goals.

    This research group is a great fit for current students interested in a career in international development. We have projects suitable for a wide range of backgrounds and are excited about both:
    - Candidates with international development research experience who want to expand their research portfolio and improve their professional network.
    - Candidates with no international development experience who are excited about testing fit, especially those with relevant skills and experience in research and adjacent areas. Technical skills in Python, R, or data manipulation will be particularly useful.

    Find out more and apply here: https://shorturl.at/ackKZ
    Applications close at the end of the day on February 28th.

  • Future Impact Group

    We’re excited to announce that applications are now open for our Spring 2024 research groups! These groups are an opportunity for students and graduates to connect with University of Oxford academics and professional researchers working on some of the world’s biggest problems, such as risks from AI, global priorities research, and international development. With an expected commitment of 5-10 hours per week, you can balance this research with other commitments such as study or work.

    📖 Gain research experience — understand how to make a difference in your field, and build your research portfolio.
    🧠 Learn from expert project leads — receive management, direction, and career guidance from top academics and professional researchers.

    Find out more and apply at futureimpact.group
