Allison Duettmann is the president and CEO of the Foresight Institute, a nonprofit research organization that supports the beneficial development of high-impact technologies. Foresight gathers leading minds to advance research and accelerate progress toward flourishing futures in areas such as molecular nanotechnology, brain-computer interfaces, space exploration, cryptography, and AI. In this transcript, Allison talks about the intersecting frontiers of biotech, neurotech, and AI: all technologies with great potential and at least equal risks for humans. So the question she poses, and will hopefully answer today, is: can we overcome the institutional challenges that are slowing down progress without exacerbating the civilizational risks that come along with powerful technological progress?
I do think so. I think at least you should assume that there's a chance that you can make it through.
What I want to delve into is some of the history of the science and tech that we at Foresight are working on; the history of science and technology generally is filled with ambitious technologies and scientific advances that we're still trying to make progress on. Then I'll cover a few future goals and a few individual examples of applications we can already expect, with a focus on AI. And finally, a few meta-challenges, some general risks, and what's next: what can we make of all of this?
I'll start with something that I always find super interesting when delving into the history of Foresight. What I find really intriguing are the developments that have been ongoing for quite some time.
It's fascinating to note that Foresight has been around since 1986. It was co-founded by Eric Drexler and Christine Peterson, with a long-term vision of molecular nanotechnology developing in parallel with other technological progress, such as AI.
One organization that played a significant role in the history of Foresight is the Extropy Institute. Many people associated with Foresight were also members, and Extropy magazine was a prominent publication at the time. The magazine featured predictions about long-term historical events and the future of various technologies. Among these predictions, you can find contributions from Eric Drexler, one of the co-founders of Foresight; Mark Miller, an early cypherpunk who co-wrote a book; and Max More, the founder of the Extropy Institute. These predictions focused on the potential of technological development and specific technologies.
Now, here's the interesting bit I wanted to mention. We actually ran an Idea Futures event at our 1999 Foresight member gathering, and it all began back then. People voted on claims about the future by sending checks to our offices; it was like a betting hub. Nowadays you have lots of online prediction markets for the future development of technologies, but back in the day that wasn't a big thing yet. People sent checks to our offices, and volunteers compiled the checks from the different years in which we held the Vision Weekend and ran these Idea Futures. This really shows that we put our money where our mouths are and try to predict where and when we can expect different technological progress to come down the road. It's just to give you an idea of how ambitious people were back then.
This picture is from our 2007 Vision Weekend, just to show you a bit of the history of ideas. This is quite a long time ago now, and you can see how incredibly ambitious people already were. There were sessions on cryonics, social issues in longevity, personal longevity strategies, virtual worlds and virtual people, and mechanosynthesis, an approach to molecular nanotechnology, with Rob Freitas and Ralph Merkle. Many of the people who are still involved in pushing these technologies forward were already present at the 2007 Vision Weekend, really trying to advance these technologies in parallel breakout sessions.
If there's one thing you can take from that, it's that we're still not really there. This was 2007; what I showed you before was from 1994, and some things even earlier. We've been around since 1986, but over time it's not that our ambitions or goals have become more ambitious. It's that we're barely making progress on the ambitious goals people came up with so long ago. I think that's a curious fact.
A few of the focus areas that we really care about and are trying to make progress on at Foresight: the first is the notion of intelligent cooperation. It refers to the ability of humans, and ultimately humans and AIs, and AIs with other AIs, to cooperate in a secure and intelligent fashion. Molecular machines is another of our focus areas. This is the dream of eventually having molecular, or atomically precise, manufacturing: we would like to produce things with atomic precision and use those applications and tools in our everyday lives.
Then we have the neurotechnology focus area, where we really focus on brain-computer interfaces and brain emulation. What if we could eventually emulate entire human brains? How could we get there? What are the implications of that? Why would we want it?
Then we have the longevity biotech sector, which I think many people are familiar with. This area has definitely exploded, but I think there's still a lot that can happen at the frontier.
Finally, we have the space track, which has always been a significant focus within Foresight. However, it was never explicitly defined as one. A few years ago, we established it as a dedicated focus area and started individual project development within it.
Existential hope is the type of hope where we contemplate the potential positive and negative futures that these technologies can bring, and strive to make the better ones happen.
When I mention supporting these areas, what do I actually mean? Well, in all of these areas, we offer prizes, fellowships, and grants. We have our Foresight fellows working in all of these areas. If you're interested in exploring the work that these fellows are doing, it's truly fantastic. I won't go into detail because it would take a long time. We've had over 60 fellows this year, and one interesting aspect is that, for the first time ever, we received a large number of applications from individuals under 21. As a result, we had to create a new category of fellows called Prodigy fellows, for those between 13 and 21 years old. This category now has almost the highest number of applications, which is really exciting because it shows that people care about the future at a much younger age and are taking action earlier.
We also have prizes in all of these areas. Maybe the most interesting one is the Feynman Prize, which we've been awarding since 1993. This prize recognizes work in molecular nanotechnology. We also have seminars and in-person workshops where we focus on long-term projects and try to determine what lies ahead at the frontiers.
I can give you an overview of some of the individual, exciting work that I've observed, and how it interrelates.
In the nanotechnology field, one of our fellows, Stéphane Redon, developed a molecular simulator in which you can use new AI tools to simulate complex molecular machine designs and figure out how to build novel compounds with them. The interesting thing is that there's an AI chatbot on the side, where you can feed in the information you want and thereby have a more intuitive interface.
The interface is actually really important if we're trying to make more of these individual tools with which people can push science forward. And this one doesn't only work through chat; it also works through voice, so you can talk to the tool that then develops the compounds.
One really big thing that will help lots of the molecular and nano technology development forward is this notion of simulation tools, because many of these structures that we're hoping to build in this field are so complex and are so unintuitive for humans that it's really difficult to think about how to make progress. It's really difficult to have to build this all in a lab and to see what fails and it will be really nice to simulate a few of these things before we have to build them.
Another really good case in point is Alexis Courbet. He's building a new tool, based on a few predecessors, that tries to incorporate many of the biochemically energy-driven dynamic or mechanical motions into these molecular machines, simulating exactly how they could behave over a long time. He's particularly focusing on the example of a protein rotary motor. Using these AI tools, you can simulate these things much earlier, before you can actually build them or have to get your hands wet experimentally. It's interesting that AI tools, at least in this area, have a multiplier effect: once you have them, it's really difficult to see which areas they will not be needed for.
Within molecular nanotechnology, at least on the simulation and modeling side, AI is already making really big strides.
You can also see this in the biotechnology space, where we mostly focus on longevity biotechnology. A really interesting example is the Biomarkers of Aging Consortium, a new committee established by Vadim Gladyshev, Mahdi Moqri, and Vittorio Sebastiano. It focuses on a challenge that is holding back longevity progress: the field can't easily agree on what should be measured. Without that consensus, it's difficult for aging ever to be classified as a disease by the FDA. And without consensus around what longevity, rejuvenation, or aging technologies are supposed to help with, it's really difficult to make progress. The committee's goal is to assemble and use different tools, many of them AI-based, to agree on a few biomarkers that can be used and advocated for with the FDA and other organizations.
Once you have tools in the areas in which aging research is supposed to advance, you can have a variety of different biomarkers that measure progress on the longevity interventions you have. One more curious fact: it is really difficult to get this type of consensus going once many different companies or labs are working on interventions that affect different biomarkers differently. At that point, each company has a local incentive to promote one biomarker over the others. So reaching this kind of consensus early in a field's life is really difficult, but ultimately really important. This is just one of the individual pieces within the aging field where AI could also make significant progress.
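To make the measurement problem concrete, here is a toy sketch (not from the talk, and with entirely hypothetical biomarkers and numbers) of how several biomarkers might be combined into one consensus score: z-score each marker against a reference cohort, flip the sign where "higher is better," and average.

```python
# Toy consensus score: combine several (hypothetical) aging biomarkers into one
# number by z-scoring each against a reference cohort and averaging.
from statistics import mean, stdev

# Hypothetical reference cohort values per biomarker (illustrative only).
reference = {
    "dna_methylation_age": [52, 60, 48, 55, 63],
    "grip_strength_kg":    [30, 38, 25, 33, 41],
}
# Sign convention: +1 if higher means "older", -1 if higher means "younger".
direction = {"dna_methylation_age": +1, "grip_strength_kg": -1}

def consensus_score(person: dict) -> float:
    """Average signed z-score; higher means 'older' relative to the cohort."""
    zs = []
    for marker, value in person.items():
        mu, sigma = mean(reference[marker]), stdev(reference[marker])
        zs.append(direction[marker] * (value - mu) / sigma)
    return mean(zs)

# A successful intervention should move the score down on the same fixed panel:
before = consensus_score({"dna_methylation_age": 63, "grip_strength_kg": 26})
after  = consensus_score({"dna_methylation_age": 58, "grip_strength_kg": 31})
assert after < before
```

The point of the sketch is the incentive problem the talk describes: everything hinges on the field fixing `reference` and `direction` together, because each lab would otherwise prefer the panel on which its own intervention looks best.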
For examples from the Intelligent Cooperation group, there are two pieces I want to point out. One is securing machine learning. One big focus of this group is thinking about how we secure not only existing computer systems, including the computer security of current software, but also future machine learning systems. If the attack surface is already pretty large with existing software systems, it's probably only going to get larger in the future.
Hyrum Anderson gave a wonderful talk about this; he's really trying to secure more advanced machine learning systems. He goes through a few different attacks and shows how they are already problematic in the existing software space and how they will be much more problematic in the future with more machine-learning-based algorithms. This one is interesting because it shows how we can learn from existing problems with computer systems and how they will probably worsen with future AI systems.
Another seminar on this topic is Lewis Hammond's talk on "Making AI Cooperate". When we think about AI systems, the way AI safety often addresses them almost assumes that what we're trying to secure is one individual system rather than a multitude of different systems. However, the way AI development currently seems to be panning out, we don't just have one company building one system; we have many different companies building many different systems. So it's important to think not only about aligning one AI system, but about what happens when we have a multitude of human-and-AI pairs and groups, and how they can cooperate with each other. This looks much more like game theory and multi-agent design: thinking about how to avoid collusion, not only against humans but also among AI systems, and how to instill better pathways toward the kinds of cooperation we'd prefer. Lewis thinks about all of these questions, and I think it's an interesting field in which AI intersects not only with economic theory and game theory but also, potentially, with new cryptographic tools that can help us secure cooperation among AIs.
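To make the multi-agent framing concrete, here is a minimal sketch (mine, not from the talk) of the textbook prisoner's dilemma that motivates much of cooperative AI research: each agent's best response is to defect regardless of what the other does, yet mutual defection leaves both worse off than mutual cooperation.

```python
# Toy payoff analysis for a one-shot prisoner's dilemma between two agents.
# Payoff values are the standard textbook numbers, chosen only for illustration.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # exploited cooperator
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def best_response(opponent_action: str) -> str:
    """Return the action that maximizes our payoff against a fixed opponent action."""
    return max("CD", key=lambda a: PAYOFFS[(a, opponent_action)][0])

# Defection dominates: it is the best response to either opponent action...
assert best_response("C") == "D" and best_response("D") == "D"
# ...yet both agents defecting leaves each worse off than mutual cooperation:
assert PAYOFFS[("D", "D")][0] < PAYOFFS[("C", "C")][0]
```

Mechanism design for cooperative AI is, loosely, the problem of changing the payoffs or the information structure (repetition, reputation, verifiable commitments) so that the dominant strategy stops being defection.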
Moving on to neurotech, this is the group where we've seen a lot of unpredictable progress. This one is from Bradley Love, who built a version of ChatGPT that focuses on neuroscience. It aims to be a neuroscience-focused assistant that helps people design and test experiments: a research assistant for the field, which can be prompted, fed, and trained with data gathered by different people in neuroscience. It's a field-specific language model, and neuroscience is just one of many areas where you could build field-specific models to assist an entire field. It has already been used to propose study designs; once it has a specific study-design proposal, it can generate data patterns that reflect the scientific literature and help predict the success of experiments.
Another company, Cortical Labs, has made some interesting breakthroughs this year. They are developing a more neuron-based approach to computing, and they have already achieved a significant milestone by getting cultured neurons to play Pong in a simulated environment. This opens up the possibility of having neurons direct motion and process data not only in virtual environments but potentially also in real-world scenarios, such as robotics. Although these advancements are currently limited to simulations, they represent a significant breakthrough that connects human biology to a simulated software environment, and potentially even to the physical world. It brings a unique perspective to the concept of cyborgs in this field.
Finally, Konrad Kording is discussing whole-brain simulation of C. elegans. C. elegans is a widely studied model organism in biotechnology, neurotechnology, and many other fields because it offers various research possibilities. Konrad and his lab, in collaboration with other labs, are working toward emulating the entire C. elegans. Emulating an entire human brain, however, is still a long way off. Studying C. elegans can provide valuable insights for making progress in understanding the brain and developing methods to reliably upload or copy it. The fact that we haven't yet succeeded even with C. elegans indicates the challenges we face in comprehending the brain and its complexities. The focus on C. elegans is akin to a "moonshot" and demonstrates the interest and dedication of researchers in this area.
There are also folks trying to shortcut the path all the way to a complete human brain emulation. Robert McIntyre and a few others truly believe we could have moonshot progress toward human brain emulation on very short time frames, though there is still a lot of debate in the movement about how likely that is.
Those are just a few interesting examples at the intersection of AI, biotech, and neurotech. An interesting thing is that AI is rapidly accelerating many of these advancements, and in turn these advancements can accelerate AI. That creates a feedback loop, especially in molecular nanotechnology: if we can build better materials, we can build better computers, which can lead to rapid AI advancements again. I think this feedback loop is really interesting. Additionally, we have workshops where we bring people together not only to present the work they're already doing but also to focus on what could come next. These workshops can help us make more long-term advances, and we hold them across all of our technical groups.
One session that we always have at these workshops is what we call a "grieving" or "headache" session. It takes place on the morning of the second day, after a day filled with technical presentations. During this session, we ask everyone to come together and take a moment to reflect on the challenges that are holding their respective fields back.
What makes this session interesting is that there are often similarities across different fields. Therefore, we include this grieving session in every technical workshop, and there are always a few common challenges that are mentioned. These challenges are considered meta-challenges within the scientific domain, especially in ambitious and interdisciplinary areas of science and technology.
I believe it's worth highlighting a few of these challenges because making progress on them could greatly benefit multiple fields.
One challenge in these fields is their interdisciplinary nature. While building blocks and knowledge from different fields gradually emerge over the years, assembling them into meaningful progress requires lasting and streamlined collaboration. This collaboration should extend beyond individual labs within one university and encompass multiple universities and nations. Achieving progress and successful projects in these fields often requires translating different metrics among researchers from different fields, which is a difficult task. The use of technical jargon further complicates matters, as even within specific fields like chemistry, the terminology may not be widely understood. For instance, projects like Molecular Nanotechnology or Molecular Manufacturing would require collaboration between chemists, physicists, and computational experts. The strong presence of technical jargon becomes a barrier to interdisciplinary collaboration, despite the desire for it. Even in interdisciplinary workshops, it remains challenging for experts in one field to become well-versed in another. While I consider myself a generalist, I understand that experts in their respective fields may find it difficult to collaborate across disciplines.
The second issue, closely tied to the first, is funding. Much of the work produced is a result of the available funding. In the past, organizations like DARPA, the NNI, the DOE, and the NSF took on important, long-horizon projects. Over time, however, their time horizons have become shorter; they now tend to focus on nearer-term, less extraordinary projects, possibly due to a higher aversion to risk over the past decade. This shift has led to funding problems in academia.
Individual labs often pursue research that can secure funding. This is crucial for publication, citation scores, and tenure. This creates a challenging situation where researchers are limited by the available funding opportunities. However, if sustained funding sources were available, researchers would aim for higher-impact projects, leading to positive cascading effects throughout the field.
In the absence of that, it is interesting to note that new types of funding experiments are emerging. One example is Convergent Research, which is setting up focused research organizations (FROs). These FROs have the goals of a government lab but operate with the structure of a start-up, focused on making rapid progress. There are also projects like Speculative Technologies, from Ben Reinhardt, that aim to tackle serious issues. Additionally, organizations like Arcadia and the Astera Institute are making significant contributions. In Europe, there are organizations like ARIA, JEDI, and SPRIND, which are mostly government-associated and working toward goals similar to those of the ARPA-style agencies in the US.
To further advance, we need to continue pushing for the development of alternative funding structures. DeSci plays a crucial role in this by not only improving research methods and enhancing the publishing environment but also striving to create a better funding landscape.
I believe that one thing we have been working on in the DeSci space is the development and implementation of technology trees. These trees aim to map an entire field and provide guidance on how to navigate and make progress within that field. For example, in the biotech domain, one common challenge is that people often don't know how to contribute or help advance the field. We receive numerous questions and comments from individuals seeking advice on how to contribute to a specific field, even though we may not possess the technical expertise in that particular area. To address this, we came up with the idea of creating technology trees that outline the subgoals and important areas within a field. By clicking on specific topics within the tree, such as the extracellular matrix, users can access information about companies working in that space and discover open challenges. The goal is to provide newcomers, whether they are funders or individuals interested in contributing, with a clearer path to make progress in the field. Currently, we are in the process of transitioning these trees to an AI-enabled tool developed by the team at Leto. Instead of individual tech tree leads, an AI chatbot (GPT) builds the entire tree structure based on different long-term goals. Users can access the information and ensure that it remains up-to-date. We will keep you informed about the progress of this tool.
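A technology tree like the one described is, at its core, a simple recursive data structure. Here is a minimal sketch under my own assumptions (the class name, fields, and the longevity example data are all placeholders, not Foresight's actual schema): each node is a subgoal annotated with who works on it and which challenges are open, and finding an entry point into the field is a tree search.

```python
# A minimal sketch of a technology-tree node; field data is hypothetical.
from dataclasses import dataclass, field

@dataclass
class TechNode:
    goal: str                                         # subgoal within the field
    companies: list = field(default_factory=list)     # who is working on it
    open_challenges: list = field(default_factory=list)
    children: list = field(default_factory=list)      # finer-grained subgoals

    def find(self, goal: str):
        """Depth-first search for a subgoal by name; None if absent."""
        if self.goal == goal:
            return self
        for child in self.children:
            if (hit := child.find(goal)) is not None:
                return hit
        return None

# Illustrative slice of a longevity tree (placeholder names, not real data).
root = TechNode("Rejuvenation", children=[
    TechNode("Extracellular matrix repair",
             open_challenges=["crosslink breakers", "delivery"]),
])
assert root.find("Extracellular matrix repair").open_challenges[0] == "crosslink breakers"
```

The AI-enabled version the talk mentions would, in effect, populate and update nodes like these automatically from long-term goals, instead of relying on individual tech tree leads to curate each branch.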
One takeaway from my experience of monitoring these fields for the last nine years, particularly the impact of AI on them, is that we generally have a positive view of AI in most of these fields, except for the field of AI itself. Within our technological domain focused on AI, there is a tendency for people to express concerns about AI. However, in all the other technological fields, there is a multiplying factor where people are quite satisfied with the ability to conduct research much faster.
One thing we've started this year is our AI safety grants program. I want to note three undervalued areas that I think could currently be receiving more attention with AI. We're trying to make progress on these areas, and if you work on them, I very much welcome you to apply to this grant program. We're hoping to give out a minimum of one million USD over at most one year, and we've already made a few 250k bets in the first few weeks. So we're trying to move fast and fund this field quickly.
One of these areas is neurotechnology, specifically whole brain emulation and lo-fi uploading for AI safety. I already discussed whole-brain emulation a little, but especially given shorter AI timelines, there is the question of whether we can speed up work on brain-computer interfaces and whole-brain emulation faster than AI timelines are shortening. That is definitely an open question, because neurotechnology is a very difficult field to advance, but we are trying to make progress. The whole notion is: can we eventually become competitive? Can our intelligence eventually match up with AI? Can we merge with our AI, or at least not be trampled on in our intelligence by artificial intelligence? The idea is that humans are already pretty human-aligned, and by leveraging that more, perhaps we can create software intelligence, or interact with software intelligence, in a more aligned way.
The second topic is the intersection of security, cryptography, and related approaches for InfoSec and AI security. While I briefly mentioned this earlier, the key idea is that AI safety is advancing but often lacks connection with the emerging security and cryptography challenges and solutions in those fields.
Infosecurity remains a significant problem even without AI. But with AI systems that excel at breaching other systems, particularly the systems that host AI itself, the problem becomes even more critical. On the cryptography front, many tasks that are currently addressed through legal means could potentially be achieved through cryptographic techniques. For instance, when it comes to verifying large language models from different actors, it is unlikely that they would willingly share all their private underlying data or model weights. By leveraging cryptographic techniques, however, it may be possible to let them share only the information needed to verify the safety of a model, without revealing all the underlying structures.
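The serious tools here are things like zero-knowledge proofs and trusted hardware; as a much simpler illustration of the underlying idea, here is my own toy hash-commitment sketch: a lab publishes only a digest of its private weights, and an auditor who is later shown the weights and nonce can check they match the commitment, without the weights ever being made public.

```python
# Toy hash commitment: bind to private model weights without revealing them.
# This only illustrates binding commitments; real model verification would use
# zero-knowledge proofs or secure enclaves, not a bare hash.
import hashlib
import os
import pickle

def commit(weights, nonce: bytes) -> str:
    """Publish this digest; the weights and nonce stay private."""
    return hashlib.sha256(nonce + pickle.dumps(weights)).hexdigest()

def verify(weights, nonce: bytes, digest: str) -> bool:
    """An auditor shown (weights, nonce) later can check the commitment."""
    return commit(weights, nonce) == digest

weights = [0.12, -0.7, 3.4]     # stand-in for private model parameters
nonce = os.urandom(16)          # blinds the commitment against guessing
digest = commit(weights, nonce) # the only thing made public

assert verify(weights, nonce, digest)
assert not verify([0.0, 0.0, 0.0], nonce, digest)  # tampered weights fail
```

A plain commitment only proves the weights didn't change between commit and audit; proving a property of the model (for example, that it passed a safety evaluation) without opening the commitment at all is exactly what the more advanced cryptographic techniques in this research area aim for.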
In summary, any ideas or concepts related to these topics are highly welcome.
And finally, there is the multipolar AI scenario space. We're looking to fund more work on systems that involve multiple agents rather than just one. As mentioned earlier, Lewis Hammond's cooperative AI presentation gives a glimpse of this concept. We aim to support work that views AI as an ecosystem in which different intelligences cooperate, similar to how human intelligences interact within civilization. We are very interested in receiving diverse proposals in this emerging field.
If any of this piques your interest, we invite you to attend Vision Weekends, our annual end-of-year festivals. Vision Weekends bring together top professionals from various domains at different locations, including a castle south of Paris and three venues in the Bay Area. Many of the individuals mentioned earlier will be in attendance. This interdisciplinary technology festival aims to assess progress across different domains and explore how we can advance positive applications without accelerating risks.
What is your view on Organ-on-chip tech?
I'm not a technical expert in that field, so I'd suggest you talk to someone else about this. That said, there is a lot of research in both our biotechnology domain and our neurotechnology domain, with its focus on whole brain emulation; I don't mean to say it's working toward organ-on-chip specifically. And I think it has many interesting economic implications and implications for how we can conduct experiments.
Both Mike Levin and Brett Kagan have mentioned the ethical implications of creating life, or even just organs, on a chip. In their talks and research papers, they point out the potential ethical concerns. We don't yet fully understand how our internal biology works and interacts, so we should be curious about that aspect and conduct further research. I would definitely defer to them, but I believe Cortical Labs published a really interesting ethics paper, the name of which I'm now forgetting, but I could probably find it. Let me see if I can find it here. Okay, I think it's called "Neurons Embodied in a Virtual World: Evidence for All the Networks and Ethics", and they delve into the topic there. The labs and a few others also discuss the implications of a more collaborative approach to sentience, and how different xenobots and neuron-powered biobots, which can be grown in a dish, may exhibit signs of sentience. I think we should definitely be curious about these aspects. It's also important to be cautious.
Can we use AI to bridge the gaps in interdisciplinarity?
We haven't tried using AI to translate technical jargon. One challenge we often face is that different labs don't use the same metrics, which makes it difficult to use AI across disciplines and analyze data from various labs. This lack of standardization across fields is also part of the replication crisis problem. Researchers sometimes use different metrics for one reason or another, and there isn't much standardization even within a field. While there are commonly used metrics, not everyone uses them, making it challenging to combine data from different labs. The individual circumstances under which data is collected often don't align, further complicating matters. So it's difficult to leverage AI in this context.
The interpersonal aspect of our work is both challenging and rewarding. Bringing people together in person can be difficult, but it is an essential part of our workshops. Many of our workshops have a strong interpersonal component, where we aim to connect individuals from different fields. For example, in our last workshop on nanotech, we explored the synergy of molecular manufacturing and AI, as well as directed and programmable matter and energy. These interdisciplinary topics require effort to get people with different backgrounds to collaborate, as people in atomically precise manufacturing may not be familiar with the energy domain, and vice versa.
One interesting workshop we conducted focused on the intersection of cryptography, security, and AI. It provided an opportunity for security experts to engage with cryptographers and AI safety professionals. Similarly, we have organized workshops on AGI coordination and the implications of great powers, such as China and the US, on the global stage. These workshops brought together participants from different regions and sectors, including government, non-profits, and the private sector.
It requires building a container of trust. We usually run these events under the Chatham House Rule, which means you're allowed to talk about the ideas discussed, but not to attribute them to anyone without their consent. This usually helps. Additionally, spending a lot of time setting up a good container at the beginning of the event is useful. One thing we always point out is that we want to go back to first principles in these events, but we don't want to ignore the laws of physics. We also encourage people to point out if they think someone else is crossing that threshold, but everything within the laws of physics is up for grabs.
The interdisciplinary nature of our work also extends to coordination efforts, where we face challenges in explaining the framing, constraints, and incentive mechanisms to various actors involved. Despite the difficulties, the rewards are significant when these collaborations succeed.
How does reproducibility come up in these workshops?
That is a great question and it comes up frequently. Many of the problems in our field are at an early stage, long-term, ambitious, and interdisciplinary. There isn't much existing research to replicate, especially because there aren't established journals where people regularly submit different emulated animal models or other related work. For example, there isn't even a field dedicated to the long-term goal of molecular manufacturing that has its own journal. So, the replication crisis is not yet a problem in these fields because they are still relatively young.
However, many researchers working toward these long-term goals also publish in prestigious journals like Nature. From our perspective, the focus is on foresight and long-term goals. One way we are addressing this is through our collaboration with VitaDAO on the Longevity Prize, which we launched about a year ago. We recently awarded the first prize, for a new hypothesis on longevity, and are currently setting up the second prize, on biomarkers. Additionally, we are exploring the possibility of a replication prize for longevity, which is one of the proposals being considered.
I hope this provides more insight.
What can a non-researcher do to contribute to DeSci?
Allison: One thing we strive to do is build a tech tree for DeSci, along with the team from Lateral. That would provide a more structured answer to your question. However, there is a lot of work to be done, and it depends on your skills; I believe in contributing where your skills lie. For example, if you have excellent communication skills, you could write blog posts or find established communication channels to promote the cause. If you have connections to funders, you could bring in more legacy or institutional funders.
To contribute to Foresight, it would be helpful to know more about your interests. You can apply to our seminars if you want to learn more and help in general. If you have technical expertise in a specific field, you can apply to our fellowship for more hands-on support. Our grants program offers funding for AI-safety-related areas. Additionally, you can participate in our various prizes if you are interested in solving specific problems that we care about at Foresight.
To figure out the best avenue for you to contribute, I recommend attending our Vision Weekends in France and the US. There are subsidized tickets available, especially for junior researchers. These events gather our multidisciplinary community and provide valuable insights into the latest developments across fields.
Overall, these events offer a comprehensive overview of technological advancements and potential future trends. They are not to be missed for those interested in the field.
Philipp: I can only echo what Allison said. There are so many opportunities and so many things that could be done where we need talent and input. It really depends on what you're personally interested in and how you can help. There are a lot of different things going on in these labs; for example, we have an open-source project where developers can apply themselves. We also have a lot of new ideas about how to shape science and allow people to participate. We have a Discord channel where you can join and explore different ways to contribute. Feel free to reach out if you have specific ideas on how to contribute or if you're just curious; I'm always happy to chat. I know we're already over time, so Allison, thank you so much for this seminar. Thank you for sharing your work at Foresight, it was so interesting. I'm excited to dive deeper into it during our podcast recording.