Seminar

The Status of Science in the Age of Advanced Automation

James Boyd
Wolfram Institute

Welcome to the Future of Science Seminar with James Boyd, CEO of the Wolfram Institute. James co-founded the Wolfram Institute together with Stephen Wolfram to advance mathematics, physics, and other scientific disciplines through the strategic use of computational technology. In this abstract seminar, James focuses on the automation of science and its potential consequences. He also discusses what we, as scientists and scientific organizations, should be doing to prepare for a world of fully automated science. We talk about the role of human scientists in that world, cover some pretty wild concepts like cyborg scientists and transhumanism, and also discuss the role of the decentralized science movement in fully automated science.

Welcome to the Age of Automation

I'm going to be discussing the status of science in the age of advanced automation. Before I begin the talk, why don't I, as a prelude, talk about the talk itself: what to expect, what the character of this talk will be, what the goals are, and what the suppositions and motivations behind it are.

This talk is going to be abstract, but informed by strategic considerations. Some of you might be accustomed to thinking abstractly. Some of you might not. Some of you might view yourselves as more practical, some as more speculative. It's my position that in order to be strategic and have an impact that is somewhat widespread, one also has to be abstract. If you've ever worked with, say, military organizations, you'd be surprised by the degree of abstraction with which they are comfortable.

If you view yourself as strategic, but you often don't think in terms of grand abstractions, you might want to reconsider. If you're not thinking abstractly, you risk pursuing strategies that are too narrow in scope and less consequential than they might otherwise be. This talk will be abstract, but also strategically informed.

Next, this talk is going to be attentive to developments characterized by extremity that manifests over the medium and long term. I have a number of colleagues with whom I discuss automation of science. Many of them say, ‘Things are interesting right now. James, why don't you talk more about what's happening today or what to expect over the short term?’ Or, ‘if you're going to consider multiple scenarios, why do you tend to focus on the ones that are somewhat extreme?’

If you're going to plan, you should plan for the extreme outcomes. Those are the ones that require great anticipation. Those are the ones whose character is so radically different from the one that we see today that it really merits planning that begins now.

That's one comment. The other comment is, if you have an organization like the one that I run, for instance, that's somewhat young, the impact that you'll have over the short term is minimal. If you're going to have an impact at all, it'll be over the medium and long term. And if you do nothing, and some extreme outcome results and you didn't do anything to prepare for it, then your impact over the long term is minimal. If you're going to have any impact, it's going to be in the extremity and it's going to be something that manifests over the longer term.

Importantly, this talk is oriented towards the promotion of the welfare of future human or transhuman stakeholders. This is not an anti-human talk. I'm not going to try to convince you of the poetics of the obsolescence of humankind. I'm not going to spin rhetoric about how it's natural for machines to do better than humans in everything. I'm not going to make the argument that humans should stop doing science over the longer term. That's not the view that I take. Personally, I think that humans should be doing science indefinitely. It's something that, at least within our understanding, we've invented. If humans can continue to live longer lives, if we're healthier, if we become transhuman, cyborgic, etc., I want humans to do science indefinitely. Or, at least, I would like to do research indefinitely.

It's good to take a long-term view. If you're not thinking about something and the trajectory it's going to take over several hundred or thousand years, then your contribution is already done. If you take a long-term view and coarse-grain it, the next many decades of your activity are basically a blip. You're either going to think of your science as a blip within some long-term expanse, or try to orient yourself towards acting in a way that contributes to some longer-term view of where science is going.

This talk is made in a way that's presumptive of impermanent regimes of scientific institutionalism, punctuated by the advent of non-linearly performant technological capabilities. What do I mean? The way that science is done is going to continue to change. The way that we do science today is not the way that science was done decades ago, and it will continue to change. I don't think the change will be gradual. There will be certain events that pertain to advancement, especially on the technological front, that open up new possibilities for how science can be done. We might very well be in the midst of the eventuation of one of these punctuations at the moment.

Finally, I'm generally optimistic, though my axiological commitments to the present are somewhat minimal. I will be fine if things are radically different in the future. That's my ethic. Things continue to change. Humanity is not distinct from nature, nor are machines. Humans are going to change; society, markets, and technology are going to change. The way that science is performed is going to change too.

Anyway, these are the traits of the talk. These are the things that we're going to discuss. By the way, this slide itself is rather exemplary of the way in which this talk will be delivered: informal remarks over rather fastidiously written slides. That's my style. If you're not enjoying the talk so far, maybe you can leave. This is the way that the talk is going to be run.

What is Science Automation?

I'll define the automation of science as the use of computational technology for the execution of tasks relevant to scientific research efforts, hitherto administered by humans as a deployment of their cognitive capital. I point out cognitive capital for a specific reason. Without making too many presumptions about this demographic (those who are watching this talk, myself, my colleagues, etc.), I'm guessing most of us are involved in cognitive labor. If you're interested in science, you're deploying cognitive capital in pursuit of science.

When I say automation, I refer to automation as applied to calculation, experimentation, writing, presentation, organization, analysis, etc. There are many facets of the scientific practice that can be automated. We'll discuss some of them. That's the automation of science.

Can the automation of science yield benefits? Yes, it can. Human scientists leverage machine performance as a delegatory resource so that individual productivity can soar and research costs can diminish. If you have applied for grants or maintained a research organization, you know that personnel is a key cost. If you could diminish that cost and increase the productivity of individual scientists, it would lower barriers to entry and make it possible for lean, somewhat small scientific organizations to have an outstanding and disproportionate impact on the field, or on several fields.

However, and this is really the point of the talk, enjoyment of these benefits requires the design of new research practices and methods that optimize human-machine divisions of cognitive and computational labor, that mitigate attendant risks (risks to quality, verifiability, human intelligibility, etc.), and that maintain anthropic competitiveness.

Again, I'm not an anti-humanist; I'm a human scientist. I want to continue to do science. I don't really have much interest in these tales of human extinction. At present, you can view humanity as a monopoly. That's the Anthropocene. Scientifically, economically, politically, etc., humanity enjoys an almost absolute monopoly. In the future, you should think of humanity as a particular firm. And it won't be a monopoly. You don't want it to fail and become insolvent. You want it to be competitive within an increasingly non-human institutional environment.

The Wolfram Institute has been established to accelerate advances in science by leveraging computational technology. We can help to facilitate the automation of science, but we must also be prepared to adapt to broader trends of scientific automation. The long-term prospects of this institute, which is relatively young, will depend on our ability to form and follow theses regarding best practices that successfully anticipate developments in scientific automation. If we don't do that, we'll probably fail. So, for me, this is a matter of exigent practical considerations.

Automation Starts Now

This is a long-term concern, but people are taking automation quite seriously now, in light of recent advances in large language model capabilities, which have nudged the Overton window in a future-looking direction. Punctuated episodes of unanticipated advances in technological performance occasion the opportunity to treat otherwise speculative questions as pragmatic.

Put differently, technological surprises instill an atmosphere of exigency and anticipatory responsibility, which assumes the role that uninspired speculation played during low-surprisal periods. So, I think it's a good time to talk about this. People are nervous, but I believe in the impermanence of the way that we do things. And I don't mind becoming a transhumanist; I'm living the future already. So, I'm having fun.

Many people are very worried at the moment. So that's why it's a good time to talk about this. If people are worried about the future, people are thinking about it. And when people begin to become paranoid about the future, then they're willing to think about longer term considerations as practical and as urgent, as opposed to speculative. That's the point.

Next, and this is a crucial point, the future of science is insufficiently discussed. Science itself, let alone scientific topics, methods, organizational principles, etc., is rarely the topic of science fiction. I challenge you to name well-known science fiction works that discuss the way that science is done as a core theme, where it's the protagonist, where it is really the key substance of the plot. It's rare.

Next, the institutional practice of science can change as radically as changes in technology, economics, politics, social dynamics, art, etc. We certainly think about the future of technology. People think about the future of capitalism. People think about the future of democracy. People think about the future of culture, etc. Science is another one of these facets of human civilization that can and will change. And that's my focus in general and also for the purposes of this talk.

Next, the institutional landscape maps to a fitness landscape. The practice of science is changing. Many organizations are dissatisfied with predominant modes of funding, management, research, output, distribution, et cetera. But the key determinant of success for tenured and fresh organizations alike will be adaptation to automation. You have to care about it. You have to pay attention to it. You have to have some sort of a plan. If you're new, if you're old, it doesn't matter. Old, unadaptive organizations will fail, I imagine. And new organizations whose presumptions about the future of science are out of date won't make a serious contribution. That's my argument.

Finally, the trend towards automation is already underway, and it's not a matter of what will happen when we start automating science in the future. Instead, it's a trend that has already begun and is set to accelerate in its development. This raises an important question: why don't we talk more about the practice of human science under conditions of increasing automation?

The Theses of Scientific Automation

At the Wolfram Institute, we have implemented practices that inform the way we work. These practices are motivated by the following theses:

As automation proceeds, human scientists produce value by posing scientific questions and delegating the answering of those questions to machines.

If you have software that you are skilled at using for scientific computing, you find yourself with time to ask questions that you would otherwise not ask, because the cost of answering them would be prohibitive. So one trait of the cyborg scientist is that you become quite direct, inquisitive, and able to ask interdisciplinary questions. You're able to take questions that have been ignored and seize upon them. You're able to ask iterative questions, where you ask a question, get an answer, ask another question, and so on.

By these, I don't just mean questions posed in natural language terms. I mean questions about physics, or about an engineering project, or about a mathematical problem. Using a programming language of some kind, you're able to answer these questions in an efficient way. The key task is efficiently producing and articulating the question in a way that the machine will understand. Then maybe it takes the machine a day, a week, a minute, etc. to answer the question, but you have effectively delegated the computation to the machine itself.
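To make the delegation pattern concrete, here is a minimal sketch in Python with SymPy. The choice of tooling is mine, purely for illustration; the talk does not prescribe a particular stack. The human's labor is articulating the question precisely; the machine carries out the computation.

```python
# Question posed precisely, then delegated to the machine:
# "Does the series 1/1^2 + 1/2^2 + ... converge, and to what value?"
import sympy as sp

n = sp.symbols("n", positive=True, integer=True)

# The machine performs the symbolic computation we delegated to it.
print(sp.summation(1 / n**2, (n, 1, sp.oo)))  # pi**2/6

# Iterative questioning: the answer prompts the next question.
# "And what about 1/n^3?" (SymPy returns zeta(3).)
print(sp.summation(1 / n**3, (n, 1, sp.oo)))  # zeta(3)
```

The point is the division of labor: the question is cheap to pose once you can phrase it computationally, and the answering is entirely the machine's job.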

On the interrogative side, primacy is placed on human cognitive labors that are interdisciplinary, philosophically untoward, gainfully abstract, and/or strategically guided.

In the future, the scientist will be something of a philosopher with a computer.

The thing that will be most important is coming up with scientific questions that are worthwhile; producing insights that decimate or crumble presumptions that had supported previous towers of work; or finding inconsistencies, paradoxes, or methodological foibles within some given body of practice. Daring to ask questions that haven't been posed before, or taking different problems in different areas and asking if there's some sort of correspondence between them. It's this high-level thinking that will be quite crucial. If everything else, the actual crunching of the numbers or fetching of the data, becomes increasingly cheap, the thing that will be valued most in the future will be this high-level abstract thinking.

Abstraction engineering itself will become automated in the future too. That's something we'll discuss later on. This current part is what I call the interrogative side. The side of asking questions in a computational way, building models, simulations, and functionality to ultimately have your questions answered.

On the computational side, primacy is placed on designing intuitive human-machine interfaces, building efficient functionality, and curating resources for visualization and organization of computational content.

This latter thing is quite important, and somewhat under-appreciated. It's not enough to simply ask questions and get answers. You have to come up with a way to show your output to a larger cohort or contingent of peers and stakeholders who can understand what you've done, and not become stupefied by some large data set or something like that.

That's why curation, organization, and visualization are key. You have to have a way for the machine to do significant work and then give back something that you as a human can understand, as well as your peers and stakeholders. I call this the modus operandi of the cyborg scientist. If half the research you're doing is computer-based, congrats, you're a cyborg scientist. I'm a cyborg scientist, and so are the fellows at the Institute of Cyborg Scientists. You can be one too. If you want to be an effective one, start thinking this way. You should become a philosopher with a computer.

We're Automating Almost Everything

Let's talk about trends in the automation of science. These are expected trends and are mostly based on things that I'm already seeing. There are others to be discussed but I didn't want to make a list that was too long. Some disciplines to be affected include the formal sciences, natural sciences, and humanities.

Formal automation. Mathematical calculations have been automated for some time, as has automated theorem proving. It's been quite easy to do things like integrals, plots, and complicated algebraic and numerical calculations for decades at this point. Waldmeister is one automated theorem prover that I have used, and I'm sure many of you have experience with other kinds of automated theorem provers. In general, the automation of mathematical proofs is just going to continue to improve. When I say formal automation here, I mean doing maths. Machines will be able to do mathematics quite well in the future.
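For a flavor of what machine-checked mathematics looks like today, here is a minimal illustration in Lean 4 (my example, not one from the talk; Waldmeister itself works on equational first-order logic rather than this style of tactic proof):

```lean
-- `decide` asks the machine to verify a decidable claim by computation;
-- `omega` automatically dispatches linear arithmetic over the naturals.
example : 2 ^ 10 = 1024 := by decide

theorem add_comm' (a b : Nat) : a + b = b + a := by omega
```

The human states the proposition; the machine either finds and checks the proof or reports failure.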

Prosaic automation. This is being automated right now with large language models. Any area of research that is largely prose will become quite easy to automate, including the humanities, social sciences, philosophy, essays, and polemics. This will be quite easy to deepfake pretty soon.

Experimental automation. I want to highlight Emerald Cloud Labs, a laboratory in South San Francisco. You send Emerald notebooks with code, and they will perform chemical experiments with automated manipulation of instruments. You code some outcome that you want to see, and then the lab actually facilitates the execution of that experiment.
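In spirit, the workflow is the one sketched below. This is illustrative Python pseudocode only: Emerald's real interface is a symbolic notebook language, and the `CloudLab` client, `Experiment` type, and method names here are invented for the sake of the sketch.

```python
# Hypothetical remote-lab workflow; all names are invented for
# illustration and do not reflect Emerald Cloud Labs' actual API.
from dataclasses import dataclass, field

@dataclass
class Experiment:
    protocol: str         # e.g. "HPLC"
    sample_id: str        # identifier of a stored sample
    parameters: dict = field(default_factory=dict)  # instrument settings

class CloudLab:
    def submit(self, exp: Experiment) -> str:
        """Queue the experiment on the lab's automated instruments."""
        return "job-001"  # stub: a real client would call the lab's API

    def results(self, job_id: str) -> dict:
        """Fetch structured data once the run completes."""
        return {}  # stub

# The scientist codes the desired outcome; the lab executes it.
lab = CloudLab()
job = lab.submit(Experiment("HPLC", "sample-042", {"flow_rate_ml_min": 1.0}))
data = lab.results(job)
```

The notable inversion is that the experiment itself becomes a piece of code you write and delegate, just like a calculation.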

Doing the math, doing the experiment, and writing up the results are already becoming automated.

A human future depends on our relationship to automation

Here's a practical question: which human scientific practices will remain relevant and evade obsolescence over the next 50 years, given the automation that we currently see and the automation that we can expect?

This is a practical question, especially if you're building a research organization today. Do you want it to be ephemeral, have a short lifespan of 25 years, or do you want it to have a long-term impact that maintains relevance? If your intention is the latter, then this question becomes pragmatic, not speculative. If you're in your 20s, you might still be working in 50 years. Whatever you build today, hopefully it will still be running. The question is, what will happen to your staff, scientists, donors, and everyone who supported you and helped you grow? Will they be disappointed in 50 years, or will you have a plan in place?

Over the short term, there's an occasion for invention. Cyborg scientists can effectuate unprecedented advances in the sciences by successfully developing research practices and infrastructure that leverage gains in automation. Science can be better: we can learn new things, extend our reach across the knowledge landscape, deepen our insights, and climb higher in our abstraction towers.

Over the short term, it's all good. There are many gains to be had, if you're able and willing to help effectuate those advances and take advantage of this opportunity for invention.

Over the medium term, not to be too polemical, but I would say there will be something of a bursting of the academic bubble. If scientific organizations do not update their methods to keep up with computational advancements, they will forfeit their importance, lose funding, and their scientists will struggle to find new work.

Currently, international academic culture constitutes a large labour pool. General attendance at decent universities is culturally regarded as a sound investment in cultural human capital. The university is often regarded as a sanctuary of pure inquiry and a refuge from the pressures of capitalism.

However, such conditions are temporary. Many people are going to university at the moment both as a channel for investment in cultural capital and as a zone somewhat removed from capitalism.

In the future, scientists will have to perform in a more competitive way, which will change the pressures on universities, student enrollment, and the large labour pool of international academics. I won't say what effects this will have, but I think that something will happen that profoundly changes the way that international scientific culture is run and organized.

Let me say something about the long term: post-human science versus intellectual decimation. The technological prowess of the Anthropocene writ large is obviously supported by the institution of science. If transhuman knowledge production (I'll say transhuman instead of human, because we're probably going to become increasingly equipped with machines over time) fails to retain parity with that of machines, the Anthropocene will play a negligible role in future affairs.

Technology is the medium that propels us towards the future. It's underwritten, underscored, and supported by the sciences. If we fail to advance scientifically, while some greater post-human effort exceeds the sciences, we won't understand what's happening. We won't really have any understanding of the technology that guides or informs us, and we will become somewhat ignorant, primitive, paranoid, and ineffective within this post-human future.

I would say the future of the human sciences, of anthropic science, will dictate the role that we play in what is to come. It will determine our liberties, our dignity, and our influence over future affairs.

The Changes of Automation

We will attain an unprecedented degree of output. Even today, or maybe in a few months or years, it will be easy for researchers to have large language models produce preprints. Preprints are already proliferating, even from humans alone, and the output will just continue to grow.

With time, the machine output rate should exceed the human consumption rate. This is a significant event when it happens.

The aggregate academic output rate already exceeds the human consumption rate. In your respective scientific field, do you really know what's happening? If you're a mathematician, do you really know what's happening in mathematics with a capital M? Do you understand the developments occurring throughout your field? No, you don't follow everything. Even if you spent all day reading papers, you wouldn't be able to keep up.

The capacity of the individual at present, absent technological advances, is already insufficient to follow scientific fields as a whole. (To put rough numbers on it: arXiv alone receives on the order of 20,000 submissions a month, so a diligent reader managing five papers a day covers well under one percent of that.) When output is boosted by machine efficiency, the output rate will far exceed the human consumption rate. This certainly has implications for human scientists and the contributions they can make without being woefully and embarrassingly behind the state of the art.

There's something I hope doesn't actually become a term, but I've referred to them as hyperniches. We already have niches. If you're a PhD student, or your child is a PhD student, or one of your friends is doing a PhD, they're doing it in some nauseatingly specific, specialized area. I find it disappointing even when people I've known in my life are doing a PhD in math, on something that I'm not really interested in.

It seems stupidly specific. Sometimes you go in a specific direction and unlock some abstract insight that has ramifications far beyond the scope of that paper.

That's somewhat rare.

It's like Peter Scholze coming up with perfectoid spaces: a somewhat specific thing he was doing, but it became a broad object and area of study, in his case within p-adic geometry. That's good. But in general, academic work is becoming increasingly specific. If you take a macro view, you see that efforts are splintering along these rarefied specialization pathways.

If it becomes easier in mathematical work to have machines automate this, you can imagine this rapid mathematical advancement burrowing into ever more narrow niches, with broad ramifications in all possible directions.

When that is the case, synthesis dilemmas arise. A "synthesis dilemma" is the problem of how to extract themes, trends, and insights from the developments in a given field. This is quite important. It is really how modern mathematics works, which is a game of discovering dualities, functorialities, and correspondences. This is the name of the game in mathematics, as well as in physics, and we need to see it in other areas as well. Unprecedented output and hyperniches are two predictable consequences of scientific automation.

Finally, there is a heightened cost of verification. Let's say it becomes easy to deepfake scientific papers and preprints. Well, the deviations from veracity might become what I'll call asymptotically minute.

You've probably seen this if you've played around with one of these large language models. At the moment, they just lie to you, or they produce statements that are untrue in an authoritative register, and you have to do the work of verifying that they're not true.

I will not be surprised to see a complicated paper published in a medical journal where 98% of it is correct, but 2% of the details are incorrect and subtle. It might be that the molecule is slightly wrong, the wrong paper is cited, or these very small details differ.

If we're going to have automated science, machine outputs will be involved in our distribution channels; if we invest in machine science and then exclude it from those channels, there's no point.

As soon as we invest in machines doing science, machines will publish. I've already seen examples of people co-authoring papers with ChatGPT. The cost of verification is going to become quite high, and human peer review might not be strong enough to catch some of these very small details.

We'll have to find some way to verify work that's being produced within hyperniches at an unprecedented rate. We have to find some way to ensure quality as works are produced at an accelerated pace along new research pathways that, after a few months or years, might be unrecognizable.
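One plausible ingredient, sketched below under assumptions of my own (the manifest format and tolerance are invented for illustration, not an existing standard): if papers ship with their computations, a verifier can re-run each claimed result and flag exactly the asymptotically minute deviations that human peer review tends to miss.

```python
# Illustrative verifier: re-run a paper's bundled computations and
# compare them against its claimed results. The manifest format is
# hypothetical.
import math

# Each claim pairs a reported value with the computation producing it.
manifest = [
    {"claim": "partial sum of 1/n^2, n = 1..10^6", "reported": 1.6449331,
     "compute": lambda: sum(1 / n**2 for n in range(1, 10**6 + 1))},
    {"claim": "limit zeta(2)", "reported": 1.6449341,
     "compute": lambda: math.pi**2 / 6},
]

TOLERANCE = 1e-6  # acceptable numerical deviation (assumed)

for entry in manifest:
    recomputed = entry["compute"]()
    ok = abs(recomputed - entry["reported"]) < TOLERANCE
    print(f"{'OK' if ok else 'DEVIATION'}: {entry['claim']} "
          f"(reported {entry['reported']}, recomputed {recomputed:.7f})")
```

Nothing here is sophisticated; the point is that verification becomes a computation rather than a feat of human attention.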

We've seen this in mathematical physics. Maxim Kontsevich once said that he had done some work on a mathematical approach to string theory and mirror symmetry, and that he no longer recognizes what's happening in the field. It's too complicated. And that's just human output.

If you've looked at some of the works that have built on Kontsevich's, you can't understand them. The field becomes hyper-specialized and complicated. If this happens under the influence of machine advances, the question of verification is going to become quite challenging. These are three predictable consequences of scientific automation.

Next Steps in an Augmented World

Here, there's an opportunity to discuss practices and innovations that are relevant to DeSci Labs and the greater DeSci community.

One necessary invention is new form factors for the display and distribution of results, i.e. successors to academic papers and preprints. These form factors should exhibit content with concision and include all associated code.

Why concision? Well, if the output rate is rapid, we won't have time to read preprints that are 400 pages long. We need a concise, synthesized explanation of what a paper is about, even if it is just an option people can toggle. This should be more than an abstract; much more can be done to provide a genuinely abstract explanation of what is happening in a paper.

It hasn't been done, not because it's too easy to merit doing, but because it's hard – e.g., what is this research on analytic number theory really about? That is not easy. Philosophers, of course, are good at doing this, and political scientists can do it successfully, but all scientists need to be able to do it. When I say "include all associated code," it's because we'll need some sort of computational way of performing verification.

In the future, I'd like to see more papers where there's a human-readable explanation of what we're doing, and, if you want to inquire further, a list of all the results, all the calculations, etc., so that you can directly check the computations that were used. That's what we will need.

Right now, we have an advantage over machines, which is that we're able to think abstractly. We should invest in this capability. We need to be able to release scientific results that cater towards this philosophical way of performing science with computers.

What's the key point? What's the key insight? How can I distill this complicated research effort into something that is substantive and also expressed in a concise way? That's where concepts come into play. We also have the data specifying the computations that were used.

I like the idea of having an abstraction toggle for papers. If you want to read the 500-page version, you can, but there should also be a shorter version with just two paragraphs that extracts the key insights.

The abstracts we have today are stupid, mostly just formalities to introduce the reader to the paper. I want the key insights to be extracted in a more synthetic way than what we see in abstracts today. I'd like to be able to toggle between the two versions.

I also think papers should have difficulty settings, like you see in video games, where you can send a technical paper to an undergraduate student and they won't have nightmares; they'll be able to understand it if it's set to easy. And then you could show the advanced version to a postdoc and they wouldn't be bored.

I want form factors with these toggles and difficulty settings, where, like a philosopher with a computer, you can extract the key insights, and where you can get the code if you want to know specifically what computation was used. This would also be helpful for replicability, especially in light of the replicability crisis. If you have the code for everything that is being done, you should be able to run the code and replicate it in silico.
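To make the idea concrete, here is a minimal sketch of such a form factor as a data structure. The field names and levels are my own invention, not a proposed standard:

```python
# Hypothetical paper form factor with an abstraction toggle,
# difficulty settings, and bundled code for in-silico replication.
from dataclasses import dataclass, field

@dataclass
class PaperFormFactor:
    title: str
    synthesis: str                # two paragraphs extracting the key insights
    full_text: str                # the 500-page version, if you want it
    renditions: dict = field(default_factory=dict)   # difficulty -> text
    code_bundle: list = field(default_factory=list)  # scripts for replication

    def render(self, difficulty: str = "easy", abstract_only: bool = True) -> str:
        """Toggle between the distilled synthesis and a full rendition."""
        if abstract_only:
            return self.synthesis
        return self.renditions.get(difficulty, self.full_text)

paper = PaperFormFactor(
    title="On a correspondence",
    synthesis="Key insights, distilled into two paragraphs...",
    full_text="...",
    renditions={"easy": "undergraduate-friendly text",
                "advanced": "the version a postdoc wouldn't be bored by"},
    code_bundle=["verify_results.py"],
)
print(paper.render(difficulty="easy", abstract_only=True))
```

A real format would need much more (provenance, review state, machine-readable claims), but the toggles themselves are just structure.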

There are also opportunities for multimedia formats. And another interesting consideration, which I won't delve into significantly, is that we'll have different kinds of distribution channels. We'll have human-machine distribution, but we'll also have machine-machine distribution.

Whether or not humans try to gain any command or control over machine-machine scientific research distribution channels is unclear. I'm sure you've seen examples in the news of machines starting to make up their own languages and talk to one another. It's usually teratological, xenological, otherworldly, alien, and scary.

That's what it means to inhabit a non-human landscape.

Next, we will need new distribution channels and repositories for scientific research. They'll have to be accessible and query-efficient. If there's a concept that interests me and I want to learn about it, what are my options? Currently, I can use search engines, Wikipedia, Wolfram MathWorld, nLab... However, I cannot issue a query and see how a particular object or concept is featured across different papers.

Scientific search is still woefully undeveloped, and we do not have a repository of results, concepts, or objects that is accessible and comprehensive. One challenge in building such a repository is that researchers have to agree to use a common platform. I see many projects where people are building libraries, repositories, software, or websites. But if we just have a constellation of disparate efforts that aren't interoperable with one another, they do not help. Therefore, we either have to agree on a common channel and repository, which is a political challenge, or we need to build a platform that makes it possible to access heterogeneous channels and repositories in an interoperable and standardized way.
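The second option, interoperability over heterogeneous repositories, might look something like the sketch below. The adapter interface is invented for illustration; the real work would be in normalizing each community's content:

```python
# Illustrative federation layer over heterogeneous scientific
# repositories; the Repository protocol and adapters are hypothetical.
from typing import Protocol

class Repository(Protocol):
    def search(self, concept: str) -> list[str]:
        """Return references featuring the queried concept."""
        ...

class ArxivAdapter:
    def search(self, concept: str) -> list[str]:
        # In practice: query arXiv's public API and normalize results.
        return [f"arXiv result mentioning '{concept}'"]

class NLabAdapter:
    def search(self, concept: str) -> list[str]:
        # In practice: query nLab pages mentioning the concept.
        return [f"nLab page on '{concept}'"]

def federated_search(concept: str, repos: list[Repository]) -> list[str]:
    """Issue one query and see how a concept features across sources."""
    results: list[str] = []
    for repo in repos:
        results.extend(repo.search(concept))
    return results

print(federated_search("perfectoid space", [ArxivAdapter(), NLabAdapter()]))
```

The political challenge remains: adapters are only useful if the communities behind each repository participate in keeping them faithful.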

Automation will Decentralize Science - Can we Keep Up? 

Decentralized science and automation - this is the question behind all of this. What's the relationship between decentralized science and automation?

I will give you some definitions of decentralization I made up last night: decentralization is the facilitation of the redistribution of institutional power through novel institutional and/or infrastructural mediation. Decentralization is often liminal, interstitial, and transitional.

The traditional view is that decentralization often precedes subsequent recentralization by new institutional powers. It's like a handoff: power is redistributed from some concentrated source outward through some new infrastructure, and, historically, it's then recaptured elsewhere and recentralized.

Let's discuss automation in science and decentralized science. Will automation facilitate decentralization? Yes, of course it will. Accelerated machine performance will make it difficult for human scientific institutions to recentralize science. If machines are performing well and it's easy to have machines performing science across the planet, this will disrupt the status quo and make it easy for good science to be done all over the world.

Now for governance. In some sense, people might accuse those who administer governance over science of being centralizers. In the future, governance of science will pertain less to the enforcement of norms and more to the execution of translations, syntheses, mediations, exchanges, and abstractions. Governance will be the transport of concepts across a compounding knowledge landscape.

Here's a provocation: machine-driven research will almost invariably decentralize science. Future human or transhuman efforts will not endeavour to re-centralize science, but instead will navigate the sprawling knowledge networks with sufficient reach and engineer abstractions. I would say that's the goal of human and transhuman sciences in the future.

This knowledge landscape is going to explode. Can we extract abstractions from it? Because that's our advantage at the moment: we think abstractly. Can we develop new technologies that allow us to retain some sort of abstraction alpha over the other transhuman, machine, or post-human institutions that come into play?

Abstraction is distinct from recentralization. Abstraction is the building of pathways between knowledge domains and the exchange of knowledge capital, not necessarily the centralization of capital and not necessarily the accretion of knowledge distribution towards some particular centre.

Finally, decentralized science must be engineered as a human mobile infrastructure capable of traversing a post-human epistemological base. It must also militate against machine recentralization of science through competitive abstraction that exploits machine decentralization.

I'm not an anti-humanist. Humans or transhumans must continue to perform science in the long term. I think automation will decentralize the sciences, and in the future, the challenge won't be a matter of centralizing or decentralizing science. Science will explode, so the questions will be: Can we keep up? Can we perform synthesis? Can we engineer abstractions? Can we navigate this knowledge landscape? Can we, as cyborg scientists, as philosophers with computers, still yield new insights in this new regime? So with that, I'll stop sharing.

Q and A:

Philipp: What exactly are cyborg scientists? Do you consider yourself a cyborg scientist?

James: If you can't imagine yourself doing the work that you do without significant assistance from computational technologies, then congratulations, you can pick up your wristband in the mail, you're a cyborg scientist!

Erik: Humans' ability to ask questions is mainly improved by answering them. So once we stop answering questions and only ask them, does our ability to ask questions worsen? Can we still direct automated science by asking good questions, or, if we stop answering questions altogether, will we become bad at asking them?

James: A key aspect of the practices and methods that must be developed involves delegating cognition and computation between humans and machines. If you learn to increase your confidence and interrogative abilities by asking questions and then seeing them answered, it's encouraging. You learn, "Oh, this question was too precise or too vague," and so you become better at asking questions.

And I agree with Erik that, essentially, if we don't find a good way to delegate and build a nice interface between human and machine scientists, we'll just have machines generating results. If we're not on the other side of that, asking questions and seeing them answered, then I think it's true: we'll just see science taking off on its own, and we won't really understand what's happening or how to ask questions, because we won't be up to date with contemporary affairs and we won't even have a good medium of communication anymore.

Maybe people will start doing things by pencil and paper again, but they'll know that they're hundreds of years behind if things compound in the future. So, developing this relationship between human and machine scientists is critical, and we have to continue it even as we both continue to change and progress.

John: A lot of these concepts you're discussing, like search, have been sought after for a very long time. Do you think we finally have the technology to make them happen, like distributed ledger technology, open state repositories, platforms, and also a bunch of the AI stuff that has been happening?

James: Well, it depends on the search that you have in mind. If it's a search that queries scientific content, I still think that it's going to require significant participation on the part of all of these scientific subcultures and communities. I was having a conversation with someone the other day. We were discussing the prospect of search in mathematics, and on the one hand, he was saying, "You know, we should have mathematical search." On the other hand, I was asking him about some recent work that he had done, and I said, "Where can I learn about this? Where can I look this up? What papers exist?" And he said, "Well, basically, this is all just in the lore."

It's what you pick up when you know people and chat with them: we talk about things, or there are some resources scattered here and there. So, part of the problem is that much of scientific knowledge is held by communities, even if some of it is in papers. If you are an outsider, like a software engineer or a start-up, and you want to curate a repository or library that reflects the state of the art in some field, I think that if you do it without the participation of anyone who is involved, it will probably not be particularly useful. In the end, the end users are people in that field anyway, so you will have to involve them. There will be all kinds of information or context that you won't access or know about if you don't work with them. Also, if you produce it without them and they think, "This is very naive. This is elementary," and they don't use it, then you have wasted your time as well.

That's one challenge, and the other challenge is this: there are many subfields, subcultures, and academic communities. If we're going to try to create these libraries and repositories, not just for science as a whole, with a capital "S," but even for large fields that contain many subfields and subcultures, it becomes something of an empire-building project: getting together all of these individual communities and asking, "Can we unify them toward some common project?" It's probably going to involve not just building good products but also convincing many scientists to participate. Otherwise, we'll have what we have now, which is this archipelago of various efforts. If a search engine has a very limited scope, what's the point? You want search to be broad so that you can find things through some common engine.