TMTPOST -- Artificial intelligence will only achieve true general intelligence when it can autonomously discover new scientific knowledge, according to Tianqiao Chen, founder of Shanda Group and the Tianqiao & Chrissy Chen Institute.

Speaking at the Symposium for AI Accelerated Science (AIAS 2025), held on October 27–28 in San Francisco, Chen introduced for the first time his concept of "Discoverative Intelligence," which he described as the next frontier of AI—an intelligence capable not merely of pattern recognition or optimization, but of genuine discovery. He outlined a roadmap for realizing this vision, positioning Discoverative Intelligence as the foundation for a new era of human–machine collaboration in science.

The two-day event convened nearly 30 of the world's leading scholars and industry figures, along with hundreds of researchers and students, to explore how AI is transforming the process of scientific inquiry and innovation.

Keynote speakers included 2025 Nobel Laureate and UC Berkeley Professor Omar Yaghi, 2024 Nobel Laureate and University of Washington Professor David Baker, 2020 Nobel Laureate and UC Berkeley Professor Jennifer Doudna, and Turing Award winner John Hennessy, former president of Stanford University and current chairman of Alphabet, Google's parent company.

Tianqiao Chen

The full text of Chen's speech is as follows:

True intelligence is "discoverative" intelligence

1. Human Evolution Has Never Stopped; It Has Merely Changed Form

Since the emergence of Homo sapiens, our physical bodies have remained largely unchanged. Some studies even suggest that the human brain has slightly decreased in size since the Paleolithic era. Yet this does not mean that evolution has come to an end. Rather, humanity has redirected the process—transforming intelligence, science, and technology into new, external organs of evolution.

We forged weapons in place of claws and fangs, wove clothing as a second skin, built cars to outrun cheetahs, and designed airplanes to soar beyond birds. Through these inventions, we have extended the limits of our species. Our average life expectancy has risen from just over twenty years to nearly eighty—a leap that, in biological terms, typically distinguishes one species from another.

In other words, human evolution has not ceased; it has simply shifted from the biological to the technological. By continually uncovering the unknown, we have externalized our capabilities and expanded our presence across time and space. Today, scientific discovery and technological innovation stand as the true engines of our ongoing evolution.

2. "Discoverative Intelligence" Is the True AGI

Artificial intelligence for science should not be viewed merely as another application of AI. Instead, it redefines the relationship between humanity and intelligence itself. The true value of AI does not lie in replacing human labor by being faster, cheaper, or more efficient. From the broader perspective of our species' evolution, AI for Science is, in essence, AI for Human Evolution. Its highest purpose is to help humanity uncover the unknown — to extend our capacity for discovery beyond the limits of biological cognition.

Today, many AI models claim to have "discovered" new structures, molecules, or even scientific theories. Yet most of these achievements remain at the level of outcomes. They identify new instances within existing frameworks — bound by known energy functions, statistical regularities, or the distribution of training data. Such results represent extrapolation within a predefined search space, not true discovery.
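To see why such results count as extrapolation rather than discovery, consider a minimal, hypothetical Python sketch (an illustration added here, not part of Chen's remarks): a system that optimizes over a predefined candidate space with a fixed scoring function can surface a "new" best candidate, yet everything it can ever find is bounded by the objective and the space specified in advance.

```python
# Illustrative sketch: search within a predefined space under a fixed,
# hand-written objective. The names (known_energy, search_space) are
# hypothetical; nothing here comes from an actual AI-for-science system.
import itertools

def known_energy(candidate):
    """A fixed, hand-specified objective, standing in for a known energy
    function or the statistics of a training distribution."""
    return sum((x - 0.5) ** 2 for x in candidate)

# A predefined candidate space: every point the system can ever "find".
search_space = itertools.product([0.0, 0.25, 0.5, 0.75, 1.0], repeat=3)

# The "discovery": merely the best point under the objective we wrote down.
best = min(search_space, key=known_energy)
print("best candidate under the fixed objective:", best)

# However large the space or clever the search, the result is extrapolation
# inside a frame chosen in advance: the loop never questions the objective,
# proposes a new variable, or states a falsifiable hypothesis about the world.
```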
Genuine discovery is something deeper: it is the ability to ask meaningful questions, not merely to answer them; to understand underlying principles, not simply to predict results. The kind of intelligence capable of such discovery must be able to construct falsifiable world models, propose hypotheses that can be tested and disproven, and continuously refine its own cognitive framework through experimentation, interaction, and self-reflection.

This form of intelligence — self-evolving, self-correcting, and fundamentally curious — represents true Artificial General Intelligence. We call it Discoverative Intelligence.

It is distinct from other definitions of intelligence:

It goes beyond imitation, since creation and discovery are the essence of true intelligence;

It is falsifiable, because discovery is an observable event, not a vague philosophical concept like "consciousness";

It redefines what AGI truly means—not as "replacing humans," but as "the evolution of humanity."

3. The Scale Path and the Structure Path: Two Roads Toward "Discoverative Intelligence"

With "Discoverative Intelligence" as the new standard, let's re-examine the two major schools of thought in today's AI development.

The first is the "scale path." It emphasizes that parameters are knowledge and that intelligence is a product of scale: as long as the model is large enough, the data abundant enough, and the computing power strong enough, intelligence will naturally emerge. This path has already achieved astonishing results in application, enabling AI to predict protein structures, generate chemical compounds, and even assist in scientific research. Without a doubt, it is the most successful engineering path in the history of AI.

Meanwhile, another path is quietly taking shape—the "structure path." Here, "structure" does not refer to model architecture, but to the "cognitive anatomy" of intelligence. The brain is a system that, through neural dynamics and on the basis of memory, causality, and motivation, forms a knowledge system that evolves continuously over time. These mechanisms endow intelligence with continuity, interpretability, and a sense of direction. The essence of scientific discovery is to deduce the future. This perspective holds that only an intelligence with temporal structure can remain effective outside the distribution it was trained on.

4. Mirror of the Brain: Analyzing Temporal Structure

So, what exactly does the "temporal structure of the brain" refer to?

It does not refer to any specific physical region of the brain, but rather to its fundamental operational paradigm for processing information.

The current AI "spatial structure" paradigm — the scaling path — is essentially instantaneous and static, fitting snapshots of the world with massive numbers of spatial parameters. In contrast, the brain's "temporal structure" paradigm is continuous and dynamic; its very purpose is to manage and predict information flowing through time.

To manage information within the flow of time, a system must possess five core capabilities. Together, these five form a complete closed loop — the temporal structure:
Neural Dynamics:
To exist in time — rather than merely compute instantaneously — a system must have a continuous energy foundation. The brain is a dynamically active energy system that operates continuously; even without external input, it can self-organize, self-activate, and self-correct. That is why our brains remain active even when we daydream. This constant energy flow is what makes intelligence truly alive.

Transformers, by contrast, are discrete, static computational graphs. Once inference ends, "thinking" stops entirely; every new inference starts from zero. There is no temporal continuity. Today's intelligence is computation, not existence. True wisdom must be alive, because the world is always changing — only systems that continuously update through time possess the capacity for scientific discovery.

Long-Term Memory Systems:
To accumulate past experience — rather than restart from zero every time — a system must have a plastic storage mechanism. Current large models have only short-term working memory; once the context window is cleared, the system resets entirely. Without long-term memory, there is no true learning. Long-term memory not only enables the accumulation of experience but, more importantly, allows selective forgetting, enabling efficient learning within limited parameters and the formation of hypotheses and theories.

Causal Reasoning Mechanisms:
To understand the sequence of events in time — that is, what causes what — a system must be capable of deriving principles. Current large models' understanding and reproduction of known information, including causal relations, remain confined to the statistical patterns of language rather than mechanistic reasoning. They perform perfectly within the training distribution but collapse when the environment changes, because they rely on co-occurrence patterns rather than the underlying structure of the world.

The significance of causal reasoning for scientific discovery lies precisely in reconstructing our understanding of the world under unknown conditions. It is the first step toward out-of-distribution intelligence — the starting point of world modeling.

World Models:
To predict future trajectories, a system must be able to internally simulate the world. While current AI models possess multimodal perception, they still lack a unified internal model capable of forming a coherent "projection of reality." The human brain, however, maintains an integrated world representation that combines perception, memory, prediction, and self-reflection.

This allows us to simulate the world within our minds, rehearse the future, and continuously perform hypothesis testing and causal prediction at the neural level. This is the essence of scientific thinking: running experiments about the future inside the brain.

Metacognition and Intrinsic Motivation Systems:
To manage the complex, cross-temporal processes above, the brain possesses metacognition — the ability to be aware of its own uncertainty, adjust reasoning paths, allocate attention, and select strategies. This "thinking about thinking" is the starting point of science and creativity.

Today's AI systems mostly rely on external commands and lack intrinsic drive; even reinforcement learning's reward functions are externally defined. When long-term memory and causal reasoning converge within a world model, the next question arises: how to generate machine metacognition — how to let curiosity and the desire to explore emerge spontaneously.

This marks the crucial leap from a passive executor to an active explorer, and it is the greatest challenge in the pursuit of living intelligence.

These five capabilities are not parallel directions but an integrated, continuous loop of intelligence — a system capable of self-evolution through time. We call this the Temporal Structure of the Brain.
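Read together, the five capabilities describe one loop rather than five separate modules. The following minimal, hypothetical Python sketch (again an illustration added here, not a system described in the speech) shows that shape: a persistent state, an accumulating memory, a revisable hypothesis that serves as a tiny causal world model, and an uncertainty signal that plays the role of metacognition and intrinsic motivation.

```python
# Hypothetical toy sketch (illustration only): the five capabilities as one loop.
import random

class TemporalAgent:
    def __init__(self):
        self.state = 0.0        # neural dynamics: a state that persists across steps
        self.memory = []        # long-term memory: experiences accumulate over episodes
        self.hypothesis = 0.0   # world model / causal hypothesis: y = hypothesis * x
        self.uncertainty = 1.0  # metacognition: the agent's estimate of its own ignorance

    def predict(self, x):
        # world model: simulate the expected outcome before acting
        return self.hypothesis * x

    def step(self, environment):
        # intrinsic motivation: explore more widely when uncertainty is high
        x = random.uniform(-1.0, 1.0) * (1.0 + self.uncertainty)
        y = environment(x)                       # run an "experiment" on the world
        error = y - self.predict(x)              # falsification test of the current hypothesis
        self.hypothesis += 0.1 * error * x       # revise the causal hypothesis from evidence
        self.uncertainty = 0.9 * self.uncertainty + 0.1 * abs(error)
        self.memory.append((x, y))               # store the experience for later use
        self.state = 0.8 * self.state + 0.2 * y  # dynamics: internal state evolves through time

def hidden_law(x):
    """The regularity the agent is trying to discover (unknown to the agent)."""
    return 3.0 * x

agent = TemporalAgent()
for _ in range(200):
    agent.step(hidden_law)
print("recovered coefficient:", round(agent.hypothesis, 2),
      "remaining uncertainty:", round(agent.uncertainty, 3))
```

The only point of the toy is the loop itself: act, observe, compare the outcome against an internal prediction, revise the hypothesis, and let the remaining uncertainty decide how boldly to explore next.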
5. Temporal Structure: The Entry Point for Young People

Precisely because the scaling approach has achieved remarkable success in recent years, we are now, for the first time, able to clearly see its ceiling: simply stacking data and computing power cannot break through the barriers to true understanding and discovery. This is the perfect moment for the return of structuralist thinking. We are standing at a historic turning point. What we need is not just more GPUs, but new theories, new algorithms, and new imagination. This calls for interdisciplinary thinking—a fusion of neuroscience, information theory, physics, and cognitive psychology. This is precisely where young people excel.

We are already prepared to support these young talents.

We have computing power. Regardless of the path chosen, computing power is indispensable. We will invest over one billion US dollars to build dedicated computing clusters, providing young scientists with an immediately usable experimental environment. This computing power is not for competing on scale, but rather for exploring structures: testing memory mechanisms, new causal architectures, or new hypotheses in neural dynamics.

We have offices. We have established R&D centers around the world, inviting young researchers from diverse disciplines to brainstorm and collaborate at the whiteboard. Currently, over 200 PhDs from world-renowned universities are working in our offices.

We are establishing benchmarks. We plan to launch new benchmarks that comprehensively measure neural dynamics, long-term memory, causal reasoning, world models, and metacognition, taking "discovery" as the standard for evaluating AGI. This will allow all scientists to collaborate and compete against shared state-of-the-art (SOTA) objectives.

We have established mechanisms tailored specifically for young people. We are building a PI Incubator to provide an independent research track for young scientists worldwide. PhD students and postdocs no longer have to wait until graduation to receive independent funding—they can establish labs under their own names on our platform and lead colleagues in exploring the future architecture of temporal intelligence on their own terms.

We believe that scale is the path of giants, while temporal structure is the opportunity of the young. Giants push boundaries with computing power; young people redefine intelligence with new structures. This is a kind of intelligence that does not simply repeat existing knowledge, but is able to propose its own hypotheses, test the world, and revise its understanding—this is the kind of intelligence that can truly discover.