Overview of (Some) Biotech-Based Adult Intelligence Amplification Approaches


Published on August 25, 2025 8:00 AM GMT

Definitions

By "biotech-based", I mean approaches relying on life science knowledge and tools; brain-computer interfaces, mind uploading, exocortexes, software, psychology, institutions, and things like that are not included. This is simply because I have thought about and researched the biotech-based methods deeply, and not the other methods, and I have something to say here. By "intelligence amplification", I mean "at least as good as existing human geniuses or people generally considered to be extremely smart". By "approaches", I mean figuring out which interventions to make rather than how to make them. So, the science of what must be done, not the technology of how it can be done. Obviously, the second question is equally important. I just did not research it.

Some approaches discussed here can work only for teenagers, so not strictly speaking "adults".

Intro

The need for radical human intelligence augmentation is obvious (AI alignment, many other reasons, and it is simply good). We may need to focus on adults because we may not have time. There are two important posts on LessWrong addressing the problem:

- Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible
- Overview of strong human intelligence amplification methods

How is this one different? I do not talk about gene editing (beyond adding some small points not discussed in the posts mentioned above, mostly related to knowledge extraction). Gene editing is of course a possible way to go, but the science and tech of gene editing are already well covered; there are companies attempting to do it, and it is mostly focused on embryos/babies (so there is not much to write about). Here, I want to focus on "everything else besides gene editing", which may make sense for several reasons: fewer regulatory and bioethics issues, other dimensions to improve, and the potential for technologically easier interventions.

I go deeper into omics, multiomics, and systems biology approaches. They are harder and less straightforward, but they are, in my opinion, not as hard as, for example, many of the approaches proposed in neurotech. More importantly, I am convinced that modern AIs are and will be accelerating progress here at a pace that allows us to achieve things that were considered intractable only recently.

Also, the major caveat with all the non-gene-editing approaches is that we do not have robust estimates of the size of the IQ improvements.

Genes

Decades of research have firmly established that individual differences in cognitive ability are substantially influenced by genetic factors. You can read about gene editing approaches in this article. Assuming that you have read it or know its content, I would add this: we can and should move beyond individual genes by applying a systems biology approach that analyzes the collective function of all genes associated with intelligence. Why is it important?
Systems biology can act on top of IQ-gene correlations, provide additional IQ improvements, and make the effects more predictable (hence, better for trials), for the following reasons:

- Network modeling finds high-leverage regulators (e.g., pathway bottlenecks in LTP/DA/NE modules) where small perturbations produce large cognitive effects.
- Pathway/interaction maps predict the holistic effects of interventions (e.g., plasticity + neuromodulatory gating) that outperform any single GWAS-nominated gene or their linear combinations.
- Single-cell and spatial maps localize levers to specific neuronal subtypes, brain areas, and brain states (arousal, attention), boosting effect sizes.
- Mechanistic models tell us when to intervene.
- Multi-omic signatures stratify people into mechanistic subtypes (e.g., dopamine-limited vs. plasticity-limited), enabling more individualized improvements.
- Seeing cross-talk with growth/cancer pathways lets you push plasticity while capping oncogenic risk (transient dosing, downstream nodes, peripheral proxies).
- It allows faster iteration via signatures, as expanded below.

We first capture the pattern of molecules in the brain that goes with higher performance and compare it to the pattern that goes with lower performance, ideally for the right cells and brain regions. We then search large libraries of known drug and gene effects to find options that push the "low" pattern toward the "high" pattern, choosing only those that can enter the brain and look reasonably safe, and often pairing complements like boosting learning capacity together with sharpening attention and motivation. Next, we test these choices with clear readouts—better learning, steadier focus, healthier sleep and consolidation—and keep only what actually helps. And then, we can adjust dose, timing, and combinations in a tight feedback loop so we learn quickly what works best for each person.

One tool to use here is pathway enrichment analysis, which transforms lists of GWAS hits into a map of interconnected biological processes, revealing the functional networks. For example, an investigation of 158 IQ-related genes, reconstructed into a core interaction pathway, found significant enrichment in several key areas. As expected, neuron-related functions were prominent, including "Neurotrophin signaling" and "Long-term potentiation (LTP)". Neurotrophins, such as Brain-Derived Neurotrophic Factor (BDNF), are essential for neuronal survival, growth, and plasticity. LTP is the primary molecular mechanism believed to underlie learning and memory at the synaptic level. The genetic association with these pathways confirms that variation in the molecular machinery of synaptic plasticity is a direct contributor to differences in cognitive ability.

More surprisingly, the analysis revealed enrichment in numerous signaling pathways often associated with cancer and cell growth, such as MAPK and PI3K signaling. This overlap is not coincidental; these pathways are fundamental regulators of cell growth, differentiation, and survival, processes that are co-opted in cancer but are essential for normal neurodevelopment.
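To make the enrichment step concrete, here is a minimal sketch of the statistic behind most pathway enrichment tools, a hypergeometric over-representation test. All counts except the 158-gene hit list are invented for illustration.

```python
# Over-representation test: given a hit list (e.g., IQ-associated genes)
# and a pathway gene set, how surprising is the observed overlap under
# random sampling from the genome? Counts are illustrative only.
from scipy.stats import hypergeom

N = 20000  # genes in the background universe (illustrative)
K = 350    # genes annotated to one pathway, e.g., LTP (illustrative)
n = 158    # size of the GWAS-nominated hit list, as in the study above
k = 12     # hits that fall inside the pathway (illustrative)

# P(overlap >= k) when n genes are drawn from N without replacement
p_value = hypergeom.sf(k - 1, N, K, n)
expected = n * K / N  # overlap expected by chance alone

print(f"observed overlap: {k}, expected by chance: {expected:.2f}")
print(f"enrichment p-value: {p_value:.2e}")
```

In practice this test is repeated across hundreds of pathways, so the p-values must be corrected for multiple comparisons (e.g., Benjamini-Hochberg) before a pathway is declared enriched.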
Bottom line: when we know regulatory networks well, we may shift the focus from individual effector genes (or even large sets of them) to the master regulatory elements that control the entire developmental and plasticity programs identified in the pathway analyses. The goal would not be to make a single, large genetic change, but to identify and modulate the "hub" transcription factors or key signaling proteins (e.g., within the Wnt or Neurotrophin pathways) that orchestrate the expression of hundreds of downstream genes. Such an intervention would aim to subtly bias an entire biological program—for instance, toward more robust dendritic arborization or more efficient synaptic plasticity—rather than altering a single component.

Cells

The genetic blueprint finds its physical expression in the cellular architecture of the brain, which can be affected more directly.

The pyramidal neuron as a core target

The pyramidal neuron, the principal excitatory neuron in the cerebral cortex, is the fundamental computational unit of higher cognition. Research has forged a remarkably strong and direct link between the microscopic anatomy of these cells and individual differences in intelligence. Studies combining post-mortem tissue analysis with pre-operative IQ scores and MRI scans have revealed a striking chain of associations: higher IQ scores correlate with a thicker temporal cortex, and this increased thickness is, in turn, associated with pyramidal neurons that possess larger and more complex dendritic trees.

Specifically, neurons from individuals with higher IQs exhibit greater total dendritic length and a higher number of dendritic branches. The magnitude of this effect is substantial; differences in dendritic morphology alone explained approximately 25% of the variance in IQ scores among the subjects (corresponding to a correlation of roughly r ≈ 0.5). This is one of the largest effect sizes ever reported for a single biological correlate of intelligence.

This structural difference has a direct physiological consequence. Electrophysiological recordings from these same neurons revealed that cells from higher-IQ individuals can sustain faster firing rates, particularly during periods of high activity. The proposed mechanism is twofold. First, larger and more complex dendrites provide a vastly increased surface area for receiving synaptic inputs. A neuron with a more elaborate dendritic arbor can integrate information from a greater number of presynaptic partners, which enhances its computational capacity. Second, computational modeling indicates that the specific morphology of these larger dendritic trees enables faster generation and propagation of action potentials, quite literally speeding up the rate of information transmission in the brain.

The "what to do" is to identify and modulate the molecular pathways that govern dendritic arborization and synapse formation during development and throughout life. GWAS has already pointed toward some of these pathways, such as Wnt signaling and genes like ARPP21. Research in developmental neuroscience has identified numerous other molecular levers, including neurotrophins like BDNF, cell adhesion molecules like cadherins, and a host of intracellular kinases such as CaMKII and Cdk5. The ultimate goal of an intervention at this level would be to build neurons with a fundamentally higher capacity for information integration and faster processing speeds.

Glia

Glial cells (astrocytes and oligodendrocytes) are active participants in information processing, acting as dynamic managers of the neural circuits. Any radical enhancement strategy must therefore consider targeting the entire neuro-glial unit.

Astrocytes

Astrocytes are associated with synapses, forming a "tripartite synapse" along with the pre- and post-synaptic neuronal terminals.
This anatomical arrangement allows them to actively listen to and modulate synaptic activity. They achieve this through several mechanisms: by controlling the concentration of neurotransmitters like glutamate in the synaptic cleft, by releasing their own signaling molecules (gliotransmitters) such as D-serine and ATP, and by providing on-demand metabolic fuel to active neurons in the form of lactate.

Crucially, astrocytes are not just involved in the moment-to-moment regulation of synaptic transmission; they are essential for synaptic plasticity. Recent studies have identified a specific subpopulation of "learning-associated astrocytes" (LAAs) that are activated in the hippocampus during memory formation.

What we could do is identify the molecular pathways that mediate astrocyte activation and gliotransmission, with the goal of making synapses more plastic and learning more efficient.

Oligodendrocytes

The integrity of the brain's white matter—the myelinated axons that form the long-range communication cables—is a robust correlate of cognitive ability and processing speed. Myelin, produced by oligodendrocytes, acts as an electrical insulator that significantly increases the speed at which action potentials travel along an axon. For a long time, myelination was viewed as a largely static process completed during development. However, a new paradigm of "myelin plasticity", or "adaptive myelination", has emerged.

This research shows that oligodendrocyte precursor cells (OPCs) persist throughout the adult brain and can differentiate into new, myelinating oligodendrocytes in response to experience. Learning new skills, from motor tasks like running on a complex wheel to cognitive tasks, induces the formation of new myelin sheaths on previously unmyelinated or lightly myelinated axons. The functional consequence of adaptive myelination is the fine-tuning of neural circuit timing. By altering the thickness and spacing of myelin sheaths, the brain can precisely adjust the conduction velocity of axons, thereby ensuring that signals arriving from different brain regions at a downstream neuron are synchronized within the narrow time window required for integration. This temporal synchrony is essential for all complex cognitive functions. Therefore, a key target for enhancement is the process of adaptive myelination: modulating the signaling pathways (e.g., BDNF-TrkB signaling in OPCs) that link neuronal activity to OPC differentiation and myelination.
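The timing argument above reduces to simple arithmetic, so here is a toy calculation of my own (all numbers invented) showing how raising conduction velocity on a long axon can pull two converging inputs inside a shared integration window.

```python
# Two axons of different lengths converge on one downstream neuron that
# integrates inputs only if they arrive within a short time window.
axon_length_mm = {"local": 10.0, "long_range": 80.0}
velocity_mm_per_ms = {"local": 4.0, "long_range": 4.0}  # lightly myelinated
window_ms = 2.0  # integration window of the downstream neuron

def report(label, v):
    t = {k: axon_length_mm[k] / v[k] for k in axon_length_mm}
    spread = max(t.values()) - min(t.values())
    ok = "synchronous" if spread <= window_ms else "missed window"
    print(f"{label}: arrivals (ms) = {t}, spread = {spread:.1f} -> {ok}")
    return t

t = report("before myelination", velocity_mm_per_ms)

# "Adaptive myelination": thicker myelin raises the long axon's velocity
# just enough for its signal to land inside the integration window.
velocity_mm_per_ms["long_range"] = axon_length_mm["long_range"] / (t["local"] + window_ms)
report("after myelination", velocity_mm_per_ms)
```

The point is only qualitative: conduction delay scales as length/velocity, so selectively myelinating long projections is a natural lever for synchrony.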
Brain volume and cortical thickness

At the most macroscopic level, the cumulative effect of these cellular properties is reflected in overall brain volume. A large body of neuroimaging research has established a consistent, albeit moderate, positive correlation between total brain volume and IQ, with correlation coefficients typically falling in the range of r ≈ 0.24-0.33. This relationship appears to be primarily with the general factor, g, rather than with more specific cognitive abilities.

However, the causal nature of this correlation is complex and debated. Within-family studies, which compare siblings to control for shared genetic and environmental backgrounds, have found that this correlation is substantially reduced or even disappears entirely. This suggests that much of the brain size-IQ correlation observed in the general population may be driven by confounding factors that vary between families rather than a direct, causal link where a larger brain mechanically produces higher intelligence.

Therefore, simply increasing overall brain volume is a naive and likely impractical target for enhancement. A more refined and biologically meaningful target is cortical thickness in specific, cognitively relevant brain regions. Higher intelligence is associated with increased cortical thickness in the frontal and parietal lobes. These regions form the core of the Parieto-Frontal Integration Theory (P-FIT) network, a distributed set of brain areas consistently implicated in reasoning and problem-solving tasks. Furthermore, it is not just the static thickness but the developmental trajectory of cortical thickness that is associated with intelligence. More intelligent children exhibit a more dynamic cortex, with a prolonged period of thickening followed by more vigorous thinning during adolescence, primarily in frontal regions. This suggests that intelligence is related to a more extended and robust period of synaptic and dendritic development, followed by more efficient pruning and circuit optimization. The macro-scale target, therefore, is not size for its own sake, but the optimization of developmental trajectories that result in a more efficiently wired frontal-parietal network.

Another thing to try is inducing new neurons (neurogenesis) or new connections in the adult brain. Adults do generate new neurons in a couple of regions (like the hippocampus), but the rate is low. Boosting neurogenesis – via growth factors or by reprogramming other cells into neurons – could potentially expand the brain's memory capacity or adaptability. For example, scientists have found ways to convert resident astrocytes into functional neurons in vivo using gene therapy (e.g., delivering the transcription factor NeuroD1 or Ascl1). In mouse models of injury or Alzheimer's, this in situ reprogramming replenishes lost neurons. In an enhancement scenario, one might use a similar technique in a healthy brain to add extra neural circuitry for processing. However, more neurons are not automatically useful – they would need to integrate correctly into existing networks. Uncontrolled neurogenesis could even be harmful (risk of seizures or aberrant connections).

Cellular and molecular targets for modulating neuronal and glial function:

| Cellular target | Specific process | Key molecular levers (receptors, kinases, TFs) | Desired outcome |
| --- | --- | --- | --- |
| Pyramidal neuron | Dendritic arborization & synaptogenesis | BDNF/TrkB, Wnt/Frizzled, Reelin, Semaphorins, Ephrins, Cadherins, CaMKII, Cdk5, CREB, ARPP21 | Increased dendritic complexity, greater capacity for synaptic integration, faster information processing |
| Astrocyte | Synaptic plasticity & gliotransmission | Glutamate transporters (GLAST), NMDA receptors, Ephrins, purinergic receptors (for ATP), cytokine receptors | Enhanced regulation of synaptic strength (LTP/LTD), more efficient metabolic support for active neurons, improved memory formation |
| Oligodendrocyte | Adaptive myelination & circuit timing | BDNF/TrkB in OPCs, Neuregulin/ErbB3, activity-dependent signaling pathways | Increased activity-dependent myelination of relevant circuits, optimized axonal conduction velocity, improved temporal synchrony of distributed networks |

The primary challenge in targeting these cellular substrates is their immense regulatory complexity.
The molecular pathways controlling dendritic growth, glial activation, and myelination are not simple linear cascades but vast, interconnected networks with numerous feedback loops and extensive cross-talk. For example, Wnt signaling, BDNF signaling, and Notch signaling all interact to shape the final form of a dendritic arbor.

A naive attempt to promote growth by, for example, globally increasing the levels of a neurotrophic factor could have disastrous consequences. Uncontrolled proliferation or growth could lead to aberrant connectivity, neuronal hyperexcitability and seizures, or even tumorigenesis. Therefore, the most plausible targets for intervention are not the growth factors themselves, but the molecular hubs that regulate these processes in an activity-dependent manner. The goal should not be to induce growth indiscriminately, but to enhance the brain's innate capacity for plasticity. This would involve making dendritic growth, synaptic strengthening, and adaptive myelination more sensitive and responsive to the patterns of neural activity generated by learning and experience.

Bioenergy

The brain's capacity for dynamic energy allocation is another limiting factor for cognitive performance.

Cerebral metabolism and the neural efficiency hypothesis

It is well known that the brain's energy demands are disproportionately high. While it accounts for only about 2% of total body weight, it consumes approximately 20% of the body's resting metabolic energy, derived almost exclusively from the oxidation of glucose. An influential early theory linking this energy consumption to intelligence is the "neural efficiency hypothesis". This hypothesis, supported by early neuroimaging studies, proposed that individuals with higher intelligence exhibit lower brain glucose metabolism when performing tasks of moderate difficulty. The interpretation was that more intelligent brains are more efficient, requiring less energy to achieve the same or better cognitive output, much like a well-designed engine consumes less fuel.

However, this relationship is more complex than initially thought and is highly dependent on task difficulty and context. More recent research has revealed a more fundamental principle: the brain's total energy supply is strictly limited and does not significantly increase even when cognitive demand rises. To cope with challenging tasks, the brain engages in a process of active and dynamic energy allocation. Using advanced optical imaging to measure cellular metabolism, studies have shown that as a task becomes more mentally demanding, the brain increases energy metabolism in the specific neural circuits required for the task. Critically, this increase is directly mirrored by a decrease in metabolic activity in brain regions processing information outside the focus of attention.

This finding demonstrates a "zero-sum" or "limited resource" model of brain energy. The brain prioritizes, shunting its finite energy budget to the most critical computations at any given moment. This explains common psychological phenomena like inattentional blindness, where we fail to perceive even salient stimuli when our attention is intensely focused elsewhere. From a biological perspective, the neurons processing the unattended information have had their energy supply down-regulated to support the primary task. This implies that a key limitation on peak cognitive performance is not the total capacity for energy production, but rather the ability to manage and distribute this finite energy budget with maximum speed and efficiency.
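As a deliberately crude illustration of this limited-resource picture (my own toy model, not taken from the literature), the sketch below splits a fixed energy budget across circuits via a softmax over demand; turning up the attentional gain on one circuit necessarily drains the others.

```python
# Zero-sum energy allocation toy: the total budget is fixed, so boosting
# the attended circuit's share comes at the expense of unattended ones.
import numpy as np

TOTAL_BUDGET = 1.0
demand = np.array([1.0, 1.0, 1.0, 1.0])  # four circuits, equal baseline demand

def allocate(demand, attn_gain):
    d = demand.copy()
    d[0] *= attn_gain                      # attention raises circuit 0's bid
    weights = np.exp(d) / np.exp(d).sum()  # softmax: shares sum to one
    return TOTAL_BUDGET * weights

for gain in [1.0, 2.0, 4.0]:
    a = allocate(demand, gain)
    print(f"gain={gain:.0f}: attended={a[0]:.2f}, "
          f"each unattended={a[1]:.2f}, total={a.sum():.2f}")
```

The total never changes; only its distribution does, which is the biological constraint claimed above and, per the hypothesis, the thing worth optimizing.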
Key metabolic pathways for enhancement

The brain has evolved specialized metabolic pathways to manage these dynamic energy demands, which may represent prime targets for intervention.

The astrocyte-neuron lactate shuttle

The classical view of brain metabolism, with glucose being delivered directly to neurons, is an oversimplification. A more accurate model involves a tight metabolic coupling between astrocytes and neurons, known as the Astrocyte-Neuron Lactate Shuttle (ANLS). In this system, cell types are specialized: astrocytes are primarily glycolytic, while neurons are primarily oxidative. When neuronal activity increases, glutamate released at the synapse is taken up by surrounding astrocytes. This glutamate uptake stimulates glycolysis within the astrocyte, causing it to break down glucose into lactate. This lactate is then "shuttled" out of the astrocyte and into the active neuron, where it is rapidly converted back to pyruvate and fed into the highly efficient mitochondrial TCA cycle and oxidative phosphorylation pathways.

The ANLS is a system for rapid, on-demand energy delivery that is spatially and temporally coupled to synaptic activity. It allows the brain to bypass slower steps in neuronal glycolysis and deliver a ready-to-use, high-energy substrate directly to the synapses that need it most. Enhancing the efficiency of this shuttle is a clear target. The "what to do" involves upregulating the key transporters that mediate this process: the monocarboxylate transporters MCT1 and MCT4 on astrocytes and MCT2 on neurons, which are responsible for moving lactate between the cells.

Ketogenic pathways

Under conditions of glucose limitation, such as during fasting or on a very low-carbohydrate ketogenic diet, the liver produces ketone bodies (acetoacetate and β-hydroxybutyrate) from fatty acids. These ketone bodies can cross the blood-brain barrier and serve as a potent alternative fuel for the brain. This metabolic switch to ketosis has significant implications for cognitive function.

Studies in healthy adults have shown that a ketogenic diet can increase global cerebral blood flow and boost levels of Brain-Derived Neurotrophic Factor (BDNF), a key molecule for synaptic plasticity and neuronal health. Subjectively, periods of ketosis, such as those induced by intermittent fasting, are often associated with reports of enhanced mental clarity, reduced "brain fog", and improved focus. Mechanistically, ketones are a more metabolically efficient fuel than glucose, yielding more ATP per unit of oxygen consumed. This suggests that promoting a state of mild, controlled ketosis or enhancing the brain's ability to utilize ketones could provide a more stable and efficient energy supply, thereby supporting higher cognitive performance. The biological targets would again be the MCT transporters responsible for ketone uptake into brain cells, as well as the enzymes involved in ketone metabolism within neurons.

The mitochondrial engine: a contested hypothesis

Given that mitochondria are the powerhouses of the cell, responsible for generating the vast majority of ATP through oxidative phosphorylation, it is logical to hypothesize that their efficiency could be a fundamental determinant of intelligence.
This is the core of the "mitochondrial theory of g", which proposes that individual differences in the efficiency of mitochondrial functioning form the most basic biological mechanism underlying general intelligence. In this view, individuals with more efficient mitochondria produce more ATP with less resultant oxidative stress. This superior bioenergetic capacity would support all aspects of brain function, from the initial construction of neural circuits during development to their maintenance and high-energy operation in adulthood. This theory is attractive because it provides a single, elegant mechanism that could explain the well-documented correlations between intelligence, physical health, and the rate of aging.

However, this theory faces a significant challenge from genetic evidence. If mitochondrial efficiency were the primary driver of variation in g, one would expect GWAS of intelligence to identify an enrichment of genes involved in mitochondrial function. This is not what the data show. The genes robustly associated with intelligence are overwhelmingly involved in processes like neurogenesis, dendritic development, and synaptic function, and are expressed specifically in brain tissues and neuronal cell types. There is no significant enrichment for genes encoding mitochondrial proteins. This finding suggests that while a baseline level of mitochondrial function is obviously essential for any brain activity, the variation in intelligence among the healthy population is not primarily explained by variation in mitochondrial efficiency.

A synthesis hypothesis: the primary bioenergetic targets for enhancement are likely not the mitochondrial engines themselves, but rather the regulatory and logistical systems that control their fuel supply. The genetic evidence points to the importance of building a well-structured brain; the metabolic evidence points to the importance of fueling that brain effectively in response to dynamic cognitive demands.

Bottlenecks and plausibility

A major challenge for metabolic interventions is that brain metabolism is deeply integrated with whole-body physiology, and crude manipulations can have widespread, unintended side effects. A more rigorous and plausible approach is to identify and target the rate-limiting steps within the key brain-specific metabolic pathways. Every metabolic pathway has one or more enzymatic reactions that are inherently slower than others and thus act as bottlenecks, controlling the overall flux through the pathway. These rate-limiting enzymes and transporters represent precise molecular "valves" that could be tuned to optimize energy flow.

For example, the enzyme hexokinase, which catalyzes the first step of glycolysis, and the pyruvate dehydrogenase (PDH) complex, which gates the entry of pyruvate into the mitochondria, are key control points. Modulating their activity could offer a way to finely tune the balance between glycolysis and oxidative phosphorylation. Similarly, targeting the MCTs that govern the ANLS offers a specific lever to enhance the brain's on-demand fueling system. This strategy moves away from the blunt instrument of systemic metabolic change (like a strict diet) or non-specific nootropics, and toward targeted modulation of the specific molecular machinery that governs the brain's energy economy.
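A toy simulation makes the rate-limiting logic tangible. In the sketch below (my own, with invented parameters), a two-enzyme chain is run to its limiting behavior: doubling the non-limiting enzyme barely changes pathway flux, while doubling the bottleneck enzyme nearly doubles it.

```python
# Flux control in a two-step pathway S -> I -> P with Michaelis-Menten
# kinetics. S (supply) is clamped; I is the intermediate metabolite.
def pathway_flux(vmax1, vmax2, S=10.0, km1=1.0, km2=1.0, dt=0.001, steps=200_000):
    I = 0.0
    for _ in range(steps):
        v1 = vmax1 * S / (km1 + S)  # step 1: production of I
        v2 = vmax2 * I / (km2 + I)  # step 2: consumption of I (pathway output)
        I += (v1 - v2) * dt         # I piles up behind a saturated bottleneck
    return v2

base = pathway_flux(vmax1=10.0, vmax2=2.0)  # enzyme 2 is rate-limiting here
e1_doubled = pathway_flux(vmax1=20.0, vmax2=2.0)
e2_doubled = pathway_flux(vmax1=10.0, vmax2=4.0)
print(f"baseline flux:           {base:.2f}")
print(f"2x non-limiting enzyme:  {e1_doubled:.2f} (almost no change)")
print(f"2x rate-limiting enzyme: {e2_doubled:.2f} (nearly doubles)")
```

This is exactly why hexokinase, PDH, and the MCTs are interesting: in a chain, only added capacity at the bottleneck moves the overall flux.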
Critical nodes in cerebral bioenergetic and metabolic pathways:

| Metabolic pathway | Critical node (enzyme/transporter) | Function | Rationale for targeting |
| --- | --- | --- | --- |
| Glucose uptake & glycolysis | GLUT1, GLUT3 | Glucose transporters | Gate entry of the primary fuel into the brain (GLUT1) and neurons (GLUT3); levels are reduced in neurodegenerative disease |
| Glucose uptake & glycolysis | Hexokinase (HK) | Catalyzes the first step of glycolysis | A primary rate-limiting step controlling overall glycolytic flux |
| Pyruvate metabolism | Pyruvate dehydrogenase (PDH) complex | Converts pyruvate to acetyl-CoA | A critical gatekeeper controlling entry of glycolytic products into the mitochondrial TCA cycle |
| Astrocyte-neuron lactate shuttle | MCT1, MCT2, MCT4 | Monocarboxylate transporters | Mediate the transport of lactate (and ketones) between astrocytes, blood, and neurons; essential for on-demand fueling |
| Ketone body metabolism | SCOT (succinyl-CoA:3-oxoacid CoA transferase) | Key enzyme in ketone body utilization | Rate-limiting step for using ketones as an alternative, efficient fuel source in neurons |

More on metabolism

Another metabolic angle is reducing neural noise and improving signal clarity. The brain's electrical signals can be "noisy", which may limit effective working memory and processing speed. It may be the case that drugs which improve the signal-to-noise ratio in cortical networks could enhance cognitive performance. One example is low-dose l-DOPA or dopamine agonists that optimize dopamine levels in the prefrontal cortex – too little dopamine reduces working memory, but too much causes noise. Finding the sweet spot pharmacologically can improve tasks like planning and working memory. This is not radical intelligence enhancement, but it is a proof of concept.

Hormonal modulation also shows some promise. An interesting case is estrogen: in females, estrogen levels correlate with verbal memory and fine motor skills. However, the complexity of hormone effects makes it tricky to gain pure cognitive benefits without other systemic impacts.

Neurotransmitter tweaking is already a major avenue for treating cognitive symptoms in disorders (e.g., donepezil boosting acetylcholine for Alzheimer's disease, or NMDA partial agonists for memory). In healthy brains, these may also provide some boost. For example, ampakines (AMPA receptor modulators) were studied for enhancing memory and attention – they showed improved synaptic plasticity in animals and slight memory benefits in early human trials. However, none have yielded dramatic improvements, and some have caused excessive excitability.

A systems-level synthesis

A radical enhancement strategy may require a shift from tinkering with individual parts to understanding and modulating the logic of the entire system. We can leverage high-resolution, single-cell multiomics data to build predictive models of the networks that produce cognitive function.

The application of systems biology to intelligence enhancement involves several key steps. First, it requires the integration of disparate, high-throughput data sources—genomics (DNA variants), transcriptomics (gene expression), proteomics (protein levels and interactions), and metabolomics (metabolite concentrations)—to reconstruct the biological pathways relevant to cognition.
By mapping these interactions, it becomes possible to move beyond a simple list of "intelligence genes" to a comprehensive circuit diagram of the underlying molecular machinery. This network view allows for the identification of critical control points, such as "hub" genes that interact with many other proteins or "bottleneck" reactions in metabolic pathways that exert disproportionate control over the system's output. These nodes represent high-leverage targets where a small perturbation might produce a large, coordinated effect on the entire network.

The power of single-cell multiomics

A fundamental challenge in applying systems biology to the brain is its staggering cellular heterogeneity. The brain is not a homogenous soup of cells; it is a complex mosaic of hundreds of distinct cell types, including numerous subtypes of excitatory and inhibitory neurons, astrocytes, oligodendrocytes, microglia, and vascular cells, each with a unique molecular profile and functional role. Traditional "bulk" tissue analysis, which averages molecular data across millions of these different cells, obscures the most critical information by masking the cell-type-specific signals that drive cognitive processes.

The solution to this problem lies in the technology of single-cell and single-nucleus RNA sequencing (scRNA-seq and snRNA-seq). These methods allow researchers to capture the full transcriptome of thousands of individual cells simultaneously, creating a high-resolution "atlas" of the brain's cellular composition. By applying this technology to key cognitive regions like the dorsolateral prefrontal cortex (DLPFC) across the full human lifespan, we can create a detailed map of how the brain's cellular landscape develops, matures, and ages.

There are natural implications for identifying enhancement targets. By comparing these single-cell atlases from large cohorts of individuals with varying cognitive abilities, it becomes possible to ask highly specific questions. For example, is high cognitive function in late life associated with the preservation of a specific subtype of somatostatin inhibitory neuron? Is resilience to Alzheimer's pathology linked to a particular gene expression program in microglia or oligodendrocytes? Recent work has already begun to answer such questions, identifying, for example, two distinct groups of inhibitory neurons that are more abundant in individuals with preserved high cognitive function, and uncovering a coordinated increase in DNA damage response factors in the excitatory neurons of those with Alzheimer's disease. This level of precision, which pinpoints specific molecular changes within specific cell populations, moves the search for targets from the level of the whole brain down to the exact cellular context where the intervention is needed, and it strongly increases the potential for efficacy and reduces the risk of off-target effects.

Identifying critical network nodes and control points

By reconstructing the interaction networks of genes and proteins expressed within a specific, intelligence-relevant cell type (e.g., a layer 3 pyramidal neuron in the DLPFC), we can identify the nodes that are most central to the network's function. For example, a network reconstruction identified the protein kinase PRKACA and the transcription factor CREB1 as major hubs. These proteins sit at the intersection of numerous signaling pathways, including those for synaptic plasticity (LTP), neuromodulation (dopamine signaling), and neurodevelopment (GnRH signaling). Modulating the activity of a single hub protein like CREB1, which controls the expression of a whole battery of downstream genes involved in memory formation, could theoretically have a much more profound and coordinated effect on cognitive function than attempting to alter each of those downstream genes individually.
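For concreteness, here is a minimal sketch of what "finding hubs" means computationally, on an invented toy edge list (a real analysis would use curated interactome databases filtered to the cell type of interest).

```python
# Rank nodes of a toy interaction network by two standard centrality
# measures. The edges below are illustrative, not a curated interactome.
import networkx as nx

edges = [
    ("CREB1", "BDNF"), ("CREB1", "ARC"), ("CREB1", "FOS"), ("CREB1", "PRKACA"),
    ("PRKACA", "GRIN2B"), ("PRKACA", "DRD1"),
    ("BDNF", "NTRK2"), ("NTRK2", "CAMK2A"), ("CAMK2A", "GRIN2B"),
    ("DRD1", "PPP1R1B"), ("FOS", "JUN"),
]
G = nx.Graph(edges)

degree = nx.degree_centrality(G)            # fraction of direct partners
betweenness = nx.betweenness_centrality(G)  # how often a node bridges paths

for name, scores in [("degree", degree), ("betweenness", betweenness)]:
    top = sorted(scores, key=scores.get, reverse=True)[:3]
    print(name, "->", [(gene, round(scores[gene], 2)) for gene in top])
```

On real cell-type-resolved networks, this is the kind of computation (plus flux-based analogues for metabolic pathways) that flags nodes like PRKACA and CREB1 as hubs.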
Integrating multiple "omics" layers strengthens these network models. AI algorithms can be trained on multi-omics datasets (e.g., combining genomic SNP data with transcriptomic and proteomic data from the same individuals) to build models that predict cognitive outcomes with increasing accuracy. These models can then be interrogated to identify the molecular features—across all omics layers—that are most predictive of the outcome. This approach can reveal, for example, how a specific genetic variant leads to a change in the expression of a particular gene, which in turn alters the level of a key protein, ultimately impacting a measurable cognitive trait. The main idea here is to identify and target the nodes in this multi-scale causal network that have the highest degree of control over the desired cognitive phenotype.

Bottlenecks and plausibility

The primary bottleneck for a full systems biology approach is the sheer, almost incomprehensible, complexity of the brain's biological networks and the current incompleteness of our data and models. Building a truly predictive, dynamic model of even a single synapse is a monumental task, let alone an entire brain region responsible for higher cognition. However, I bet that AI can help a lot here.

Inferring novel targets with AI

Here, I talk about the most speculative, yet potentially most powerful, approach: using transformer-based AI models, akin to those used in LLMs, to learn the fundamental "language" of biology from massive multiomics datasets. People are already doing it and getting extremely promising results. See, for example, here, here and here. I understand it is a cliché, but the statement "the landscape is rapidly evolving" fully fits what is going on. Overall, there are several tens of foundation models for biology, and new, larger, and more multimodal ones are coming almost every month. This is a huge topic; I am publishing a book on it right now, but let me give some key points in this piece.

Transformer architectures for biological discovery

People are applying transformers to biological data: not academic texts in biology, but raw biological data. Models like DNABERT, trained on entire genomes, can predict the functional effect of a genetic variant by understanding its "context" within the genomic sentence. Models like ProtTrans, trained on billions of protein sequences, learn the principles of protein folding and function, enabling predictions of protein structure from sequence alone. Models like Geneformer, trained on millions of single-cell transcriptomes, can infer gene regulatory networks by learning which genes' expressions tend to be co-regulated across different cellular states.

Integrating multiomics data with transformers

The true power of this approach for understanding a complex trait like intelligence lies in its ability to integrate multiple, disparate data modalities, and to do it at scale. A major hurdle for traditional machine learning has been scalability and the effective fusion of diverse data types such as genomics, transcriptomics, proteomics, metabolomics, and neuroimaging. Transformer models are uniquely suited to these challenges. These advanced architectures can learn to "attend" to features across different data types, identifying the most salient relationships.
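For intuition, here is a deliberately minimal sketch of such fusion in PyTorch; every dimension, module name, and the regression head are placeholders of mine, not any published architecture.

```python
# Each omics layer is projected into a shared embedding space, treated as
# one token per modality, and mixed by self-attention before predicting a
# phenotype (e.g., a cognitive score). Purely illustrative.
import torch
import torch.nn as nn

class OmicsFusion(nn.Module):
    def __init__(self, dims, d_model=64):
        super().__init__()
        self.projs = nn.ModuleDict({k: nn.Linear(v, d_model) for k, v in dims.items()})
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)

    def forward(self, omics):
        # one token per modality; attention learns cross-modality structure
        tokens = torch.stack([self.projs[k](omics[k]) for k in self.projs], dim=1)
        fused = self.encoder(tokens)           # (batch, n_modalities, d_model)
        return self.head(fused.mean(dim=1))    # pool modalities, predict phenotype

dims = {"genomics": 500, "transcriptomics": 2000, "proteomics": 800}
model = OmicsFusion(dims)
batch = {k: torch.randn(8, v) for k, v in dims.items()}
print(model(batch).shape)  # torch.Size([8, 1])
```

Real systems differ in every detail (per-gene tokens rather than per-modality tokens, pathway-structured embeddings as in Pathformer, pretraining objectives), but attention-based fusion is the common core.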
For example, a model like MIGTrans can attentively integrate genomic data with structural and functional MRI data to capture how genetic variants relate to neuroanatomical abnormalities. More sophisticated models, such as Pathformer, go a step further by embedding multi-omics data into a unified vector space that is structured by prior knowledge of biological pathways. This allows the model to learn biologically meaningful relationships: how a specific SNP (genomics) influences the expression of a gene (transcriptomics), which in turn alters the level of a crucial enzyme (proteomics), leading to a change in a metabolite (metabolomics), and ultimately contributing to a clinical outcome like cognitive decline (phenotype).

This approach directly addresses the central challenge of systems biology. The vision is to train a massive, multi-modal transformer model on single-cell multiomics data collected from thousands of human subjects who have also undergone deep cognitive phenotyping. Such a model would, in principle, learn the intricate, non-linear function that maps the complete molecular state of an individual's brain to their cognitive output. It would become a learned, computational representation of the genotype-to-phenotype map for intelligence, and that is what we eventually need!

In silico prediction and hypothesis generation

The utility of such a trained model would extend far beyond simple prediction. While it could be used to predict an individual's cognitive trajectory from their molecular profile, its value for enhancement lies in its potential as a platform for in silico experimentation and discovery.

Applied to neuroscience, one could use a model like this to ask complex questions – e.g., "What gene expression changes distinguish highly cognitively effective neurons from typical neurons?" or "Which regulatory genes, if activated, would upregulate an entire suite of pro-plasticity proteins in frontal cortex neurons?". The model might identify a combination of transcription factors or signals that are currently limiting cognitive performance. For instance, an AI might recognize that a certain microRNA is consistently keeping a plasticity-related pathway suppressed in adults, and that removing that suppression could reopen plasticity.

Target discovery through virtual perturbation

Researchers could use the model to perform millions of computational experiments that would be impossible in a wet lab. They could simulate the effect of up- or down-regulating any gene in the genome, within any specific cell type, and observe the model's prediction for the resulting cascade of changes across the entire multiomic network and the final impact on the cognitive phenotype. This would allow for the rapid, automated screening of thousands of potential genetic targets to identify those predicted to have the largest positive effect on cognition with the fewest negative side effects. The model would act as a hypothesis generation engine and point experimental biologists toward novel, non-obvious targets with the highest probability of success.
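The screening loop itself is conceptually simple; the hard part is the trained model. Here is a minimal sketch with a stand-in predictor in place of a real multiomics transformer (every name and number is hypothetical).

```python
# In silico perturbation screen: nudge each gene's expression, ask the
# model how the predicted phenotype changes, and rank the genes.
import numpy as np

rng = np.random.default_rng(0)
n_genes = 1000
gene_names = [f"gene_{i}" for i in range(n_genes)]
weights = rng.normal(0.0, 1.0, n_genes)  # stand-in for learned structure

def predict_phenotype(expression):
    # placeholder for a trained model's forward pass; any predictor slots in
    return float(np.tanh(expression @ weights / np.sqrt(n_genes)))

baseline = rng.normal(0.0, 1.0, n_genes)  # one cell's expression state
baseline_score = predict_phenotype(baseline)

effects = {}
for i, name in enumerate(gene_names):
    perturbed = baseline.copy()
    perturbed[i] += 2.0                   # simulated overexpression
    effects[name] = predict_phenotype(perturbed) - baseline_score

top = sorted(effects, key=effects.get, reverse=True)[:5]
print("predicted best targets:", [(g, round(effects[g], 4)) for g in top])
```

With a real model, the same loop would also sweep knockdowns, combinations, and cell types, and the ranked list would go to experimentalists for validation rather than being trusted directly.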
Computational nootropic and drug discovery

This same framework can be directly applied to computational drug discovery, in two primary ways. First, for drug repurposing, the model could screen libraries of existing FDA-approved drugs, predicting which compounds are most likely to shift the brain's multiomic state from one associated with cognitive decline to one associated with high cognitive function. This approach has already shown promise in Alzheimer's research, where an interpretable machine learning framework identified the antihistamine promethazine as a potential multi-target therapeutic, a prediction later supported by real-world pharmacoepidemiologic data. Second, for de novo drug design, generative AI models trained on chemical structures and their biological effects can design entirely new molecules with specific, desired properties, such as the ability to bind to a key hub protein in a cognitive network without affecting other targets.

Bottlenecks and plausibility

This vision represents the far edge of what is currently possible and faces immense challenges.

- Data requirements: the single greatest bottleneck is the need for the right kind of data at an unprecedented scale. Currently, I am doing research on scaling laws for omics transformers, which may also help in identifying the level of entropy of omics data. It can give some hints on what is and is not possible with this approach.
- Interpretability: an obvious challenge; overall, the same story as with LLMs. However, my intuition and early experiments tell me that mechanistic interpretability should be easier for such models. An important difference: we probably do not need mechinterp here for safety, but to extract knowledge.
- Plausibility assessment: this AI-driven approach is still speculative. However, it represents the most promising, and perhaps the only, path toward truly unraveling the full complexity of the biology of intelligence. The "what to do" at this frontier is to begin the arduous process of building the necessary datasets and developing the next generation of biology-informed AI models. Currently, besides scaling laws research, I am attempting to do mechinterp on such models.

Comparison

According to my estimates, everything groups into four tiers by difficulty and potential impact (gene editing not included):

- Tier 1 (high hardness, moderate impact): metabolic optimization through targeted pharmacological modulation of rate-limiting enzymes and transporters is the most plausible near-term strategy. The targets are well-defined proteins, but achieving brain-specific effects without systemic side effects is a major challenge.
- Tier 2 (very high hardness, high impact): targeting cellular plasticity by modulating the signaling pathways that govern activity-dependent dendritic growth and adaptive myelination holds greater promise. This is significantly harder, as it requires influencing complex, dynamic processes, but it could yield more profound enhancements by improving the brain's ability to learn from experience.
- Tier 3 (very high hardness, radical impact): genetic modulation of core neurodevelopmental pathways. It holds the potential for the most fundamental enhancements by altering the brain's foundational architecture.
- Tier 4 (extreme hardness, though maybe lower than expected; radical impact): the development of an AI-driven, multi-scale predictive model of the biology of intelligence is probably the most challenging but also the most valuable objective.
It is not an enhancement strategy in itself, but the enabling platform that would make all other strategies much more precise, safe, and effective.

What to do

The easiest and most immediate thing to do is to assess technological feasibility (the prospects of delivering the interventions to the brain), assuming it is known what to do scientifically. In terms of scientific research, for me personally, omics transformers are one of the major bets. They are still in the "fundamental science" phase, but progress is very fast, and it may still happen faster than waiting for genetically modified babies to grow up. However, none of the approaches I described has the same level of certainty as gene editing. It may also be the case that many more hints are available in the existing research; hence, it makes sense to review it more comprehensively.

The ideal scenario is, of course, that we first work on identifying the most plausible approaches and then someone funds them.