mathematician vs computationalist, a critique of the modern system
as someone who grew up in the 2000s.
Something I’ve realized is important is to create a plan of attack for symbolic overhead. The justification, as concise as I can make it: mathematics has an enormous number of symbols that are super dense. Each symbol carries rules and a specific behavior it enforces, plus an overall understanding of why it matters; sometimes it is a sequence of values deemed important, or both a sequence of values and a set of rules and behaviors at once. That is a lot to hold, and for any issues with working memory the accommodation is a reference sheet (or, as public school systems like to call it, a cheat sheet), whose pattern lookup can and usually does falter under some pressured constraint.
Update Log
- 2026-04-11: One thing worth making explicit is that this distinction is not me inventing a flattering story after the fact. The support-path problem was real. By 3rd grade I was already reading at a college level, well above my grade (see a side-note below where I explain why), and in 2014 I had scholarship traction to the J.B. Speed School of Engineering at the University of Louisville before family caregiving obligations cut that branch off. Part of that path involved direct scholarship correspondence with Andrew L. Wright, now Chair of Information Systems, Analytics, and Operations at UofL, whose own background is in engineering mathematics, computer science, and engineering. That matters because a lot of this post is really about developmental support, interrupted trajectories, and what happens when systems-oriented cognition is present early but not scaffolded correctly.
- 2026-04-11: It is also worth stating plainly that I have been in abject poverty my entire life. I grew up under Section 8 housing with SNAP assistance. A lot of the support I needed early on was not abstract or luxurious. It was basic: tutoring, glasses much earlier, structured academic scaffolding, and access to activities like sports that would have helped with confidence, routine, and development. So when I talk about missed support, I do not mean elite optimization. I mean missing baseline conditions that would have let existing ability stabilize instead of constantly being fought through stress.
Some information to help with some caveats. One helpful validation comes from something Terence Tao has said, albeit, given who he is, I take his musings with a grain of salt: that math is a language. I know that implicitly and explicitly; but the point was more that if you’re great or even phenomenal at language, math should by all means be more than doable. His writing I can understand super easily for the most part, until it’s a specific term I haven’t come across or a noun with a specific equation or problem behind it. This started as a deep-dive where I gathered sources to understand what exactly can drive and keep increasing competency in maths; then it became more.
I’ve read these:
- https://terrytao.wordpress.com/career-advice/theres-more-to-mathematics-than-rigour-and-proofs/
- https://terrytao.wordpress.com/career-advice/does-one-have-to-be-a-genius-to-do-maths/
- https://terrytao.wordpress.com/career-advice/advice-on-mathematics-competitions/
Then I’ve scoured these:
- https://www.youcubed.org/resource/depth-not-speed/
- https://deepmind.google/discover/blog/advanced-version-of-gemini-with-deep-think-officially-achieves-gold-medal-standard-at-the-international-mathematical-olympiad/
- https://www.microsoft.com/en-us/research/publication/the-impact-of-generative-ai-on-critical-thinking-self-reported-reductions-in-cognitive-effort-and-confidence-effects-from-a-survey-of-knowledge-workers-2/
- https://www.frontiersin.org/journals/education/articles/10.3389/feduc.2025.1597249/full
- https://apcentral.collegeboard.org/courses/ap-united-states-history
- https://teaching.uic.edu/cate-teaching-guides/inclusive-equity-minded-teaching-practices/note-taking/
- https://pmc.ncbi.nlm.nih.gov/articles/PMC4060654/
Then there are points I take (regarding Terence Tao’s blog) with a grain of salt, like the second one, because of WHO he is. It is informational, but it’s basically someone on par with Feynman or Einstein saying “genius” is not a thing, and especially not in isolation. Historically speaking, the lone genius does happen; what he’s describing is actually more indicative of the fact that he’s been inside institutions since he was a child. So it’s not a myth; it’s more that society has improved in some ways and learning is given a chance from the beginning by federal (and/or state) laws.
If he were the same person but had never had that support, this part changes by a lot. He is someone who would likely have ended up being that type (an isolated genius) without it, because he already had the quick understanding, adaptability, and overall pattern recognition to thrive. That is only magnified by hard work and determination, which he clearly has.
The difference is that Terence Tao is someone who has been academic, or pushed into academia, since around age 7-8, when he likely first recognized and understood his own consciousness, which is generally when people start remembering and committing memories with compression. So his sources are great to go to for general understanding and comprehension; but overall he represents a standard that relies on continued, in-depth support and a work ethic fostered since childhood.
And to be fair to him, Tao literally supports part of my distinction in his own post on mathematics competitions. He outright says that mathematical competitions are “very different activities from mathematical learning or mathematical research” and that Olympiad problems have a “cut-and-dried, neat flavour” because they are built for that environment and time limit. Real research, by his own admission there, is much more patient and lengthy. So even he is basically admitting that competition fluency is not identical to mathematical research, which is part of my exact point. So… moving on from my slight critique to something I realized while reading through his blog and some general learning; the prior was motivation and a brief surface-level analysis.
In my honest opinion, it feels as though Terry Tao is treating computation and mathematics as the same concept. Computation is technical fluency in math, and practice helps with computation and recall; this is number solving. A mathematician, however, requires number sense, theory, relationships, and structure. A mathematician uses computation to prove concepts and create proofs to share: dissemination of knowledge for possible discourse. A mathematician is also not necessarily worried about optimizing any problem being solved; that comes nearer the end, especially once the structure being examined has already been probed with various methods.
So, the issue is conflating Computationalist with Mathematician, and a Computationalist doesn’t mean Computer Scientist.
I should also be explicit that I am using Computationalist here in a deliberately extended and nonstandard sense. In philosophy of mind and cognitive science, computationalism usually refers to the view that cognition or mind works computationally. That is not my use. My use here is a role-and-institution label for people, filters, and systems optimized around procedural fluency, symbolic handling, and constrained execution rather than deeper structural authorship.
The current point in society relies on and produces Computationalists en masse via adherence and compliance: rote recall, fast and precise symbol manipulation, and memorization of patterns for quick recall over understanding. This is why standardized tests and pattern-solving filters like LeetCode select for pure Computationalists, and why someone may excel in a LeetCode programming trial and pass the on-sites but, if hired, have issues later on the job. What is being tested is compressed retrieval under time constraints as a skill; put as a job description, flashcards are cue-response conditioning. The Computationalist is the skilled machinist of mental labor, the factory worker of the mind in an institutional or private setting: someone who excels at manipulating and solving math and the adjacencies that use it.
This also gets backed by pedagogy research pretty directly. Jo Boaler’s “Depth Not Speed” is basically saying the same thing from the education side. Her point is that a lot of people incorrectly believe being good at mathematics means being fast at mathematics, when really prioritizing speed mainly rewards one subset and pushes out the slower deeper thinkers that mathematics still needs. She even says outright that we do not need students to compute fast because we have computers for that, we need them to think deeply, connect methods, reason, and justify. That is almost a one-to-one validation of what I’m saying here.
And this also gets at something I noticed back in AP history in high school. Officially, AP history is supposed to be about historical thinking skills, analyzing sources, making connections, and crafting arguments. But in practice, part of my grade was also compliance: outlining and taking notes on chapters, then being graded on how well the notes were taken. To me, that was mostly busywork. The actual learning was from reading and comprehending the relations, dates, names, systems, and governance structures in the history being studied. The discussions, debates, and worksheets were better measures of understanding. Even the note-taking literature itself makes this distinction in a softer way: note-taking can help with encoding and later review, but some forms also overemphasize writing and real-time capture over actual processing and understanding. And that also lines up with the broader active-learning literature, which tends to find that discussion- and engagement-based learning outperforms passive lecture capture by a pretty wide margin.
Ngl, if I had the tools, including AI/ML, and the knowledge I have now back then plus funding, I probably would have, out of spite, tried to make a model-guided machine that wrote with a pencil on my damn composition notebook just to do that part of the class for me.
Ironically, that spite-bot probably would have pushed me into robotics and electrical engineering, which I wanted but did not have the knowledge for. A lot of that goes back to the fact that growing up, especially around 7-8 and into my pre-teens, math became a literal point of trauma plus a survival exercise based on grades. In a literal sense it was stressful. Math got made into something compliance-based and scary when it really should have been structure, language, and relation. So the issue was never that I did not love math. I always did. I have always had great number sense and pattern recognition. I just had no one teaching the mathematician in me.
One of the clearest math traumas I had, and one that honestly carried with me for years, was getting an answer right but it not being “their” method. Because I could not show my work in the expected way, it got marked as a zero. The teacher also never thought to help me figure out how I got the problem right. I do not remember the method now, but I know I used it on at least two problems and showed the teacher. It stuck with me because that was one of the moments where I became the most disillusioned and genuinely felt dumb and incompetent in math. I took the zero, was angry about it when I got home, and after that did my best to coast by and only do what I could to pass. The lesson I took from it was that there was no real problem solving or creativity in math, only following orders.
In a very real sense, elementary school is where I learned what compliance and conformity meant, and math was one of the places that made that lesson stick.
And it got bad enough that at some points I would just shut down doing math. Or I would know most of it, or half of it, or the underlying why, but the tests, quizzes, and exams felt like survival. It was survival for my brain.
As in, I would literally just write my name, do the bare minimum, and then sit and stare at the paper until time was up.
In middle school, I realized that I genuinely did want to try and get into Physics or be a Neuroscientist. I was already doing my own research on the net by then and reading well above grade level long before that. By 3rd grade it was already obvious that the books for my grade level felt dull and boring.
There was an incident where I got in trouble because I went to the back and read about medical systems and also psychology / serial-killer stuff, which is still funny to me in retrospect. My beef with that, then and now, is simple: if we are not supposed to read it, why the fuck is it in the damn library?
When we went to the library to pick out books, I would browse through, look at the front and back, read a bit in the middle, go by genre, non-fiction or fiction, and then at some point I would find myself in the back where it was slightly darker and where the more interesting stuff was. I also definitely stole extra books and put them back when I was finished.
But even though I was clearly pulled toward harder subjects, I distanced myself from a lot of that because of the perceived dumbness in math. The experience of mathematics in the public school system was humiliating. That is part of why I think tutoring in middle school would have helped immensely, and elementary school support would have been foundational.
And the AI point is honestly the kill shot to this whole distinction. On July 21, 2025, Google DeepMind reported that Gemini Deep Think achieved officially graded gold-medal standard performance at the IMO. Putnam-style benchmarks like Putnam-AXIOM also show models getting much better at bounded competition-style mathematics, even while those same papers still point out contamination and memorization problems. But that is exactly why this matters. If compressible, elegant, time-bounded math is the layer AI is increasingly able to do well at, then that supports the idea that competitions are much closer to testing the Computationalist slice than the full Mathematician one. A machine can increasingly become a very strong Computationalist. That is not the same thing as being a Mathematician in the deeper sense.
Computation should be used to temper and refine understanding, which helps and fuels formalization.
For example, Terence Tao is most certainly a Mathematician who excels at computation. But it’d be incorrect to call him a Computationalist by this metric and distinction.
I think it’s a good distinction between the two because I do believe these are two different titles. A Computationalist is not lesser than a Mathematician; they simply operate in different domains.
Domains
Accounting is the ultimate Computationalist field. You do not want your accountant to be a Mathematician inventing novel geometries for your taxes. You want a highly skilled machinist who can perfectly execute a rigid, deterministic set of rules with zero truncation errors.
A Mathematician is the superset for fields like nonlinear modeling and quant finance. These fields deal with dynamic, chaotic systems where standard rules constantly break down. You cannot memorize a flashcard to predict a market crash or model fluid dynamics. You have to understand the invariant structures underlying the chaos. You have to build the engine, not just turn the crank.
This brings us to Data Science, which is the exact battleground where this identity crisis is happening. The industry title “Data Scientist” currently masks a massive divide between machinists and architects.
The Computationalist Data Scientist is the person who imports scikit-learn or PyTorch, drops standard data into a standard Transformer or Random Forest, types model.fit(), and brute-forces the hyperparameters until the accuracy goes up. They are operating the machinery. They do not know why the latent space is shaped the way it is; they just know how to run the pipeline.
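As a minimal, hedged sketch of that turn-the-crank workflow (the dataset, model choice, and parameter grid here are all illustrative, not from any real project):

```python
# A synthetic "standard data into a standard model" pipeline: fit, then
# brute-force the knobs until the held-out score goes up.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Exhaustive search over an arbitrary grid: no model of *why* a setting
# works, just "run the pipeline until accuracy improves".
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [10, 50], "max_depth": [2, None]},
    cv=3,
)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.score(X_test, y_test))
```

The point is that nothing in this loop requires knowing why any hyperparameter works; the search is exhaustive, not structural.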
The Mathematician Data Scientist is the person who looks at a problem, realizes the standard linear-algebra abstractions will fail or explode in computational cost, and builds a custom, spatially bounded architecture to capture the exact invariant geometry of that specific dataset. They are not just fitting a model; they are defining the topology of the solution space.
A Mathematician does not strictly require Data Science (a pure topologist might never touch a dataset in their life), but pushing the boundaries of Data Science strictly requires a Mathematician. When a company hires a LeetCode-filtered Computationalist to do a Mathematician’s job, they end up with a bloated 45-million-parameter model that collapses in production because the machinist did not know how to architect a structural boundary.
Example Roles
So, let’s examine a few different job titles and determine which bucket they should fall in. All of these overlap with computation, mathematics, and technical skill. All of these can also generally be worked out on paper before implementation in code, especially with Mathematician-involved reasoning.
Graphics Programmer
A Graphics Programmer is someone who understands algebra, and often geometry, well enough to manipulate pixels and create shaders: code defining pixel behavior that requires simple (or, where needed, complex) physics for close-enough results. This is a Computationalist role. It requires the recall and the constraints, but generally speaking, as long as the programmer can find the correct research paper via SIGGRAPH, the CHAOS Journal, or some other relevant place, they can use that and then optimize it for games. They need to know how the graphics work with the library used: DirectX, Vulkan, OpenGL, etc.
Systems Engineer
A Systems Engineer requires understanding of the structure as a whole and the relationships of its interconnecting parts. This is a Mathematician role. It’s heavy on problem solving and on devising ways to create behavior for that exact system. Paradoxically, being a good systems engineer does not require extensive coding knowledge; what matters is grasping the relationships and patterns in how the system behaves. A good systems (or even autonomous-systems) engineer should be able to hear that someone is examining a server-authoritative, concurrent one-to-many system that uses global singleton variables on the “one” side, and immediately understand where the issues and bottlenecks come from.
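A minimal sketch of that failure mode, with invented names (nothing here is from a real codebase): a server-authoritative one-to-many setup where every client request mutates one global variable. Correctness under concurrency forces a lock, and the lock serializes the “many” behind the “one”.

```python
import threading

GLOBAL_STATE = {"score": 0}    # the single authoritative variable
STATE_LOCK = threading.Lock()  # required for correctness under concurrency...

def handle_client_request(n_updates: int) -> None:
    """One simulated client hammering the shared authoritative state."""
    for _ in range(n_updates):
        with STATE_LOCK:                # ...but every client queues here,
            GLOBAL_STATE["score"] += 1  # so the bottleneck is structural

clients = [threading.Thread(target=handle_client_request, args=(1000,))
           for _ in range(8)]
for t in clients:
    t.start()
for t in clients:
    t.join()

print(GLOBAL_STATE["score"])  # 8000: correct only because all 8 clients serialized
```

No amount of faster hardware fixes this shape; the design itself forces the queue, which is the kind of thing a systems engineer should see before any code exists.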
ML Optimization Engineer
An ML Optimization Engineer is a Computationalist. They need to use the mathematics they know and number-crunch to find a better learning rate and squeeze out the most. This is an accountant for deep learning and small-to-massive systems. An ML Optimization Engineer, for instance, does not need to know deep learning theory or its history, or even explore it deeply; they just need to understand how GPUs work, how to parallelize and brute-force, how to interact with CUDA, and how numbers behave during the dynamic training process.
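To make the “number-crunch a better learning rate” point concrete, here is a toy sweep (the loss, step budget, and candidate rates are all invented for illustration): plain gradient descent on f(w) = (w − 3)², keeping whichever rate drives the loss lowest within a fixed budget.

```python
def final_loss(lr: float, steps: int = 50) -> float:
    """Run fixed-budget gradient descent on f(w) = (w - 3)^2, return end loss."""
    w = 0.0
    for _ in range(steps):
        grad = 2.0 * (w - 3.0)  # d/dw of (w - 3)^2
        w -= lr * grad          # the standard gradient-descent update
    return (w - 3.0) ** 2

# Brute-force sweep: no theory about curvature or stability, just try rates
# and keep the best one (rates above 1.0 diverge on this loss).
candidates = [0.001, 0.01, 0.1, 0.5, 0.9, 1.1]
best_lr = min(candidates, key=final_loss)
print(best_lr, final_loss(best_lr))  # 0.5 lands exactly on the minimum here
```

This is exactly the accountant posture: the sweep finds the answer without ever asking why 0.5 is special (on this quadratic it is, since the update contracts perfectly there).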
Game Mathematician
A Game Mathematician is an easy one: a Mathematician. However, this explicit role is mostly used in the casino and gambling industries, which makes it distinct from a Game Designer, also a mathematician-imbued role. Regardless, the Game Mathematician must adhere to payout rules, understand behaviors and relationships, and test their slot game, for example. A payout example: at the low end, from a 0.1-cent bet upward, the smaller payout has better odds of breaking even rather than producing pure losses, while the higher end ($10+) usually carries more overall losses; the payment psychologically feels more worth it even though the odds are even lower. The payout is dependent on the set odds and adheres to regulations.
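That probability-weighted payout logic can be sketched with a tiny, entirely made-up paytable (real tables come from a certified math model, not from this post). The regulator-facing number is the return-to-player (RTP): the expected payout per unit bet.

```python
from fractions import Fraction

# (payout multiplier, probability) per 1-unit bet; all values invented for illustration
paytable = [
    (Fraction(0),   Fraction(650, 1000)),  # lose the bet
    (Fraction(1),   Fraction(250, 1000)),  # break even: the common small "win"
    (Fraction(5),   Fraction(98, 1000)),   # mid win
    (Fraction(100), Fraction(2, 1000)),    # rare jackpot: feels worth it, low odds
]

assert sum(p for _, p in paytable) == 1    # probabilities must cover every outcome

# RTP (return to player) = expected payout per unit bet
rtp = sum(pay * p for pay, p in paytable)
print(float(rtp))  # 0.94: the house keeps 6% in expectation
```

The Game Mathematician’s job is the reverse direction: start from the RTP the regulation allows and the psychology the design wants, then solve for odds and multipliers that satisfy both.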
A Game Designer, since it was mentioned, is also a Mathematician, but more in systems modeling and in creating feedback loops that drive an intended behavior or goal. For example: if a game is a horror game meant to be scary, tension, an abstraction created via the game mechanics themselves, is the main feedback loop used to drive the goal of fear, compounded and reinforced by the art direction. It’s not as math-heavy as the others, but math understanding absolutely can make it better.
Eidos
Anecdotally, my designed Eidos Architecture, which got a 3/4 in originality at ICML, and at least 2/4 in significance and originality from two other reviewers before rebuttals (still in process), is an algebraic-geometric, set-valued ML architecture that, as far as I’ve researched, does not quite exist as it is. I literally did not need to do any optimization until I removed the constrained matmul, which was the last standard-ML part. It inherently regularizes (no dropout), the LR is connected to a scheduler, and the loss navigates it automatically. It’s deep learning theory only in how it’s made. An ML Optimization Engineer applying their own standards would actually end up breaking the architecture to the point of collapse and instability.
It was created as its own neural network, then expanded to a convolutional-and-transformer hybrid architecture. It quickly became evident through ablation testing that regularization was harming it; when regularization was removed the accuracy gradually got better, and it kept improving as parts got replaced, so that the model became more and more Eidos rather than a hybrid architecture. That means any optimization needs to adhere to the architecture’s rules, and standard ML practice should be treated as a baseline to test against. Mathematical understanding allows testing the logic: if this does that, what does it mean when this process interacts with another, and do we need to replace it with a like-minded solution? It’s distinct enough that the seeds are always random and it does not matter which is used; on control benchmarks such as IMDB and MNIST the behavior is generally always the same and dataset-bound, due to the mathematical principles used to create the architecture.
To be clear, LLM engineering assistance was used heavily to help create the Eidos architecture. But it cannot create Eidos alone. This required an immense amount of effort in correcting it and steering it away from using proven standards as if they were the only truth. That also meant allowing the AI to try the standard approach and then testing that against my guidance, where my guidance would end up being right. For instance, at a specific point higher precision was needed, but as the system got better overall this became somewhat less necessary. So, as the architecture became more correctly Eidos, some of the past hybrid-architecture learnings were no longer a true observed rule to go by.
Some rules, then, were architecture-specific quirks observed at that point in time and in that setup. However, the more Eidos the architecture became, the more I also had to set guidelines and constraints on how the LLM agent should assist with problem solving. Each part has such specific observable behaviors that it is more correct to call the parts modular machine-learning engines within the deep-learning architecture.
Which means that in its current ICML form, I need to redo some prior tests, not benchmarks but isolated components, and evaluate whether previous issues or transitional rules are still correct as claims: readjusting known assumptions and making new estimated guesses about how a specific thing should work.
At ICML, the main reviewer, who had a confidence of 4, was primarily concerned with whether it scales more than anything else. So, in this case, if this were a foundational model family and I were building a base model, I would basically need to train people to optimize toward the rules of the architecture rather than the standardization of what they know. As of the ICML variant, the only thing that needs optimization is computation time itself. Regardless, with benchmarks, the first 10 epochs are generally where it reaches peak saturation on some observed datasets.
This should also help with defining roles for a job, where the question is simply: are you a Computationalist, are you a Mathematician, or do you want a true hybrid, which needs to be distinguished as such? A Game Designer isn’t a hybrid role. A Game Mathematician is closest to a hybrid but not quite. A literal Mathematician is a hybrid.
And this is exactly why the hybrid needs to be separated out instead of lazily collapsed into “does some of both.” A hybrid isn’t just someone who touches mathematics and also touches implementation. That’s too weak. By that standard almost every technical role becomes hybrid and then the distinction means nothing.
A true hybrid is someone who can move between structure and execution without losing the logic of either one. They can do the machinist part when needed, but they also understand the architecture of the thing, the relationships, the invariants, the failure cases, and when the standard procedure should be broken because the structure of the problem itself demands it. They are not just switching tasks. They are translating between domains.
That is why a Game Designer is not really hybrid by this distinction. They may absolutely use mathematical thinking, abstractions, balancing logic, progression curves, and feedback loops. But that is still not the same thing as being the person who has to operate as both structural architect and computational executor in a formal sense. A Game Mathematician gets closer because the role is forced to deal with payout structures, constraints, probabilities, and system behavior all at once. But even that is still not quite the same as the full hybrid case.
A literal Mathematician, in the way I mean it here, is hybrid because they strictly include the computational layer but are not reducible to it. They can compute, formalize, derive, test, and execute, but they also have to understand why the structure is what it is in the first place. So the hybrid is not some middle-tier compromise role. If anything it is the most demanding one, because it requires both symbolic competence and structural authorship.
So when defining roles, disciplines, and filters, the real question is not just “can this person do math?” It’s more like:
Is this role mainly asking for procedural execution under constraints? Is it asking for structural reasoning and model-building? Or is it asking for someone who can design the system, understand the mathematics of why it behaves that way, and still drop down into the computational layer without breaking the logic of the whole thing?
Those are not the same targets. And a lot of institutions, companies, schools, and interview pipelines still act like they are.
There is also another caveat here, which is that a Systems Thinker, Designer, or Systems Architect is more of a designer category than either a Mathematician or a Computationalist. A systems thinker needs logic, Socratic reasoning, and the ability to ask why a thing behaves the way it does. Math can absolutely help expedite that or test it, but systems thinking does not inherently need to be math-based, and it definitely does not need to be computational in the computer-science sense at all.
So if this were a Venn diagram, a Mathematician is nearly always also a Systems Thinker and a Computationalist. A Computationalist does not need Systems Thinking, but does need mathematical computation, recall, and logic. A Systems Thinker does not necessarily need either formal mathematics or computation as the main mode at all, because they can still be operating at the level of relationships, incentives, causality, and behavior.
That is part of why this gets confusing in modern life. People use these as if they are all interchangeable labels for “smart technical person,” but they aren’t. They are overlapping modes, not identical ones.
At this point, this is less a Venn diagram and more three supersets, with branches and overlaps that connect outward from them.
This is also why it is worth separating a Computationalist from just “someone who uses a computer.” A Computationalist is more like someone who becomes expert in computational tools, procedures, and machinery for solving problems under constraints. That can be a computer, yes, but it can also be Excel, a literal grid or lined paper, LaTeX, Lean, a program, a model, or some other formal instrument used to calculate, check, refine, or verify.
So when I say machinery, I do not just mean a machine in the literal hardware sense. I mean the procedural instrument layer itself. The Computationalist is expert at using that layer cleanly and effectively. But that still does not mean the machinery is the essence of every role that happens to use it.
Programming is probably one of the purest examples of Computationalist labor. That is not an insult to programming, it is just what programming as such is: symbolic handling, procedural execution, constrained logic, debugging, exactness, syntax, interfaces, and getting the machinery to do the thing correctly. A systems-oriented programmer is already moving away from programming in the narrow sense and toward systems engineering or systems architecture. Programming is not the same thing as systems.
To me, this also became really evident when a lot of programmers back in 2023 were saying AI can’t make programs or code. I was mainly using it at that point to debug and assist with integrating plugins. And in Unity, a lot of the time, I found out I did not really want someone else’s whole toolbox. I usually only needed one tool from it that would have taken too much time to set up on my own even without AI.
It was not until a few months later that I decided to quickly make a Unity mobile game, Syntax, as my personal case study. It is basically a hangman game played using binary input, though you can switch between binary, normal, and phone-numpad entry. It sounds simple, but hooking that up, debugging it, and making it actually work in Unity meant UI state switching, making sure the input system worked for mobile but could also work for a WebGL compile if I wanted that, event flow, scene behavior, race conditions between managers, and all the other moving parts.
That is an entirely different beast from making a simple hangman.py or hangman.cs where you can get away with an array of words, randomly picking one, and then doing “while tries > 0, keep guessing, otherwise the man has been hung.” Honestly the hardest part of the hangman learning program I did back in 2021 was literally the ASCII art, which was unneeded complexity for a learning program.
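For contrast, the whole simple version fits in a few lines. This is a hedged sketch of the shape described above, not the actual 2021 program; guesses are scripted instead of read from input() so it runs non-interactively.

```python
import random

def play_hangman(word: str, guesses: str, tries: int = 6) -> bool:
    """Return True if every letter of `word` is guessed before tries run out."""
    found = set()
    for ch in guesses:
        if tries <= 0:
            break                      # out of tries: the man has been hung
        if ch in word:
            found.add(ch)
            if found >= set(word):     # every letter uncovered: win
                return True
        else:
            tries -= 1                 # a wrong guess costs a try
    return found >= set(word)

word = random.choice(["cat", "dog", "axiom"])  # the word bank
print(play_hangman("axiom", "aeiouxm"))        # -> True (only e and u miss)
print(play_hangman(word, "abcdefghijklmnopqrstuvwxyz", tries=26))  # -> True: budget exceeds misses
```

That is the entire game loop; everything Syntax added on top of it (input modes, UI state, managers) is where the real engineering lived.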
So to me the question became less “can AI code?” and more “are you an effective and adaptable humanoid when it comes to utilizing AI?” And the general consensus seems to be that people are not, and it shows. The onus is still on me to keep track of the moving and communicating parts, ensuring that the generated code explicitly does what is required for the space-time complexity and needed implementation. Especially with games, it is still about knowing what piece of the puzzle in function or class is needed for a specific problem: solo, co-op, client vs server. The inventory design itself changes completely based on solo or co-op, and what type of inventory system it is, whether Tetris-style or one-item-per-slot.
That is also why I am not fully convinced by the broad “AI makes people less critical” framing unless the measurement is careful. There are actual studies now raising that concern, and not without reason. The Microsoft Research / CHI 2025 paper found that higher confidence in GenAI was associated with less critical thinking, while higher self-confidence was associated with more. But even that paper also says the critical thinking did not disappear so much as shift toward verification, integration, and stewardship. And on the other side, more education-focused work is already arguing that structured, reflexive, and pedagogically grounded AI use can foster critical digital literacy and epistemic agency rather than just replacing thought. So to me the real question is not “does AI reduce critical thinking” in some flat absolute sense. The real question is: are you using it as a crutch, or are you using it as an instrument that you still actively interrogate? Because I have always used AI critically and argued with it. The code itself is not the whole system, and the AI is not the one keeping the system coherent.
Something funny to me in retrospect is that in middle school and high school, when I was trying to make sense of my own cognition, I remember landing on this phrase: the answer is more RAM.
For AI infrastructure now, that sounds almost prophetic in a stupidly literal way because so much of the current boom runs into memory walls, bandwidth limits, and brute-force scaling of working state. But when I said it back then, I meant something much more human. I meant enough working memory to hold more tasks, track more moving parts, and not burn all my cognition on bookkeeping.
Even with pen and paper, standardized methods still demanded too much compute, too much holding, too much tracking. I had already figured out that was one of the core problems: calculating, holding, and tracking all at once is a genuine strain. If I tried to brute force it, I would literally get frontal head pressure.
So for AI, “more RAM” became a brute-force infrastructure answer. For me, it was more like: how do I get enough cognitive headroom to actually think instead of just survive the computation?
Something else that keeps hitting me is that roughly 30 years separate the last major AI boom from this current one. And this one feels more stable to me for a very simple reason: AI now exists as augmentation of human knowledge and capability. That means competent people, and especially experts in their fields, can suddenly make massive progress and advancements much faster than before.
So one of the obvious outcomes to me is more breakthroughs, more experimental tech, and more democratization of capability. That, to me, is what the singularity always actually meant. Not the sci-fi version. More like a point where advancement and research start happening in bursts or chunks because tooling amplifies what already exists in people.
The obvious downside is that institutions and academia are still massively underprepared for it. A lot of assumptions, behaviors, and filters should be getting readjusted and they mostly are not. Tightening everything only toward industry when industry itself is bottlenecked is almost a tragedy-of-the-commons problem. It also ignores the fact that institutional actors can be incompetent too.
Even Tao’s framing points to part of this in a different way: higher education is often much more about work effort, surviving the structure, and having people vouch for you than some pure throughput measure. It is learning the field, yes, but it is also learning how to be a researcher inside that institution. And it is also an environment that seems pretty prone to verbal and interpersonal abuse from authority figures. I have read enough of that in places like /r/PhD and sometimes /r/GradSchool that it is hard not to notice the pattern, even if that is still just social proof and not a pure measure of competence.
Then that creates another problem. Once your college is not regionally accredited, you can get locked out by merit-signaling even if you could still be exceptional. My own degree is in Human-Computer Interaction with a focus in Game Design. I actually compared the credits and coursework to a regionally accredited school offering a Game Design undergrad and grad path, and my conclusion was that what I obtained was objectively more rigorous in breadth even if it was project-module based. It was HCI in Game Design as an undergraduate degree, and course-wise it outperformed both comparison programs. But that does not matter as much as people want to pretend if the institution signal is treated as the primary gate.
And that is also why a Computational Physicist is not actually a Computationalist role in the core sense, despite the name. A computational physicist is primarily a systems thinker operating through mathematics, with computation serving as instrument and verification machinery rather than as the role’s core epistemic mode. There is too much uncertainty, approximation, noise, model fit, physical constraint, and numerical method for the role to be reduced to machinery alone.
The second problem is the constant flattening itself.
And what did I learn that started me writing this for 3+ hours? It was noticing that anything that has worked for me, regardless of symbolic overhead, happened when I understood something by having to explain it or produce it myself, with someone or something that understood what I was actually asking. Generic advice like Khan Academy fails for me because I learn by discovery, experimentation, more discovery, and practice. Reception-style models work for me as a quick reference lookup, not as a way to learn something, commit it, and keep using it.
A better mode of learning for me, based on this, is dialectical and construction-oriented: dialogue, reasoning, and building, without overcorrection; let me make errors and learn why something is an error, or prove that it is not one. So, symposium-style learning: what is this? Bring a concept, throw out a hypothesis, use logic, and work toward it. Consider what would have to hold for it to be true. Argue the structure, then test. Calculators can be used for checks along the way and for correcting assumptions. Any thesis or essay is meant to provoke discourse and discussion, not to be pacified. The exception, obviously, is research articles and certain other types of publications.
That includes this post, which is a synthesis of that understanding; the three-mode system came out of it. I decided to write it up as a blog post because this is what helps me prove and contextualize. Consider the following format: you give me a problem, and I do my best to work on it with pencil, graph paper (or lined paper), and a simple calculator to quickly check recall and mental computation. I did this to get better at understanding square roots, and the end result was A Geometric Derivation of Square Root Approximation via Residue and Iterative Refinement, which made me genuinely understand what exactly a square root is and why squaring a factor cancels it when solving for y.
Which made me realize that unless it is an exact computational problem demanding high precision (doubles, floats, e-18f tolerances, and such), schooling should treat an approximation like "sqrt(6) is greater than 2.4 but less than 2.5, about 2.45" as equally correct until the subject genuinely requires high precision. That is conceptual math understanding.
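That interval-style reasoning can be made mechanical. Here is a sketch of the residue-and-refinement idea, my paraphrase rather than the post's exact geometric argument: start at the largest integer whose square fits under n, then repeatedly correct by the residue n - a², which is the Babylonian/Newton step in disguise:

```python
def sqrt_approx(n: float, iterations: int = 2) -> float:
    """Estimate sqrt(n) by residue correction: a <- a + (n - a*a) / (2*a)."""
    # Start from the largest integer whose square does not exceed n.
    a = 1.0
    while (a + 1) * (a + 1) <= n:
        a += 1
    for _ in range(iterations):
        residue = n - a * a          # how far a*a misses n
        a = a + residue / (2 * a)    # linear correction from the residue
    return a
```

For sqrt(6) this starts at 2, the first correction gives 2.5, and the second gives 2.45, landing exactly on the interval reasoning above: greater than 2.4, less than 2.5, about 2.45.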