Strangely enough, I tend to place the writer in the manager category - he effectively manages the actions of his characters and strives to produce certain impressions on his superiors, meaning his audience and readers.
The programmer ideally wants a precise formal solution that can be quickly and cheaply verified in debugging.
The manager needs to improve something in his system, and life will be the judge of that - slowly, tediously, expensively, and often with ambiguous interpretations.
The student needs to go deep into knowledge, and the scientist needs to go deep into knowledge and also do the manager's job - searching for ways to apply and extend that knowledge.
This is of course a very incomplete list of the meaningful variations. But it will be enough for you to start understanding that there are very different worlds in the use of AI - and this is true even within a single profession, from junior all the way up to investor, senior, architect, and so on. Programmers who build AI or build agents on top of it often have no idea about these other possibilities. They often don't even know the names of the most important fields of knowledge required for that other kind of work with AI.
Here you will find notes on these topics, fragments of various files that were meant to become secret but somehow never did: https://zenodo.org/records/18824868 "These are notes that are not obliged to be true, let alone universally true."
It affects them dramatically, often tragically! Think about how optimistic the people who started learning programming were - 15 years ago, 5 years ago, and right now people are still studying under curricula that are five years old. And that's the core problem: "You got your education - now it's time to retrain yourself." This applies to a huge number of professions.
The education system is a truly remarkable sphere right now, because the crisis touches everyone simultaneously: teachers, students, bureaucrats in ministries, parents and employers. Artificial intelligence has crashed into this sphere in a deeply dramatic way - and from six directions at once. It seems to me that education and re-education will now be one of the main bifurcation points.
These problems in education are sharply redirecting the development trajectory of AI at the leading companies, and they will become a massive labor market for programmers (new software products are needed). At the same time, these problems are putting enormous pressure on the entire programming industry.
Every idea and piece of research about what children should be taught in schools and universities directly speaks to how and what you yourself will need to retrain for tomorrow - or already needed to yesterday.
And often you don't understand, within your own profession, how much of your knowledge will need to be relearned - and how much of who you are will need to change. So how are you feeling right now, and what do you sense is coming?
With regard to AI, the educational system is currently being offered something fundamentally unclear: a system that makes mistakes. Imagine an honest dialogue between a minister of education and the chief developer at an artificial intelligence company.
Yes, the system can sometimes say things that are not true. It can sometimes inadequately praise the user, reinforcing incorrect assumptions. How capable it is of making reasonable judgments — we do not know. In principle, this is not clearly defined. What will happen tomorrow is unknown, but everyone expects very large progress. And what happened yesterday is: “we made two revolutionary steps one after another over the last two years.”
“Over the last couple of years, almost everything that specialists in artificial intelligence were taught has become significantly outdated.”
You see — the educational system has nothing to rely on. Absolutely nothing is stable or acceptable as a foundation.
Nevertheless, the situation directly and very strongly concerns the educational system and the qualification system. Everyone clearly understands that some large-scale reform is needed. But what exactly should be reformed? A system that already today is: “unknown what,” “unknown how,” “unknown to what extent”… yet it is clear that everything is changing and becoming obsolete very quickly.
On the one hand, it seems reasonable to simply wait — perhaps one year, perhaps two, perhaps five — until the situation stops changing so rapidly. On the other hand, it is clear that during these two or five years young people will significantly move away from the educational system toward artificial intelligence, or possibly toward something else as well.
At the same time, almost all teachers face the threat of a personal career catastrophe. A very important part of their work is rapidly losing value before their eyes. This directly threatens severe demotivation and large-scale institutional breakdown.
In my view, it is impossible to take any reasonable large-scale reform actions in a situation that itself is unclear, undefined, and rapidly changing. And of course, a similar situation has already emerged among other professionals — programmers, doctors, consultants, patent specialists, digital artists, video creators, and others.
This is happening despite the fact that programmers used to consider themselves among the most intellectually advanced, educators were accustomed to acting as the most confident and authoritative, and patent professionals were the ones who knew the most about inventions. Now all of it is simply collapsing!
Today, however, a customer may suddenly write several pages of source code. A student may understand a topic much more deeply than a teacher. An inventor entering the patent system may already have studied dozens of nearby inventions in their field. And similar situations are appearing across many professions.
In my opinion, all relevant authorities that are currently unable to make adequate, rational decisions, setting aside the various alarmist concerns about future risks, must recognize and accept a temporary UNCERTAIN state of emergency and act accordingly, so as to avoid rushing into foolish decisions in any one direction, decisions that would in any case prove inadequate.
It is difficult to come up with procedures that are good for both the caterpillar and the butterfly.
By searching for the author “Kokhan, Serhii G.” on Zenodo, you will find a much larger article that expands and details this post, along with additional explanations on different aspects of the topic.
1. The Generalized-Abstract Understanding
Well, it's clear that there is a generalized-abstract concept: you create text faster, translate faster, format faster, program faster, create reports and reviews faster...
2. The More Concrete Understanding
There is a more concrete concept, in which, for your typical mental activities, you build all kinds of ensembles out of sequences of prompts, agents, and so on.
3. My Understanding - True Cognitive Amplification
But I don't like either of these concepts. The thing is, I truly discovered the possibilities and power of smart chats for myself when I understood that with their help I can amplify my reason and intelligence. With their help I can invent something new, something I could not have invented without them. That is the real amplification of your reason and intelligence.
The first is, to a very significant degree, an exoskeleton for a secretary, translator, proofreader, or research assistant. What does reason or intelligence have to do with it? Never mind reason: even intelligence is a concept about the ability to solve tasks of a type unknown to you. When I mastered such things, yes, I understood that in effectiveness I could be myself plus three secretaries and five translators. But I didn't want to be such a multi-personality. I didn't like it. And my sense of well-being, honestly speaking, was that of a heavily loaded secretary working one and a half positions.
In the second, you simply orchestrate a certain sequence of work by yourself and some programs. This, of course, is an amplifier of organization, not of intelligence or reason. By that logic a paper notepad or a daily planner could also be called an "exoskeleton of a self-organizer"; this one simply comes in computerized, web-service form.
---
Question to Readers
What do you think: where is it better to attach the term "cognitive exoskeleton"?
Personally, I prefer "cognitive exo-wings," because the thing doesn't look like a skeleton and isn't one. Real metal exoskeletons look like reinforcement and solidity, whereas the word "skeleton" by itself evokes only a cemetery, and the tool does nothing to improve your posture. This is a slightly different, terminological theme, but I mention it here because the term "cognitive exoskeleton" is already firmly occupied by the first two, not-quite-cognitive positions.
---
My previous article *"Human and AI. How to Modernize Your Consciousness in 10 Minutes"* examines and positions about a dozen other dimensions of your attitude toward artificial intelligence.
To repeat: the main idea here is not a questionnaire; it is intended as a means of personal psychological support in positioning yourself. For many of the questions the answer can, and perhaps should, be "I don't know yet." But you will now know that such a dimension exists, and you will look more meaningfully at the various arguments for the first, second, or third option. You will also begin to understand that you are surrounded by a multitude of people who differ in type and direction along these dimensions.
---
1. What do you see in artificial intelligence (hereinafter - "smart chats")?
- A service?
- An exoskeleton?
- A partner?
- All of the above, in turn, in an incomprehensible order?
- All of the above, in turn, in a meaningful order?
---
2. What do you see as the main purpose of smart chats for yourself?
- Help with qualification and educational work?
- Help in programming?
- Help in self-development?
- Help in learning in general?
- Literary creativity?
- Reports for work?
- Help in the depths of science?
- Survey-style self-education?
- Help with correspondence?
- Help with writing articles?
- Help with translation?
- Help with reading large texts?
- Big help with your main subject at work or in a hobby?
- ... (the list here is long; if something is "other," please add it in the comments) ...

---

10. To what extent do you accept a person answering your important question with the help of a "smart chat"?
- I want the person to answer it himself!
- For the sake of quality and speed, I accept a person answering together with a smart chat!
- What matters is a correct and understandable answer, not who gives it!
- I clearly understand where only a person will do, where a person with a chat is possible, and where a smart chat's answer alone is acceptable.
---
Do you think it's worth developing and extending this topic further? Something interesting, useful?
Full version: https://zenodo.org/records/18726277
Much of the current AI debate is stuck in a time loop, where regulation and public skepticism focus on models from 2017 to 2023. As a researcher who has been following AI since the 1980s, I argue that we have reached a phase transition. The gap between the 2023 model and the 2026 system is not gradual—it's the difference between a moped and a spaceship—yet our terminology and social contracts remain dangerously outdated.
The harmful term "artificial intelligence" creates the false illusion of an autonomous subject, obscuring the critical role of human performers and the human narratives of various communities. By reimagining these systems as "smart chats" with specific years of release, we return to an engineering-centric approach, where the value of the outcome is determined by the operator's ability to "play" the instrument. For the educated user, these tools serve as a bridge to the collective wisdom of humanity, while for the uninitiated, they remain a source of artificial information noise. https://zenodo.org/records/18683885