Is the Prussian school model on its way out? A former Pentagon expert has no doubt
As artificial intelligence accelerates, the way we acquire knowledge is entering uncharted territory. Traditional classrooms and static curricula struggle to keep pace with a world where skills evolve faster than ever. From cyber threats to hybrid warfare, and from lifelong careers to multiple reinventions of professional identity, the future demands a radically different approach to learning – one that adapts, personalises, and amplifies human potential in real time.
The rapid development of artificial intelligence and new technologies is not only changing the way we work, but above all redefining the way we learn. In a world where, according to estimates, 40% of current professional skills will change, the traditional model of education no longer meets the needs of reality.
Are schools and universities, based on a linear, uniform education system, capable of preparing us for an era of constant change, hybrid threats, and now the inevitable cooperation of humans and AI?
We had the pleasure of discussing the concept of the Future Learning Ecosystem (FLE), the future of education, the role of artificial intelligence in skills development, and the challenges facing the security sector with Dr Sae Schatz – researcher, founder of Knowledge Forge LLC, and former director of the ADL Initiative at the Pentagon.
The opinions expressed by Dr Sae Schatz are her own and do not reflect the position of any institution she has been or currently is affiliated with.
Future Learning Ecosystem in practice
Zuzanna Sadowska, CyberDefence24: Could you please explain in your own words the concept of the Future Learning Ecosystem (FLE)?
Dr Sae Schatz, researcher, founder of Knowledge Forge LLC, and former Director of the Pentagon’s ADL Initiative: Let me ask you this to start. If you were going to design education, training and testing with all the tools we have today, like AI and the internet, would it look like the current model? A model in which we send little kids to school, where they stay every day, year after year, until they’re 15, 18 or 20 years old.
A model that’s more or less the same for everybody and is linear?
It feels a bit outdated when you put it like this…
Of course. And so the idea is: how do we move to a new model of learning that takes advantage of the tools we have nowadays?
The World Economic Forum recently produced a report that said about 40% of everybody’s skills on average will be out of date by 2030 because of new technologies and thus new jobs.
And so that also means we have to have lifelong learning.
You have to be constantly learning new skills, but we’re not going to sit in classrooms until we’re 75 years old. So instead, what we want is a continuous lifelong learning pathway that is adapted to you. It is optimised to your time and needs as well as the capabilities of your device.
Do you think the human mind can take this much?
It absolutely can. But this goes back to whether our education and training systems allow for the development of these kinds of people. I think for now they aren’t really doing it. Universities, defence institutions and others are still a bit more focused on producing deep experts. I mean, let’s talk about traditional universities. If you were going to train an expert for a modern technology-enabled context, what personal knowledge and skills would they truly need, and how efficient could they be if supported on the job by an AI tool? Would someone with an AI on-the-job companion still need the same competencies, or could some of those requirements be offloaded, freeing us humans up to focus on other things?
I think that we’re hesitant because there is a lot at stake. What do you think is the biggest risk of relying on technology?
It’s a little bit different from what we’ve discussed, but I think one of the biggest risks in the defence and security space involves hybrid threats. Just like education and training specialists can use digital technologies to enable learning at scale, bad actors or criminals can use these same technologies to really scale up their harmful activities.
For example, since ChatGPT was released, we’ve seen a huge, and I mean „uncountably huge”, increase in spam emails, voice scams, spear phishing, and influence operations. Russia-, China-, or Iran-led propaganda, cognitive warfare, and sabotage are especially hard to address with our old models, which aren’t well designed to defend against new hybrid threats and thus give the aggressor a strong asymmetric advantage.
We can't stop the revolution, but we can prepare for it
It may sound far-fetched, but it is often said that cyber-criminals are one step ahead of the systems’ defenders. Do you think there is a grain of truth in this?
That’s a good question. Some of these technologies are so broad that they can be used in many different ways, and there is definitely a tendency for people bound by fewer regulations, and with fewer scruples, to find ways to use them more rapidly. Additionally, they’re going to have a first-mover advantage. They’re not worried about regulations in the same way that the rest of us are, and of course, they’re not worried about the risk of causing harm. In fact, sometimes they want to cause harm.
We are afraid of technology, and a big reason for that is the statistics that you just mentioned: people are afraid of losing their jobs. You’re saying that technology is a part of our life and probably always will be, so let’s work with it.
This AI technology that we’re talking about really does threaten the old model of learning. If your goal of teaching people is to make them memorise things and write essays, then yes, ChatGPT will outperform all of that. But if your goal is truly learning and development, then these digital tools give us ways to „hyperscale” human potential.
All the major inventions throughout history had critics saying that it’s the end of the world. Socrates famously said that writing would be the end of understanding and memory. The printing press, calculators, and Wikipedia were all predicted to damage learning, but we still teach and learn; in fact, we’re even stronger because of these advancements.
What keeps us tied to the old model then?
We have agency in determining what the outcome is going to be. Will it be that, on average, schools continue to be in the old Prussian model from 250 years ago where we send kids to school for years at a time and treat learning as an isolated thing, outside the flow of our daily lives, or will we adapt?
Will we develop people who are trained, educated, flexible, and have the right competencies for the modern world? That may not mean that they’re memorising facts they can look up on Wikipedia. It might mean that they’re very adaptable, that they’re „expert generalists” who can, you know, pull in different sources.
It was the same with the Industrial Revolution. People were trembling over the potential job losses machinery would bring upon them. Can you compare that with AI?
The Prussian-style education system (which we still use today) formed around the same time as the beginning of the First Industrial Revolution. By the 1830s, they had fully implemented a scalable education system, built on standardised curricula and national testing standards. People in the First Industrial Revolution worried about job loss, and famously the Luddites fought back against mechanisation. Ultimately, people did lose jobs and the world changed—and different jobs were created.
Today, we’re in what many people call the Fourth Industrial Revolution, characterised by technologies such as AI, cyber-physical systems, and biotechnology. Will this revolution affect jobs, communities, and our lives—yes. But it will likely also create new opportunities, and regardless, I doubt we will slow down or move backwards. The better question, therefore, is how to embrace this revolution smartly.
The FLE vision assumes a decreasing importance of academic degrees in favour of competency-based credentials. How would that work?

We’re already seeing some of this approach in the big tech companies. They’re hiring fewer and fewer people just because they have a degree, and instead, they’re choosing candidates with a broad range of certificates who can demonstrate the appropriate competencies.
I’m not trying to say that academia or degrees are without value, but they are only one pathway to job success; for many people, we need to take this monolithic approach to education (where you have to spend 20 years to get a certain kind of degree) and break it up into little bite-sized chunks.
How would that work in practice?
People might have access to accredited (or accreditable) education, training, or experiences, and as they complete each chunk, they receive a verifiable credential—basically a digital, cryptographically signed certificate of completion. You keep these certificates in your digital wallet (which is sort of like how your browser collects cookies). Over time you can build up towards something meaningful, like stackable credentials to earn a degree or to demonstrate competence in a particular field, and they’re trusted, because each credential is signed by an authority, such as a university or professional society.
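(Ed. note: a minimal sketch of this signing-and-verification flow, assuming Python and the third-party cryptography package. The issuer, learner, and field names are hypothetical, and production systems follow standards such as the W3C Verifiable Credentials data model; stacking credentials then amounts to collecting many such signed records in a wallet and presenting them together.)

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The issuing authority (e.g. a university) holds a private signing key.
issuer_key = Ed25519PrivateKey.generate()

# One "bite-sized chunk" of completed learning, as a simple record
# (all values hypothetical).
credential = {
    "issuer": "Example University",
    "learner": "did:example:learner-123",
    "achievement": "Intro to Cyber Threat Analysis",
    "issued": "2025-01-15",
}
payload = json.dumps(credential, sort_keys=True).encode()

# The signature travels with the credential into the learner's wallet.
signature = issuer_key.sign(payload)

# An employer (or anyone) verifies against the issuer's public key;
# verify() raises InvalidSignature if the record was tampered with.
try:
    issuer_key.public_key().verify(signature, payload)
    print("credential verified")
except InvalidSignature:
    print("credential rejected")
```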
Thinking from a psychological perspective, such a solution seems to keep people motivated to explore their educational path more organically. You don’t have to decide your path while still in your teenage years and later commit to years of a single major. Instead, FLE lets you build expertise by following your current interests, gradually becoming an expert in what truly engages you.
There’s a book called “The 60 Year Curriculum” by Christopher Dede and John Richards. It talks about how throughout people’s lives, they’ll probably have seven entirely different careers—not just different jobs, but different careers.
I wouldn’t expect this many.
Times change so quickly and we have so much access to information.
Look at yourself. Now you’re a journalist, and then maybe in a few years you start to get more interested in health journalism. So, you start learning more about health care, and then (combining your skills in communication, tech and health) maybe you begin working with technology startups, helping them pitch digital health care systems. You might start to develop your own healthcare AI applications, and in the future (let’s imagine there are newly invented health nanobots), maybe you become a nanobot programmer. That’s four careers already.
All of a sudden, it sounds completely reasonable.
This seems like the best use of human curiosity, and hence of human potential.
Exactly. If we create a one-size-fits-all format for education, it will essentially fit no one. But if we instead make an infinite number of sizes—many different ways to learn—then everybody can find the right option for them at any point in time. And then we can unlock so much more human potential.
Military education at a crossroads of technological change
Do you think this model can be adapted for military education structures?
I think it would work really well because of well-documented structures within the military: you have clear goals in terms of readiness, as well as the opportunity to move people around in different ways, for example, if someone needs access to a simulator, or access to a particular experience, or to go to a classroom environment. So, I think that defence and security organisations would really benefit from this FLE approach.
In the document you co-authored, you quote Fred Drummond from the Pentagon, who emphasised that the goal of learning in the military is combat capacity. What does „combat capacity” mean in the context of such dynamic changes driven by AI?
Well, I think a different question is really, „what does capacity mean”?
So if you’re working in healthcare, maybe you’re talking about healthcare capacity, and if you’re working in education, maybe you’re talking about teaching capacity. If you’re working in the Pentagon, of course, you’re going to talk about combat capacity. But in all those cases, it really just means how do we empower people to do their jobs as well as they can. I think „the most” we can achieve, our peak human potential, isn’t just a human. It’s a human who is well trained and educated, and even more than that, a human who is also well-supported with digital technologies.
So something like „augmented intelligence”.
Exactly! We all do this already: we have our phones with us constantly, and we use the internet to augment our memory and understanding. So how do we amplify that more? It raises questions for the FLE, such as which competencies we need to actually develop in people versus which ones we should plan to support in real time using digital systems.
You could imagine a mechanic: maybe she doesn’t need to memorise all of the different ways to repair a particular motor; instead she’s wearing an augmented-reality display, with headphones on, and an AI chatbot reading out the instructions. Of course, there’s some baseline that’s needed. I am not mechanically inclined, and I would make a mess—with AI or not! But somebody who is an experienced mechanic could gain a kind of superpower through the use of these cognitive-support technologies.
I wonder whether that would cause problems with attributing responsibility. If the mechanic messes up the car, is the mechanic responsible for the mistake, is the AI to blame… or maybe the company that produced this technology?
This is really a challenging question for policymakers. We need to be wise, we need to think about ethics, and we need to bring in lots of different perspectives, because there are a lot of different stakeholders. And I don’t know what the right answer is. Although I think it is fair to say we do need to think about new models of answering these questions. It’s not sufficient to just convene a bunch of smart people and then write down a 15,000-word set of laws.
Governance is another area where we can use AI tools. If we were designing a government today with the tools available, we could include more ways to gather public input, evaluate policy progress, and iterate on decisions—just like software developers iterate on code. I think we could be much more creative if we allow ourselves to be.
If we’re not afraid of it as much as we are now.
Anytime something new comes in, we should be wise and cautious. But at the same time, because of the acceleration and complexity of the world around us, I think the ways we organise institutions and populations need to be different.
We can no longer just rely upon people being well educated and rising to a leadership position through merit. There’s just too much for any single person (or even a panel of people) to know.
Moving to models that are more network-based and focused on experimentation, iteration and collaboration, would help us get there, I think. Anyway, that’s a long way of saying that there are some opportunities if we’re willing to experiment and be flexible.
AI is taking over procedural and analytical tasks, forcing a shift in human competencies towards creative problem solving. Should military education institutions reformulate their programs accordingly to train more expert generalists, and how would that look in your eyes?
Yes, I think that’s true, and not only for the military; it’s true for many different roles in the security sector: we need people who are more expert generalists. What I mean is that we’re not only developing people with deep expertise in a single field, but people who understand how to orchestrate things across a lot of different disciplines. Deep, narrow expertise (like someone who only knows „vulnerability management” or „crisis response”) can often already be supplied by AI tools. Expert generalists also need broader, transdisciplinary competencies, such as critical thinking, metacognition, strategic and systems thinking, empathy, and awareness.
Is it possible to design a training system that takes into account the irrationality of human decisions and distress?
Training and education are not only focused on developing cognitive skills. Instructional programs should help develop the full range of human capacity. That also means emotional control, stress regulation, metacognition, perspective taking, and so on. It’s important that, as we design these competencies into our learning systems, we think holistically and don’t reduce outcomes down to simple psychomotor skills or lower-order knowledge.
So yes, it is both possible and absolutely necessary to do that. There are also many great theories that can support this, and there are even different kinds of sensors (for example, neurophysiological saliva sensors that detect stress hormones) or performance models that can help us work through these. These are trainable things.
For example, performance under stress has already been studied quite a bit, and this will only continue to increase.
Plus, in the next few years, we’re going to start seeing a lot more commercial neurotechnology coming out onto the market. It’s already available today, but in combination with AI tools, I think we’ll see new opportunities.
This is another area that is both a great opportunity but also pretty scary.
This is definitely going to be a big area for people to look at. The combination of AI with neurotech could revolutionise the way we work.
All of today’s emerging technologies, like neurotechnology, space technology, biotechnologies, are all going to be a series of—maybe not revolutions—but shocks to our old way of doing things.
We need to be ready for that. And I don’t think that we are preparing people for this kind of future (which is coming just in a few years).
And it all goes back to our training and education system. What competencies define effective uniformed service personnel?
Well, this connects to what we discussed earlier about the need for more experimentation and iteration. This is a perfect area to apply that. If I were designing the requirements and competencies for a soldier, policeman, or politician, I’d start with what subject-matter experts say and build a model.
But then I’d immediately move to a data-driven approach: scraping information, seeing who actually succeeds, and using real empirical data to refine the model over time.
Also, what makes an expert infantryman today might be very different from what’s required in the future or just in different contexts. A soldier in Ukraine versus a soldier in Poland right now might need very different skills, and a Ukrainian soldier today versus two years ago also had different competencies. So, models need to be dynamic, validated with empirical data, and personalised to context and requirements. This underscores the Future Learning Ecosystem concept.
Based on your experience across various security fields, have you observed cultural or structural barriers within uniformed services (like an emphasis on hierarchy, hard skills, or traditional gender norms) that might affect the adoption of the FLE?
I would say that the driving force for all security professionals is, above all else, performing well. You could say this in other sectors too; people who are serious about cybersecurity, for example, use all of the available tools and training to do their best.
The challenge that I’ve seen much more is interoperability, particularly in a context like NATO: how do you get 32 nations plus all the partners to work together? How do you get all these nations, with different languages, equipment, materials, and digital systems, to communicate effectively? Not only humans talking to humans, but our digital systems communicating.
And it all comes down to the smallest of details.
It’s not just a syntactic challenge of getting ones and zeros from one computer to another. Look at writing dates; as an American, I write them starting from the month, but as a European, you start by writing the day. Something as small as that can create confusion if not addressed.
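(Ed. note: a small Python illustration of the ambiguity described here; the date string is hypothetical.)

```python
from datetime import datetime

raw = "03/04/2025"
us = datetime.strptime(raw, "%m/%d/%Y")  # American reading: March 4, 2025
eu = datetime.strptime(raw, "%d/%m/%Y")  # European reading: April 3, 2025
print(us.date(), "vs", eu.date())        # 2025-03-04 vs 2025-04-03

# Agreeing on an interchange format such as ISO 8601 removes the ambiguity.
print(datetime.fromisoformat("2025-04-03").date())
```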
That’s a simple and obvious example, but let’s consider one related to human learning, such as the concept of „leadership”. What does it mean if I say, „I have a four out of five in my leadership score”? What does leadership even mean? It may be a completely different concept if I’m working in Poland versus in Germany, or in a hospital versus on a battlefield. This becomes particularly challenging if we’re trying to break up training, education, and jobs into smaller parts, automatically assign learning or on-the-job advice, and be interoperable—across systems, human teams, and nations.
Information overload: the price of the digital world
I feel like nowadays people feel a bit overwhelmed by all the technology. To me it’s an indicator that we aren’t using it right. We are not taught how to use technology effectively and in harmony with human nature. We perceive it as a threat that overtakes human potential, while you say it’s an opportunity.
I don’t think that the problem is necessarily that people don’t understand technology. Instead, I would point out two things.
Number one, we live in a world where we are being constantly overloaded. „Information overload” is real, and it has real impacts on our cognition, ability to pay attention, and our mental health. And so if you’re feeling that way, it’s not because you’re doing something wrong; it’s because that’s the world we’re living in right now.
Second, I’d also emphasise (especially to technology developers) the importance of a user-friendly interface (UI – ed.). Whenever I advise people, I tell them to focus on it early on, because removing barriers to human-system use is essential. If something is confusing or hard to use, it may be because it wasn’t designed with a good interface in mind.
This notion of a good human-system interface extends beyond the UI. There are areas of practice, like Implementation Science and Human-Systems Integration, that consider all the different ways new technologies need to be adjusted for people and our contexts. If we’re not doing those things, then the person-technology fit will be suboptimal.
So, while training and education about new technologies is critical, we need to make sure that we’re not just giving people more information. We need to help people navigate through the information effectively. And, ultimately, you can’t „train your way” out of a bad technology design.
Is there any way to minimise information overload then?
This is a really good question. First of all, we need to be aware of information overload and its effect on our ability to make good decisions. Our decision-making and attention—our prefrontal cortex executive resources—are finite. It’s really important to protect them.
Every time you doomscroll or watch short-form videos, you’re using up some of these mental resources, and later you have fewer of them available.
We need to learn how to self-regulate.
I think the short-term advice is to monitor how you feel as you’re being overloaded with too much information. It’s also really, really important to put screens down at some point, get a little exposure to nature and the non-digital world, and have some time just for your own mental health.
And long term?
I think that we need to have some real training, education and discussions about these kinds of mental health issues, just like we have with fitness and food.
Can AI make the final decision?
Systems can recommend actions on the battlefield, but humans still make the final decisions. Do you think we will ever reach a point where AI is trusted to make critical decisions, like pressing the red button?
This goes back to some of these larger policy questions. Though, if we’re talking about today’s technology, we have to be really careful because the systems today are not fully tested in a „mature at scale” sense. There’s a famous thought experiment called the paperclip maximiser.
Paperclip maximiser?
Imagine that you’ve given a super AI-empowered factory one job: create as many paperclips as possible. What does the AI do? It tries to make more paperclips, but it needs more resources, so it sends robots out to collect the raw materials. When people stand in the robots’ way, the AI decides to wipe out all humans so that it can take over all the resources on Earth. It might even send spaceships to other planets so it can mine all of those resources too, just so it can make as many paperclips as possible.
Wow, that’s a bit… excessive.
You have to get rid of those pesky humans that are blocking your resources! Obviously, it’s a little silly, but if you think about it in practice, it’s the idea that AI doesn’t have the same kind of constraints that we as humans do—unless they’re explicitly programmed into the application. We (as humans) have context that an AI system doesn’t necessarily have.
It doesn’t think about things like humanity or morals because you haven’t necessarily taught it that.
Exactly, and let me give you a real-world experiment with this principle: social media algorithms. Meta, for example, has been optimising the Facebook application for engagement (paperclips). And so what did we see? First, you might recall the scandal where people found out that they were being experimented on, to test different ways—like emotional content—of increasing engagement. Learning from these experiments, Facebook adjusted the algorithm.
Not long after, in Myanmar, the algorithms were optimising for negative emotion and engagement through pushing hate speech, and auto-playing negative videos. This continued for a while, and Amnesty International reported that Facebook was a significant contributor to the genocide in the country, due to this promotion of violent rhetoric.
So this goes back to the question of whether there should be a human in the loop. Are we really ready to unleash a large-scale autonomous „paperclip factory”? I don’t think we are, technologically speaking.
Absolutely. I think this is an issue that big companies need to take seriously.
We need to rethink our systems. You can’t just change some small things; sometimes the whole structure needs to be rethought.
Meta is a good example again. There were some internal documents that leaked, reporting that around 10% of Meta’s revenue growth in 2024 came from scam ads. We’re talking about around $7 billion.
They knew that this was happening and chose to allow it anyway, calculating that even in the worst-case scenario, if they were sued (and it’s really the EU that’s going to sue them), they would still end up making money.
So I think that this is a question for policymakers, though not so much one of regulation, because this area is difficult to regulate given its fast-moving environment.
Would it be more effective to use positive reinforcement and incentives, or punishment and regulations?
There’s a whole variety of tools in our toolkit that we should be thinking through. I don’t know what the right answer is, but I can tell you that the wrong answer is continuing to use models of governance, policymaking, and law that worked a hundred years ago. They are just no longer fit for purpose in our current environment.
And this is why I keep talking about this idea called the Cynefin model by Dave Snowden. It talks about how context influences how we should lead, design organisational structures, and approach problems.
The Cynefin model suggests that in a complicated environment, for example, things are challenging to understand but are still knowable: understanding astrophysics may be difficult, but you can still learn it. You can study, experiment, use data, rely on scholars to push best practices up a hierarchy, and make sound, methodical, analysis-based decisions.
But in a complex environment, there are too many forces at play to be able to say „oh, I can see how A leads to Z”. Things are in flux, and it’s no longer sufficient to rely on analysis and proven theory.
That’s why we need teamwork.
Yes, we need lots of small groups that are experimenting, that are sharing their information back in a much more networked model rather than a hierarchical one.
There’s a lot more that goes into the Cynefin model, but the point I’m emphasising is that we’re now living in a complex world, which affects the role of leaders, whether in military or government or business, as well as the way we structure our organisations as hierarchies or as networks.
When curiosity sets the course
Finally, at the end, I’d love to hear your story. How did you end up in this field, and what challenges did you face along the way, especially as a woman?
I’ve always been interested in a lot of different things. That led me to study human psychology, then user interfaces, media and information design, and later also computer science. Eventually, I brought all of these different facets together, because I like to understand the whole system and how everything fits together. I moved into Modeling and Simulation, plus its application to improving human performance, such as through training or human-focused analysis.
You’re a perfect example of „practice what you preach”. Having so many interests yourself, the education system we have now must have been frustrating for you.
All of that led me to work on a variety of projects, but I eventually found myself working with security forces: military, defence, national security, and NATO-type organisations. Those are problems that truly matter to me. I’ve also tried other contexts, like working for businesses focused on profit, but I didn’t find the same sense of fulfillment.
You wanted to tackle hard problems.
In my experience, when you’re working on hard problems, people care more about performance and outcomes than what you look like or what you wear. What matters is whether you can get the job done.
We’re seeing this even more with the AI revolution, because things are moving so fast that there’s no time for the extra stuff.
Relying on only one specific group of people can’t sustain the workforce. Companies and governments need to look to different environments, seeking people based on their expertise rather than gender or personal preferences.
There’s all sorts of data supporting the idea that teams made up of people from diverse backgrounds, with different experiences, training, education, and areas of expertise, perform better, because they bring multiple perspectives to the table.
If you are ever in a situation where you are being shut down or mistreated just because you’re a woman, or younger, or don’t have the same background, it means that the place is probably not driven by the goal of performing really well. And if they’re more concerned about these kinds of office politics rather than achieving the goal, then maybe that’s not a good space to be in.
Thank you so much for this insightful and truly inspiring conversation.