There is a vast amount of knowledge in the world beyond what we will ever be able to comprehend. Learning is therefore fundamentally an optimisation problem that concerns:

* Selecting the most appropriate material
* Minimising the time necessary for the same learning effects
* Maximising the joy from the process of learning

This efficiency should, however, be measured as far as possible against experienced value rather than cumbersome proxies, even if that comes at the cost of measurement difficulties (see [[Alchemy exists!|this]]).

## Learning design

The idea of optimisation becomes especially apparent in [[Learning design|learning design]], where learning objectives need to be met under given constraints. This is why I have concerns about LLM-based lesson planners like [Aila](https://labs.thenational.academy/) (as of 28/8/2025): the combinatorial nature of the problem makes it computationally intractable (NP-complete), which is hard for LLMs to support. More generally, education has proven too difficult to operationalise formally ([[Educational modelling language|educational modelling languages]]) with enough expressiveness to capture pedagogical nuance and meaningfully support educators' decisions without being excessively reductive: most languages seem to model the design created by the designer rather than express the pedagogy itself. To my understanding, people are starting to explore human-AI teaming that combines generative AI with symbolic reasoning, as a heuristic approach to producing better learning designs (e.g. [[Orchestration graphs|orchestration graphs]]).
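To make the combinatorial nature concrete, learning design can be caricatured as a set-cover-style selection problem: choose activities that jointly cover all objectives while minimising total time. Set cover is NP-hard, and the brute-force solver below checks all 2^n subsets, which is exactly why the exact problem blows up. The activities, objectives, and durations here are made-up toy data, not anything from Aila or a real curriculum.

```python
from itertools import combinations

# Toy model (all data hypothetical): each activity covers some set of
# learning objectives and costs some minutes of lesson time.
activities = {
    "lecture":   ({"theory"}, 60),
    "worksheet": ({"practice"}, 30),
    "project":   ({"theory", "practice", "transfer"}, 120),
    "quiz":      ({"practice"}, 15),
}
objectives = {"theory", "practice", "transfer"}

def best_plan(activities, objectives):
    """Brute-force set cover: try every subset of activities (2^n)."""
    best, best_time = None, float("inf")
    names = list(activities)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            covered = set().union(*(activities[n][0] for n in combo))
            minutes = sum(activities[n][1] for n in combo)
            if objectives <= covered and minutes < best_time:
                best, best_time = combo, minutes
    return best, best_time

plan, minutes = best_plan(activities, objectives)
# Here only "project" covers "transfer", so it alone is the cheapest cover.
```

With four activities this is instant; with a realistic bank of hundreds of activities plus sequencing and prerequisite constraints, exhaustive search is hopeless, which is why heuristic (including human-AI) approaches are attractive.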
## Limits of knowledge

Viewing knowledge as chunks acquired through spaced repetition is useful because it simplifies measurement, at least of how much declarative knowledge can theoretically be acquired in a given amount of time (see [how much knowledge the human brain can hold by Piotr Wozniak](https://supermemo.guru/wiki/How_much_knowledge_can_human_brain_hold)).
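The appeal of the chunk view is that the ceiling becomes simple arithmetic: a sustainable daily acquisition rate times a study lifetime. The numbers below are purely illustrative assumptions of mine, not figures from Wozniak's article.

```python
# Back-of-envelope ceiling on declarative chunks (all parameters assumed):
new_items_per_day = 30   # assumed sustainable spaced-repetition intake
study_years = 50         # assumed span of deliberate daily learning
days_per_year = 365

lifetime_items = new_items_per_day * study_years * days_per_year
print(lifetime_items)    # prints 547500
```

Whatever values you plug in, the point stands: the ceiling is finite and computable, which is what makes selection of material an optimisation problem in the first place.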