tags: #thought
Please note: this is a highly unfinished thought that will be continually refined over the coming years, given that it is one of my main interests.
---
In physics, unification is the desire for a single elegant theory that could explain everything, including all the fundamental forces. More generally, unification aims to bring together theories and ideas under one consistent framework. This thought explores its feasibility.
## The cognitive rabbit hole
The brain, as a material object, tempts us towards positivist assumptions, though these reductions and their associated failures have been repeated countless times [@wattersTeachingMachinesHistory2023|(Watters, 2023)]. Prior researchers such as [[B.F. Skinner]] tried to produce a unified theory of learning; he argued that the simple mechanism of [[Operant conditioning|operant conditioning]] could give rise to higher-level mental faculties - a claim that is heavily contested. Forcing ourselves to model the entirety of pedagogy using simple cognitive processes is far too complex, and it neglects the value-laden dimensions of education.
However, I believe the mistake lies in assuming that the desire for unification necessitates simplicity or objective grounding; it does not. That is, **unification is the process of accumulating a set of consistent and interoperable theories that lack contradictions**.
Initially this may seem impossible, given that pedagogy sits in an interdisciplinary space that draws upon the natural, social and formal sciences, with lines of research into seemingly incompatible forms of knowledge: the operation of the brain as physical matter, the complex social systems in which education is performed, and the philosophical values that guide its [[Purpose of education|purpose]].
However, contradictions can be reconciled by viewing these as theories that operate at different levels of abstraction. For example, socio-cultural theories do not conflict with cognitivism; rather, they operate at a higher level of abstraction. They do not focus on the specific mental processes occurring within the learner's mind, but are analytical theories that allow us to make sense of the environment, which can then be changed to increase the probability of cognitive learning primitives being triggered. As another example, constructionism does not conflict with cognitivism either; it simply lets go of some elements of control, hoping that in an environment where students construct artefacts, the relevant learning primitives such as reinforcement will naturally occur.
## Potential of symbolic reasoning
This is where I think pedagogic symbolic modelling could move up and down levels of abstraction, so that situations too complex for cognitive reasoning can move up to sociocultural and environment-based reasoning. Cognitive modelling requires operating at the lowest level, in order to control the specific cognitive processes that occur within the learner's mind. But perhaps pedagogic reasoning processes should acknowledge that our knowledge of learning is incomplete, and hence reasoning purely through cognitivism is too difficult; therefore, depending on the goal, it should **be possible to let go of fine-grained control**.
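To make this a little more concrete for myself, here is a minimal Python sketch of what letting go of fine-grained control might look like: the reasoner prefers the lowest level of abstraction, but when the fine-grained cognitive preconditions are unknown it falls back to coarser constructionist or sociocultural rules. All rule names, facts and recommendations are hypothetical placeholders; this is a toy illustration of the idea, not a proposed implementation.

```python
from dataclasses import dataclass


@dataclass
class Rule:
    level: int                # 0 = cognitive, 1 = constructionist, 2 = sociocultural
    preconditions: set[str]   # facts that must already be known to apply this rule
    recommendation: str


# Hypothetical rules at three levels of abstraction.
RULES = [
    Rule(0, {"prior_knowledge(learner, fractions)", "working_memory_load(low)"},
         "schedule spaced retrieval practice on fractions"),
    Rule(1, {"artefact_available(block_based_editor)"},
         "set an open-ended construction task and observe the artefacts"),
    Rule(2, {"community(classroom)"},
         "restructure the environment around collaborative norms"),
]


def recommend(known_facts: set[str]) -> str:
    """Prefer the most fine-grained rule whose preconditions are all known."""
    for rule in sorted(RULES, key=lambda r: r.level):
        if rule.preconditions <= known_facts:
            return rule.recommendation
    return "ask the designer for more information"


# Rich cognitive information keeps fine-grained control...
print(recommend({"prior_knowledge(learner, fractions)", "working_memory_load(low)"}))
# ...while only coarse environmental facts move reasoning up a level of abstraction.
print(recommend({"community(classroom)"}))
```

The design choice here is that "letting go" is not a failure mode but the default behaviour whenever the available information runs out, which mirrors the argument above.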
Additionally, trying to capture every single precondition about the learning environment, the students and the goals is far too complex - yet this is the assumption that ITSs and student modelling operate under. Hence, interaction with the designer acts as a proxy for information about the outside world. The information designers implicitly know is far too difficult to explicitly operationalise for reasoning; instead, we need to think about the abstract mapping between the designer's implicit knowledge and communicative capabilities on the one hand, and the pedagogical theories that exist on the other.
## Meanings of unification
However, it seems the predominant narrative is that a unified theory of education cannot exist. I wonder to what extent this stems from different understandings of the meaning of unification. For example, [[Étienne Wenger]] argues that because social theories are not propositional (true/false) but are perspectives on the world, the aim should be to [[Plug and play|plug and play]] [@wenger-traynerPracticeTheoryConfessions2013|(Wenger-Trayner, 2013)]. But this seems to accept that different theories have different benefits and values laden within them, and that we need to seek a consistent combination of theories for our specific purpose - which is the process of unification.
# References
Watters, A. (2023). _Teaching Machines: The History of Personalized Learning_. MIT Press.
Wenger-Trayner, E. (2013). The practice of theory: Confessions of a social learning theorist. In _Reframing educational research_ (pp. 105–118). Routledge. [https://api.taylorfrancis.com/content/chapters/edit/download?identifierName=doi&identifierValue=10.4324/9780203590737-12&type=chapterpdf](https://api.taylorfrancis.com/content/chapters/edit/download?identifierName=doi&identifierValue=10.4324/9780203590737-12&type=chapterpdf)
---
TODO
* definition of learning vs education
* Incorporate the following thoughts (but should these go into [[The automated derivation of design-related contextual pedagogical theory]] instead, linking to its motivation being unification as a dynamic state?)
* _‘Difficult to reconcile these cognitive versus logistic principles’_
I think this would be possible if we could create a flexible EML that spans levels of abstraction and moves flexibly between them depending on the amount of information available at any given time. Intuitively, I feel this is possible if we use an inherently extensible language like logic programming, where we can continuously extend the atoms and rules with which we reason.
More so, just as LLMs gained their abilities through automation, I think we need to do the same thing here. That is, nobody hand-coded the state of an LLM; it was derived from the data and the training methods. There is a large amount of existing practice continuously going on, so it would be great to find a way to tap into it and use it as an automated experimentation platform that continuously builds upon theory and reconciles contradictions in the hope of unification.
* _‘How to prove that an EML is useful?’_ I think there is a lot of danger in a static EML that is deemed useful or not, or even one that can only be edited by humans. Rather, the EML should represent dynamic knowledge of pedagogy, where some system of experimentation can edit and add to it whilst maintaining both interoperability and consistency. But then we need to prove the usefulness of the process of deriving the EML, which I think can be done by taking snapshots of the EML and seeing how well each represents reality and how useful it is. That is, we do not evaluate the EML as an abstract entity, but its instantiated state of pedagogical knowledge, for which there are a myriad of potential methods... If the results of that state are useful, then we know the process of knowledge derivation produced something accurate and/or useful.
* Perhaps, in a strange way, the algorithm itself takes a somewhat Piagetian view of the dynamic state of knowledge (which, in the child’s mind, includes the language itself) that is constantly being built upon. That is, the EML itself should be like a _schema_ that changes: new pedagogical knowledge is _assimilated_ into it by the automated experimentation platform, and contradictions are _accommodated_ through changes to the language, in a continuously evolving cycle that walks towards unification (see the sketch below).
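To sketch how such an assimilation/accommodation cycle over an extensible rule base could look - again a toy in Python, with hypothetical predicates and observations rather than real pedagogical knowledge - observations consistent with the current rules are simply added as facts, while contradictions narrow the scope of the offending rule:

```python
class SchemaEML:
    """Toy 'EML as schema': a growing set of facts plus defeasible rules."""

    def __init__(self):
        self.facts = set()
        # Each rule predicts an outcome from a condition, except in excluded contexts.
        self.rules = [
            {"if": "immediate_feedback", "then": "improved_retention", "unless": set()}
        ]

    def predict(self, condition, context):
        for rule in self.rules:
            if rule["if"] == condition and context not in rule["unless"]:
                return rule["then"]
        return None

    def observe(self, condition, outcome, context):
        """Fold one observation from the experimentation platform into the schema."""
        predicted = self.predict(condition, context)
        self.facts.add((condition, outcome, context))
        if predicted is None or predicted == outcome:
            return "assimilated"          # consistent with the current schema
        for rule in self.rules:           # contradiction: accommodate by narrowing
            if rule["if"] == condition:   # the rule's scope to exclude this context
                rule["unless"].add(context)
        return "accommodated"


eml = SchemaEML()
print(eml.observe("immediate_feedback", "improved_retention", "procedural_task"))   # assimilated
print(eml.observe("immediate_feedback", "reduced_reflection", "open_ended_task"))   # accommodated
```

Here accommodation is only scope restriction; in a genuinely extensible logic-programming EML the atoms and rules themselves would grow, but the cycle has the same shape, and each snapshot of the rule base is the kind of instantiated state that could be evaluated for usefulness.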