*Note: this is a small speculative comment in which I skip hedging language, though of course reality is more nuanced.*
---
As software engineering costs fall with LLMs, we will see a wave of hyper-specialised software that meets the needs of very niche user groups. Whilst current contributions look more like slop than good software, this is not due to poor coding capability but rather an inability to architect software. The quality of the code is pretty decent if I use mainstream typed languages, create types in advance and provide function stems. However, LLMs still struggle to decompose problems well, so I have to provide a lot of scaffolding within which the LLM performs its generation. Whilst LLMs are improving at architecting as we move towards the ideals of agentic AI, this automation cannot climb indefinitely: software rests upon human needs and design decisions, which require some form of interaction with humans (note, this focuses on user-facing software given my interest in educational technology).
Quite a few people note that software development will become designers operationalising and communicating their decisions, though I believe it will be more than that: rather, a mutual negotiation of shared meaning in which the agent or design software would:
1. Elicit information and desires from the designer at a necessary specificity for software construction.
2. Mediate their design-related cognitive processes to help them come to better decisions.
This mediation of design-related cognitive processes would, I feel, occur not solely through dialogue but through a tool with many forms of interaction. A simple example: the designer draws a rough sketch, which is instantiated into a mockup with suggestions popping up for alterations they could make in accordance with [[Human-computer interaction|HCI]] guidelines. We cannot assume that designers have all the appropriate knowledge, and hence we want to provide them with relevant information at appropriate times. Whilst HCI guidelines are quite universal in applicability, some of the real benefits appear when you stop caring about generalisability and provide recommendations in line with the context. Take, for example, producing EdTech for language learning: there is a wealth of literature on pedagogical theories, but given the interdisciplinary space this information is difficult to navigate and digest, and it is seldom used in practice; a design tool could instead make evidence-based suggestions drawn from proven [[Pedagogical patterns|pedagogical patterns]]. More complex tools could even tap into design preferences expressed by users through forums; for example, [[Anki]], [[Obsidian]] and [[Goodnotes]] have passionate user bases that discuss their needs and best practices for using the technology, and the tool could use these discussions as a basis to suggest topics for design consideration and practical pedagogical improvements to the software.
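To make the idea of context-sensitive, evidence-based suggestions slightly more concrete, here is a minimal TypeScript sketch of how such a suggestion engine might be shaped. Everything in it is a hypothetical illustration, not an existing tool or API: the types, the rule list and the example suggestions are all invented.

```typescript
// Hypothetical sketch of a context-sensitive design-suggestion engine.
// All names and rules here are illustrative assumptions, not a real library.

type EvidenceSource = "HCI guideline" | "Pedagogical pattern" | "User forum";

interface DesignContext {
  domain: string;       // e.g. "language learning"
  uiElements: string[];  // elements present in the designer's sketch or mockup
}

interface Suggestion {
  topic: string;          // what the designer should consider
  rationale: string;      // why, phrased for a non-expert designer
  source: EvidenceSource; // where the evidence comes from
}

// Each rule inspects the design context and may emit a suggestion.
type Rule = (ctx: DesignContext) => Suggestion | null;

const rules: Rule[] = [
  (ctx) =>
    ctx.uiElements.includes("flashcard")
      ? {
          topic: "Spacing of reviews",
          rationale: "Spaced repetition generally outperforms massed practice for retention.",
          source: "Pedagogical pattern",
        }
      : null,
  (ctx) =>
    ctx.uiElements.includes("form")
      ? {
          topic: "Error prevention",
          rationale: "Constrain inputs and confirm destructive actions rather than relying on error messages.",
          source: "HCI guideline",
        }
      : null,
];

// Collect every suggestion whose rule matches the current design context.
function suggestFor(ctx: DesignContext): Suggestion[] {
  return rules
    .map((rule) => rule(ctx))
    .filter((s): s is Suggestion => s !== null);
}

// Example: a language-learning app whose sketch contains a flashcard view.
console.log(suggestFor({ domain: "language learning", uiElements: ["flashcard"] }));
```

In a real tool the hardcoded rules would of course be replaced by suggestions mined from HCI guidelines, the pedagogical-patterns literature and user forums, but the shape of the interaction (design context in, prioritised design considerations out) would be similar.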
Currently, [[Learning design tool|learning design tools]] are focused on producing artefacts like lesson plans or a pedagogical model to guide an [[Intelligent tutoring system|ITS]]. This is not a theoretical constraint but a pragmatic one, given the cost and difficulty of developing software. As this cost falls, perhaps we will have tools for producing software that mediate the designer's thought processes in a way that helps produce evidence-based EdTech; in some sense such a tool approaches the ideals of the [[Memex|memex]].
Just imagine ... an easy-to-use [[Learning design|learning design]] tool that outputs hyper-contextual EdTech software grounded in the literature.