My background is in system development, security, governance, risk, and operational design.
That background matters because Fragment Practice is not interested only in ideas in the abstract. It is interested in what happens when ideas have to survive contact with real constraints: unclear ownership, weak review loops, policy drift, tacit routines, tool pressure, handoff failures, and the practical ambiguity that appears when AI enters work faster than meaning and accountability can stabilize.
Over time, the work moved upstream: less toward tooling alone, more toward the layer where concepts stabilize, decisions become reviewable, and human and AI-enabled systems remain legible under operating pressure.
That is the ground from which Fragment Practice works now: between worldview and operations, between language and structure, and between conceptual clarity and practical use.