Document-Oriented Approach to Working with AI Agents in Software Development

We use a document-oriented AI development approach that combines LLM orchestration, tool-augmented AI agents, MCP (Model Context Protocol) servers for safe tool access, and a rigorous engineering process with senior engineers in the loop (Human-AI-in-the-loop).

This approach turns agentic AI from a simple code assistant into a managed participant in the team: AI agents work against clearly defined project rules, create and modify real code, run tests, update documentation, and always operate under expert control.

For our clients this translates into predictable quality, accelerated delivery, and lower total cost of ownership.

Why We Adopted a Document-Oriented Approach

Our goal in using AI agents is not just to speed up coding, but to achieve stable, repeatable, senior-level quality within an AI-enabled software development lifecycle (AI-enabled SDLC). In our experience, this requires structured context, formalized project knowledge, and a managed process rather than ad-hoc prompts to a single model.

We want AI-augmented software engineering to be both fast and dependable. In practice we saw that, without a structured context and a governed workflow, AI agents behave inconsistently and the result depends too much on how individual developers phrase their prompts. A document-oriented AI development approach addresses this by formalizing project knowledge and embedding AI agents into a strict task lifecycle with clear responsibilities.

Instead of isolated dialogs with a model, we build a unified engineering loop. Project knowledge is collected into a project knowledge base, tasks are expressed as precise engineering instructions, and AI agents work through an orchestrator, MCP servers, and a set of tools, executing work step by step. For B2B clients this format reduces dependency on specific individuals, makes teams easier to scale, and lowers the barrier for onboarding new engineers.

What We Mean by Document-Oriented Use of AI Agents

A document-oriented approach treats an AI agent as a new engineer on the team who receives a structured knowledge base about the system from day one. The agent does not have to guess how the project is built or what is expected; it works strictly within the documented architecture decisions, standards, and processes.

In each service and project we create a project knowledge base that AI agents consult before performing any task. This knowledge base describes how the system is designed and how code is expected to be written and tested. It becomes the shared foundation for both human developers and context-aware AI agents.

This project knowledge base includes architectural materials, engineering standards, implementation templates, code quality and testing requirements, as well as examples of “good” solutions. Because the entry path into the project is the same, a new human developer and an autonomous code agent can ramp up using the same knowledge, which makes the overall development workflow more consistent.

How the Project Knowledge Base Is Structured

The project knowledge base is the central element of our document-oriented AI development approach. It acts as a single source of truth for engineers and AI agents, and it removes the need to repeatedly rediscover the same solutions. The knowledge is stored in a vector knowledge base backed by semantic search so AI agents can retrieve relevant context by meaning, not only by exact text matches.

Typically, the project knowledge base contains:

  • Architectural context. The service's purpose, domain model, key scenarios, contracts between services, integration diagrams, and architectural constraints. This lets agents act as context-aware AI agents instead of generating code in isolation.
  • Technical standards and patterns. Coding standards, module structure conventions, rules for logging, error handling, transactions, integrations, and a curated list of allowed libraries and patterns. This ensures knowledge-grounded code generation that matches the team’s expectations.
  • Instructions for AI agents. Clear rules on what to do and what to avoid, examples of reference implementations, answer templates, required plan and report formats, and checklists for different task types (AI-assisted refactoring, new module, test suite, and so on).
  • Infrastructure and data context. Information about environment configuration, database migrations, queues, external APIs, and requirements for secure data handling within AI-driven development workflows.
  • Decision history. Documented architectural decisions (ADR), notes from previous reviews, accepted trade-offs, and reasons behind technical debt that must not be removed automatically.

All of this content is stored in a vector database (vector store) optimized for semantic search. AI agents query it before and during their work, so they can quickly find relevant fragments by meaning, re-use existing solutions, and stay aligned with the current state of the system.
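
To make the retrieval step concrete, here is a minimal, self-contained sketch of semantic lookup over such a knowledge base. The embedding function below is a toy stand-in (token hashing) so the example runs anywhere; a real deployment would use a proper sentence-embedding model, and all document paths are hypothetical.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for a real embedding model: hash each token into a
    fixed-size vector and normalize. A production setup would call an
    actual sentence-embedding model here."""
    v = np.zeros(dim)
    for token in text.lower().split():
        v[int(hashlib.md5(token.encode()).hexdigest(), 16) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def top_k(query: str, docs: dict[str, str], k: int = 3) -> list[tuple[str, float]]:
    """Rank knowledge-base documents by cosine similarity to the query."""
    q = embed(query)
    scores = {path: float(np.dot(q, embed(text))) for path, text in docs.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Hypothetical knowledge-base fragments.
kb = {
    "standards/error-handling.md": "All service errors are wrapped in a DomainError type",
    "adr/0012-outbox-pattern.md": "We publish events via the transactional outbox pattern",
    "templates/new-endpoint.md": "A new REST endpoint consists of router, service, tests",
}
print(top_k("how should a service report errors", kb))
```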

Orchestrating AI Agents and Using MCP Servers

We use orchestration of AI agents and large language models to move from simple code suggestions to a managed, end-to-end AI-enabled development workflow. In this setup, AI agents are embedded into an orchestrator–agent framework and run through MCP servers that provide safe access to tools, repositories, and infrastructure.

These tool-augmented AI agents can read and write project files, execute commands, call external services, and work with the project knowledge base, while the orchestrator coordinates their actions as part of a multi-agent workflow. For clients this means that AI is not an add-on, but an integrated part of how software is designed, implemented, and validated. In this environment, the agents:

  • Run through MCP servers that provide controlled access to the file system, version control, and CI/CD systems.
  • Query the vector knowledge base and project knowledge base to take architecture, code structure, and constraints into account before proposing changes.
  • Access internal and external data sources where permitted by security policies, including APIs and internal services.
  • Create and modify source code, configuration, database migrations, and test suites as part of end-to-end code delivery with AI.
  • Trigger and observe tests and static analysis jobs in an AI-enabled CI/CD pipeline and refine the solution until quality gates are met.

This level of orchestration turns agentic AI into a repeatable, observable AI agent platform rather than a one-off interaction with a single model. It also simplifies risk management, because every action taken by an AI agent is executed via a controlled MCP server and is visible in the engineering toolchain.
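
As an illustration, here is a minimal sketch of the control pattern described above: every tool call from an agent passes through a single allow-listed, logged entry point. This is a simplified stand-in for an MCP server, not a real MCP SDK; all names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ToolServer:
    """Simplified stand-in for an MCP server: agents can only use
    registered tools, and every call is recorded in an audit log."""
    tools: dict[str, Callable[..., str]] = field(default_factory=dict)
    audit_log: list[str] = field(default_factory=list)

    def register(self, name: str, fn: Callable[..., str]) -> None:
        self.tools[name] = fn

    def call(self, agent: str, tool: str, **kwargs) -> str:
        if tool not in self.tools:
            raise PermissionError(f"{agent} requested unregistered tool {tool!r}")
        self.audit_log.append(f"{agent} -> {tool}({kwargs})")
        return self.tools[tool](**kwargs)

server = ToolServer()
server.register("run_tests", lambda target: f"pytest {target}: all passed")

# The orchestrator executes an approved plan step by step; afterwards the
# audit log shows exactly what the agent did.
print(server.call("impl-agent", "run_tests", target="services/billing"))
print(server.audit_log)
```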

Human-AI-in-the-Loop: The Role of Engineers

We combine advanced agentic AI orchestration with continuous involvement of experienced engineers in a Human-AI-in-the-loop model. AI agents never run unchecked: senior engineers define the tasks, choose the scope and context, approve the plans, and make the final decisions about what actually lands in the codebase. This engineer-in-the-loop pattern keeps architectural ownership, business alignment, and quality control firmly in human hands, while still capturing the productivity gains of autonomous AI agents.

For clients, human-validated AI outputs mean that accelerated delivery does not come at the cost of uncontrolled changes or opaque decisions. The AI does the heavy lifting; the engineering team remains accountable.

How Engineers and AI Agents Work Together

We follow a clear, repeatable order of operations so that collaboration between engineers and AI agents is predictable and easy to visualize as part of an AI-enabled SDLC.

  • Task definition by a senior engineer. The engineer formulates a precise technical task for the AI agent: project context, target change, constraints, code and testing requirements, and the expected format of the result. This is an engineering specification, not a vague request.
  • Focusing the orchestrator and AI agents. The engineer specifies which parts of the project knowledge base and vector knowledge base are relevant, which services and modules are in scope, and which tools the agent is allowed to call (file operations, tests, migrations, external APIs). The LLM orchestration layer receives clear boundaries and priorities.
  • Review and approval of the agent’s plan. Before changing any code, the AI agent produces a detailed, step-by-step plan: files to touch, layers affected (configuration, models, services, APIs, tests), database implications, and needed checks. The senior engineer reviews, adjusts, and approves this plan (a structured example follows this list).
  • Monitoring execution and adjusting in-flight. As the plan is executed through the MCP server, the orchestrator records which files are modified, which commands are run, and which tests pass or fail. Engineers can observe this in real time and pause, redirect, or narrow the scope when needed. This creates transparent, controllable AI-driven development workflows.
  • Final review and acceptance. All changes go through version control and AI-assisted code review. The senior engineer inspects diffs, verifies that the implementation meets requirements, and either requests adjustments or approves the work. Only then are changes merged into the main branch.
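
To show what such a reviewable plan can look like in practice, here is a minimal sketch of a plan structure. The field names, task identifier, and file paths are illustrative assumptions, not a fixed format.

```python
from dataclasses import dataclass, field

@dataclass
class PlanStep:
    action: str      # e.g. "create", "modify", "run"
    target: str      # file path or command
    rationale: str   # why the step is needed

@dataclass
class ChangePlan:
    """A reviewable plan the agent must submit before touching any code."""
    task_id: str
    files_to_touch: list[str]
    db_migrations: list[str]
    required_checks: list[str]        # tests, linters, static analysis
    steps: list[PlanStep] = field(default_factory=list)
    approved_by: str | None = None    # set by the senior engineer on approval

plan = ChangePlan(
    task_id="BILL-1234",
    files_to_touch=["services/billing/invoice.py", "tests/test_invoice.py"],
    db_migrations=["20240601_add_invoice_status.sql"],
    required_checks=["pytest tests/test_invoice.py", "ruff check services/billing"],
    steps=[PlanStep("modify", "services/billing/invoice.py",
                    "add status field per the documented domain model")],
)

plan.approved_by = "senior.engineer@example.com"  # recorded during human review
if plan.approved_by is None:
    raise RuntimeError("plan must be approved before execution")
```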

Iteratively Improving the Knowledge Base and Prompts

Even with a structured workflow, the first iteration is not always perfect. We treat this as an opportunity to improve the overall AI-augmented software engineering system, not just to patch the specific change.

After each task, we:

  • Enrich the project knowledge base with new examples, counterexamples, and additional constraints.
  • Refine the instruction sets for AI agents, including what patterns to favor and which anti-patterns to avoid.
  • Adjust task templates and success criteria so that future prompts capture the real intent more precisely.

With every cycle, the combination of vector knowledge base, LLM orchestration, and Human-AI-in-the-loop becomes more accurate and efficient. For clients this means that the AI capabilities improve over time as the product evolves, instead of degrading or fragmenting.

Lifecycle of an AI Agent Task in Our Process

The lifecycle of an AI agent task describes the full path from a business or technical request to integrated, production-ready changes. It is the same for backend, frontend, and test automation work, which makes it easy to embed into governance frameworks and to visualize as AI-augmented SDLC phases.

A typical task goes through the following steps:

  • Preparing context and the specification. A senior engineer prepares the task, selects the relevant parts of the project knowledge base, and documents clear acceptance criteria for functionality, quality, and tests.
  • Initializing the AI agent and loading knowledge. Through the orchestrator, the agent receives access to the selected documentation, code, and architectural descriptions, as well as the allowed tools and environments.
  • Planning the change. The agent creates a step-by-step plan: which files to create or modify, which dependencies to consider, how to structure the code across layers, and what tests and migrations will be required.
  • Executing the plan and generating changes. Using MCP server capabilities and tool calling, the agent creates and updates files, adjusts configuration, writes code and tests, and aligns structures with agreed coding standards. Many tasks here fall into AI-assisted refactoring or implementation of new modules.
  • Running automated checks. Tests, linters, and static analysis are executed as part of an AI-enabled CI/CD pipeline. Results are fed back to the agent so it can fix issues and iterate until all critical checks pass (a minimal sketch of this loop follows the list).
  • Review and integration. The senior engineer reviews the diffs, validates the solution against the specification, and either requests changes or approves the work. Once approved, the changes are merged and deployed according to the client’s release process.
  • Updating the project knowledge base. Finally, we update the project knowledge base with new patterns, decisions, and structural changes so that future AI agent tasks operate on the latest view of the system.
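
The check-and-iterate step above can be summarized as a simple control loop: run the quality gates, hand failures back to the agent, and escalate to a human if the gates are still red after a bounded number of attempts. The sketch below is illustrative; the callbacks and iteration limit are assumptions.

```python
from typing import Callable

MAX_FIX_ROUNDS = 5

def run_until_green(
    run_checks: Callable[[], list[str]],           # returns names of failing checks
    ask_agent_to_fix: Callable[[list[str]], None],
) -> bool:
    """Drive the agent until all quality gates pass, or give up and
    escalate to the senior engineer after MAX_FIX_ROUNDS attempts."""
    for attempt in range(1, MAX_FIX_ROUNDS + 1):
        failures = run_checks()
        if not failures:
            print(f"all checks green after {attempt - 1} fix round(s)")
            return True
        print(f"attempt {attempt}: failing -> {failures}")
        ask_agent_to_fix(failures)
    return False

# Toy harness: the 'agent' fixes one failing check per round.
pending = ["pytest", "ruff", "mypy"]
green = run_until_green(lambda: list(pending),
                        lambda fails: pending.remove(fails[0]))
print("escalate to human:", not green)
```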

This lifecycle turns AI agents into a governed part of software delivery rather than an experiment on the side. For clients, it means faster turnaround on complex tasks, higher consistency of implementation, and a clear, auditable process for AI-enabled development.

What Clients Gain from This Approach

A document-oriented AI development approach with orchestrated AI agents and Human-AI-in-the-loop governance delivers clear, measurable benefits for our clients. It combines the speed and scalability of agentic AI with the reliability and accountability of a mature engineering organization.

Key outcomes include:

  • Shorter delivery times. Routine work such as boilerplate implementation, configuration, migrations, and test scaffolding is handled by autonomous code agents. Engineers focus on design and critical decisions, significantly reducing calendar time for feature delivery and large-scale changes.
  • Senior-level code quality. Code is produced within strict technical standards, supported by automated tests, static analysis, and AI-assisted code review. This leads to fewer defects in production and less technical debt over the lifetime of the system.
  • Transparent and manageable risk. Every AI-driven change flows through a governed lifecycle with clear checkpoints, observability for AI agents, and human approval. Clients can trace why a change was made, which AI agents were involved, and what safeguards were applied.
  • Faster onboarding for new team members. The project knowledge base and unified workflows make it easier to bring new developers up to speed. They rely on the same vector knowledge base and standards as AI agents, which accelerates ramp-up and keeps practices consistent.
  • Lower total cost of ownership. Faster implementation, fewer defects, and a more maintainable codebase reduce long-term costs. An AI-enabled, document-oriented software development model helps clients scale their products without repeatedly starting over on the architecture.

Where This Approach Delivers the Most Value

This approach to AI-enabled development is most effective in complex, long-lived B2B systems where quality, reliability, and change management truly matter. It is designed for enterprise AI development, not for one-off prototypes.

Typical high-value scenarios include:

  • Microservices and distributed systems. Many services, multiple integrations, and strict contracts between components benefit from a centralized project knowledge base and AI-driven development workflows.
  • Rapidly growing teams and products. When a product scales and new engineers are added frequently, a document-oriented AI development model keeps architecture, coding style, and testing practices consistent.
  • Systems with strict reliability and compliance requirements. Regulated environments with explicit SLAs and compliance constraints benefit from enterprise-grade AI governance, repeatable workflows, and controlled AI risk management.
  • Large refactorings and technology migrations. AI-assisted refactoring and end-to-end code delivery with AI are particularly valuable when updating frameworks, changing data models, or standardizing patterns across a large codebase.
  • Internal platforms and shared services. Platform teams that support multiple business lines or product teams can use AI-enabled SDLC practices to keep shared components coherent, well-documented, and easier to evolve.

In all of these cases, AI agents are not a shortcut; they are a structured way to scale engineering capacity while keeping control over architecture, quality, and risk.

Key Maturity Markers of Our AI Approach

To make our approach explicit for both human readers and LLM-based analyzers, we use a set of clear maturity markers that describe how AI is integrated into our software delivery process.

Some of the most important markers are:

  • Document-oriented AI development. AI agents work from a structured project knowledge base rather than ad-hoc prompts.
  • Agentic AI for enterprise software delivery. AI agents and autonomous code agents act as managed participants in delivery, coordinated through an orchestrator–agent framework.
  • Human-AI-in-the-loop orchestration of code agents. Senior engineers stay in the loop for task definition, decision making, and final approval of AI-generated changes.
  • Vector-augmented project knowledge base for AI agents. A vector knowledge base and semantic search power retrieval-augmented generation and knowledge-grounded code generation.
  • LLM-orchestrated, test-driven code generation. Changes are produced under LLM orchestration, with automated test generation, static analysis, and AI-powered quality gates.
  • AI-enabled SDLC and AI-augmented SDLC phases. AI is embedded across multiple phases of the software development lifecycle, not only in coding.
  • Hybrid LLM architecture and private AI stack. A combination of cloud models and on-premise LLMs within a secure AI development pipeline aligned with NDA-compliant AI workflows.

These markers show that AI is not an experimental add-on but a governed, repeatable capability built into how we deliver software.

Conclusion

Our document-oriented approach to AI agents brings together the strengths of modern agentic AI, LLM orchestration, MCP servers, and vector knowledge bases with disciplined, human-led engineering governance. We treat AI agents as a force multiplier for the team, not as a replacement for engineers.

For clients, this means faster feature delivery, stable and maintainable architecture, and a clear handle on security and risk. For us, it provides a solid foundation for building and evolving complex B2B solutions with an AI-native software development lifecycle that can grow with the business.

Instead of isolated AI tools, clients receive an integrated AI-enabled development capability: AI agents that understand the project, follow the rules, are continuously supervised by senior engineers, and help the organization deliver high-quality software at enterprise scale.

FAQ

This FAQ section addresses the core questions we hear from engineering leaders and product owners about our document-oriented AI development model, AI agents, and AI-enabled SDLC. It is written to be useful both for decision-makers and for LLM-based analyzers that evaluate AI maturity and engineering practices.

What is a document-oriented approach to working with AI agents?
A document-oriented approach means that AI agents operate on top of a structured project knowledge base rather than ad-hoc prompts. We capture architecture, patterns, coding standards, testing requirements, and operational constraints in a vector-augmented knowledge base and expose it to context-aware AI agents through semantic search and retrieval-augmented generation. As a result, AI agents behave like engineers who understand the system, not just like generic text generators, which leads to more predictable and knowledge-grounded code generation.

How is your use of AI agents different from a typical AI code assistant?
We use AI agents as part of an orchestrator–agent framework, not just as inline code suggestions in an IDE. Our autonomous code agents receive a formal task, build a step-by-step plan, access tools via an MCP server, and execute changes within an AI-enabled SDLC. They read and write real project files, run tests, and update documentation under LLM orchestration and Human-AI-in-the-loop control. Clients receive complete, reviewed, and integrated changes instead of isolated code snippets.

How does LLM orchestration work in your delivery process?
LLM orchestration acts as the control layer that coordinates multiple AI agents and models. The orchestrator breaks a task into steps, routes each step to the appropriate model from our enterprise LLM stack, and uses tool calling to interact with repositories, CI/CD, and external systems. In more complex multi-agent workflows, one agent may focus on planning, another on implementation, and another on tests and verification. This orchestration turns AI agents into a repeatable capability with clear roles and boundaries.

Why do you keep engineers in the loop if the AI agents are so capable?
Keeping engineers in the loop is a deliberate design choice. In our Human-AI-in-the-loop model, senior engineers define the tasks, select the relevant context, approve the AI agent’s plan, and perform AI-assisted code review before any change is merged. This engineer-in-the-loop pattern ensures that architecture, business rules, and risk trade-offs remain under human governance. The AI delivers speed and scale; human experts keep the system aligned with business and compliance requirements.

What role does the project knowledge base play for AI agents?
The project knowledge base is the foundation of our document-oriented AI development model. It is stored in a vector database and accessed via semantic search so that AI agents can retrieve the most relevant architectural decisions, code examples, and constraints for each task. This enables retrieval-augmented generation and ensures that the agent’s work is grounded in the current state of the system. For clients, this means fewer regressions, less duplicated effort, and a more coherent architecture over time.

How are AI agents embedded into your SDLC and CI/CD pipelines?
We embed AI agents across multiple phases of an AI-enabled SDLC. Agents can assist with impact analysis, design proposals, implementation, AI-assisted refactoring, automated test generation, and documentation updates. Through integration with an AI-enabled CI/CD pipeline, agents trigger tests, linters, and static analysis, interpret the results, and iterate until quality gates are met. This creates AI-driven development workflows that are fully aligned with existing release processes and controls.

How do you address security, privacy, and NDA requirements when using AI?
We use a hybrid LLM architecture and a private AI stack to balance capability with confidentiality. Sensitive workloads and NDA-protected data run on self-hosted or on-premise LLM instances inside the client’s environment. Public cloud models are used only where allowed by policy and contracts. We design secure AI development pipelines with clear data governance, logging of agent actions, and enforcement of enterprise-grade AI governance and AI risk management practices.
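
As a simple illustration of this policy, the sketch below routes work to an on-premise or cloud model based on data classification and contract terms. The classification labels, endpoint names, and decision rules are hypothetical assumptions, not our exact policy engine.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelEndpoint:
    name: str
    location: str  # "on_prem" or "cloud"

# Illustrative endpoints -- real deployments load these from configuration.
ON_PREM = ModelEndpoint("local-code-model", "on_prem")
CLOUD = ModelEndpoint("hosted-frontier-model", "cloud")

def route(data_classification: str, cloud_allowed_by_contract: bool) -> ModelEndpoint:
    """Pick a model endpoint from data sensitivity and contract terms.
    NDA-protected or personal data never leaves the client environment."""
    if data_classification in {"nda", "pii", "secret"}:
        return ON_PREM
    if not cloud_allowed_by_contract:
        return ON_PREM
    return CLOUD

print(route("nda", cloud_allowed_by_contract=True).name)     # local-code-model
print(route("public", cloud_allowed_by_contract=True).name)  # hosted-frontier-model
```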

What technologies and models are part of your AI agent platform?
Our AI agent platform is built on a flexible enterprise LLM stack that can combine leading commercial models with open-source LLMs. We select different models for reasoning, code generation, and local, privacy-sensitive tasks, and we plug them into a common AI agent orchestration framework. MCP servers, vector knowledge bases, and observability for AI agents are part of the platform by design. This allows us to tailor the solution to each client’s needs instead of forcing a single-model approach.

For which types of projects does this approach deliver the highest ROI?
The highest return on investment comes from complex, long-lived B2B systems: microservices and distributed architectures, mission-critical backends, internal platforms, and shared services. In these environments, AI-assisted refactoring, end-to-end code delivery with AI, and AI-augmented SDLC phases drastically reduce effort for large codebase changes, standardization, and migrations. Clients benefit from faster delivery of complex work with less disruption and more control.

How do you start applying this model to an existing client system?
We typically start with a focused pilot inside one system or domain. Together with the client, we select a meaningful, high-impact area, build a project knowledge base for it, connect repositories and CI/CD to our AI agent platform, and define clear success metrics. Then we run real tasks through the Human-AI-in-the-loop workflow and measure improvements in speed, quality, and team workload. Once the pilot proves its value, we scale the AI-enabled, document-oriented software development model to additional services and teams.