8. The Presentation Layer#
The VLAN automation had been running for six weeks. The network team was proud of it. Every morning, three or four new service requests arrived from application teams, and the Orchestrator handled them without anyone on the network team touching a keyboard. The deployments worked. The switches were configured. The network was healthy.
The escalation arrived on a Thursday. The application team lead was asking why VLAN requests were taking three to five business days when the portal said “submitted.” The network team checked their queue: zero pending requests, all deployments successful. The automation had processed every request within twenty minutes of receipt. But the ServiceNow tickets still showed “In Progress,” because nobody had written the integration that would update them.
The automation had done its job. The result was invisible.
A week later, the team deployed a quick self-service portal so application teams could submit requests directly and see their status. By noon they had received forty-seven VLAN requests in a single hour. All valid. All formatted correctly. The problem: the portal had no authorization model. It accepted submissions from anyone who had the URL. All requests ran under a single API token with platform-wide admin rights. There was no rate limiting, no approval gate, no audit trail of who submitted what.
The automation worked. The access model did not.
These two failures are both Presentation failures. One is the absence of a feedback loop; the other is the absence of guardrails. This chapter closes that gap.
8.1. Fundamentals#
8.1.1. Context#
Every building block covered so far faces inward: toward other blocks or toward engineers who understand the platform. The Source of Truth (SoT) holds network intent for automation systems. The Executor applies changes to devices. Observability validates results. The Orchestrator coordinates all of them. Each of these blocks has a UI, an API, or both, designed for the people who built and operate the platform, not for everyone who needs to interact with it.
The Presentation layer faces outward. Its job is to make the platform accessible to audiences who should not need to understand the internals: the application team requesting a network service, the security auditor asking what changed last quarter, the CI/CD pipeline provisioning infrastructure without a human in the loop.
In Chapter 3 the Presentation block sat at the edge of the NAF framework, facing humans and external systems. Chapter 6 established the boundary between observability visualization and Presentation: dashboards built directly on network telemetry belong to the Observability block by design affinity; how those dashboards are surfaced to non-engineers, access-controlled, or embedded in portals is a Presentation concern. Chapter 7 established that async workflows require status endpoints and notification hooks. The Presentation layer delivers both.
8.1.2. Goals#
The Presentation layer serves three goals that map directly to three architectural functionalities.
Provide a stable, authenticated API with a consistent access model. Every consumer, human or machine, should interact with the platform through a versioned, access-controlled contract that does not change without notice. The underlying blocks can be replaced, upgraded, or restructured; the consumer-facing contract must remain stable. Authentication and authorization are enforced at this boundary, centralized rather than duplicated per tool.
Serve every consumer type through the interface that fits their workflow. A network engineer and an application team manager have different needs, different levels of technical depth, and different expectations about how automation communicates with them. The Presentation layer provides multiple surfaces: GUI portals, a Command Line Interface (CLI), and chatops integrations. All are backed by the same API and the same access model, with status surfaced to the audience that initiated the action.
Connect the platform bidirectionally to external systems and deliver results back through the channels consumers already use. The application team already works in ServiceNow. The CI/CD pipeline already runs in a version control system. The platform should meet them where they are: receiving requests from their systems and sending results back to those same systems, not requiring a new tool in their workflow.
8.1.3. Pillars#
Three pillars support these goals, one per functionality:
- API layer: the foundation. Versioned, authenticated, RBAC-enforced, stable contracts; authentication and multi-tenancy are enforced here, not per tool. Every other surface is built on top of it.
- Client interfaces: all consumer-facing surfaces (GUI portals, CLI, mobile, chatops) as different form factors of the same underlying API.
- Integrations and notifications: external system connections (ITSM, CI/CD pipelines, messaging systems) and outbound result delivery (webhooks, callbacks, push notifications).
8.1.4. Scope#
The Presentation layer surfaces. It does not produce.
In scope:
- The API layer: authentication, authorization, versioning, and rate limiting for all consumers
- Client interfaces built on that API: GUI portals, CLI, chatops, mobile surfaces
- External integrations: ITSM workflows, CI/CD pipeline hooks, webhook delivery
- Outbound notifications: status callbacks, push alerts, messaging channel events
- Operational dashboards when surfaced to non-engineer audiences (access control, audience scoping, and portal embedding; the underlying metrics architecture belongs to Chapter 6)
Out of scope:
- Data production (Observability, Chapter 6)
- Configuration rendering and template processing (Source of Truth, Chapter 4)
- Workflow execution and audit record production (Orchestrator, Chapter 7)
A Presentation layer that starts accumulating business logic (deciding which workflow to run, validating inputs against the network model, managing workflow state) has grown into something else. These responsibilities belong in the Orchestrator and Source of Truth. If the portal begins encoding network policy or retry logic, the architectural boundary has collapsed and the platform will be difficult to evolve independently.
8.2. Functionalities#
The three goals and pillars are realized through three core functionalities, each mapping directly to one goal and one pillar:
- API Layer: the contract and access model for all consumers
- Client Interfaces: the surfaces built on top of that contract
- Integrations and Notifications: the connections to external systems and outbound delivery
```mermaid
graph LR
    subgraph Goals
        direction TB
        A1[Stable authenticated API and consistent access model]
        A2[Right surface for each consumer type]
        A3[Bidirectional integration with external systems]
    end
    subgraph Pillars
        direction TB
        B1[API layer: versioned, authenticated, stable]
        B2[Client interfaces: GUI, CLI, chatops, mobile]
        B3[Integrations and notifications: ITSM, CI/CD, webhooks]
    end
    subgraph Functionalities
        direction TB
        C1[API Layer]
        C2[Client Interfaces]
        C3[Integrations and Notifications]
    end
    A1 --> B1 --> C1
    A2 --> B2 --> C2
    A3 --> B3 --> C3
    classDef row1 fill:#eef7ff,stroke:#4a90e2,stroke-width:1px;
    classDef row2 fill:#ddeeff,stroke:#4a90e2,stroke-width:1px;
    classDef row3 fill:#cce5ff,stroke:#4a90e2,stroke-width:1px;
    class A1,B1,C1 row1;
    class A2,B2,C2 row2;
    class A3,B3,C3 row3;
```
8.2.1. API Layer#
Chapter 4 covered the SoT’s own API in depth: how automation systems query intent data, the consumption patterns (REST, GraphQL, webhooks), and the read/write model for network configuration. The API discussed here is different in purpose: it is the outward-facing contract for consumers of the automation platform as a whole. Where the SoT API answers “what should the network look like?”, the Presentation layer API answers “what is the automation platform doing, and how do I interact with it?” Both may be REST APIs; they serve different audiences with different access models.
The Presentation layer API is the foundation. Every consumer surface (portal, CLI, ITSM form, chatops bot, AI agent) is a caller of this layer. Authentication, RBAC, versioning, and rate limiting are enforced here. Design it well or everything above inherits its problems.
The Presentation API should not mirror internal block interfaces. Exposing a /v1/sot/vlans/ endpoint that proxies directly to the SoT’s API, or a /v1/orchestrator/jobs/ endpoint that wraps Orchestrator job IDs, ties consumers to internal implementation details. When you replace one block with another, every CI/CD pipeline that stored those IDs must be updated. The Presentation API should expose platform-level concepts instead: a /v1/requests/ endpoint that represents a service request regardless of which Orchestrator processed it, a /v1/services/vlan/ endpoint that returns a VLAN’s current state aggregated from SoT and Observability without revealing which block supplied each piece of data. Consumers get a stable contract; the internal implementation can evolve freely behind it.
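This translation can be sketched as a thin mapping function. The record shapes below (`internal_job`, `STATUS_MAP`) are illustrative, not a real Orchestrator or AWX schema; the point is that the public representation carries a platform-assigned ID and a normalized status, and deliberately omits internal identifiers:

```python
# Internal record as an Orchestrator might store it (illustrative shape,
# not a real AWX/Nautobot schema).
internal_job = {
    "awx_job_id": 4711,
    "awx_template": "deploy_vlan",
    "state": "successful",
    "requested_vlan": 210,
}

# Normalize internal job states into the stable vocabulary exposed
# at a platform-level endpoint such as /v1/requests/{id}.
STATUS_MAP = {
    "pending": "submitted",
    "running": "in_progress",
    "successful": "complete",
    "failed": "failed",
}

def to_public_request(request_id: str, job: dict) -> dict:
    """Translate an internal job record into the consumer-facing contract."""
    return {
        "id": request_id,                  # platform-assigned, stable
        "service": "vlan",
        "status": STATUS_MAP[job["state"]],
        # internal identifiers (awx_job_id, template name) are deliberately
        # not exposed, so the Orchestrator can be replaced without breaking
        # consumers that stored these responses
    }

print(to_public_request("req-00042", internal_job))
```

When the block behind this endpoint is swapped out, only `STATUS_MAP` and the lookup logic change; every stored `req-...` identifier remains valid.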
8.2.1.1. What the API exposes#
The API exposes two categories of endpoints:
Read endpoints: workflow status and history, audit records, device and service state aggregated from the SoT and Observability blocks. These are the queries the application team uses to check request status, the auditor uses to review a change record, and the monitoring system uses to verify platform health.
Write endpoints: trigger a workflow, submit a service request, approve or reject a pending gate, cancel a running job. Write endpoints require stronger authorization. Different roles should have access to different write operations: an application team member can submit a request but cannot trigger arbitrary workflows; a network engineer can approve a pending gate but cannot modify workflow definitions.
The read/write distinction also shapes contract stability. Read endpoints must remain stable indefinitely: a dashboard running for a year should not break because an upstream schema changed. Write endpoints must be versioned explicitly, with deprecation notices before breaking changes.
8.2.1.2. Versioning and stability#
Consumer-facing APIs must be versioned. The Presentation layer API is the contract; the block internals are the implementation. Internal refactoring of the Orchestrator, SoT, or Observability blocks must not break external callers.
The standard approach: version by URL prefix (/v1/, /v2/) or by Accept header, maintain the previous version for a defined deprecation window, and communicate breaking changes through a changelog. A CI/CD pipeline that has been calling /v1/workflows/trigger for eight months should not discover on a Monday that the endpoint moved without warning.
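A minimal sketch of URL-prefix routing with a deprecation window, using the `Sunset` header (RFC 8594) and a `Deprecation` warning header so callers learn about the migration from the response itself; the paths, date, and handler shape are illustrative:

```python
# Illustrative version routing: /v1/ keeps working during its deprecation
# window, but every response carries headers pointing callers at /v2/.
V1_SUNSET = "Wed, 31 Dec 2025 23:59:59 GMT"  # assumed cutover date

def handle(path: str) -> tuple[int, dict, str]:
    """Return (status_code, headers, body) for a versioned API path."""
    if path.startswith("/v2/"):
        return 200, {}, "v2 response"
    if path.startswith("/v1/"):
        headers = {
            "Deprecation": "true",
            "Sunset": V1_SUNSET,
            "Link": '</v2/>; rel="successor-version"',
        }
        return 200, headers, "v1 response"
    return 404, {}, "unknown version"

status, headers, _ = handle("/v1/workflows/trigger")
print(status, headers["Sunset"])
```

The pipeline that has been calling `/v1/workflows/trigger` for eight months sees the warning headers long before the endpoint disappears, instead of discovering the move on a Monday.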
8.2.1.3. Authentication and authorization#
Authentication answers: who are you? Authorization answers: what are you allowed to do?
Many teams implement authentication before authorization, then discover that “authenticated” and “authorized” are different problems when someone submits forty-seven requests under a valid token they should not have had.
Authentication patterns for network automation platforms:
- SSO / LDAP integration: the enterprise standard. Engineers and application teams authenticate with their corporate identity. No separate credentials to manage, and deprovisioning is automatic when someone leaves.
- OAuth 2.0 / OIDC: for external systems and web portal users. Produces short-lived tokens rather than long-lived credentials.
- Scoped API tokens: for programmatic access by CI/CD pipelines and automation scripts. Each token is scoped to a specific set of permissions with a defined expiry. A shared admin token used by all consumers is not authentication: it is a shared password that cannot be revoked without breaking every caller simultaneously.
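The contrast between a scoped token and a shared admin token can be shown in a few lines. This is a sketch, not a production token store: the token names, scope strings, and in-memory dict are assumptions, and a real implementation would verify a signed credential rather than look up a plaintext ID:

```python
import time

# Illustrative token store: each token carries explicit scopes and an
# expiry, unlike a shared admin token that grants everything forever.
tokens = {
    "cicd-pipeline-a": {
        "scopes": {"requests:read", "requests:write"},
        "expires": time.time() + 3600,  # one-hour lifetime
    },
}

def authorize(token_id: str, required_scope: str) -> bool:
    """Reject unknown tokens, expired tokens, and out-of-scope operations."""
    tok = tokens.get(token_id)
    if tok is None or time.time() >= tok["expires"]:
        return False
    return required_scope in tok["scopes"]

print(authorize("cicd-pipeline-a", "requests:write"))   # permitted
print(authorize("cicd-pipeline-a", "workflows:admin"))  # out of scope
```

Revoking `cicd-pipeline-a` affects exactly one caller; revoking a shared admin token breaks every consumer simultaneously.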
Authorization through RBAC. Roles should map to operational responsibilities, not to tool capabilities. A starting model for network automation:
- read-only: view any data, trigger no actions
- operator: trigger pre-approved workflows, approve gates, submit service requests
- engineer: full workflow management, SoT write access, view all audit records
- admin: platform configuration, user management, credential rotation
Each role inherits all permissions of the role below it. An engineer can do everything an operator can, plus write to the SoT and manage workflows.
```mermaid
flowchart TD
    RO[read-only]
    OP[operator]
    ENG[engineer]
    ADM[admin]
    RO -->|adds: trigger workflows + approve gates| OP
    OP -->|adds: SoT write access + workflow management| ENG
    ENG -->|adds: platform config + user management| ADM
    style RO fill:#e8f5e9,stroke:#4caf50
    style OP fill:#c8e6c9,stroke:#388e3c
    style ENG fill:#a5d6a7,stroke:#2e7d32
    style ADM fill:#66bb6a,stroke:#1b5e20
```
RBAC is enforced at the API boundary. The underlying blocks see only authenticated API calls from the Presentation layer; they do not manage consumer identity independently. Multi-tenancy is implemented through data scoping: every query is filtered by the caller’s organizational scope. The application team for Building B should not see requests from the retail team for Building A. This must be designed from the start. Retrofitting multi-tenancy into a flat data model is a painful restructuring project.
The audit trail should capture denied requests alongside approved ones. Who tried to do what and was denied is as important for compliance as what was permitted. The Presentation layer produces this record alongside the workflow audit trail from Chapter 7. Chapter 12 extends this model with secrets rotation, policy-as-code, and compliance-driven automation flows.
8.2.1.4. Rate limiting#
Automated consumers without rate limits will exhaust the Orchestrator’s queue. The forty-seven-request incident did not require a malicious actor: only a motivated team, a URL, and no throttle.
Rate limiting at the API boundary: per-token limits (requests per minute per consumer), burst limits (simultaneous in-flight requests), and operation-specific limits (a firmware upgrade workflow should never run more than one instance per device at a time). Rate limit responses should return HTTP 429 with a Retry-After header, not a silent queue fill that surfaces as a timeout hours later.
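A token-bucket limiter covers both the per-token rate and the burst limit, and produces the 429-with-Retry-After behavior described above. A minimal sketch (the rate and capacity values are arbitrary examples):

```python
import time

class TokenBucket:
    """Per-consumer limiter: `rate` requests/second, burst of `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def check(self) -> tuple[int, dict]:
        """Return (status_code, headers) for one incoming request."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return 200, {}
        # Throttled: tell the caller when to come back, per the text,
        # instead of silently queueing the request.
        retry_after = (1 - self.tokens) / self.rate
        return 429, {"Retry-After": str(max(1, round(retry_after)))}

bucket = TokenBucket(rate=1.0, capacity=2)
print([bucket.check()[0] for _ in range(3)])  # the third rapid call is throttled
```

Operation-specific limits (one firmware upgrade per device) follow the same shape with the bucket keyed on `(operation, device)` rather than on the token.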
8.2.1.5. REST, GraphQL, and the MCP interface#
REST is the default. It is simpler to version, reason about, and cache than GraphQL. Network automation teams rarely need GraphQL’s consumer-driven query flexibility and pay a meaningful operational cost for the additional complexity. The exception: if the platform serves a large number of distinct consumer types with significantly different query patterns, GraphQL can reduce over-fetching and the need for multiple specialized endpoints. It is a justified choice; it is rarely the right first choice.
The Model Context Protocol (MCP) interface is the AI surface of the Presentation layer. Just as human operators access the platform via CLI and application teams via portal, Large Language Model (LLM)-based agents access it via an MCP server. The agent calls tools (query workflow status, trigger a remediation, read the audit log) in whatever sequence its reasoning requires, subject to the same RBAC model as any other caller. This connects directly to the agentic orchestration pattern introduced in Chapter 7 (section 7.2.7): the Presentation layer’s MCP server is the interface that makes those patterns operable across the full platform without hardcoded integrations between the agent and each individual block.
REST and MCP differ in who drives the interaction. In a REST integration, the consumer knows in advance which endpoints to call and in what order: a CI/CD pipeline calls POST /v1/requests/vlan, then polls GET /v1/requests/{id} until completion. The sequence is fixed in code. With MCP, an LLM-based agent decides at runtime which tools to invoke and in what sequence, based on the result of each preceding call. The consumer is not a pipeline with a predetermined call graph; it is a reasoning system that reads each result before deciding what to do next. The MCP server defines the available tools and their schemas; the agent decides how to use them. This makes MCP appropriate for open-ended operational queries (“investigate the connectivity issue between building-b and the core”) that would require a developer to anticipate every possible call sequence if implemented as REST. It also makes authorization more sensitive: an agent with broad tool access can combine operations in ways the access model was not explicitly designed for. The RBAC model must be applied at the tool level, not only at the server level.
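Tool-level authorization can be sketched as a registry where each tool declares its own minimum role, checked on every call. This is an illustration of the principle, not the MCP SDK's actual API; the tool names, role ranks, and stub implementations are all assumptions:

```python
# Rank roles so "minimum role" checks are a simple comparison.
ROLE_RANK = {"read-only": 0, "operator": 1, "engineer": 2, "admin": 3}

# Each tool carries its own minimum role; the server-level check alone
# is not enough once an agent can compose tools freely.
TOOLS = {
    "get_workflow_status": {"min_role": "read-only",
                            "fn": lambda arg: f"{arg}: complete"},
    "trigger_remediation": {"min_role": "operator",
                            "fn": lambda arg: f"{arg}: started"},
    "read_audit_log":      {"min_role": "engineer",
                            "fn": lambda arg: f"{arg}: 12 records"},
}

def call_tool(agent_role: str, tool: str, arg: str) -> str:
    """Enforce the per-tool minimum role before executing anything."""
    spec = TOOLS[tool]
    if ROLE_RANK[agent_role] < ROLE_RANK[spec["min_role"]]:
        raise PermissionError(f"{tool} requires role {spec['min_role']}")
    return spec["fn"](arg)

print(call_tool("operator", "trigger_remediation", "vlan-210"))
```

An agent authenticated at read-only can still reason freely, but every mutating tool it attempts is denied individually, and each denial lands in the audit trail like any other rejected request.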
8.2.2. Client Interfaces#
Client interfaces are the surfaces built on top of the API. They are different form factors of the same underlying platform, each appropriate for a different consumer type. The RBAC model applies uniformly regardless of which surface is used.
8.2.2.1. GUI and self-service portal#
The web portal is the primary interface for non-engineers. Its design principle is progressive disclosure: show the right amount of information to the right audience without exposing the underlying complexity.
The application team sees a three-step view: submitted, in progress, complete, with a status detail that reads “pre-checks running across 24 switches” in plain language, not AWX job IDs. The network engineer sees the per-device pre-check results, the approval gate with an approve or reject action, and the full workflow trace if needed. The admin sees everything, including configuration and audit queries.
Read and write interfaces have different design requirements. A read-only status dashboard can be relatively permissive: any engineer in the platform can see the current state of their requests. The write path (submitting a request, approving a gate, cancelling a running job) needs input validation, a confirmation step, and a clear statement of what will happen before the action is committed.
I have seen teams build portals where the submit button on a new service request form calls the Orchestrator API directly, with no validation layer between the form and the workflow. When a user submits a subnet that conflicts with an existing allocation, the error comes back as an unformatted Orchestrator stack trace. The Presentation layer should validate inputs against the SoT model before the request reaches the Orchestrator, and return a clear, actionable error when validation fails.
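The validation step can be as small as a subnet-overlap check against the SoT's current allocations before the Orchestrator is ever called. A sketch using the stdlib `ipaddress` module; the allocation list is illustrative stand-in data for what the SoT would return:

```python
import ipaddress

# Existing allocations as the SoT might return them (illustrative data).
existing = [ipaddress.ip_network(n) for n in ("10.10.0.0/24", "10.10.1.0/24")]

def validate_subnet(requested: str) -> tuple[bool, str]:
    """Validate a requested subnet and return a clear, actionable message,
    instead of letting the Orchestrator fail with a stack trace later."""
    try:
        net = ipaddress.ip_network(requested)
    except ValueError:
        return False, f"'{requested}' is not a valid CIDR subnet"
    for alloc in existing:
        if net.overlaps(alloc):
            return False, f"{net} overlaps existing allocation {alloc}"
    return True, "ok"

print(validate_subnet("10.10.1.128/25"))  # rejected with a readable reason
print(validate_subnet("10.10.2.0/24"))    # passes; forward to the Orchestrator
```

The error string is what the portal shows the user; the Orchestrator only ever receives requests that have already passed the model check.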
Adoption risk is highest at the interface. A technically correct portal that no one uses because it does not fit how the team already works delivers less value than a simpler integration in the tool everyone opens every morning. The application team that already lives in ServiceNow will submit requests more consistently through a ServiceNow form than through a new portal they must learn and log into separately. The engineering team that already uses Slack for incident coordination will act on approval gate notifications faster through a Slack message than through a browser link requiring additional authentication. Familiarity reduces friction at adoption and reduces error rates in daily use. Where a choice exists between building a new surface and meeting users in the tool they already know, the integration is almost always the right first step.
8.2.2.2. CLI#
The CLI for the automation platform is not the device CLI, which is the domain of Chapter 9. This is a command-line interface to the platform itself: a tool engineers use to trigger, inspect, and manage automation without opening a browser.
Engineers prefer CLI for the same reason they prefer shell scripts over GUIs for repetitive work: composability, speed, and scriptability. A CLI command can be aliased, piped into other commands, looped over a device list, incorporated into a runbook, or committed to a repository alongside the infrastructure it manages. A portal click cannot. During an incident at 2am, a command typed from memory in five seconds beats a portal that takes twenty seconds to navigate and authenticate. Engineers also trust CLI tools more for high-stakes operations: the parameters are visible in the shell history, auditable, and reproducible.
The CLI earns its place even when a portal exists. For CI/CD pipelines in particular, a CLI is preferable to raw API calls: it handles authentication from environment variables, produces structured exit codes (0 for success, non-zero for failure) that map directly to pipeline pass/fail conditions without any parsing, and gives human-readable output when something fails.
Design principles for an automation platform CLI:
- Consistent noun-verb command structure (`workflow run`, `workflow status`, `request list`) that maps predictably to API operations
- Machine-readable output via a `--json` flag so pipeline scripts can parse the result
- Environment-aware configuration: API endpoint and token read from a config file or environment variables, not hardcoded into scripts
- The same RBAC that applies to the API applies to the CLI: a token with operator-level permissions cannot trigger admin-level operations regardless of which surface is used
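The noun-verb structure and the `--json` flag can be sketched with stdlib `argparse`. The `netplat` program name and command set are illustrative, and the API call itself is stubbed out:

```python
import argparse
import json

def build_parser() -> argparse.ArgumentParser:
    """Noun-verb command tree: netplat [--json] workflow {run,status} NAME."""
    parser = argparse.ArgumentParser(prog="netplat")
    parser.add_argument("--json", action="store_true",
                        help="machine-readable output for pipelines")
    nouns = parser.add_subparsers(dest="noun", required=True)
    workflow = nouns.add_parser("workflow").add_subparsers(
        dest="verb", required=True)
    workflow.add_parser("run").add_argument("name")
    workflow.add_parser("status").add_argument("name")
    return parser

args = build_parser().parse_args(["workflow", "status", "vlan-deploy"])
result = {"workflow": args.name, "state": "complete"}  # stubbed API response
# A real CLI would exit 0 on success and non-zero on failure, so the
# pipeline's pass/fail condition needs no output parsing at all.
print(json.dumps(result) if args.json else f"{args.name}: {result['state']}")
```

The endpoint and token would come from environment variables or a config file at the top of `main()`, never from the script that calls the CLI.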
8.2.2.3. Instant messaging and mobile#
Slack, Teams, and similar platforms serve a dual role in network automation: they are both a notification channel (the Presentation layer pushes events into them) and an interaction surface (engineers send commands through them). Understanding this dual role matters for architecture: the same workspace that receives alert notifications should support approval flows, reducing context-switching for engineers who are already monitoring those channels.
As an interaction surface, messaging platforms work through bots that translate conversational commands into API calls. The bot is thin: it parses the message, maps it to a Presentation layer API call under the sender’s identity, and formats the response for the channel. Three patterns work well in practice:
- Approval flows: a Slack message with “Approve” and “Reject” buttons lets a network engineer act on a pending gate without leaving Slack. The button click calls the API approval endpoint with the engineer’s identity, resolved through the platform’s SSO integration with Slack’s OAuth.
- Status queries: `@netbot status app-payments` returns the current workflow state in a formatted channel message.
- Quick actions: `@netbot compliance-check building-b` triggers a lightweight verification workflow and posts the result inline.
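The "thin bot" pattern is a parse-and-map step: the message becomes a Presentation layer API call and nothing more. A sketch; the command names match the examples above, while the API paths are illustrative:

```python
# Map each bot command to the Presentation API call it represents.
# The bot holds no logic beyond this translation; authorization happens
# at the API under the sender's identity.
COMMANDS = {
    "status": lambda target: ("GET", f"/v1/requests?service={target}"),
    "compliance-check": lambda target: ("POST",
                                        f"/v1/workflows/compliance/{target}"),
}

def handle_message(text: str) -> tuple[str, str]:
    """'@netbot status app-payments' -> ('GET', '/v1/requests?service=app-payments')"""
    parts = text.split()
    if len(parts) != 3 or parts[0] != "@netbot" or parts[1] not in COMMANDS:
        raise ValueError(f"unrecognized command: {text!r}")
    return COMMANDS[parts[1]](parts[2])

print(handle_message("@netbot status app-payments"))
```

Because the bot only translates, replacing Slack with Teams means rewriting the parsing and formatting shim, not any platform logic.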
The same RBAC model applies to the bot as to every other surface. The bot authenticates engineers through SSO and forwards requests with a token scoped to the sender’s role: an engineer with read-only access cannot trigger a workflow through Slack any more than through the portal.
As a notification target, messaging channels receive event-driven updates from the Presentation layer: deployment completions, failure alerts, approval gate requests, and critical workflow errors. Notification routing is a configuration policy: which events go to which channels for which audiences. Engineers receive failure details in a dedicated operations channel. Application teams receive completion status through the ITSM ticket. On-call engineers receive critical failures via PagerDuty.
Mobile interfaces follow the same pattern; the constraint is screen space and interaction model, not architecture. They serve a specific audience that portals and CLI do not reach: data center technicians working physically in the field. A technician replacing a line card in a rack does not have a laptop open; their hands may be occupied, their position awkward. A mobile app that lets them scan a device barcode, pull up its current automation state (managed by automation, last deployment 14 days ago, no pending changes), confirm a physical replacement, and trigger the appropriate remediation workflow connects the physical work to the automation platform without a return to a workstation. A mobile approval interface that lets an engineer approve a pending gate from their phone calls the same API as the portal. The RBAC model does not change: the technician’s token scopes to the operations their role permits, and the interface scopes to field-relevant data (device identity, current state, pending tasks, and simple trigger actions), not the full platform view.
8.2.2.4. When to build vs. accept embedded UIs#
Almost every block already has a UI. AWX has a workflow portal. Nautobot has a web interface. Grafana has dashboards. These embedded UIs are sufficient for engineering audiences who understand the platform. The decision to build a dedicated Presentation layer should not be the default.
A practical decision sequence:
- Are all consumers engineers who already use the block UIs? Use embedded UIs. Do not build a custom portal.
- Do non-engineers need to request or track automation? You need either a portal or ITSM integration.
- Is ITSM already where all service requests are managed? Either adopt ITSM as your Presentation layer or integrate with it as a primary consumer. If ITSM’s workflow engine, approval model, and notification system are sufficient for your request patterns, let ITSM be the Presentation layer directly: automation calls originate inside ITSM workflows, not from a separate layer above it. If you need a more capable API contract, cross-block status views, or RBAC that ITSM cannot enforce cleanly, a thin dedicated layer between ITSM and the blocks gives you those properties while ITSM remains the user-facing surface.
- Do you need RBAC that spans multiple blocks uniformly? You need an API layer with centralized auth, regardless of which client surfaces you build on top.
- Can you commit to maintaining a custom portal long-term? If uncertain, start with ITSM integration. Build a portal only when ITSM proves insufficient for the access patterns you need.
- Do consumers need to enter or view details that ITSM forms cannot represent? Complex input fields (YAML snippets, topology parameters, subnet allocation previews), inline validation against the SoT model, or rich status views with per-device drill-down typically exceed what ITSM form builders support cleanly. If consumers regularly need that level of specificity, a custom portal earns its operational cost.
The custom portal is worth building when non-engineers need access that ITSM cannot cleanly express, or when you need a single cross-block status view that none of the embedded UIs provides. The most common mistake is building it before validating that the need is real.
8.2.2.5. Documentation and reporting#
Two related read-only outputs of the Presentation layer serve audiences who consume automation knowledge asynchronously: documentation consumers (teams who need to understand the current state of network services) and reporting consumers (managers and auditors who need periodic summaries and compliance evidence).
The docs-as-code pattern applies here: documentation generated from live platform data using templating languages (Jinja2, Markdown), version-controlled alongside the automation codebase, and regenerated on every change. Mermaid diagrams embedded in generated documents can reflect actual topology from the SoT rather than manually maintained drawings. Similarly, the normalization logic that the Observability block applies to raw metrics (covered in Chapter 6) can be reused here: a reporting template queries the same normalized time-series data to produce tabular summaries for auditors, without duplicating the normalization work.
Auto-generated documentation converts live platform data into readable, shareable artifacts without manual maintenance. For every deployed network service, a generated document can combine the service definition from the SoT, the current operational status from Observability, and the change history from the Orchestrator audit trail. Because the source data is always current, the documentation stays current automatically. Workflow definitions in the Orchestrator are another source: a documentation generator can produce a human-readable runbook from a workflow definition, ensuring that what the runbook describes matches what the automation actually does.
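A docs-as-code generator reduces to a template plus live platform data. The sketch below uses the stdlib `string.Template` in place of Jinja2 for brevity; the page layout and data fields are illustrative stand-ins for what the SoT, Observability, and Orchestrator would supply:

```python
from string import Template

# Page template: one generated document per deployed service.
# (string.Template stands in for Jinja2 here; the structure is the point.)
PAGE = Template(
    "# Service: $name\n"
    "- Intent (SoT): VLAN $vlan on $count switches\n"
    "- Status (Observability): $status\n"
    "- Last change (Orchestrator audit): $last_change\n"
)

def render(service: dict) -> str:
    """Combine live data from the three blocks into a readable artifact."""
    return PAGE.substitute(service)

doc = render({
    "name": "app-payments",
    "vlan": 210,
    "count": 24,
    "status": "healthy",
    "last_change": "2024-05-02 by req-00042",
})
print(doc)
```

Regenerating on every change (a CI job triggered by SoT or Orchestrator events) is what keeps the documentation current without manual maintenance.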
Reporting serves management and compliance audiences who need periodic summaries rather than real-time views. Weekly change summaries (how many workflows ran, success rate, average duration, devices touched) feed operational reviews. Compliance exports (all change records for a period, structured for audit submission) are assembled from the Orchestrator’s audit trail and the Presentation layer’s authorization log. SLA and capacity reports (time-to-provision trends, error rates by device class, pending request backlog) feed capacity planning and service improvement discussions.
Operational dashboards span both chapters by design intent. Chapter 6 covers the data architecture: what gets collected, how it is normalized, and the dashboards built directly on top of telemetry for engineering audiences. The Presentation layer’s involvement begins when those same dashboards are surfaced to non-engineering audiences: embedding them in a self-service portal, scoping access so application teams see only their services, or structuring mobile-friendly views for field use. The underlying metrics stay in the Observability block; the access model and the surface context are Presentation concerns.
The distinction from operational dashboards (Chapter 6) is the consumer and temporal framing: dashboards show current state for engineers making real-time decisions; documentation and reports are snapshots consumed asynchronously by non-engineers. The underlying data may be the same. The surface and the cadence are different.
8.2.3. Integrations and Notifications#
The Presentation layer connects to external systems in both directions: receiving events that trigger automation and delivering results back to the systems that initiated requests. This is where the consumer’s workflow and the platform’s workflow converge.
The distinction from the API layer is directionality and initiation. The API layer handles inbound requests: a consumer calls the platform and waits for a response. Integrations and notifications are about the platform reaching out to systems that did not initiate the current interaction: pushing a status update to a ServiceNow ticket, delivering a webhook callback to a CI/CD pipeline waiting asynchronously, posting an event to a Slack channel monitoring a specific service. The API layer answers calls. This section sends events. Both use the same authentication model and RBAC boundaries; the difference is the direction of initiation. At small scale, a single component handles both patterns cleanly. At larger scale, outbound event delivery (with its retry logic, dead-letter queues, and subscription management) typically becomes a distinct component with its own operational concerns, which is why this model separates the two as distinct functionalities.
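The operational concerns that make outbound delivery its own component, retry with backoff and a dead-letter store for replay, can be sketched in a few lines. The transport is injected as a callable so the policy is visible without a network; the event shape and attempt counts are illustrative:

```python
import time

# Events that exhausted their retries, kept for later replay.
DEAD_LETTER: list[dict] = []

def deliver(event: dict, send, max_attempts: int = 3,
            base_delay: float = 0.0) -> bool:
    """Attempt delivery with exponential backoff; dead-letter on failure.

    `send(event)` is the injected transport and returns True on a 2xx
    response. base_delay=0.0 here only to keep the example fast.
    """
    for attempt in range(1, max_attempts + 1):
        if send(event):
            return True
        time.sleep(base_delay * 2 ** (attempt - 1))  # exponential backoff
    DEAD_LETTER.append(event)  # give up, but never silently drop the event
    return False

# A flaky endpoint that succeeds on the third attempt:
calls = {"n": 0}
def flaky(event):
    calls["n"] += 1
    return calls["n"] >= 3

print(deliver({"request": "req-00042", "status": "complete"}, flaky))
```

The dead-letter queue is what separates "the CI/CD pipeline missed one callback" from "the CI/CD pipeline missed one callback and nobody can ever resend it."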
8.2.3.1. ITSM integration#
ITSM platforms occupy two distinct positions in a network automation architecture, and the distinction matters for design. In the first position, the ITSM platform is the Presentation layer: its forms define the user interface, its workflow engine handles approvals and notifications, and the automation API is called from within ITSM workflows. In this model there is no external integration because ITSM is not external to the Presentation layer: it is the Presentation layer. In the second position, a dedicated Presentation layer exists and the ITSM platform is one of the consumers it synchronizes with: requests arrive via ITSM webhooks and status updates flow back to ITSM tickets. A third role is also common: ITSM as a SoT data source. When the CMDB contains authoritative asset or service records, the SoT may query it as a federated data source (covered in Chapter 4), but ITSM in this role plays no part in the user-facing automation interface.
The rest of this section describes the integration pattern (second position). The ITSM-as-Presentation-layer pattern (first position) is the right choice when the ITSM platform’s workflow engine, approval model, and notification system are sufficient for your request patterns and you want to avoid introducing a separate layer between users and the blocks.
ITSM integration is what you build when consumers should not need to know the automation platform exists. The application team already works in ServiceNow. The most usable interface is the one they are already using.
A ServiceNow form for “New Network Service Request” captures exactly the fields the SoT model needs and submits them in the structured format the API layer expects. The form is the Presentation layer; the consumer never sees the API call. On submission, the Presentation layer validates the payload, authenticates the service account token the ITSM integration uses, and forwards the request to the Orchestrator.
After the request fires, the ticket should reflect the workflow state in near-real time: “SoT validation complete,” then “Pre-checks running: 24 switches,” then “Approval gate: pending engineer sign-off,” then “Complete: 24/24 switches configured.” This bidirectional synchronization is more complex than a one-shot webhook. It requires the Presentation layer to subscribe to Orchestrator status events and translate them into ticket updates using a persistent event subscription or a polling reconciler.
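The translation step can be sketched as a pure function over the Orchestrator's status events. The event type names, field names, and ticket-update shape below are all hypothetical; a real integration would call the ITSM ticket API rather than return a dict.

```python
# Hypothetical mapping from Orchestrator event types to ticket work notes.
EVENT_TO_NOTE = {
    "sot.validated": "SoT validation complete",
    "prechecks.running": "Pre-checks running: {targets} switches",
    "gate.pending": "Approval gate: pending engineer sign-off",
    "workflow.complete": "Complete: {done}/{targets} switches configured",
}

def ticket_update(event: dict) -> dict:
    """Translate one Orchestrator status event into an ITSM ticket update."""
    template = EVENT_TO_NOTE.get(event["type"])
    if template is None:
        return {}  # unknown event types are ignored, not forwarded blindly
    return {
        "ticket": event["ticket"],
        "work_note": template.format(**event.get("fields", {})),
    }
```

Keeping the mapping as data rather than branching logic makes it easy to audit which workflow states are visible to the requester and which are internal.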
In many organizations, the ITSM ticket is the change management record. The Presentation layer must ensure the ticket contains enough information to satisfy change management requirements even if the authoritative audit trail lives in the Orchestrator. The two records serve different audiences: the ITSM ticket serves the requester and their manager; the Orchestrator audit record serves the network engineer and the compliance auditor.
ITSM integration has limits. It is appropriate for structured, form-driven request workflows with defined states. It is not suitable for real-time operational queries, complex workflow trace inspection, or power-user operations where engineers need to iterate quickly. Design the platform knowing that ITSM covers the majority of non-engineer requests and a CLI or portal covers the rest.
8.2.3.2. CI/CD pipeline integration#
A deployment pipeline that provisions network resources calls the Presentation layer API at a defined step, passes structured inputs, and blocks until a success or failure result is returned. The pipeline runs under a service account with a scoped token: sufficient permissions to trigger the network workflow and read its status, nothing more.
This is also where the CLI earns its place in automated contexts. A pipeline step that runs `netauto workflow run vlan-deploy --params params.json --wait` is easier to debug, easier to version, and easier to replace than a raw HTTP call constructing the API payload inline. The CLI’s structured exit code maps directly to the pipeline step’s pass or fail condition.
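A minimal sketch of that exit-code mapping, assuming the hypothetical CLI is invoked via `subprocess`: the command's return code becomes the step's pass or fail result, exactly as a CI runner would interpret it.

```python
import subprocess

def run_workflow_step(cmd: list[str]) -> bool:
    """Run a CLI workflow command as a pipeline step.

    The CLI's exit code is the contract: 0 means the workflow succeeded,
    anything else fails the pipeline step. Output is captured so the CI
    runner can surface it in the job log on failure.
    """
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print(result.stderr)  # surface the CLI's failure detail in the job log
    return result.returncode == 0
```

The pipeline never parses the CLI's output to decide pass or fail; the exit code alone carries that decision, which keeps the contract stable across CLI versions.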
8.2.3.3. Push notifications and webhook delivery#
When a workflow completes, reaches an approval gate, or fails, the Presentation layer notifies the right audience through the right channel. Notification routing is a policy decision, not a hardcoded mapping. Engineers receive failure details in a dedicated Slack channel. Application teams receive completion status via ticket update. On-call engineers receive critical failures via PagerDuty. The routing rules are configuration, not code.
Webhook delivery is the outbound counterpart to incoming webhooks. A caller that registered a callback URL when triggering a workflow receives the result via HTTP POST when the workflow completes. Delivery guarantees, retry policy on failure, and payload schema are part of the API contract. A callback that fails silently (no retry, no log, no alert) is worse than no callback at all, because the caller assumes the result was delivered.
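A minimal sketch of the delivery side, assuming a pluggable `send` callable standing in for the HTTP POST: retry a fixed number of times, then park the event in a dead-letter queue instead of dropping it silently.

```python
class EventDispatcher:
    """Outbound delivery with bounded retries and a dead-letter queue.

    `send` is a callable(url, payload) -> bool standing in for the actual
    HTTP POST; a real dispatcher would also add backoff between attempts
    and alert on dead-letter growth.
    """

    def __init__(self, send, max_retries=3):
        self.send = send
        self.max_retries = max_retries
        self.dead_letter = []  # events that exhausted their retries

    def deliver(self, url, payload) -> bool:
        for _attempt in range(self.max_retries):
            if self.send(url, payload):
                return True
        # Exhausted retries: keep the event so it can be inspected and
        # replayed, rather than failing silently.
        self.dead_letter.append({"url": url, "payload": payload})
        return False
```

The dead-letter queue is what distinguishes this from the "worse than no callback at all" failure mode: an undelivered result is preserved, visible, and replayable.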
8.2.4. Solutions Landscape#
The tools listed here are examples for explanatory purposes, not recommendations. Evaluate them against your team’s capabilities, existing tooling, and operational constraints.
The Presentation chapter has a different relationship with the solutions landscape than any other chapter in Part 2. There are almost no tools that exist exclusively as Presentation. Every block already has a UI. The architectural question is not “which Presentation tool should I use?” It is: when do I accept the embedded UIs of each block, and when do I build a dedicated Presentation layer above them?
Two models coexist in mature automation platforms.
Embedded model: use the built-in UI of each block for its audience. Engineers use the orchestrator portal for workflow management, the SoT web UI for inventory, the observability dashboards for network health. This works when all consumers are engineers who understand the tooling and when cross-block views are not needed. The operational overhead is low: no additional systems to run.
Dedicated Presentation model: build or adopt a layer above the blocks that provides a unified experience. Necessary when non-engineers need access, when you need cross-block status in a single view, or when the RBAC requirements do not map cleanly to the built-in permissions of individual tools.
| Approach | Examples | When to use |
|---|---|---|
| Embedded per-block UI | AWX portal, Nautobot UI, Grafana | Engineering audiences; per-tool RBAC acceptable; no cross-block views needed |
| ITSM as primary interface | ServiceNow, Jira Service Management | Enterprise orgs; non-engineers already in ITSM; form-driven request flows |
| Custom self-service portal | React/Vue app, Django app | Non-engineers need access; unified RBAC across blocks; self-service with approval flows |
| API gateway | Kong, AWS API Gateway, NGINX | Multiple consumers with different auth needs; rate limiting; versioning enforcement |
| Network-native portals | Itential, NSO northbound UI | Network-centric platforms with built-in RBAC and ITSM adapters |
| Developer portal | Backstage | Large orgs with many internal platforms needing a unified entry point |
Understanding what is inside the embedded UIs matters for customization decisions. NetBox is built on Django (Python); its web interface and REST API are Django views and Django REST Framework endpoints. Nautobot shares the same lineage. Infrahub uses FastAPI. The “Presentation component” of these SoT tools is a mature web framework: customizable through plugins, custom views, and serializer extensions. That is both its strength (well-documented, production-grade) and its constraint (you are customizing inside a framework designed primarily for the SoT use case, not for the self-service portal use case).
The ITSM row in the table above represents ITSM as the Presentation layer, not as an external integration. When an organization has standardized on ServiceNow or Jira Service Management as the entry point for all service requests, ITSM is the Presentation layer. The automation API is what ITSM calls internally as part of its own workflows; no separate gateway sits between the user and ITSM. The gateway sits between ITSM and the downstream blocks.
One architectural principle that cuts across all approaches: the Presentation layer should be thin. It is a surface, not a system. Business logic belongs in the Orchestrator and SoT. The Presentation layer translates, authenticates, and routes. The moment it begins making automation decisions, the boundary has collapsed.
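To make "thin" concrete, here is a sketch of a gateway handler that does only the three permitted things: authenticate, translate, route. The token values, request shapes, and `forward` callable are hypothetical; a real implementation would sit behind a web framework and call the downstream block's API.

```python
# Hypothetical token-to-role table; in practice this lives in an identity
# provider, not in the gateway's code.
TOKENS = {"svc-servicenow": "operator", "tok-eng-42": "engineer"}

def handle(request: dict, forward) -> dict:
    """Authenticate, translate, route -- and nothing else.

    All business logic (validation against the SoT model, workflow
    decisions) stays downstream in the blocks this forwards to.
    """
    role = TOKENS.get(request.get("token"))
    if role is None:
        return {"status": 401, "error": "unknown token"}
    # Translate the surface-specific payload into the platform's request
    # shape, then route it downstream unchanged.
    platform_request = {
        "role": role,
        "action": request["action"],
        "params": request.get("params", {}),
    }
    return forward(platform_request)
```

The test of thinness: if a rule about what is allowed (as opposed to who may ask) appears in this function, the boundary has collapsed.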
8.3. Implementation Example#
8.3.1. Two Surfaces, Three Audiences, One Platform#
We have followed the campus network through all of Part 2. The VLAN service request for app-payments was modeled in the SoT in Chapter 4, deployed by the Executor in Chapter 5, validated by Observability in Chapter 6, and coordinated end-to-end by the Orchestrator in Chapter 7. What was never addressed is how three different audiences interact with that workflow.
In this campus, the Presentation layer is composed of three components. ServiceNow is the primary interface for the broader organization: application teams submit requests and track status entirely within ServiceNow, which routes through the Presentation layer’s API as part of its own workflow automation. A custom portal with an audit view serves the engineering and compliance audiences: network engineers review pre-check results and act on approval gates there, and security auditors query composite change records through its read-only audit interface. Both surfaces share the same API layer, which sits within the Presentation layer itself and enforces authentication, RBAC, and versioning before any request reaches the underlying blocks.
The three audiences
The application team submits requests through a ServiceNow form. They want to know when the service is ready and what happened if it failed. They should never need to open AWX or Nautobot. ServiceNow is their Presentation layer; the platform API is something they never see.
The network engineer received an approval gate notification during the workflow (Chapter 7, step 3). They need to see the pre-check results for the 24 target switches, approve or reject, and be able to inspect the outcome. Their interface is the custom portal: more detailed than the ServiceNow form, but still bounded to their team’s scope.
The security auditor arrives three months later with a question: who requested this VLAN, who approved it, which version of the deployment workflow ran, and what was the before-and-after state of the affected switches? Their interface is the portal’s audit view: read-only, with no ability to trigger anything.
The two Presentation surfaces
The API layer is the shared foundation within the Presentation layer. Both ServiceNow and the custom portal route every request through it before anything reaches the underlying blocks. It enforces three role-based tokens: the application team’s operator token (read own requests, submit new requests), the network engineer’s engineer token (read all requests in their scope, approve or reject gates), and the auditor’s read-only token (query audit records across the full platform, no write access). Neither surface bypasses it.
```mermaid
flowchart TD
subgraph Consumers
AT[Application Team]
NE[Network Engineer]
SA[Security Auditor]
end
subgraph PL[Presentation Layer]
SN[ServiceNow]
PORTAL[Custom Portal]
API[API Layer: Auth · RBAC · Versioning]
end
subgraph Blocks[Platform Blocks]
ORC[Orchestrator]
SOT[Source of Truth]
OBS[Observability]
end
AT --> SN
NE & SA --> PORTAL
SN & PORTAL --> API
API --> ORC & SOT & OBS
classDef presentation fill:#f0e6ff,stroke:#9b59b6,color:#4a235a
classDef api fill:#e8e8e8,stroke:#555,color:#111,font-weight:bold
classDef block fill:#f5f5f5,stroke:#999,color:#333
class SN,PORTAL presentation
class API api
class ORC,SOT,OBS block
```
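The three token scopes enforced at the API layer can be sketched as plain scope sets checked at the boundary. The scope names are illustrative, not a real product's permission model:

```python
# Hypothetical scope sets for the three role-based tokens: operator
# (application team), engineer, and auditor.
ROLE_SCOPES = {
    "operator": {"requests:read:own", "requests:create"},
    "engineer": {"requests:read:team", "gates:approve", "gates:reject"},
    "auditor":  {"audit:read"},
}

def authorize(role: str, scope: str) -> bool:
    """True if the role's token carries the requested scope."""
    return scope in ROLE_SCOPES.get(role, set())
```

The point of the table is what it forbids: the operator token cannot approve gates, and the auditor token carries no write scope at all, regardless of which surface the request came through.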
Step 1: ServiceNow as the application team surface
The application team fills a ServiceNow form: service name (app-payments), subnet size (/24), building (building-b), requesting team, and business justification. On submission, ServiceNow calls the platform API layer directly as part of its own workflow automation. The API layer validates the payload against the SoT data model, authenticates the service account token ServiceNow uses, and forwards the structured request to the Orchestrator.
If validation fails (for example, the requested building does not match any site in the SoT, or the subnet size conflicts with an existing allocation), the API layer returns a structured error immediately, before any Orchestrator workflow is started. ServiceNow updates the ticket with a clear failure reason: “building-c not found in site registry” or “subnet /24 conflicts with existing allocation 10.22.14.0/24 in building-b.” The application team sees the rejection in their ticket and can correct the request without involving a network engineer. No partial workflow state is created and no Saga rollback is needed, because the workflow never started.
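A sketch of that fail-fast validation, with the site registry and existing allocations inlined as literals standing in for SoT queries. The data and error wording are illustrative:

```python
# Stand-ins for SoT lookups; a real check would query the SoT API.
KNOWN_SITES = {"building-a", "building-b"}
ALLOCATED = {("building-b", "10.22.14.0/24")}

def validate(req: dict) -> list[str]:
    """Return structured errors, or an empty list if the request is valid.

    Runs before any Orchestrator workflow starts, so a rejection creates
    no partial state and needs no Saga rollback.
    """
    errors = []
    if req["building"] not in KNOWN_SITES:
        errors.append(f"{req['building']} not found in site registry")
    elif (req["building"], req["subnet"]) in ALLOCATED:
        errors.append(
            f"subnet {req['subnet']} conflicts with existing allocation "
            f"in {req['building']}"
        )
    return errors
```

Each error message is written for the requester, not the engineer: it names the field and the conflict so the ticket can be corrected without a round trip to the network team.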
As the workflow progresses, the Presentation layer subscribes to Orchestrator status events and translates them into ServiceNow ticket updates: “SoT validation complete,” “Pre-checks running: 24 switches,” “Approval gate: pending engineer sign-off,” “Complete: 24/24 switches configured.” The application team watches their ticket update without knowing that the Orchestrator, the SoT, or AWX exist.
Step 2: The approval gate surface
When the workflow reaches the approval gate, the Orchestrator pauses and emits an event. The Presentation layer receives it, identifies the network engineer responsible for Building B, and sends an approval request to their Slack channel with a direct link to the gate review page. The review page shows the pre-check results per switch: which passed, which failed, which were skipped, and why. The engineer approves or rejects from the portal. The action is logged: who approved, from which interface, at what time, under which token.
Step 3: The audit view
Three months later, the security auditor queries the Presentation layer API: “show me the full change record for VLAN app-payments, Building B.” The read-only audit endpoint aggregates three sources:
- The Orchestrator’s execution record (Chapter 7, section 7.2.4): which workflow version ran, every step input and output, any Saga compensation actions
- The SoT change record (Chapter 4): before-and-after state of the VLAN definition
- The Presentation layer’s own authorization log: who submitted the request, which token they used, who approved the gate and when
The response is a structured document the auditor can attach to the change management record. None of the three underlying blocks was designed to produce this composite record independently. The Presentation layer assembled it from their individual APIs under the auditor’s read-only token.
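The aggregation itself can be sketched as a function over the three blocks' read-only query APIs, here represented as callables; the record keys and payloads are hypothetical.

```python
def composite_audit_record(orchestrator, sot, authz, service: str) -> dict:
    """Assemble one audit document from three per-block queries.

    Each argument is a callable(service) -> dict standing in for a block's
    read-only API; the Presentation layer owns only the composition, not
    any of the underlying records.
    """
    return {
        "service": service,
        "execution": orchestrator(service),   # workflow version, step I/O
        "intent_change": sot(service),        # before/after SoT state
        "authorization": authz(service),      # who submitted, who approved
    }
```

Because the function only composes, the auditor's read-only token is sufficient for every underlying call; no elevated access is needed to produce the record.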
What this demonstrates
The same ten-step workflow from Chapter 7 served three different audiences through two Presentation surfaces without any of those audiences needing to understand the underlying platform. ServiceNow served the broader organization: the application team tracked their request through a tool they already use every day, with no awareness of AWX, Nautobot, or the Orchestrator behind it. The custom portal served the engineering and compliance audiences: the network engineer reviewed a clean approval interface scoped to their team’s requests, and the auditor queried a composite change record assembled from three blocks through a single read-only view. One API layer, enforcing the same access model for both surfaces, made the platform accessible without making it visible.
8.4. Summary#
The Presentation layer is the last building block in Part 2, and it is the one most likely to be treated as an afterthought. The blocks below it do the substantive work: holding intent, applying changes, validating results, coordinating workflows. The Presentation layer produces none of that. But without it, the platform is only accessible to the people who built it, and every other audience stays dependent on a human intermediary.
The API layer is the foundation. Authentication and authorization enforced at the API boundary (not per tool) is what separates accessible automation from dangerous automation. Versioning and stable contracts are what separate a platform from a prototype that breaks its callers on every update. The Model Context Protocol (MCP) interface extends the same access model to Large Language Model (LLM)-based agents, making the platform available to the agentic orchestration patterns introduced in Chapter 7 and developed further in Chapter 17.
Client interfaces are different form factors of the same underlying API. A GUI portal with progressive disclosure serves non-engineers who need to request and track automation without understanding the platform. A CLI serves operators who need speed, scriptability, and CI/CD integration. ChatOps and mobile surfaces serve approval flows and incident queries. The decision about which surfaces to build should follow a deliberate sequence: start with embedded block UIs for engineering audiences, integrate with ITSM when non-engineers need to request automation, build a custom portal only when ITSM proves insufficient.
Integrations and notifications close the loop that Chapter 7’s async response contract opened. The Orchestrator produces a workflow result; the Presentation layer delivers it to the audience that triggered the action through the channel they already use. Bidirectional ITSM synchronization, webhook callbacks, and push notifications are not convenience features. They are what make automation visible to the people who depend on it.
The campus scenario showed this in practice: one workflow, three audiences, two Presentation surfaces. ServiceNow served the broader organization as its own Presentation surface, calling the platform API directly and surfacing status through familiar ticket updates. The custom portal served the engineering and compliance audiences: the network engineer reviewed a clean approval interface scoped to their team’s requests, and the auditor queried a composite change record assembled from the Orchestrator, the SoT, and the Presentation layer’s own authorization log. The same API layer enforced the access model for both. The platform was no longer invisible.
With the Presentation layer in place, Part 2 has covered all seven building blocks of the NAF framework. The next chapter turns to the one block that all of this automation ultimately acts upon: The Network itself, covered in Chapter 9.