# AI Agents & Prompt Templates
Kiket’s AI features are powered by a definition-first approach: every agent is described by a YAML manifest and paired with a versioned prompt template. This section explains how workspace owners and template authors can manage those files.
## Where Definitions Live
Agent and prompt definitions are loaded from three locations (in priority order):
1. `config/kiket/agents` / `config/kiket/prompts` – first-party defaults that ship with the product.
2. `definitions/<template>/.kiket/agents|prompts` – sample repositories used when bootstrapping new customers.
3. `storage/workflow_sources/<org>/<repo>/.kiket/agents|prompts` – synced customer repositories.
Duplicate IDs are rejected at boot, so a single manifest controls each agent across all sources.
Looking to publish your own agents through configuration-as-code? See the Agent Definition DSL for the schema used by `ConfigurationLoader`.
## Agent Manifest Highlights
Agent manifests live under `.kiket/agents/*.yml` and include:
- `model_version` – schema version (`1.0`).
- `id`, `version`, `name`, `description`.
- `capabilities` – tags (e.g. `summarize`, `classify`) used by KiketScript and the command palette.
- `context.required` / `context.optional` – declare which data sources must be assembled (issue metadata, analytics snapshots, etc.).
- `human_in_loop` – settings for approval workflows.
- `confidence_threshold` and free-form `metadata`.
All manifests are parsed through ERB, but unknown helpers are left intact so mistakes are obvious in reviews.
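For orientation, here is a minimal manifest sketch. The field names follow the schema above; the agent id, capability tags, context providers, and every value shown are hypothetical.

```yaml
# .kiket/agents/issue_summarizer.yml (hypothetical example values)
model_version: "1.0"
id: issue_summarizer               # lowercase slug, unique across all sources
version: "1.2.0"                   # semantic version for rollout tracking
name: Issue Summarizer
description: Summarizes long issue threads for triage.
capabilities:
  - summarize                      # tags used by KiketScript and the command palette
context:
  required:
    - issues                       # providers assembled before the model call
  optional:
    - analytics_snapshot
human_in_loop:
  required: false
  escalation_strategy: project_lead  # illustrative value
  notes: Escalate only for customer-facing issues.
confidence_threshold: 0.7          # below this, output is routed to human review
metadata:
  rollout_flag: beta               # free-form key/value pairs
```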
## Prompt Templates
Prompt templates sit beside manifests under `.kiket/prompts/*.yml` (or `.yml.erb`). Each template defines:

- `variables` – the fields that can be interpolated inside the `body`.
- `body` – the actual ERB prompt sent to Gemini 2.5 Flash.
- Optional `output_schema` or `output_schema_path` – JSON schema used to validate the model response.
Templates must be referenced from a manifest via the `prompt` key.
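A matching template sketch, again with hypothetical ids and content; only the field names come from the schema below:

```yaml
# .kiket/prompts/issue_summary.yml (hypothetical template)
model_version: "1.0"
id: issue_summary                # referenced from a manifest via `prompt: issue_summary`
version: "1.0.0"
variables:
  - issue_title                  # snake_case; missing variables raise at runtime
  - issue_body
body: |
  Summarize the following issue in three bullet points.
  Title: <%= issue_title %>
  Body: <%= issue_body %>
output_schema:                   # JSON schema used to validate the model response
  type: object
  properties:
    summary:
      type: string
  required: [summary]
```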
## Linting Before Deploy
Run the AI lint command from the CLI (`kiket ai lint`) or trigger it via the admin automation console to validate agents and prompts before rollout. The lint job fails fast on duplicate IDs, invalid field values, malformed JSON schema, or ERB errors.
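If you gate merges in CI, one option is to run the linter on every change to the definition directories. This is a sketch assuming a GitHub Actions style pipeline with the Kiket CLI already installed on the runner; the workflow shape is an assumption, and only `kiket ai lint` comes from the docs above.

```yaml
# Hypothetical CI wiring; assumes the Kiket CLI is on the runner's PATH
name: kiket-ai-lint
on:
  pull_request:
    paths:
      - ".kiket/agents/**"
      - ".kiket/prompts/**"
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: kiket ai lint  # fails fast on duplicate IDs, invalid fields, bad schema, ERB errors
```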
## Manifest Field Summary
| Key | Description |
|---|---|
| `model_version` | Schema version, currently `1.0`. |
| `id` | Lowercase slug (letters, digits, `.`, `_`, `-`). Must be unique across every repository. |
| `version` | Semantic version string for rollout tracking. |
| `name` / `description` | Human-facing metadata shown in admin tooling. |
| `capabilities` | Tags that describe what the agent can do (`summarize`, `classify`, etc.). |
| `context.required` | Data providers that must be assembled before calling the model (e.g. `issues`, `analytics_snapshot`). |
| `context.optional` | Additional context that improves the result when available. |
| `human_in_loop` | Approval settings (`required`, `escalation_strategy`, `notes`). |
| `confidence_threshold` | Float between `0.0` and `1.0` used to trigger human review. |
| `metadata` | Free-form key/value pairs for UI hints or rollout flags. |
## Prompt Template Field Summary
| Key | Description |
|---|---|
| `model_version` | Schema version (`1.0`). |
| `id` | Template slug referenced by a manifest’s `prompt` value. |
| `version` | Semantic version for change tracking. |
| `variables` | Snake_case variables usable within the `body`. Missing variables raise an error at runtime. |
| `body` | The ERB prompt sent to Gemini 2.5 Flash. |
| `output_schema` / `output_schema_path` | JSON schema describing the model response. |
| `metadata` | Optional structured data such as max token limits or safety settings. |
## Task Generation Output & Approvals
The Vertex AI task generator now emits richer metadata so human reviewers can intervene when needed:
- Operational context — each call automatically enriches the prompt with the latest sprint history, linked pull requests, deployments, and incident postmortems so generated tasks reflect real project conditions.
- Structured confidence — every task includes a `rationale` string and a `confidence` score (0.0 – 1.0). These fields are exposed in API responses, audit logs, and `issue.metadata` for downstream consumers (a sketch follows this list).
- Low-confidence handoffs — when the confidence score falls below `0.65`, Kiket raises a pending `WorkflowApproval` assigned to the project lead (or the organization’s `manager` role). Teams can review, edit, or reject those items before they leave the backlog.
- Acceptance criteria templates — if the model omits acceptance criteria, type-specific templates are applied so every task retains verifiable outcomes.
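To make the handoff concrete, the AI fields on a generated task might look roughly like this. The `ai_task_generation` namespace and the `approval_state` key are assumptions for illustration; `rationale`, `confidence`, and the `0.65` threshold come from the list above.

```yaml
# Hypothetical snapshot of AI fields on a generated task
issue:
  title: "Add retry logic to the webhook dispatcher"
  metadata:
    ai_task_generation:           # assumed namespace inside issue.metadata
      rationale: "Incident postmortem cites repeated webhook delivery failures."
      confidence: 0.58            # below 0.65, so a pending WorkflowApproval is raised
      approval_state: pending     # assumed key; reviewed by the project lead or manager role
```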
## Assignment Optimizer 2.0
- Workload & skills awareness — assignment recommendations now blend live workload snapshots, a structured skills matrix, and historical velocity to surface the best teammate for each issue.
- Pairing and delegation guidance — when risk is high or nobody has capacity, the agent proposes a pairing partner or routes the review to project leads instead of auto-assigning.
- Guardrails by default — auto-assignment only happens at ≥80% confidence with low workload impact; everything else flows through the Workflow Approval queue.
- Override with rationale — accepting or overriding the suggestion captures the explanation in the audit log, keeping a transparent trail of assignment decisions.
## Additional Resources
- Agent Manifest Reference
- Agent Definition DSL
- Prompt Template Guide
- AI roadmap highlights remain in the platform changelog and will be announced before new features roll out.
## Command Palette Quick Actions
- Agents that include a `metadata.command_palette` block are exposed in the project command palette under the “AI Quick Actions” group (see the sketch after this list).
- Each command executes via `Agents::Runner`, so spend limits, audit logs, and usage metrics are recorded automatically.
- Optional fields such as `store_result_in` let the action persist the AI output (e.g. as a comment or custom field) after the command runs.
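As a rough sketch, such a block could look like the excerpt below. Only `metadata.command_palette` and `store_result_in` appear in the documentation above; the `label` key and its value are assumptions.

```yaml
# Hypothetical excerpt from an agent manifest
metadata:
  command_palette:
    label: "Summarize this issue"  # assumed key: text shown under "AI Quick Actions"
    store_result_in: comment       # persists the AI output as a comment after the run
```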