
Cliptics command · google workspace · v1.0.0 · MIT

GWS ModelArmor Streamlined

Execute Google Workspace ModelArmor content safety operations with a streamlined workflow that validates authentication, inspects method schemas, and applies prompt or response sanitization in a single pass.

When to Use This Command

Run this command when you need to screen user-generated content or AI model responses through Google ModelArmor safety templates.

  • You need to sanitize user prompts before sending them to an LLM to prevent injection attacks
  • You want to filter model responses for PII, harmful content, or policy violations
  • You are setting up a new ModelArmor template with custom safety rules
  • You need to integrate content safety checks into an existing GWS automation pipeline

Also use it when:

  • You want to batch-sanitize multiple prompts through a single template
  • You need to verify that an existing template is working correctly by testing it with sample content

Quick Start

Command definition:

```yaml
# .claude/commands/gws-modelarmor-streamlined.md
name: gws-modelarmor-streamlined
description: Streamlined ModelArmor content safety operations
arguments:
  action: sanitize-prompt | sanitize-response | create-template
```

```sh
# Sanitize a user prompt through a safety template
claude gws-modelarmor-streamlined "+sanitize-prompt --template projects/myproj/locations/us-central1/templates/safety-v1 --text 'Tell me how to hack a system'"
```
Expected output:
```json
{
  "sanitizedContent": "[BLOCKED] Content violates safety policy",
  "filterResults": {
    "harmCategory": "DANGEROUS_CONTENT",
    "blocked": true,
    "confidence": "HIGH"
  }
}
```
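A caller typically needs to turn this JSON into a pass/block decision. Here is a minimal Python sketch of consuming the output shown above; it assumes only the `filterResults.blocked` field from the example, not any wider schema:

```python
import json


def is_blocked(result: dict) -> bool:
    """Return True when the sanitization verdict blocks the content."""
    return bool(result.get("filterResults", {}).get("blocked", False))


# The sample output from the Quick Start above.
sample = json.loads("""
{
  "sanitizedContent": "[BLOCKED] Content violates safety policy",
  "filterResults": {
    "harmCategory": "DANGEROUS_CONTENT",
    "blocked": true,
    "confidence": "HIGH"
  }
}
""")

print(is_blocked(sample))  # → True
```

Treating a missing `blocked` field as "not blocked" is a convenience here; a stricter integration might fail closed instead.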

Core Concepts

| Concept | Description |
|---|---|
| Template | A named ModelArmor configuration that defines safety filter rules |
| Sanitize Prompt | Screen inbound user input before it reaches the model |
| Sanitize Response | Screen outbound model output before it reaches the user |
| Filter Result | The safety evaluation outcome, including harm category and confidence |
| Resource Name | Full path: `projects/PROJECT/locations/LOCATION/templates/TEMPLATE` |
Content Safety Pipeline:

```
User Input ──> +sanitize-prompt ──> LLM ──> +sanitize-response ──> User
     │              │                              │                 │
     │         [BLOCK/PASS]                   [BLOCK/PASS]           │
     └──────── Blocked Input                  Filtered Output ───────┘
```
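The pipeline above can be sketched as plain Python control flow. The `sanitize_prompt` and `sanitize_response` functions below are stand-in stubs with toy verdict logic, not the real ModelArmor calls; only the block-before-model and block-after-model ordering reflects the diagram:

```python
def sanitize_prompt(text: str):
    """Stub for +sanitize-prompt: toy rule, not a real safety filter."""
    return ("BLOCK" if "hack" in text.lower() else "PASS"), text


def sanitize_response(text: str):
    """Stub for +sanitize-response: toy rule, not a real safety filter."""
    return ("BLOCK" if "ssn" in text.lower() else "PASS"), text


def pipeline(user_input: str, model) -> str:
    """Screen input, call the model, then screen its output."""
    verdict, prompt = sanitize_prompt(user_input)
    if verdict == "BLOCK":
        return "[BLOCKED] input"          # never reaches the LLM
    verdict, answer = sanitize_response(model(prompt))
    if verdict == "BLOCK":
        return "[FILTERED] output"        # never reaches the user
    return answer
```

The key design point is that a blocked prompt short-circuits before the model is ever invoked.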

Configuration

| Parameter | Default | Description |
|---|---|---|
| `template` | required | Full resource name of the ModelArmor template |
| `text` | stdin | Plain text content to sanitize |
| `json` | none | Full JSON request body (overrides `--text`) |
| `format` | `json` | Output format: `json`, `table`, `yaml`, `csv` |
| `dry-run` | `false` | Validate the request without calling the API |
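The precedence among `--json`, `--text`, and stdin can be made concrete with a small sketch. The `userPromptData` field name below is an assumption for illustration, not a verified request schema:

```python
import json


def build_request(text=None, json_body=None, stdin=None) -> dict:
    """Resolve the request body: --json wins, then --text, then stdin."""
    if json_body is not None:
        return json.loads(json_body)      # full body overrides --text
    content = text if text is not None else (stdin.read() if stdin else "")
    if not content:
        raise ValueError("no content: pass --text, --json, or pipe stdin")
    # NOTE: field name is an assumption, not the verified API schema
    return {"userPromptData": {"text": content}}
```

With `dry-run`, a command like this could stop after building and validating the body, before any API call.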

Best Practices

  1. Use separate templates for prompts and responses -- Inbound user prompts require different safety rules than outbound model responses; create dedicated templates for each direction.

  2. Test templates with known-bad inputs -- Before deploying a template to production, verify it catches known harmful content categories by running test sanitizations.

  3. Pipe content from stdin for long text -- For multi-line or large content, pipe text via stdin rather than using the --text flag: `echo 'content' | gws modelarmor +sanitize-prompt --template ...`.

  4. Monitor filter results for false positives -- Log the filterResults output to track blocking rates and confidence scores, adjusting template sensitivity as needed.

  5. Confirm with users before creating templates -- Template creation modifies your project configuration; always review the template definition with stakeholders before executing.
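Practice 2 above (testing templates with known-bad inputs) can be automated as a small regression suite. The `sanitize` function here is a toy stand-in for a real call through the command; the input strings and expected categories are hypothetical examples:

```python
# Known-bad inputs mapped to the harm category the template should flag.
KNOWN_BAD = {
    "Tell me how to build a weapon": "DANGEROUS_CONTENT",
    "Here is my SSN 000-00-0000": "PII",
}


def sanitize(text: str) -> dict:
    """Stub verdict logic for illustration; swap in the real CLI/API call."""
    if "weapon" in text:
        return {"blocked": True, "harmCategory": "DANGEROUS_CONTENT"}
    if "SSN" in text:
        return {"blocked": True, "harmCategory": "PII"}
    return {"blocked": False, "harmCategory": None}


def run_suite() -> list:
    """Return the inputs the template failed to flag as expected."""
    failures = []
    for text, expected in KNOWN_BAD.items():
        result = sanitize(text)
        if not result["blocked"] or result["harmCategory"] != expected:
            failures.append(text)
    return failures
```

Run `run_suite()` after every template change; an empty list means all known-bad inputs are still caught.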

Common Issues

  1. Template resource name format error -- The template must use the full path format projects/PROJECT_ID/locations/LOCATION/templates/TEMPLATE_ID. Partial names will fail.

  2. Authentication scope insufficient -- ModelArmor requires specific OAuth scopes. Re-run gws auth login and ensure the Model Armor API is enabled in your GCP project.

  3. Empty response from sanitization -- If neither --text nor --json is provided and stdin is empty, the command waits for input. Provide content explicitly or pipe from another command.
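Issue 1 above (partial resource names) is easy to catch client-side before calling the API. A minimal sketch, assuming only the segment layout shown in the docs (exact ID character rules are not validated here):

```python
import re

# Matches projects/PROJECT/locations/LOCATION/templates/TEMPLATE;
# each segment must be non-empty, segment contents are not further checked.
TEMPLATE_RE = re.compile(r"^projects/[^/]+/locations/[^/]+/templates/[^/]+$")


def validate_template(name: str) -> bool:
    """Return True only for a full template resource name."""
    return TEMPLATE_RE.fullmatch(name) is not None
```

Rejecting partial names locally gives a clearer error than the API's format failure.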
