
Command · Cliptics · google workspace · v1.0.0 · MIT

Easy GWS ModelArmor Sanitize

One-step command to sanitize user prompts through a Google ModelArmor safety template, catching harmful content, injection attacks, and policy violations before they reach your AI model.

When to Use This Command

Run this command when you need to quickly screen a user prompt through a ModelArmor safety template without constructing the full API call manually.

  • You have a ModelArmor template configured and want to test it against specific user inputs
  • You need to integrate prompt sanitization into a shell script or CI pipeline
  • You want to validate that user-facing input passes safety checks before forwarding to an LLM
  • You are debugging why certain prompts are being blocked by your safety filters

Also use it when:

  • You need to process prompts from stdin in a pipeline workflow
  • You want to compare sanitization results across different templates

Quick Start

# .claude/commands/easy-gws-modelarmor-sanitize.md
name: easy-gws-modelarmor-sanitize
description: Quick prompt sanitization through ModelArmor
arguments:
  template: Full template resource name
  text: The user prompt to sanitize

# Sanitize a prompt with explicit text
claude easy-gws-modelarmor-sanitize "--template projects/my-project/locations/us-central1/templates/prod-filter --text 'How do I reset my password?'"
Expected output:
{
  "sanitizationResult": {
    "allowed": true,
    "content": "How do I reset my password?",
    "filterMatches": []
  }
}
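In a script or pipeline, the JSON above can be gated on directly. A minimal sketch, assuming the response shape shown in the expected output (a missing or false `allowed` field is treated as blocked, i.e. fail closed):

```python
import json

def is_allowed(raw: str) -> bool:
    """Return True only when the sanitization result permits the prompt.

    Assumes the response shape shown above; anything else
    is treated as blocked (fail closed).
    """
    result = json.loads(raw).get("sanitizationResult", {})
    return result.get("allowed", False) is True

response = (
    '{"sanitizationResult": {"allowed": true, '
    '"content": "How do I reset my password?", "filterMatches": []}}'
)
print(is_allowed(response))  # True
```

Failing closed matters here: if the command ever returns an unexpected shape, the prompt should not slip through to the model by default.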

Core Concepts

Concept                Description
Prompt Sanitization    Screening user input for harmful or policy-violating content
Template               A pre-configured set of safety rules in ModelArmor
Filter Match           A detected policy violation with category and confidence
Pass-Through           Content that passes all safety checks unchanged
Blocked Content        Input flagged and rejected by the safety template

Sanitization Flow:
  Raw Prompt ──> ModelArmor Template ──> Filter Engine
                                             │
                              ┌──────────────┼──────────────┐
                              v              v              v
                          ALLOWED        MODIFIED        BLOCKED
                         (pass-through)  (redacted)     (rejected)

Configuration

Parameter    Default     Description
template     required    Full resource path projects/P/locations/L/templates/T
text         stdin       Plain text prompt to sanitize
json         none        Full JSON body overriding --text
format       json        Response format: json, table, yaml, csv
dry-run      false       Validate without executing the API call
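The table implies an input precedence: an explicit --json body wins, then --text, then stdin. A sketch of that resolution logic (hypothetical helper; the `userPromptData` body shape is an assumption for illustration, not taken from the command's source):

```python
def resolve_body(text=None, json_body=None, stdin_text=None):
    """Resolve the request body per the precedence the table implies:
    an explicit JSON body wins, then --text, then stdin.

    Hypothetical helper; the "userPromptData" shape is an assumption.
    """
    if json_body is not None:
        return json_body
    chosen = text if text is not None else stdin_text
    if chosen is None:
        raise ValueError("no prompt given via --json, --text, or stdin")
    return {"userPromptData": {"text": chosen}}

print(resolve_body(text="hello"))  # {'userPromptData': {'text': 'hello'}}
```

Raising when no input source is present keeps a misconfigured pipeline from silently sanitizing an empty prompt.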

Best Practices

  1. Keep template resource names in environment variables -- Store long template paths in .env files or shell variables to avoid typos and simplify command invocation.

  2. Use stdin for multi-line prompts -- Pipe content through stdin for complex or multi-line user inputs: cat prompt.txt | gws modelarmor +sanitize-prompt --template $TEMPLATE.

  3. Log all sanitization results -- Capture the full JSON output including filter matches for audit trails and tuning your safety templates over time.

  4. Test with edge cases -- Run prompts that sit on the boundary of your safety rules to understand how the template handles ambiguous content.

  5. Pair with response sanitization -- For complete safety coverage, sanitize both the inbound prompt and the outbound model response using the corresponding +sanitize-response command.
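Practice 3 above (logging every result) is easy to wire up as an append-only JSON Lines audit trail. A minimal sketch, assuming the JSON result dict from the command's output; the file name is only an example:

```python
import json
import time

def audit_line(result: dict) -> str:
    """Serialize one sanitization result as a JSON Lines entry,
    keeping filterMatches intact for later template tuning."""
    return json.dumps({"ts": time.time(), "result": result})

def log_result(result: dict, path: str = "sanitize-audit.jsonl") -> None:
    """Append the entry to an audit file (path is illustrative)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(audit_line(result) + "\n")
```

One JSON object per line keeps the log greppable and trivially loadable for later analysis of which filter categories fire most often.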

Common Issues

  1. Template not found error -- Confirm the template actually exists (for example, list templates with gcloud model-armor templates list --location=LOCATION) and double-check the project ID, location, and template ID in the resource path.

  2. Timeout on large prompts -- Very long text inputs may exceed API limits. Break large content into smaller chunks and sanitize each one individually.

  3. Unexpected blocking of safe content -- False positives indicate overly aggressive template rules. Review the filterMatches output to identify which filter category triggered and adjust the template thresholds.
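For issue 2 above, splitting oversized input into pieces that can each be sanitized separately can be sketched as follows (the 2000-character default is illustrative, not a documented API limit):

```python
def chunk_text(text: str, max_chars: int = 2000) -> list[str]:
    """Split a long prompt into chunks of at most max_chars characters,
    preferring to break on whitespace so words stay intact.
    The default limit is illustrative, not a documented API limit."""
    chunks, start = [], 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        if end < len(text):
            cut = text.rfind(" ", start, end)  # last space in the window
            if cut > start:
                end = cut
        chunks.append(text[start:end].strip())
        start = end
    return [c for c in chunks if c]

print(chunk_text("word " * 100, max_chars=30))
```

Each chunk can then be passed through the command individually; note that splitting may weaken detection of attacks that span chunk boundaries, so the results should be combined conservatively (block the whole prompt if any chunk is blocked).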
