ai_security.ai_security
This BigQuery table contains enriched records of AI security violations. These records include debug metadata, workflow context, raw content, and validation/model metadata.
You can use this table for various purposes, including debugging individual violations, grouping similar violations, identifying common false positives, and building dashboards and alerts (see the workflows and best practices below).

Table location: project `your_project` → dataset `ai_security` → table `ai_security`, i.e. `your_project.ai_security.ai_security`.
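For example, a minimal sanity-check query (a sketch; replace `your_project` with your project id):

```sql
-- Count today's records to confirm the table is reachable and receiving data.
SELECT COUNT(*) AS records_today
FROM `your_project.ai_security.ai_security`
WHERE DATE(timestamp) = CURRENT_DATE();
```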
Top-level fields:

- `logName` — Log stream name.
- `resource.type` — GCP resource type (e.g., `k8s_container`).
- `resource.labels.pod_name` — Pod name (k8s).
- `resource.labels.location` — Region / zone.
- `resource.labels.namespace_name` — K8s namespace.
- `resource.labels.cluster_name` — Cluster name.
- `resource.labels.project_id` — GCP project id.
- `timestamp` — Event timestamp (when it occurred).
- `receiveTimestamp` — When the log was ingested.
- `insertId` — Unique insert id (dedupe).
- `labels.commit_hash`, `labels.branch`, `labels.full_version` — Build/version metadata.

`jsonPayload.ai_security` contains a JSON representation of the `AiSecurityLogEntry` proto. Important fields and their meaning:
- `event_id` (string) — globally unique event id.
- `event_type` (enum) — e.g., `VIOLATION`.
- `event_description` (string) — human-readable description.
- `user_id` (string) — user that triggered the event.
- `session_info` — object with `tab_id` and `session_tracking_token`.
- `action` (enum) — enforcement taken (`BLOCK_REQUEST` / `ALLOW_REQUEST`).
- `content_raw` (string) — raw content that caused the event (user prompt or retrieved content).
- `content_metadata` (repeated `{key,value}`) — context keys such as `RESOURCE_NAME`, `RESOURCE_ID`, `RESOURCE_URL`, `AGENT_NAME`, `RUN_ID`, `CHAT_SESSION_ID`, `AGENT_ID`, `SOURCE`.
- `validation_metadata` (repeated `{key,value}`) — model prediction / validation debugging key-values.
In the example queries below, `UNNEST()` is used to flatten the repeated metadata arrays. Replace `YOUR_PROJECT` with your project id.
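A minimal lookup sketch, assuming `jsonPayload.ai_security` is exported as a nested record and `EVENT_ID_FROM_DASHBOARD` is a placeholder for a real event id:

```sql
-- Fetch raw content plus flattened context metadata for a single event.
SELECT
  timestamp,
  jsonPayload.ai_security.event_type  AS event_type,
  jsonPayload.ai_security.action      AS action,
  jsonPayload.ai_security.content_raw AS content_raw,
  cm.key   AS context_key,
  cm.value AS context_value
FROM `YOUR_PROJECT.ai_security.ai_security`,
  UNNEST(jsonPayload.ai_security.content_metadata) AS cm
WHERE jsonPayload.ai_security.event_id = 'EVENT_ID_FROM_DASHBOARD'
  -- Filter early on timestamp to limit the bytes scanned.
  AND timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY);
```

Note that the comma form is a CROSS JOIN, so events with an empty `content_metadata` array drop out; use `LEFT JOIN UNNEST(...)` if you need those rows too. The same pattern works for `validation_metadata`.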
A typical debugging and triage workflow:

1. Get the `event_id` from the Findings dashboard or the Run ID.
2. Look up that `event_id` in BigQuery to fetch `content_raw`, `content_metadata`, and `validation_metadata`.
3. Inspect the `llm_call` and `agent_span` fields (if present) to see the prompt / response context.
4. Group similar violations by `content_metadata.RESOURCE_ID` or by normalized `content_raw` hashes (see the sketch after this list).
5. Aggregate over `validation_metadata` keys (e.g., model label or confidence buckets) to identify common false positives.
6. Cross-reference `digest` entries with workflow/compiler logs (`workflow`, `workflow_compiler`) in the exported fields to see enqueue vs. execution differences.
7. Use the `ai_security_reporting` dataset for dashboards.
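A grouping sketch along those lines, assuming `RESOURCE_ID` is populated in `content_metadata` and enum values are exported as strings:

```sql
-- Rank resources by violation volume over the last 30 days.
SELECT
  cm.value AS resource_id,
  COUNT(*) AS violations,
  COUNTIF(jsonPayload.ai_security.action = 'BLOCK_REQUEST') AS blocked
FROM `YOUR_PROJECT.ai_security.ai_security`,
  UNNEST(jsonPayload.ai_security.content_metadata) AS cm
WHERE cm.key = 'RESOURCE_ID'
  AND jsonPayload.ai_security.event_type = 'VIOLATION'
  AND timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY resource_id
ORDER BY violations DESC
LIMIT 100;
```

Swapping the grouping key for `TO_HEX(SHA256(jsonPayload.ai_security.content_raw))` groups by identical content instead; a hash only matches exact duplicates, so normalize the text first if you want looser grouping.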
Best practices:

Query only the columns you need (avoid `SELECT *`), use `UNNEST()` carefully, and filter early (e.g., on `timestamp`).

`content_raw` may contain sensitive user content. Limit access via IAM and consider creating sanitized views that mask or redact `content_raw` before sharing with wider teams, along the lines of the sketch below.
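A sketch of such a view (the `ai_security_reporting.violations_sanitized` name is hypothetical):

```sql
-- Sanitized view: expose event metadata but replace raw content with a hash.
CREATE OR REPLACE VIEW `YOUR_PROJECT.ai_security_reporting.violations_sanitized` AS
SELECT
  timestamp,
  jsonPayload.ai_security.event_id   AS event_id,
  jsonPayload.ai_security.event_type AS event_type,
  jsonPayload.ai_security.action     AS action,
  jsonPayload.ai_security.user_id    AS user_id,
  -- A SHA-256 hex digest keeps "same content" grouping possible without exposing text.
  TO_HEX(SHA256(jsonPayload.ai_security.content_raw)) AS content_hash
FROM `YOUR_PROJECT.ai_security.ai_security`;
```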
For alerting (e.g., a spike of `BLOCK_REQUEST` events in a short window), schedule queries that write to a metric table and tie it to Cloud Monitoring or a Cloud Function that publishes alerts; a sketch follows below. Don't expose `content_raw` unnecessarily.
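A sketch of such a scheduled query (the destination metric table, e.g. `ai_security_reporting.block_requests_hourly`, is hypothetical; configure the destination via BigQuery scheduled queries):

```sql
-- Hourly count of blocked requests; a monitor can alert when this spikes.
SELECT
  TIMESTAMP_TRUNC(timestamp, HOUR) AS window_start,
  COUNT(*) AS block_request_count
FROM `YOUR_PROJECT.ai_security.ai_security`
WHERE jsonPayload.ai_security.action = 'BLOCK_REQUEST'
  AND timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
GROUP BY window_start;
```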