
VLM Playground (Beta)

The VLM Playground provides an interactive platform for users to experiment with Vision-Language Models (VLMs) and configure templates for various analysis tasks. It allows users to test inputs, observe outputs, and save reusable templates for streamlined workflows.

How to Use VLM Playground

  • Upload Image

    Select an image to analyze using the VLM model.

  • Instruction

    Describe what the VLM should do. The instruction guides the model's behavior and response. Configure the input prompt and explicitly define the parameter keys with their expected output types.

Instruction Suggestion

The system automatically detects output types such as string, number, vec, and boolean.

Users are encouraged to explicitly define output expectations in the prompt and provide detailed descriptions for each parameter. This ensures stable and structured data formats, enabling consistent results when templates are reused.

Example: For a question like "What is the weather?", specify single-choice options (e.g., Sunny, Cloudy, Rainy) and a format type (vec).
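Following this advice, an instruction might look like the sketch below. The parameter names and wording are illustrative, not a format the product requires:

```
What is the weather in this image?
Answer with a single choice from: Sunny, Cloudy, Rainy.
Return the answer as parameter "weather" (vec).
Also return parameter "description" (string): one sentence describing the scene.
```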

Why Define Outputs Clearly?

  • Explicitly defining expected output types in the prompt ensures:

    • Consistency: Guarantees that future tasks using the template produce stable and structured data formats.

    • Reliability: Reduces the chances of unexpected results or variations in output.

    • Data Analysis: Stable, structured data is easier to analyze and integrate into larger workflows.

  • Adjust Settings

    Fine-tune model settings such as temperature for creativity or token limit for response length.

  • Submit to Run Analysis

    Submit the input to the VLM and observe the model's output in real time.

  • Save as Template

    Once satisfied with the setup, click Save Template to store the configuration for future use.
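Because the template defines parameter keys and output types up front, downstream code can check each response against that schema before using it. A minimal sketch of such a check, assuming a hypothetical dictionary-shaped response (this is not the Observ API itself):

```python
# Hypothetical downstream validation of a structured VLM response.
# EXPECTED_SCHEMA mirrors the parameter keys and output types you
# defined in the template; the keys below are illustrative only.
EXPECTED_SCHEMA = {
    "weather": str,       # single choice: Sunny / Cloudy / Rainy
    "person_count": int,  # number
    "is_abnormal": bool,  # boolean
}

def validate_response(response, schema=EXPECTED_SCHEMA):
    """Return a list of problems; an empty list means the response is usable."""
    problems = []
    for key, expected_type in schema.items():
        if key not in response:
            problems.append(f"missing key: {key}")
        elif not isinstance(response[key], expected_type):
            problems.append(
                f"{key}: expected {expected_type.__name__}, "
                f"got {type(response[key]).__name__}"
            )
    return problems

good = {"weather": "Sunny", "person_count": 3, "is_abnormal": False}
bad = {"weather": "Sunny", "person_count": "three"}
print(validate_response(good))  # []
print(validate_response(bad))
```

Rejecting malformed responses early is what makes the "Consistency" and "Reliability" benefits above hold in practice.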


VLM Templates List

Navigate to the VLM Templates List tab to view, edit, or reuse existing templates.

  • View all saved templates in the Templates List section.

  • Templates are organized by name, parameter keys, and creation date for easy access and management.

Templates for Tasks:

These templates can be directly applied to VLM tasks to automate analysis processes such as:

  • Detecting scene details (e.g., weather, object types, abnormal events).

  • Providing structured outputs for population density, traffic analysis, or emergency detection.

  • Supporting event-based triggers, ensuring detailed and consistent context for detected events.
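Structured template outputs also aggregate cleanly across many detections. A small illustration with hypothetical per-frame records (the record shape is an assumption, not a documented format):

```python
from collections import Counter

# Hypothetical structured outputs collected from repeated template runs.
records = [
    {"weather": "Sunny", "person_count": 12},
    {"weather": "Rainy", "person_count": 4},
    {"weather": "Sunny", "person_count": 9},
]

# Because every record has the same keys and types, simple analysis works
# without any per-record cleanup.
weather_freq = Counter(r["weather"] for r in records)
total_people = sum(r["person_count"] for r in records)

print(weather_freq)   # Counter({'Sunny': 2, 'Rainy': 1})
print(total_people)   # 25
```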

The VLM Playground allows users to experiment with Vision-Language Models by testing prompts, configuring parameters, and saving templates for reuse. By explicitly defining structured outputs, users can ensure consistent and reliable data formats for analysis. These templates can be seamlessly applied to tasks, enabling automated and standardized VLM processing for real-world applications.
