
Release 2025/4/10



Observ April Updates

This month, Observ brings a series of powerful enhancements focused on VLM-driven detection, performance optimization, and workflow flexibility. Here’s what’s new:

New Features:

🔍 VLM Sampling Event Detection

You can now schedule VLM sampling events by setting fixed time intervals and applying predefined VLM templates. This enables automated VLM analysis of live streams, detecting contextual patterns in real time.
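As a rough sketch of the fixed-interval behavior, the snippet below enumerates the timestamps a sampling event would fire at. The function name and arguments are illustrative only; in Observ the interval is configured in the task UI, not through this code.

```python
from datetime import datetime, timedelta

def sampling_times(start, end, interval_s):
    """Yield fixed-interval timestamps for a VLM sampling event.

    Illustrative helper only: these names are not part of the Observ API.
    """
    t = start
    while t <= end:
        yield t
        t += timedelta(seconds=interval_s)

# Sample a stream every 5 minutes between 09:00 and 10:00.
times = list(sampling_times(datetime(2025, 4, 10, 9, 0),
                            datetime(2025, 4, 10, 10, 0),
                            300))
```

Each timestamp corresponds to one frame (or short clip) handed to the VLM with the selected template.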

✏️ Customizable VLM Template

VLM prompts can now be augmented with additional parameters, giving you more control and precision over how events are interpreted and triggered.
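To make the idea of parameterized prompts concrete, here is a minimal sketch: a template whose placeholders are filled per event. The template text and parameter names (`zone`, `ppe_item`) are assumptions for illustration, not shipped Observ templates.

```python
# Illustrative VLM prompt template with extra parameters.
TEMPLATE = ("Is anyone in the {zone} not wearing a {ppe_item}? "
            "Answer yes or no.")

def render_prompt(zone, ppe_item):
    # Filling the parameters narrows the prompt to one zone and one PPE item,
    # so the same template can drive many differently scoped events.
    return TEMPLATE.format(zone=zone, ppe_item=ppe_item)

prompt = render_prompt(zone="loading dock", ppe_item="hard hat")
```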

📊 VLM Results Visualization

We've added analytic charts to visualize VLM detection results, helping you quickly understand detection patterns and prompt effectiveness.


🚀 Extreme Performance Optimization

Streaming pipelines now support GPU decode with the ability to assign a specific GPU, maximizing performance on multi-GPU systems.
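For context on what "GPU decode pinned to a specific GPU" looks like at the pipeline level, the sketch below builds a decode command using standard ffmpeg/NVDEC flags. Observ wires this up internally; the function and stream URL here are illustrative.

```python
# Build a GPU-pinned decode command (standard ffmpeg/NVDEC options).
def decode_cmd(stream_url, gpu_index):
    return [
        "ffmpeg",
        "-hwaccel", "cuda",                 # decode frames on the GPU (NVDEC)
        "-hwaccel_device", str(gpu_index),  # pin decode to one GPU on a multi-GPU host
        "-i", stream_url,
        "-f", "null", "-",                  # decode only, discard output
    ]

cmd = decode_cmd("rtsp://camera.example/stream", 1)
```

Pinning each stream to a chosen device lets you spread decode load evenly across GPUs instead of saturating device 0.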

⏱ Flexible Event Timing Configuration

You can now define custom detection timeframes per event, allowing for more refined and situation-specific detection schedules.
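A per-event timeframe amounts to a simple window check before an event is raised. The sketch below shows that logic, including a window that crosses midnight; the function name is illustrative, since Observ exposes this as a task setting rather than code.

```python
from datetime import time

# Hypothetical per-event timeframe check (illustrative names only).
def in_timeframe(now, start, end):
    if start <= end:
        return start <= now <= end
    # Window crosses midnight, e.g. a 22:00-06:00 night shift.
    return now >= start or now <= end

night_shift = in_timeframe(time(23, 30), time(22, 0), time(6, 0))
```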


📦 Expanded Batch Management Capabilities

Batch operations now support:

  • Batch VLM configuration

  • Batch editing of event time settings

  • Batch event creation for faster setup across multiple cameras or tasks
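The batch-creation idea above can be sketched as fanning one event definition out across many cameras. The field names and payload shape below are assumptions for illustration, not the documented Observ API schema.

```python
# Hypothetical batch-creation sketch: one event payload per camera,
# all sharing a single event definition and timeframe.
def batch_event_payloads(camera_ids, event_type, timeframe):
    return [
        {"camera_id": cid, "event_type": event_type, "timeframe": timeframe}
        for cid in camera_ids
    ]

payloads = batch_event_payloads(
    ["cam-01", "cam-02", "cam-03"],
    "virtual_fencing",
    {"start": "08:00", "end": "18:00"},
)
```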


Coming Soon:

We’re continuing to make Observ more intelligent, more powerful, and easier to operate. Stay tuned for more exciting updates in the coming months!

  • VLM Sandbox: A new VLM sandbox environment will allow users to directly test prompts and preview detection results, making it easier to refine and validate templates before deployment.

  • Performance Tuning for Large-Scale Deployments: Upcoming improvements will focus on performance tuning for streaming models and large-scale deployments, ensuring stable, optimized operation in demanding environments.


Support

For technical support, please contact our support team during business hours.