Release 2025/4/10
This month, Observ brings a series of powerful enhancements focused on VLM-driven detection, performance optimization, and workflow flexibility. Here’s what’s new:
You can now schedule VLM sampling events by setting fixed time intervals and applying predefined VLM templates. This enables automated VLM analysis that detects contextual patterns in the live stream in real time.
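For illustration, a scheduled sampling event might be configured along these lines; the endpoint, field names, and template ID below are hypothetical assumptions for this sketch, not Observ's documented API:

```python
import requests  # standard HTTP client

# Hypothetical sketch: schedule a VLM sampling event that fires every
# 30 seconds and applies a predefined VLM template to a live stream.
# The endpoint and field names are illustrative assumptions only.
payload = {
    "camera_id": "cam-001",           # stream to sample
    "interval_seconds": 30,           # fixed sampling interval
    "vlm_template": "intrusion-v1",   # predefined VLM template
}
resp = requests.post("https://observ.example.com/api/v1/vlm/schedules",
                     json=payload, timeout=10)
resp.raise_for_status()
print(resp.json())
```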
VLM prompts can now be augmented with additional parameters, giving you more control and precision over how events are interpreted and triggered.
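As a rough sketch of the idea, a parameterized prompt might look like the following; the configuration keys and threshold field are assumptions for illustration only:

```python
# Hypothetical sketch: augment a VLM prompt template with extra
# parameters to narrow what counts as a detection. All names here
# are illustrative assumptions, not Observ's documented schema.
prompt_config = {
    "template": "Is there a {object} in the {zone}?",
    "parameters": {
        "object": "forklift",
        "zone": "loading dock",
    },
    "confidence_threshold": 0.7,  # only trigger above this score
}

# Rendered prompt: "Is there a forklift in the loading dock?"
prompt = prompt_config["template"].format(**prompt_config["parameters"])
```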
We’ve added analytic charts to visualize VLM detection results, helping you quickly understand detection patterns and prompt effectiveness.
Streaming pipelines now support GPU decode with the ability to assign a specific GPU, maximizing performance on multi-GPU systems.
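A minimal sketch of what such a pipeline configuration could look like, assuming hypothetical decode settings (the keys shown are not Observ's actual schema):

```python
# Hypothetical sketch: pin a streaming pipeline's decoder to a specific
# GPU on a multi-GPU host. Configuration keys are assumptions.
pipeline_config = {
    "stream_url": "rtsp://camera.local/stream1",
    "decode": {
        "backend": "gpu",   # use hardware decode instead of CPU
        "gpu_index": 1,     # pin decoding to the second GPU
    },
}
```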
You can now define custom detection timeframes per event, allowing for more refined and situation-specific detection schedules.
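For example, a per-event schedule might be expressed like this; the field names and window format are illustrative assumptions:

```python
# Hypothetical sketch: restrict an event's detection to specific
# timeframes (e.g., overnight on weekdays). Field names are assumptions.
event_config = {
    "event_id": "evt-042",
    "detection_windows": [
        {"days": ["mon", "tue", "wed", "thu", "fri"],
         "start": "22:00", "end": "06:00"},   # weeknight monitoring
        {"days": ["sat", "sun"],
         "start": "00:00", "end": "23:59"},   # all weekend
    ],
}
```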
Batch operations now support the following (see the sketch after this list):
Batch VLM configuration
Batch editing of event time settings
Batch event creation for faster setup across multiple cameras or tasks
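To illustrate batch event creation, the sketch below sends one request that creates the same VLM-configured event on several cameras; the endpoint and payload shape are hypothetical assumptions, not Observ's actual API:

```python
import requests

# Hypothetical sketch: create the same VLM detection event across
# multiple cameras in one batch call. Endpoint and field names are
# illustrative assumptions only.
cameras = ["cam-001", "cam-002", "cam-003"]
batch_payload = {
    "events": [
        {
            "camera_id": cam,
            "vlm_template": "ppe-check-v2",      # shared VLM config
            "detection_windows": [               # shared time settings
                {"days": ["mon", "tue", "wed", "thu", "fri"],
                 "start": "08:00", "end": "18:00"},
            ],
        }
        for cam in cameras
    ]
}
resp = requests.post("https://observ.example.com/api/v1/events/batch",
                     json=batch_payload, timeout=30)
resp.raise_for_status()
```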
We’re continuing to make Observ more intelligent, more powerful, and easier to operate. Stay tuned for more exciting updates in the coming months!
VLM Sandbox: A new VLM sandbox environment will allow users to directly test prompts and preview detection results, making it easier to refine and validate templates before deployment.
Performance Tuning for Large-Scale Deployments: Upcoming improvements will focus on performance tuning for streaming models and large-scale deployments, ensuring stable, optimized operation in demanding environments.
For technical support, please contact our support team during business hours.