VLM Event-Based Detection (Beta)
Event-based VLM detection enhances the accuracy and depth of event monitoring by integrating Visual Language Model (VLM) analysis after standard event detection. This approach provides additional context and detailed insights for detected events.
Task Creation Process:
When creating a camera detection task, follow the standard process:
Step 1: Set up the task and select a camera source.
Step 2: Select the monitoring scenario, such as fire detection or zone intrusion.
Step 3: Add VLM Detection:
In the "Config Event" step, select the event triggers you want to enhance with VLM detection.
Toggle the VLM Analysis Template for each selected event to enable VLM.
Choose a pre-configured VLM Template from the dropdown menu to define the type of analysis that will be performed.
Save and Create Task:
Once the configuration is complete, save the task. The system will then trigger VLM analysis whenever the specified events are detected.
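The steps above can be sketched as a hypothetical task object. All field names ("scenario", "vlm_template", and so on) are illustrative assumptions, not the platform's actual schema:

```python
# Hypothetical sketch of a detection task with VLM analysis enabled.
# Field names are illustrative only; the real schema is platform-defined.
task = {
    "name": "downtown-fire-watch",
    "camera_source": "camera-01",        # Step 1: camera source
    "scenario": "fire_detection",        # Step 2: monitoring scenario
    "events": [
        {
            "trigger": "fire_detected",  # Step 3: event trigger to enhance
            "vlm_enabled": True,         # VLM Analysis toggle
            "vlm_template": "scene-risk-assessment",  # pre-configured in VLM Playground
        }
    ],
}

# Only events with VLM enabled and a template selected trigger analysis.
vlm_events = [e for e in task["events"] if e["vlm_enabled"] and e["vlm_template"]]
print(len(vlm_events))  # → 1
```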
Note
Ensure that VLM templates are pre-configured in VLM Playground before enabling VLM detection; otherwise, triggered events will not produce meaningful analysis results.
After detecting a predefined event, the system automatically runs the VLM for further analysis.
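This trigger flow can be illustrated with a minimal sketch. Both functions below are stand-ins for internal platform behavior, not real APIs:

```python
# Hypothetical sketch of the event-triggered analysis flow: a standard
# detector fires an event first, and only then does the VLM template run.
def detect_event(frame):
    # Stand-in for the standard detector (e.g. fire detection).
    return "fire_detected" if "flame" in frame else None

def run_vlm(template, frame):
    # Stand-in for VLM analysis with a pre-configured template.
    return {"template": template, "scene_description": f"analysis of {frame}"}

def process(frame):
    event = detect_event(frame)
    if event is None:
        return None  # no event detected -> no VLM call is made
    return {"event": event, "vlm_analysis": run_vlm("scene-risk-assessment", frame)}

result = process("frame with flame")
print(result["event"])  # → fire_detected
```

Note that the VLM runs only after a standard detection fires, so no VLM cost is incurred on frames without events.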
Integrated Event Results:
The VLM analysis results are appended to the detected event for enriched reporting.
Example Results:
Weather: Sunny, partly cloudy.
Alert Level: 2.
Vehicle Types: Firetruck, car.
Scene Description: Downtown street with a firetruck stopped.
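The example results above might be appended to a detected event record along these lines. The keys and event schema here are illustrative assumptions, not the platform's actual format:

```python
# Illustrative sketch of appending VLM results to a detected event
# for enriched reporting; the real event schema is platform-defined.
event = {"type": "fire_detected", "camera": "camera-01"}

vlm_result = {
    "weather": "Sunny, partly cloudy",
    "alert_level": 2,
    "vehicle_types": ["Firetruck", "car"],
    "scene_description": "Downtown street with a firetruck stopped",
}

# Append the VLM analysis to the original event record.
enriched = {**event, "vlm_analysis": vlm_result}
print(enriched["vlm_analysis"]["alert_level"])  # → 2
```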
Example Use Case:
If a fire is detected, the VLM analyzes the scene to identify emergency personnel, vehicles, or abnormal conditions.
Event-based VLM detection enhances monitoring by providing detailed insights into events, such as identifying potential risks or verifying emergency response. It seamlessly integrates into the task creation process, allowing users to enable VLM detection without added complexity, while offering flexibility to customize templates for specific scenarios or event types.