contexts

In the VisionAI format, "contexts" lists all the contextual information present in the annotation, such as scene properties, weather conditions, or location. It can also be used to tag custom information that has no spatial or temporal extent, such as image resolution, size, color, or brightness. Tagging information can be filtered and sorted in Dataverse. Each context under "visionai" describes the state of a context within a sequence and its status across multiple frames. It comprises both static and dynamic elements, providing a comprehensive description of the context present in the annotated data.

How to use?

In the VisionAI format, "contexts" is mainly used for classification and tagging information. Distinguishing between the two tells the Dataverse system which information should be used for model training (classification) and which is used for data management (tagging).
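As a sketch, a consumer of this format could separate the two kinds of contexts using the "type" convention described below (tagging contexts use a "*"-prefixed type such as "*tagging"). The function name here is illustrative, not part of the format:

```python
def split_contexts(contexts: dict) -> tuple[dict, dict]:
    """Split a VisionAI "contexts" dict into classification (model
    training) and tagging (data management) entries."""
    classification, tagging = {}, {}
    for context_uuid, ctx in contexts.items():
        # Tagging contexts are marked with a "*"-prefixed type.
        if ctx.get("type", "").startswith("*"):
            tagging[context_uuid] = ctx
        else:
            classification[context_uuid] = ctx
    return classification, tagging
```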

Schema

Example

"contexts": {
    "ae379bbe-c173-11ed-afa1-0242ac120002": {
        "name": "enviroment_0",
        "type": "enviroment",
        "frame_intervals": [{"frame_start": 0, "frame_end": 0}],
      	"context_data_pointers": {
            "weather": {
              	"type": "vec",
              	"frame_intervals": [{"frame_start": 0, "frame_end": 0}],
                "attributes": {
                    "probability": "vec"
                }
            },
            "temperature": {
              	"type": "num",
              	"frame_intervals": [{"frame_start": 0, "frame_end": 0}],
            }
        }
    }, // Place the items that need to participate in model training, such as classification categories.
    "5019ccb4-14b8-443f-9382-664c553c88f3": {
        "frame_intervals": [{"frame_start": 0, "frame_end": 0}],
        "name": "*tagging01",
        "context_data_pointers": {
            "Inroom": {
                "type": "boolean",
                "frame_intervals": [{"frame_start": 0, "frame_end": 0}]
            }
        },
        "type": "*tagging"
    }, // Tagging type: Place any other items that do not need to participate in model training. This is a dynamic example.
    "5019ccb4-14b8-443f-9382-664c553c88f5": {
        "context_data": {
            "text": [{
                    "name": "note",
                    "val": "this is a note of static info"
                    }]
        },
        "frame_intervals": [{"frame_start": 0, "frame_end": 0}],
        "name": "*tagging02",
        "context_data_pointers": {
            "note": {
                "type": "text",
                "frame_intervals": [{"frame_start": 0, "frame_end": 0}]
            }
        },
        "type": "*tagging"
    }, //Tagging type: Place any other items that do not need to participate in model training. This is a static example. Static items remain consistent within this sequence and do not change with different frames or sensors.
    "d4af429c-c173-11ed-afa1-0242ac120002": { ... }
}
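A minimal validation sketch for a single context entry, assuming the required keys described in the tables below ("type", "frame_intervals", and "context_data_pointers" are required; "name" and "context_data" are optional). The helper name is hypothetical:

```python
# Keys required for every context entry per the schema tables.
REQUIRED_KEYS = {"type", "frame_intervals", "context_data_pointers"}

def validate_context(ctx: dict) -> list:
    """Return a list of human-readable problems (empty means valid)."""
    problems = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - ctx.keys())]
    for interval in ctx.get("frame_intervals", []):
        # Both bounds are inclusive frame numbers.
        if interval["frame_start"] > interval["frame_end"]:
            problems.append("frame_start must not exceed frame_end")
    return problems
```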

contexts {}

The ${CONTEXT_UUID} information denotes the state of a context within a sequence and refers to its status across multiple frames. Such information comprises both static and dynamic elements.

| name | description | type | required |
| --- | --- | --- | --- |
| ${CONTEXT_UUID} | The ID of the context. It uses UUID32 as a key. | object | true, unique |
| name | The unique name of this context (e.g. environment_0). It can be any value. | string | false |
| type | The context name (e.g. environment). The values used in this format must conform to the ontology of the project. Set "type" to "*tagging" when the information is project-information tagging with no model-training requirement, such as image size, resolution, city name, or driver information. Tagging information allows the filtering and sorting of the data in Dataverse. Note that the system only recognizes tags present in the ground truth data and does not read tags from other annotation sources; if you need specific tags associated with your data and displayed on the platform, include them in your ground truth annotations. | string | true |
| frame_intervals | Indicates in which frames this context exists. Refer to the example and table below. | object | true |
| context_data | Contains static information describing the context, such as annotation shapes, attributes, or metrics in a sequence (e.g. city: Taipei, or driver: Chen). Static information does not change across streams (sensors) or frames within the sequence. Not required if the context has no static information. | object | false |
| context_data_pointers | Lists all attributes of this context without their values, covering both static and dynamic information. For example, a car with a static color (blue) and a dynamic location would be described under separate keys in context_data_pointers. | object | true |

frame_intervals {}

An array of frame intervals indicating in which frames of the sequence this context exists.

| name | description | type | required |
| --- | --- | --- | --- |
| frame_start | Initial frame number of the interval. | int | true |
| frame_end | Ending frame number of the interval. | int | true |
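Since both bounds are inclusive frame numbers, expanding a list of intervals into the set of covered frames can be sketched as:

```python
def frames_in_intervals(frame_intervals: list) -> set:
    """Expand VisionAI frame_intervals into the set of frame numbers
    they cover. Both frame_start and frame_end are inclusive."""
    frames = set()
    for interval in frame_intervals:
        frames.update(range(interval["frame_start"], interval["frame_end"] + 1))
    return frames
```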

context_data {}

Contains static information describing the context, such as annotation shapes, attributes, or metrics in a sequence. "context_data" reduces redundancy by storing static information that is consistent throughout the sequence. This item primarily focuses on the "value" within the frames.

| name | description | type | required |
| --- | --- | --- | --- |
| ${CONTENT_TYPE} | The information type of the static information (e.g. text). | object | true |
| name | The name of this attribute (e.g. city). | string | true |
| val | The value of this attribute (e.g. Taipei). | string | true |
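Given the structure shown in the example (a content-type key such as "text" mapping to a list of {"name": ..., "val": ...} entries), a static value can be looked up by attribute name. This helper is illustrative, not part of the format:

```python
def get_static_value(context_data: dict, name: str):
    """Return the "val" of the named static attribute in context_data,
    or None if no attribute with that name exists."""
    for entries in context_data.values():  # e.g. "text" -> [...]
        for entry in entries:
            if entry["name"] == name:
                return entry["val"]
    return None
```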

context_data_pointers {}

Lists the static and dynamic information of this context across all frames in the sequence. This item primarily focuses on the "type", which allows rapid retrieval of information without exploring the entire set of frames.

| name | description | type | required |
| --- | --- | --- | --- |
| ${CONTENT_NAME} | The context information name (e.g. image_size). It may be static or dynamic information. | object | true |
| type | The value type of this attribute (e.g. num). | string | true |
| frame_intervals | Indicates in which frames this attribute exists. Refer to frame_intervals above. | object | true |
| attributes | The attributes of this content (e.g. "probability": "vec"). Required only if the context has attributes in this sequence. | object | false |
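Because each pointer carries its own frame_intervals, the attributes present in a given frame can be found without scanning the per-frame data. A sketch, with an illustrative function name:

```python
def attributes_in_frame(context_data_pointers: dict, frame: int) -> list:
    """Return the names of attributes whose frame_intervals cover the
    given frame (intervals are inclusive on both ends)."""
    present = []
    for name, pointer in context_data_pointers.items():
        for interval in pointer.get("frame_intervals", []):
            if interval["frame_start"] <= frame <= interval["frame_end"]:
                present.append(name)
                break  # one matching interval is enough
    return present
```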


Use Case

Classification

To describe a classification dataset with one camera sensor:

  • sensor: camera (#camera1)

  • ontology

    • gender (vec): female, male

    • age (vec): child or adult

Example Code

See the classification page.

Tagging

  • tagging

    • weather (vec): sunny, cloudy, rainy, snowy, foggy

    • timeofday (vec): daytime, night, DawnDusk

    • scene (vec): tunnel, residential, parkingLot, cityStreet, gasStations, highway

    • Inroom: boolean

    • imagesize: num

    • note: text (static info)

To describe a dataset with tagging information:

  • sensor: camera (#camera1)
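The tagging entries above could be assembled programmatically. This is a minimal sketch under the schema described earlier; the builder function is hypothetical and the attribute values are illustrative:

```python
import uuid

def make_tagging_context(attrs: dict, start: int, end: int):
    """Build a (uuid, context) pair of type "*tagging" whose pointers
    declare the given attribute names and value types over one
    inclusive frame interval."""
    interval = [{"frame_start": start, "frame_end": end}]
    context = {
        "name": "*tagging01",  # any unique name is allowed
        "type": "*tagging",    # marks the context as tagging-only
        "frame_intervals": interval,
        "context_data_pointers": {
            name: {"type": attr_type, "frame_intervals": interval}
            for name, attr_type in attrs.items()
        },
    }
    return str(uuid.uuid4()), context
```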

See the tagging page.
