Developer toolkit for adding annotations to streaming video presentations.
There is a single Video Annotator application that should be sub-classed to add any editing or other behaviors you need in your particular use case. See the demo for an example of how this can be done.
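As a hedged sketch of that sub-classing pattern: the constructor below stubs out a minimal base application so the example is self-contained; the real constructor name, options, and ready-handler mechanism may differ in your build.

```javascript
// Hypothetical stand-in for the Video Annotator application; the real
// base class comes from the library and does much more in run().
function VideoAnnotator(options) {
  this.options = options || {};
  this.ready = [];
}

VideoAnnotator.prototype.run = function () {
  // In the real application, run() wires up data stores, data views,
  // and presentations, then fires any registered ready handlers.
  this.ready.forEach(function (cb) { cb(); });
};

// Sub-class the annotator to layer in editing behavior for a use case.
function EditingAnnotator(options) {
  VideoAnnotator.call(this, options);
  var that = this;
  this.ready.push(function () {
    // e.g., bind toolbar buttons, keyboard shortcuts, etc.
    that.editingEnabled = true;
  });
}
EditingAnnotator.prototype = Object.create(VideoAnnotator.prototype);
EditingAnnotator.prototype.constructor = EditingAnnotator;

var app = new EditingAnnotator({ playerId: "player" });
app.run();
```

After run() completes, the sub-class's ready handler has executed and its editing behavior is in place.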
Each application instance has the following methods for managing the application’s configuration or set of annotations.
The Video Annotator has a richer API for managing shapes than bodies because its focus is on the video.
The shape-management methods are exported from the SVG-based presentation once the presentation is set up, which happens after the application's run method is called.
Because the video annotation application doesn't manage the presentation of annotation bodies, addBodyType only manages the information needed to import and export annotation bodies using the Open Annotation data model.
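For context, a serialized annotation in the Open Annotation data model pairs a body with a target. The oa:, cnt:, and dctypes: terms below are standard OA vocabulary, but the ids, text, and selector value are invented for illustration:

```javascript
// A minimal Open Annotation serialization (JSON-LD style) pairing a
// text body with a video-segment target. Field values are invented.
var annotation = {
  "@type": "oa:Annotation",
  "hasBody": {
    "@type": ["cnt:ContentAsText", "dctypes:Text"],
    "chars": "A note about this scene."
  },
  "hasTarget": {
    "@type": "oa:SpecificResource",
    "hasSource": "http://example.com/video.mp4",
    "hasSelector": {
      "@type": "oa:FragmentSelector",
      "value": "t=npt:10,25"   // NPT time interval within the video
    }
  }
};
```

A registered body type would tell the application how to read and write the hasBody portion of structures like this.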
Each application instance has a data store available as its
dataStores.canvas property. See the data schema documentation for information about the data schema.
Each application instance has a data view that filters the data store down to the currently appropriate annotations. This set consists of those annotations for which the current play time falls between the annotation's .npt_start and .npt_end times, extended by a configurable amount of time on either side. The data view is available as the dataViews.currentAnnotations property of the application.
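The filtering the currentAnnotations data view performs can be sketched as follows. The npt_start and npt_end property names follow the data schema described above; the function itself and its easement parameter are illustrative, not the library's implementation:

```javascript
// Select annotations whose time window, widened by an easement on
// each side, contains the current play time (all values in seconds).
function filterCurrentAnnotations(annotations, currentTime, easement) {
  return annotations.filter(function (ann) {
    return (ann.npt_start - easement) <= currentTime &&
           currentTime <= (ann.npt_end + easement);
  });
}

var annotations = [
  { id: 1, npt_start: 0,  npt_end: 5  },
  { id: 2, npt_start: 4,  npt_end: 12 },
  { id: 3, npt_start: 20, npt_end: 30 }
];

// At 13 seconds with a 2-second easement, only annotation 2 qualifies:
// its window [4, 12] widens to [2, 14], which contains 13.
var current = filterCurrentAnnotations(annotations, 13, 2);
```

The easement keeps annotations visible briefly before and after their strict time window, which smooths their appearance and disappearance during playback.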
Each application instance has a presentation managing the shapes on the play surface. The presentation is an instance of
OAC.Client.StreamingVideo.Presentation.RaphaelCanvas, available as the
presentations.raphsvg property. The presentation takes its data from the
currentAnnotations data view.
Each application instance has a number of variables tracking application state, such as the current position in the video and the active annotation. The active annotation is the one currently receiving focus in the user interface.
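A hedged sketch of this kind of state tracking: a small observable holder for the current time and active annotation. The real application exposes these through its own variable mechanism, so the names and API here are assumptions:

```javascript
// Minimal observable state container: set() updates a named value and
// notifies any listeners registered for that name via onChange().
function AppState() {
  this.values = { currentTime: 0, activeAnnotation: null };
  this.listeners = {};
}

AppState.prototype.set = function (name, value) {
  this.values[name] = value;
  (this.listeners[name] || []).forEach(function (cb) { cb(value); });
};

AppState.prototype.onChange = function (name, cb) {
  (this.listeners[name] = this.listeners[name] || []).push(cb);
};

var state = new AppState();
var seen = [];
state.onChange("activeAnnotation", function (v) { seen.push(v); });
state.set("activeAnnotation", { id: 7 });
```

Watching the active annotation this way lets the user interface highlight the focused shape as focus moves between annotations.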