Using Semantics to Enhance Bodyworn Camera Video for Law Enforcement

Industry

In the US today, law enforcement agencies and officers are under increased scrutiny regarding policing behaviors. More and more law enforcement agencies are adopting body-worn cameras to help record officers' interactions with the public.
Although touted by many as a step toward greater transparency between police and the public, the result is that terabytes of video are recorded every year by hundreds of law enforcement agencies across the US, with very little attention paid to how this video is managed beyond 'storing it in the cloud' with minimal metadata attached.

For now, police agencies are using an online video management system provided by a hardware vendor that does little more than upload and store the video in a cloud environment. The cloud system does allow officers to apply metadata manually, and it records limited administrative metadata, such as date and time.

The challenge is that countless hours of body camera video are being uploaded into the cloud solution with very little attention paid to how the video can be optimized for future use. Police departments fear that, under US open-records laws such as the Freedom of Information Act, private citizens will become increasingly aware of body camera video and will begin to exercise their right to view it.
With few exceptions, local and state laws have not yet caught up with the use of body camera video; their rules and regulations were written for document-based police reports, for example, rather than video. Most if not all body camera video released to the public will need some kind of redaction (e.g. faces, locations, and circumstances may need to be ‘blurred out’ prior to release), which is still a largely manual process that can take many hours, so police agencies are concerned that video processing will overwhelm their personnel. (NB: video redaction is itself a large topic for further discussion, but is beyond the scope of this presentation.)

This talk will briefly address how semantic ontology software and speech-to-text recognition software can be used to greatly enhance the overall effectiveness and usefulness of body camera video for law enforcement agencies. It will include a brief discussion and demonstration of a law enforcement ontology in the PoolParty Thesaurus Manager software, along with a look at how RAMP’s MediaCloud technology can generate time-coded, transcribed text from a video’s audio track. The basis for this exploratory work is the combination of transcribed text analyzed with a semantic ontology tool, which can then be used in a number of different ways by law enforcement users – from legal teams to training teams to investigation teams. The potential exists to use this methodology to extend how metadata are used by law enforcement agencies, including how ‘passive’ metadata can be harvested by analyzing transcribed audio and comparing it to an established law enforcement ontology. In addition, the Oracle Front Porch video management system will be mentioned as the underlying asset management backbone of the offering.
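
To illustrate the general idea of harvesting ‘passive’ metadata, the sketch below matches time-coded transcript segments against concept labels from a SKOS thesaurus of the kind managed in (and exportable from) PoolParty. It is a minimal, illustrative sketch only: the file name, the transcript structure, and the simple substring matching are assumptions for demonstration and do not represent the actual PoolParty or RAMP MediaCloud interfaces.

    # Illustrative sketch: tag time-coded transcript segments with concepts
    # from a SKOS thesaurus (e.g. a law enforcement vocabulary exported as RDF).
    # File names, transcript format, and matching logic are assumptions.
    from rdflib import Graph
    from rdflib.namespace import SKOS

    # Load the thesaurus exported as SKOS/RDF (Turtle).
    g = Graph()
    g.parse("law_enforcement_thesaurus.ttl", format="turtle")

    # Build a lookup of preferred and alternative labels -> concept URIs.
    labels = {}
    for predicate in (SKOS.prefLabel, SKOS.altLabel):
        for concept, label in g.subject_objects(predicate):
            labels[str(label).lower()] = concept

    # Hypothetical time-coded segments from speech-to-text output.
    transcript = [
        {"start": "00:01:12", "end": "00:01:18",
         "text": "Suspect fled on foot near the intersection"},
        {"start": "00:04:03", "end": "00:04:09",
         "text": "Requesting backup for a traffic stop"},
    ]

    # Harvest 'passive' metadata: attach matching concept URIs to each segment.
    for segment in transcript:
        text = segment["text"].lower()
        matches = [uri for label, uri in labels.items() if label in text]
        if matches:
            print(segment["start"], segment["end"], matches)

In a production setting the matching would be done by the ontology tool itself rather than by simple substring comparison, and the resulting concept tags would be written back to the video asset's metadata record in the underlying asset management system.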

Although still in the early stages of development, this methodology is being considered by law enforcement agencies that are aware of the limited usability of simply storing video in the cloud and are interested in how a system can be implemented to support a more thorough and complete use of the video.

Speakers: