We’ve all seen a photo or a movie where security guards watch multiple video surveillance screens at once. That’s how many video surveillance systems worked in the past: they relied heavily on humans watching video feeds and flagging content they thought was important, such as traffic accidents or possible criminal activity.
But when a single camera produces 1MB of data per second, 24 hours a day, 365 days a year, the amount of video humans need to watch quickly becomes unmanageable. Cities and communication service providers (CSPs) with dozens or even hundreds of cameras have far too much footage for humans to monitor effectively. And when humans can’t effectively review video, the entire system becomes far less valuable. Why use video to monitor traffic if no one notices an accident when it happens?
That’s the problem video analytics was designed to solve.
Video analytics software helps process information from video streams so human monitors don’t get overwhelmed and video systems can effectively help the companies and cities using them meet their goals.
How does it work? We’ll dive into the details in this article, as well as some benefits cities and CSPs get by using video analytics software.
How Video Analytics Software Functions
While video analytics applications vary from provider to provider, most follow this general process.
1. Video Analytics Connects to Cameras
Typically, video systems have multiple cameras at different angles or locations. Video analytics software connects to each of these cameras, so the software can filter through all data regardless of source.
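To make the rest of the pipeline source-agnostic, the software typically wraps every camera behind a common interface and tags each frame with its origin. The sketch below is a minimal, hypothetical illustration of that idea; real systems would pull frames from network streams (e.g. RTSP), while here synthetic grayscale frames stand in so the code is self-contained.

```python
from dataclasses import dataclass
from typing import Iterator, List, Tuple

import numpy as np


@dataclass
class Camera:
    camera_id: str
    location: str


def read_frames(cameras: List[Camera], frames_per_camera: int = 3) -> Iterator[Tuple[str, np.ndarray]]:
    """Yield (camera_id, frame) pairs so downstream analytics code can
    filter all data regardless of source. Frames here are synthetic
    8-bit grayscale placeholders, not real video."""
    rng = np.random.default_rng(0)
    for _ in range(frames_per_camera):
        for cam in cameras:
            frame = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)
            yield cam.camera_id, frame


cameras = [Camera("cam-01", "Main St"), Camera("cam-02", "5th Ave")]
frames = list(read_frames(cameras))  # interleaved frames from both cameras
```

The key design point is that everything downstream sees one uniform stream of tagged frames, no matter how many cameras feed it.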
2. Central or Edge Data Processing Begins
Legacy analytics software will process all camera data at a central location. In this case, it’s common for cameras to send video footage from a local area network (LAN) in one part of a city to a wide area network (WAN) where the central system is located.
More advanced video analytics software uses this tactic as well as edge analytics. With edge analytics, software starts working at the source, i.e. the LAN. By analyzing data at the source, you quickly cut out normal footage that doesn’t need further processing. This can greatly reduce the amount of data that needs to be uploaded to the WAN and cut down on packet loss, improving the quality of data sent to the central processing system.
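One simple way an edge node can discard normal footage before upload is frame differencing: compare each frame to the previous one and only forward frames where enough pixels changed. This is a hedged sketch of that idea, not any vendor’s actual algorithm; the thresholds are illustrative.

```python
import numpy as np


def frame_changed(prev: np.ndarray, frame: np.ndarray,
                  pixel_thresh: int = 25, changed_frac: float = 0.02) -> bool:
    """True when more than `changed_frac` of pixels differ from the
    previous frame by more than `pixel_thresh` intensity levels."""
    diff = np.abs(frame.astype(np.int16) - prev.astype(np.int16))
    return float(np.mean(diff > pixel_thresh)) > changed_frac


def edge_filter(frames):
    """Run at the LAN: drop static 'normal' frames so only frames
    showing change are uploaded to the WAN for central processing."""
    kept, prev = [], None
    for frame in frames:
        if prev is not None and frame_changed(prev, frame):
            kept.append(frame)
        prev = frame
    return kept


# A static scene with one sudden event in the middle:
static = np.zeros((120, 160), dtype=np.uint8)
event = np.full((120, 160), 200, dtype=np.uint8)
uploaded = edge_filter([static, static, static, event, static])
```

On this toy stream, only the frames around the change survive the filter, which is exactly the bandwidth saving the edge stage is meant to deliver.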
3. Algorithms Process Data
Both central processing and edge analytics use algorithms to determine what’s normal in video footage (like pedestrians walking down a sidewalk) and what isn’t (such as a traffic accident). There are two common types of algorithms used:
Statistical algorithms: In the era of artificial intelligence (AI) and smart platforms, statistical algorithms are sometimes referred to as plain math tools. They are used to quickly and easily filter out large chunks of normal video that doesn’t need to be reviewed by human monitors.
Deep learning: A type of AI, deep learning tools are fluid. Over time they “learn” by processing large quantities of data. If, in the data, they often see pedestrians walking down the street, they label that activity as normal. Something the system hasn’t seen before, or sees very rarely, like a car driving on the sidewalk, is considered an anomaly. As deep learning tools process more data over time, they become better at spotting anomalies that human monitors should review.
Video analytics software can use deep learning, statistical algorithms, or both. The most advanced platforms use statistical algorithms to pre-filter data, then deep learning for more advanced processing.
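The learning behavior described above (frequent events become “normal,” rare ones become anomalies) can be illustrated with a toy frequency model. This is a deliberately simplified stand-in for a deep-learning detector, assuming events have already been labeled by an upstream classifier; the class name and threshold are hypothetical.

```python
from collections import Counter


class FrequencyAnomalyModel:
    """Toy stand-in for a deep-learning anomaly detector: events seen
    often become 'normal'; rare or unseen events score as anomalies."""

    def __init__(self) -> None:
        self.counts: Counter = Counter()
        self.total = 0

    def observe(self, label: str) -> None:
        """Update the model with one observed event."""
        self.counts[label] += 1
        self.total += 1

    def is_anomaly(self, label: str, rare_frac: float = 0.01) -> bool:
        """An event is anomalous if it makes up less than `rare_frac`
        of everything the model has seen so far."""
        if self.total == 0:
            return True
        return self.counts[label] / self.total < rare_frac


model = FrequencyAnomalyModel()
for _ in range(1000):
    model.observe("pedestrian_on_sidewalk")  # common, becomes 'normal'
model.observe("car_on_sidewalk")             # seen once, stays anomalous
```

As with the real thing, the more data the model observes, the sharper the line between routine activity and events worth a human’s attention.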
4. Final Data Processing Happens at a Central Location
Video analytics software either functions entirely at a central location or, if it uses edge analytics, will finish analysis at a central location.
The central processing application performs high-value analytics to cut down on normal video footage so only anomalous footage is shown to human monitors. Advanced systems with smart technology will also use connected cameras as IoT sensors, with the central application performing metadata analysis on their content. If anomalies are detected, the software can issue an alert to reduce the time it takes a human to review the content.
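The alerting step can be as simple as attaching metadata (camera, event type, confidence, timestamp) to any anomaly whose score crosses a threshold, so a reviewer can jump straight to the relevant stream. The function and field names below are illustrative assumptions, not a specific product’s API.

```python
import time
from typing import Optional


def make_alert(camera_id: str, event_type: str, confidence: float,
               threshold: float = 0.8) -> Optional[dict]:
    """Emit an alert record when the anomaly confidence crosses the
    threshold; return None for events not worth a human's time."""
    if confidence < threshold:
        return None
    return {
        "camera_id": camera_id,      # which IoT sensor fired
        "event": event_type,         # what the analytics detected
        "confidence": confidence,    # how sure the model is
        "timestamp": time.time(),    # when it happened
    }


alert = make_alert("cam-07", "traffic_accident", 0.93)   # fires an alert
ignored = make_alert("cam-07", "jaywalking", 0.40)       # below threshold
```

The threshold is the operational knob here: lower it and monitors see more (and noisier) alerts; raise it and only high-confidence anomalies interrupt them.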
5. Human Review
Once video analytics software has finished its processing, only about five percent of the footage should remain. Human monitors finish reviewing that footage, determining which events need further action.
Human review can happen in real time. Systems with AI technology will flag important content as it happens, issuing real-time alerts and prioritizing the stream for review. This helps improve reaction times and overall video monitoring efficacy.
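Prioritizing streams for review can be modeled as a priority queue ordered by anomaly score, so the most urgent feed is always reviewed first. A minimal sketch, assuming scores already exist for each flagged stream:

```python
import heapq

# heapq is a min-heap, so negate scores to pop the highest-scoring
# (most urgent) stream first.
flagged = [("cam-03", 0.91), ("cam-11", 0.42), ("cam-07", 0.88)]

review_queue = []
for stream_id, score in flagged:
    heapq.heappush(review_queue, (-score, stream_id))

first_to_review = heapq.heappop(review_queue)[1]   # highest anomaly score
second_to_review = heapq.heappop(review_queue)[1]  # next highest
```

This keeps monitors focused on the most anomalous feed at any moment instead of scanning screens in a fixed order.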
The Benefits of Video Analytics Software
The tangible results of video analytics are a reduction in video footage and alerts for anomalous events. With less footage, cities and CSPs decrease the amount of data storage they need for video content and relieve pressure on human monitors.
Ultimately, video analytics is designed to make your surveillance more effective. By reducing storage needs, it helps save money. By cutting down on the footage humans need to review, it makes your system more efficient. Humans only have a limited capacity to watch video – they miss up to 95 percent of screen activity after just 22 minutes of continuous video monitoring¹ – so reviewing less footage will help monitors be more successful.
The results can be dramatic, improving public safety in cities, increasing customer insights for businesses, and helping create more efficient transportation. The results you receive will be tailored to your organization and its goals.
If you have questions or want more details on video analytics software and how it works, check out Nokia’s recent webinar: Unassisted AI and How it Will Change Video Analytics Forever.