
Streaming Video Analysis in Python


Editor’s note: This post is part of our Trainspotting series, a deep dive into the visual and audio detection components of our Caltrain project. You can find the introduction to the series here.

At SVDS we have analyzed Caltrain delays in an effort to use real time, publicly available data to improve Caltrain arrival predictions. However, the station-arrival time data from Caltrain was not reliable enough to make accurate predictions. In order to increase the accuracy of our predictions, we needed to verify when, where, and in which direction trains were going. In this post, we discuss our Raspberry Pi streaming video analysis software, which we use to better predict Caltrain delays.

Platform architecture for our Caltrain detector

In a previous post, Chloe Mawer implemented a proof-of-concept Caltrain detector using a webcam to acquire video at our Mountain View offices. She explained the use of OpenCV’s Python bindings to walk through frame-by-frame image processing. She showed that using video alone, it is possible to positively identify a train based on motion from frame to frame. She also showed how to use regions of interest within the frame to determine the direction in which the Caltrain was traveling.

The work Chloe describes was done using pre-recorded, hand-selected video. Since our goal is to provide real time Caltrain detection, we had to implement a streaming train detection algorithm and measure its performance under real-world conditions. Thinking about a Caltrain detector IoT device as a product, we also needed to slim down from a camera and laptop to something with a smaller form factor. We already had some experience listening to trains using a Raspberry Pi, so we bought a camera module for it and integrated our video acquisition and processing/detection pipeline onto one device.


[Figure: Data platform architecture of the Raspberry Pi train detector]

The image above shows the data platform architecture of our Pi train detector. On our Raspberry Pi 3B, our pipeline consists of hardware and software running on top of Raspbian Jessie, a derivative of Debian Linux. All of the software is written in Python 2.7 and can be controlled from a Jupyter Notebook run locally on the Pi or remotely on your laptop. Highlighted in green are our three major components for acquiring, processing, and evaluating streaming video:

Video Camera: Initializes PiCamera and captures frames from the video stream.
Video Sensor: Processes the captured frames and dynamically varies video camera settings.
Video Detector: Determines motion in specified Regions of Interest (ROIs), and evaluates whether a train passed.

In addition to our main camera, sensor, and detector processes, several subclasses (orange) are needed to perform image background subtraction, persist data, and run models:

Mask: Performs background subtraction on raw images, using powerful algorithms implemented in OpenCV 3.
History: A pandas DataFrame that is updated in real time to persist and access data.
Detector Worker: Assists the video detector in evaluating image, motion, and history data. This class consists of several modules (yellow) responsible for sampling frames from the video feed, plotting data, and running models to determine train direction.

Binary Classification

Caltrain detection, at its simplest, boils down to a question of binary classification: Is there a train passing right now? Yes or no.


[Figure: The four outcomes of a binary train classifier]

As with any other binary classifier, the performance is defined by evaluating the number of examples in each of four cases:

Classifier says there is a train and there is a train: True Positive
Classifier says there is a train when there is none: False Positive
Classifier says there is no train when there is one: False Negative
Classifier says there is no train when there isn’t one: True Negative

For more information on classifier evaluation, check out this work by Tom Fawcett.
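The four counts above combine into the standard summary metrics used to evaluate any binary classifier. As a quick sketch (the counts below are made-up for illustration, not our detector's actual numbers):

```python
def classifier_metrics(tp, fp, fn, tn):
    """Summarize binary-classifier counts as standard rates."""
    precision = tp / float(tp + fp)   # of all "train!" calls, how many were real trains
    recall = tp / float(tp + fn)      # of all real trains, how many we caught
    accuracy = (tp + tn) / float(tp + fp + fn + tn)
    return precision, recall, accuracy

# Hypothetical counts from a week of detections:
precision, recall, accuracy = classifier_metrics(tp=90, fp=10, fn=5, tn=895)
print(precision)  # 0.9
print(accuracy)   # 0.985
```

Note that accuracy alone is misleading here: trains pass rarely, so a classifier that always says "no train" scores high accuracy while catching nothing. Precision and recall expose that failure.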

After running our minimum viable Caltrain detector for a week, we began to understand how our classifier performed, and importantly, where it failed.

Causes of false positives:

Delivery trucks
Garbage trucks
Light rail
Freight trains

Darkness is the main cause of false negatives.

Our classifier involves two main parameters set empirically: motion and time. We first evaluate the amount of motion in selected ROIs. This is done at five frames per second. The second parameter we evaluate is motion over time, wherein a set amount of motion must occur over a certain amount of time to be considered a train. We set our time threshold at two seconds, since express trains take about three seconds to pass by our sensor located 50 feet from the tracks. As you can imagine, objects like humans walking past our IoT device will not create large enough motion to trigger a detection event, but large objects like freight trains or trucks will trigger a false positive detection event if they traverse the video sensor ROIs over two seconds or more. Future blog posts will discuss how we integrate audio and image classification to decrease false positive events.
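The two-parameter logic above can be sketched in a few lines. This is a minimal illustration of the idea, not our production code; the function name and threshold values are ours, and only the 5 fps rate and the two-second time threshold come from the text:

```python
FPS = 5                 # frames evaluated per second
MOTION_THRESHOLD = 0.2  # illustrative: fraction of ROI pixels changing per frame
TIME_THRESHOLD = 2.0    # seconds of sustained motion required to call a train

def detect_train(motion_per_frame):
    """Return True if per-frame motion stays above MOTION_THRESHOLD
    for at least TIME_THRESHOLD consecutive seconds of frames."""
    needed = int(TIME_THRESHOLD * FPS)  # 10 consecutive frames at 5 fps
    streak = 0
    for motion in motion_per_frame:
        streak = streak + 1 if motion > MOTION_THRESHOLD else 0
        if streak >= needed:
            return True
    return False

# A pedestrian: weak motion, never crosses the motion threshold
print(detect_train([0.05] * 10))   # False
# An express train: strong motion for ~3 seconds at 5 fps
print(detect_train([0.6] * 15))    # True
```

This also shows why a freight train or truck fools the detector: anything large enough to sustain above-threshold motion for two seconds looks identical to a Caltrain under these two parameters alone.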

While our video classifier worked decently well at detecting trains during the day, we were unable to detect trains (false negatives) in low light conditions after sunset. When we tried additional computationally expensive image processing to detect trains in low light on the Raspberry Pi, we ended up processing fewer frames per second than we captured, grinding our system to a halt. We have been able to mitigate the problem somewhat by using the NoIR model of the Pi camera, which lets more light in during low light conditions, but the adaptive frame rate functionality on the camera didn’t have sufficient dynamic range out of the box.
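One way a video sensor can compensate for the camera's limited out-of-the-box dynamic range is to pick exposure settings from the measured brightness of recent frames. The sketch below is purely illustrative of that feedback idea: the brightness buckets and settings values are assumptions, not the values our sensor uses (though `iso` and `exposure_mode` are real PiCamera attributes one could set):

```python
def pick_exposure(mean_brightness):
    """Map a frame's mean pixel brightness (0-255) to camera settings.

    Bucket boundaries and ISO values are illustrative assumptions.
    """
    if mean_brightness < 40:        # after sunset
        return {"iso": 800, "exposure_mode": "night"}
    elif mean_brightness < 100:     # dusk or heavy overcast
        return {"iso": 400, "exposure_mode": "auto"}
    else:                           # daylight
        return {"iso": 100, "exposure_mode": "auto"}

print(pick_exposure(20))   # {'iso': 800, 'exposure_mode': 'night'}
```

A loop like this would run in the video sensor, applying the chosen settings back to the camera between captures.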

To truly understand image classification and dynamic camera feedback, it is helpful to understand the nuts and bolts of video processing on a Raspberry Pi. We’ll now walk through some of those nuts and bolts, including the code as we go along.

PiCamera and the Video_Camera class

The PiCamera package is an open source package that offers a pure Python interface to the Pi camera module, allowing you to record images or video to a file or stream. After some experimentation, we decided to use PiCamera in continuous capture mode, as shown below in the initialize_camera and initialize_video_stream functions.

from collections import deque
from threading import Thread
import picamera as pc
import picamera.array

class Video_Camera(Thread):
    def __init__(self, fps, width, height, vflip, hflip, mins):
        Thread.__init__(self)
        self.fps, self.width, self.height = fps, width, height
        # Ring buffer holding the most recent `mins` minutes of frames
        self.input_deque = deque(maxlen=fps * mins * 60)
        #...
    def initialize_camera(self):
        self.camera = pc.PiCamera(resolution=(self.width, self.height), framerate=int(self.fps))
        #...
    def initialize_video_stream(self):
        self.rawCapture = pc.array.PiRGBArray(self.camera, size=self.camera.resolution)
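The input_deque above acts as a bounded ring buffer: once the deque reaches its maxlen, appending a new frame silently discards the oldest one, so memory use stays fixed no matter how long the camera runs. A quick standalone illustration with toy frame IDs in place of real frames:

```python
from collections import deque

fps, mins = 5, 1
buf = deque(maxlen=fps * mins * 60)   # holds at most 300 frames (1 minute at 5 fps)

for frame_id in range(400):           # simulate capturing 400 frames
    buf.append(frame_id)

print(len(buf))   # 300 -- capped at maxlen
print(buf[0])     # 100 -- the oldest 100 frames were silently dropped
```

This is why the video sensor and detector can always look back over the last minute of footage without any explicit cleanup code.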
