


How To Record Synchronized Video From Multiple Cameras

In today's tutorial, you'll learn how to stream live video over a network with OpenCV. Specifically, you'll learn how to implement Python + OpenCV scripts to capture and stream video frames from a camera to a server.

Every week or so I receive a comment on a blog post or a question over email that goes something like this:

Hi Adrian, I'm working on a project where I need to stream frames from a client camera to a server for processing using OpenCV. Should I use an IP camera? Would a Raspberry Pi work? What about RTSP streaming? Have you tried using FFMPEG or GStreamer? How do you suggest I approach the problem?

It's a great question — and if you've ever attempted live video streaming with OpenCV then you know there are a ton of different options.

You could go the IP camera route. But IP cameras can be a pain to work with. Some IP cameras don't even allow you to access the RTSP (Real-Time Streaming Protocol) stream. Other IP cameras simply don't work with OpenCV's cv2.VideoCapture function. An IP camera may also be too expensive for your budget.
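For reference, testing whether an IP camera plays nicely with cv2.VideoCapture is straightforward. Below is a minimal sketch of opening an RTSP stream with OpenCV; the URL, credentials, and stream path are placeholders that vary by camera vendor:

import cv2

# placeholder RTSP URL -- the exact address, port, and path depend on your camera
cap = cv2.VideoCapture("rtsp://user:password@192.168.1.64:554/stream1")

while True:
	# grab the next frame from the IP camera
	(grabbed, frame) = cap.read()
	if not grabbed:
		break
	cv2.imshow("IP camera", frame)
	if cv2.waitKey(1) & 0xFF == ord("q"):
		break

cap.release()
cv2.destroyAllWindows()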

In those cases, you are left with using a standard webcam — the question then becomes, how do you stream the frames from that webcam using OpenCV?

Using FFMPEG or GStreamer is definitely an option. But both of those can be a royal pain to work with.

Today I am going to show you my preferred solution using message passing libraries, specifically ZMQ and ImageZMQ, the latter of which was developed by PyImageConf 2018 speaker Jeff Bass. Jeff has put a ton of work into ImageZMQ and his effort really shows.

As you'll see, this method of OpenCV video streaming is not only reliable but incredibly easy to use, requiring only a few lines of code.

To learn how to perform live network video streaming with OpenCV, just keep reading!

Looking for the source code to this post?

Jump Right To The Downloads Section

Live video streaming over network with OpenCV and ImageZMQ

In the first part of this tutorial, we'll discuss why, and under which situations, we may choose to stream video with OpenCV over a network.

From there we'll briefly discuss message passing along with ZMQ, a library for high performance asynchronous messaging in distributed systems.

We'll then implement two Python scripts:

  1. A client that will capture frames from a simple webcam
  2. And a server that will take the input frames and run object detection on them

We will be using Raspberry Pis as our clients to demonstrate how cheaper hardware can be used to build a distributed network of cameras capable of piping frames to a more powerful machine for additional processing.

By the end of this tutorial, you'll be able to apply live video streaming with OpenCV to your own applications!

Why stream videos/frames over a network?

Figure 1: A great application of video streaming with OpenCV is a security camera system. You could use Raspberry Pis and a library called ImageZMQ to stream from the Pi (client) to the server.

There are a number of reasons why you may want to stream frames from a video stream over a network with OpenCV.

To start, you could be building a security application that requires all frames to be sent to a central hub for additional processing and logging.

Or, your client machine may be highly resource constrained (such as a Raspberry Pi) and lack the necessary computational horsepower required to run computationally expensive algorithms (such as deep neural networks, for example).

In these cases, you need a method to take input frames captured from a webcam with OpenCV and then pipe them over the network to another system.

There are a variety of methods to accomplish this task (discussed in the introduction of the post), but today we are going to specifically focus on message passing.

What is message passing?

Figure 2: The concept of sending a message from a process, through a message broker, to other processes. With this method/concept, we can stream video over a network using OpenCV and ZMQ with a library called ImageZMQ.

Message passing is a programming paradigm/concept typically used in multiprocessing, distributed, and/or concurrent applications.

Using message passing, one process can communicate with one or more other processes, typically using a message broker.

Whenever a process wants to communicate with another process, including all other processes, it must first send its request to the message broker.

The message broker receives the request and then handles sending the message to the other process(es).

If necessary, the message broker also sends a response to the originating process.

As an example of message passing, let's consider a tremendous life event, such as a mother giving birth to a newborn child (process communication depicted in Figure 2 above). Process A, the mother, wants to announce to all other processes (i.e., the family) that she had a baby. To do so, Process A constructs the message and sends it to the message broker.

The message broker then takes that message and broadcasts it to all processes.

All other processes then receive the message from the message broker.

These processes want to show their support and happiness to Process A, so they construct a message with their congratulations:

Figure 3: Each process sends an acknowledgment (ACK) message back through the message broker to notify Process A that the message is received. The ImageZMQ video streaming project by Jeff Bass uses this approach.

These responses are sent to the message broker, which in turn sends them back to Process A (Figure 3).

This example is a dramatic simplification of message passing and message broker systems but should help you understand the general algorithm and the type of communication the processes are performing.

You can very easily get into the weeds studying these topics, including various distributed programming paradigms and types of messages/communication (1:1 communication, 1:many, broadcasts, centralized, distributed, broker-less, etc.).

As long as you understand the basic concept that message passing allows processes to communicate (including processes on different machines), you will be able to follow along with the rest of this post.

What is ZMQ?

Figure 4: The ZMQ library serves as the backbone for message passing in the ImageZMQ library. ImageZMQ is used for video streaming with OpenCV. Jeff Bass designed it for his Raspberry Pi network at his farm.

ZeroMQ, or simply ZMQ for short, is a high-performance asynchronous message passing library used in distributed systems.

Both RabbitMQ and ZeroMQ are among the most widely used message passing systems.

However, ZeroMQ specifically focuses on high throughput and low latency applications — which is exactly how you can frame live video streaming.

When building a system to stream live video over a network using OpenCV, you would want a system that focuses on:

  • High throughput: There will be new frames from the video stream coming in quickly.
  • Low latency: As we'll want the frames distributed to all nodes on the system as soon as they are captured from the camera.

ZeroMQ also has the benefit of being extremely easy to both install and use.

Jeff Bass, the creator of ImageZMQ (which builds on ZMQ), chose to use ZMQ as the message passing library for these reasons — and I couldn't agree with him more.
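To make the idea concrete, here is a minimal sketch of ZMQ's request/reply pattern using the pyzmq bindings that ImageZMQ builds on. The port number and message contents are arbitrary placeholders; in practice the two sockets would live on different machines, but running them in one script works for a quick test:

import zmq

# "server" (reply) side: binds a socket and waits for incoming messages
context = zmq.Context()
server = context.socket(zmq.REP)
server.bind("tcp://*:5566")

# "client" (request) side: connects to the bound socket
client = context.socket(zmq.REQ)
client.connect("tcp://127.0.0.1:5566")

# the client sends a message, the server receives it and acknowledges
client.send(b"hello from the client")
print(server.recv())    # b'hello from the client'
server.send(b"OK")
print(client.recv())    # b'OK'

ImageZMQ layers image serialization on top of this kind of send/acknowledge socket pair, which is why the server in this post replies with an ACK for every frame it receives.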

The ImageZMQ library

Figure 5: The ImageZMQ library is designed for streaming video efficiently over a network. It is a Python package and integrates with OpenCV.

Jeff Bass is the owner of Yin Yang Ranch, a permaculture farm in Southern California. He was one of the first people to join PyImageSearch Gurus, my flagship computer vision course. In the course and community he has been an active participant in many discussions around the Raspberry Pi.

Jeff has found that Raspberry Pis are perfect for computer vision and other tasks on his farm. They are inexpensive, readily available, and amazingly resilient/reliable.

At PyImageConf 2018 Jeff spoke about his farm and more specifically about how he used Raspberry Pis and a central computer to manage data collection and analysis.

The heart of his project is a library that he put together called ImageZMQ.

ImageZMQ solves the problem of real-time streaming from the Raspberry Pis on his farm. It is based on ZMQ and works really well with OpenCV.

Plain and simple, it just works. And it works really reliably.

I've found it to be more reliable than alternatives such as GStreamer or FFMPEG streams. I've also had better luck with it than RTSP streams.

You can learn the details of ImageZMQ by studying Jeff's code on GitHub.
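To give a flavor of the API before we build the full scripts, here is a minimal sketch of the two sides of an ImageZMQ connection. Each half is meant to run as its own script on its own machine; the server IP address, client name, and dummy frame are placeholders, and the complete client and server are implemented later in this post:

# --- on the server: receive one frame and acknowledge it ---
import imagezmq

image_hub = imagezmq.ImageHub()                  # listens on port 5555 by default
(client_name, frame) = image_hub.recv_image()    # blocks until a client sends a frame
image_hub.send_reply(b"OK")
print(client_name, frame.shape)

# --- on the client: send one dummy frame to the server ---
import imagezmq
import numpy as np

sender = imagezmq.ImageSender(connect_to="tcp://192.168.1.5:5555")
sender.send_image("pi-test", np.zeros((240, 320, 3), dtype="uint8"))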

Jeff's slides from PyImageConf 2018 are also available here.

In a few days, I'll be posting my interview with Jeff Bass on the blog as well.

Let's configure our clients and server with ImageZMQ and put them to work!

Configuring your system and installing required packages

Figure 6: To install ImageZMQ for video streaming, you'll need Python, ZMQ, and OpenCV.

Installing ImageZMQ is quite easy.

First, let's pip install a few packages into your Python virtual environment (assuming you're using one). If you need to set up pip and virtual environments, please refer to my pip install opencv tutorial first.

Then use the following commands:

$ workon <env_name> # my environment is named py3cv4
$ pip install opencv-contrib-python
$ pip install imagezmq
$ pip install imutils

You must install these packages on both the clients and the server. Provided you didn't encounter any issues, you are now ready to move on.

Note: On your Raspberry Pi, we recommend installing this version of OpenCV: pip install opencv-contrib-python==4.1.0.25.

Preparing clients for ImageZMQ

ImageZMQ must be installed on each client and the central server.

In this section, we'll cover one important difference for clients.

Our code is going to use the hostname of the client to identify it. You could use the IP address in a string for identification, but setting a client's hostname allows you to more easily identify the purpose of the client.

In this example, we'll assume you are using a Raspberry Pi running Raspbian. Of course, your client could run Windows Embedded, Ubuntu, macOS, etc., but since our demo uses Raspberry Pis, let's learn how to change the hostname on the RPi.

To change the hostname on your Raspberry Pi, fire up a terminal (this could be over an SSH connection if you'd like).

Then run the raspi-config command:

$ sudo raspi-config          

You'll be presented with this terminal screen:

Figure 7: Configuring a Raspberry Pi hostname with raspi-config. Shown is the raspi-config home screen.

Navigate to "2 Network Options" and press enter.

Figure 8: Raspberry Pi raspi-config network settings page.

Then choose the option "N1 Hostname".

Figure 9: Setting the Raspberry Pi hostname to something easily identifiable/memorable. Our video streaming with OpenCV and ImageZMQ script will use the hostname to identify Raspberry Pi clients.

You can now change your hostname and select "<Ok>".

You will be prompted to reboot — a reboot is required.

I recommend naming your Raspberry Pis like this: pi-location. Here are a few examples:

  • pi-garage
  • pi-frontporch
  • pi-livingroom
  • pi-driveway
  • …you get the idea.

This way when you pull up your router page on your network, you'll know what the Pi is for and its corresponding IP address. On some networks, you could even connect via SSH without providing the IP address like this:

$ ssh pi@pi-frontporch          

As you can see, it will likely save you some time later.
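As a quick sanity check after the reboot, you can print the hostname from Python. This is the same value the client script will later send alongside every frame (the name in the comment is just an example):

import socket

# should print the hostname you just set, e.g. "pi-frontporch"
print(socket.gethostname())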

Defining the client and server relationship

Figure 10: The client/server relationship for ImageZMQ video streaming with OpenCV.

Before we actually implement network video streaming with OpenCV, let's first define the client/server relationship to ensure we're on the same page and using the same terms:

  • Client: Responsible for capturing frames from a webcam using OpenCV and then sending the frames to the server.
  • Server: Accepts frames from all input clients.

You could argue back and forth as to which system is the client and which is the server.

For example, a system that is capturing frames via a webcam and then sending them elsewhere could be considered a server — the system is undoubtedly serving up frames.

Similarly, a system that accepts incoming data could very well be the client.

However, we are assuming:

  1. There is at least one (and likely many more) system responsible for capturing frames.
  2. There is only a single system used for actually receiving and processing those frames.

For these reasons, I prefer to think of the system sending the frames as the client and the system receiving/processing the frames as the server.

You may disagree with me, but that is the client-server terminology we'll be using throughout the remainder of this tutorial.

Project structure

Be sure to grab the "Downloads" for today's project.

From there, unzip the files and navigate into the project directory.

You may use the tree command to inspect the structure of the project:

$ tree
.
├── MobileNetSSD_deploy.caffemodel
├── MobileNetSSD_deploy.prototxt
├── client.py
└── server.py

0 directories, 4 files

Note: If you're going with the third alternative discussed above, then you would need to place the imagezmq source directory in the project as well.

The first two files listed in the project are the pre-trained Caffe MobileNet SSD object detection files. The server (server.py) will take advantage of these Caffe files using OpenCV's DNN module to perform object detection.

The client.py script will reside on each device that is sending a stream to the server. Later on, we'll upload client.py onto each of the Pis (or another machine) on your network so they can send video frames to the central location.

Implementing the client OpenCV video streamer (i.e., video sender)

Let's start by implementing the client, which will be responsible for:

  1. Capturing frames from the camera (either USB or the RPi camera module)
  2. Sending the frames over the network via ImageZMQ

Open up the client.py file and insert the following code:

# import the necessary packages
from imutils.video import VideoStream
import imagezmq
import argparse
import socket
import time

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-s", "--server-ip", required=True,
	help="ip address of the server to which the client will connect")
args = vars(ap.parse_args())

# initialize the ImageSender object with the socket address of the
# server
sender = imagezmq.ImageSender(connect_to="tcp://{}:5555".format(
	args["server_ip"]))

We start off by importing packages and modules on Lines 2-6:

  • Pay close attention here to see that we're importing imagezmq in our client-side script.
  • VideoStream will be used to grab frames from our camera.
  • Our argparse import will be used to process a command line argument containing the server's IP address (--server-ip is parsed on Lines 9-12).
  • The socket module of Python is simply used to grab the hostname of the Raspberry Pi.
  • Finally, time will be used to allow our camera to warm up prior to sending frames.

Lines 16 and 17 simply create the imagezmq sender object and specify the IP address and port of the server. The IP address will come from the command line argument that we already established. I've found that port 5555 doesn't usually have conflicts, so it is hardcoded. You could easily turn it into a command line argument if you need to as well.
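If you did want to make the port configurable, a minimal sketch might look like this; the --server-port flag is a hypothetical addition and is not part of the actual client.py:

# hypothetical sketch: expose the hardcoded port as a command line argument
import argparse
import imagezmq

ap = argparse.ArgumentParser()
ap.add_argument("-s", "--server-ip", required=True,
	help="ip address of the server to which the client will connect")
ap.add_argument("--server-port", type=int, default=5555,
	help="port on the server to connect to (assumed default of 5555)")
args = vars(ap.parse_args())

# build the connection string from both the IP and the port
sender = imagezmq.ImageSender(connect_to="tcp://{}:{}".format(
	args["server_ip"], args["server_port"]))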

Let's initialize our video stream and start sending frames to the server:

# get the host name, initialize the video stream, and allow the
# camera sensor to warmup
rpiName = socket.gethostname()
vs = VideoStream(usePiCamera=True).start()
#vs = VideoStream(src=0).start()
time.sleep(2.0)

while True:
	# read the frame from the camera and send it to the server
	frame = vs.read()
	sender.send_image(rpiName, frame)

Now, we'll grab the hostname, storing the value as rpiName (Line 21). Refer to "Preparing clients for ImageZMQ" above to set your hostname on a Raspberry Pi.

From there, our VideoStream object is created to grab frames from our PiCamera. Alternatively, you can use any USB camera connected to the Pi by commenting out Line 22 and uncommenting Line 23.

This is the point where you should also set your camera resolution. We are just going to use the maximum resolution so the argument is not provided. But if you find that there is a lag, you are likely sending too many pixels. If that is the case, you may reduce your resolution quite easily. Just pick from one of the resolutions available for the PiCamera V2 here: PiCamera ReadTheDocs. The second table is for V2.

Once you've chosen the resolution, edit Line 22 like this:

vs = VideoStream(usePiCamera=True, resolution=(320, 240)).start()

Note: The resolution argument won't make a difference for USB cameras since they are all implemented differently. As an alternative, you can insert a frame = imutils.resize(frame, width=320) between Lines 28 and 29 to resize the frame manually.
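For clarity, here is a sketch of what the client's while loop would look like with that manual resize added. It reuses the vs, sender, and rpiName variables already defined in client.py above; only the imutils.resize line is new:

import imutils

while True:
	# read the frame from the camera, shrink it to a width of 320 pixels,
	# and send it to the server
	frame = vs.read()
	frame = imutils.resize(frame, width=320)
	sender.send_image(rpiName, frame)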

From there, a warmup sleep time of 2.0 seconds is set (Line 24).

Finally, our while loop on Lines 26-29 grabs and sends the frames.

As you can see, the client is quite simple and straightforward!

Let's move on to the actual server.

Implementing the OpenCV video server (i.e., video receiver)

The live video server will be responsible for:

  1. Accepting incoming frames from multiple clients.
  2. Applying object detection to each of the incoming frames.
  3. Maintaining an "object count" for each of the frames (i.e., count the number of objects).

Let's go ahead and implement the server — open the server.py file and insert the following code:

# import the necessary packages
from imutils import build_montages
from datetime import datetime
import numpy as np
import imagezmq
import argparse
import imutils
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--prototxt", required=True,
	help="path to Caffe 'deploy' prototxt file")
ap.add_argument("-m", "--model", required=True,
	help="path to Caffe pre-trained model")
ap.add_argument("-c", "--confidence", type=float, default=0.2,
	help="minimum probability to filter weak detections")
ap.add_argument("-mW", "--montageW", required=True, type=int,
	help="montage frame width")
ap.add_argument("-mH", "--montageH", required=True, type=int,
	help="montage frame height")
args = vars(ap.parse_args())

On Lines 2-8 we import packages and libraries. In this script, most notably we'll be using:

  • build_montages : To build a montage of all incoming frames.
  • imagezmq : For streaming video from clients. In our case, each client is a Raspberry Pi.
  • imutils : My package of OpenCV and other image processing convenience functions available on GitHub and PyPI.
  • cv2 : OpenCV's DNN module will be used for deep learning object detection inference.

Are you wondering where imutils.video.VideoStream is? We usually use my VideoStream class to read frames from a webcam. However, don't forget that we're using imagezmq for streaming frames from clients. The server doesn't have a camera directly wired to it.

Let's process five command line arguments with argparse:

  • --prototxt : The path to our Caffe deep learning prototxt file.
  • --model : The path to our pre-trained Caffe deep learning model. I've provided MobileNet SSD in the "Downloads", but with some minor changes, you could elect to use an alternative model.
  • --confidence : Our confidence threshold to filter weak detections.
  • --montageW : This is not width in pixels. Rather this is the number of columns for our montage. We're going to stream from four Raspberry Pis today, so you could do 2×2, 4×1, or 1×4. You could also do, for instance, 3×3 for nine clients, but five of the boxes would be empty.
  • --montageH : The number of rows for your montage. See the --montageW explanation and the short sketch after this list.
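If the columns/rows convention is still unclear, here is a tiny standalone sketch of how build_montages tiles frames; the frame count, frame size, and 2×2 layout are arbitrary example values:

import numpy as np
from imutils import build_montages

# four dummy 320x240 frames, e.g. one per Raspberry Pi client
frames = [np.zeros((240, 320, 3), dtype="uint8") for _ in range(4)]

# montageW=2 columns and montageH=2 rows -> one 2x2 "dashboard" image
montages = build_montages(frames, (320, 240), (2, 2))
print(len(montages))        # 1 montage
print(montages[0].shape)    # (480, 640, 3)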

Let's initialize our ImageHub object along with our deep learning object detector:

# initialize the ImageHub object
imageHub = imagezmq.ImageHub()

# initialize the list of class labels MobileNet SSD was trained to
# detect, then generate a set of bounding box colors for each class
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
	"bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
	"dog", "horse", "motorbike", "person", "pottedplant", "sheep",
	"sofa", "train", "tvmonitor"]

# load our serialized model from disk
print("[INFO] loading model...")
net = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"])

Our server needs an ImageHub to accept connections from each of the Raspberry Pis. It essentially uses sockets and ZMQ for receiving frames across the network (and sending back acknowledgments).

Our MobileNet SSD object CLASSES are specified on Lines 29-32. If you aren't familiar with the MobileNet Single Shot Detector, please refer to this blog post or Deep Learning for Computer Vision with Python.

From there we'll instantiate our Caffe object detector on Line 36.

Initializations come next:

# initialize the consider set (class labels we care about and want
# to count), the object count dictionary, and the frame dictionary
CONSIDER = set(["dog", "person", "car"])
objCount = {obj: 0 for obj in CONSIDER}
frameDict = {}

# initialize the dictionary which will contain information regarding
# when a device was last active, then store the last time the check
# was made was now
lastActive = {}
lastActiveCheck = datetime.now()

# stores the estimated number of Pis, active checking period, and
# calculates the duration seconds to wait before making a check to
# see if a device was active
ESTIMATED_NUM_PIS = 4
ACTIVE_CHECK_PERIOD = 10
ACTIVE_CHECK_SECONDS = ESTIMATED_NUM_PIS * ACTIVE_CHECK_PERIOD

# assign montage width and height so we can view all incoming frames
# in a single "dashboard"
mW = args["montageW"]
mH = args["montageH"]
print("[INFO] detecting: {}...".format(", ".join(obj for obj in
	CONSIDER)))

In today's example, I'm only going to CONSIDER 3 types of objects from the MobileNet SSD list of CLASSES. We're considering (1) dogs, (2) persons, and (3) cars on Line 40.

We'll shortly use this CONSIDER set to filter out other classes that we don't care about, such as chairs, plants, monitors, or sofas, which don't typically move and aren't interesting for this security-type project.

Line 41 initializes a dictionary for our object counts to be tracked in each video feed. Each count is initialized to zero.

A separate dictionary, frameDict, is initialized on Line 42. The frameDict dictionary will contain the hostname key and the associated latest frame value.

Lines 47 and 48 are variables which help us determine when a Pi last sent a frame to the server. If it has been a while (i.e. there is a problem), we can get rid of the static, out of date image in our montage. The lastActive dictionary will have hostname keys and timestamps for values.

Lines 53-55 are constants which help us to calculate whether a Pi is active. Line 55 itself calculates that our check for activity will be 40 seconds. You can reduce this period of time by adjusting ESTIMATED_NUM_PIS and ACTIVE_CHECK_PERIOD on Lines 53 and 54.

Our mW and mH variables on Lines 59 and 60 represent the width and height (columns and rows) for our montage. These values are pulled directly from the command line args dictionary.

Let's loop over incoming streams from our clients and process the data!

# start looping over all the frames
while True:
	# receive RPi name and frame from the RPi and acknowledge
	# the receipt
	(rpiName, frame) = imageHub.recv_image()
	imageHub.send_reply(b'OK')

	# if a device is not in the last active dictionary then it means
	# that its a newly connected device
	if rpiName not in lastActive.keys():
		print("[INFO] receiving data from {}...".format(rpiName))

	# record the last active time for the device from which we just
	# received a frame
	lastActive[rpiName] = datetime.now()

We begin looping on Line 65.

Lines 68 and 69 grab an image from the imageHub and send an ACK message. The result of imageHub.recv_image is rpiName, in our case the hostname, and the video frame itself.

It is really as simple as that to receive frames from an ImageZMQ video stream!

Lines 73-78 perform housekeeping duties to determine when a Raspberry Pi was lastActive.

Let's perform inference on a given incoming frame:

	# resize the frame to have a maximum width of 400 pixels, then
	# grab the frame dimensions and construct a blob
	frame = imutils.resize(frame, width=400)
	(h, w) = frame.shape[:2]
	blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
		0.007843, (300, 300), 127.5)

	# pass the blob through the network and obtain the detections and
	# predictions
	net.setInput(blob)
	detections = net.forward()

	# reset the object count for each object in the CONSIDER set
	objCount = {obj: 0 for obj in CONSIDER}

Lines 82-90 perform object detection on the frame:

  • The frame dimensions are computed.
  • A blob is created from the image (see this post for more details about how OpenCV's blobFromImage function works).
  • The blob is passed through the neural net.

From there, on Line 93 we reset the object counts to zero (we will be populating the dictionary with fresh count values shortly).

Let's loop over the detections with the goal of (1) counting, and (2) drawing boxes around objects that we are considering:

	# loop over the detections
	for i in np.arange(0, detections.shape[2]):
		# extract the confidence (i.e., probability) associated with
		# the prediction
		confidence = detections[0, 0, i, 2]

		# filter out weak detections by ensuring the confidence is
		# greater than the minimum confidence
		if confidence > args["confidence"]:
			# extract the index of the class label from the
			# detections
			idx = int(detections[0, 0, i, 1])

			# check to see if the predicted class is in the set of
			# classes that need to be considered
			if CLASSES[idx] in CONSIDER:
				# increment the count of the particular object
				# detected in the frame
				objCount[CLASSES[idx]] += 1

				# compute the (x, y)-coordinates of the bounding box
				# for the object
				box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
				(startX, startY, endX, endY) = box.astype("int")

				# draw the bounding box around the detected object on
				# the frame
				cv2.rectangle(frame, (startX, startY), (endX, endY),
					(255, 0, 0), 2)

On Line 96 we begin looping over each of the detections. Inside the loop, we proceed to:

  • Extract the object confidence and filter out weak detections (Lines 99-103).
  • Grab the label idx (Line 106) and ensure that the label is in the CONSIDER set (Line 110). For each detection that has passed the two checks (confidence threshold and in CONSIDER), we will:
    • Increment the objCount for the corresponding object (Line 113).
    • Draw a rectangle around the object (Lines 117-123).

Next, let's annotate each frame with the hostname and object counts. We'll also build a montage to display them in:

	# draw the sending device name on the frame
	cv2.putText(frame, rpiName, (10, 25),
		cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)

	# draw the object count on the frame
	label = ", ".join("{}: {}".format(obj, count) for (obj, count) in
		objCount.items())
	cv2.putText(frame, label, (10, h - 20),
		cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

	# update the new frame in the frame dictionary
	frameDict[rpiName] = frame

	# build a montage using images in the frame dictionary
	montages = build_montages(frameDict.values(), (w, h), (mW, mH))

	# display the montage(s) on the screen
	for (i, montage) in enumerate(montages):
		cv2.imshow("Home pet location monitor ({})".format(i),
			montage)

	# detect any keypresses
	key = cv2.waitKey(1) & 0xFF

On Lines 126-133 we make two calls to cv2.putText to draw the Raspberry Pi hostname and object counts.

From there we update our frameDict with the frame corresponding to the RPi hostname.

Lines 139-144 create and display a montage of our client frames. The montage will be mW frames wide and mH frames tall.

Keypresses are captured via Line 147.

The last block is responsible for checking our lastActive timestamps for each client feed and removing frames from the montage that have stalled. Let's see how it works:

	# if current time *minus* last time when the active device check
	# was made is greater than the threshold set then do a check
	if (datetime.now() - lastActiveCheck).seconds > ACTIVE_CHECK_SECONDS:
		# loop over all previously active devices
		for (rpiName, ts) in list(lastActive.items()):
			# remove the RPi from the last active and frame
			# dictionaries if the device hasn't been active recently
			if (datetime.now() - ts).seconds > ACTIVE_CHECK_SECONDS:
				print("[INFO] lost connection to {}".format(rpiName))
				lastActive.pop(rpiName)
				frameDict.pop(rpiName)

		# set the last active check time as current time
		lastActiveCheck = datetime.now()

	# if the `q` key was pressed, break from the loop
	if key == ord("q"):
		break

# do a bit of cleanup
cv2.destroyAllWindows()

There's a lot going on in Lines 151-162. Let's break it down:

  • We only perform a check if at least ACTIVE_CHECK_SECONDS have passed (Line 151).
  • We loop over each key-value pair in lastActive (Line 153):
    • If the device hasn't been active recently (Line 156) we need to remove data (Lines 158 and 159). First we remove (pop) the rpiName and timestamp from lastActive. Then the rpiName and frame are removed from the frameDict.
  • The lastActiveCheck is updated to the current time on Line 162.

Effectively this will help us get rid of expired frames (i.e. frames that are no longer real-time). This is really important if you are using the ImageHub server for a security application. Perhaps you are saving key motion events like a Digital Video Recorder (DVR). The worst thing that could happen if you don't get rid of expired frames is that an intruder kills power to a client and you don't realize the frame isn't updating. Think James Bond or Jason Bourne sort of spy techniques.

Last in the loop is a check to see if the "q" key has been pressed — if so we break from the loop and destroy all active montage windows (Lines 165-169).

Streaming video over network with OpenCV

Now that we've implemented both the client and the server, let's put them to the test.

Make sure you use the "Downloads" section of this post to download the source code.

From there, upload the client to each of your Pis using SCP:

$ scp client.py pi@192.168.1.10:~
$ scp client.py pi@192.168.1.11:~
$ scp client.py pi@192.168.1.12:~
$ scp client.py pi@192.168.1.13:~

In this example, I'm using four Raspberry Pis, but four aren't required — you can use more or fewer. Be sure to use applicable IP addresses for your network.

You also need to follow the installation instructions to install ImageZMQ on each Raspberry Pi. See the "Configuring your system and installing required packages" section in this blog post.

Before we start the clients, we must start the server. Let's fire it up with the following command:

$ python server.py --prototxt MobileNetSSD_deploy.prototxt \
	--model MobileNetSSD_deploy.caffemodel --montageW 2 --montageH 2

Once your server is running, go ahead and start each client pointing to the server. Here is what you need to do on each client, step-by-step:

  1. Open an SSH connection to the client: ssh pi@192.168.1.10
  2. Start screen on the client: screen
  3. Source your profile: source ~/.profile
  4. Activate your environment: workon py3cv4
  5. Install ImageZMQ using the instructions in "Configuring your system and installing required packages".
  6. Run the client: python client.py --server-ip 192.168.1.5

As an alternative to these steps, you may start the client script on reboot.
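One way to do that (just a sketch, not something covered in this post) is a cron @reboot entry on each Pi; the virtualenv path, script location, and server IP below are assumptions you would adapt to your own setup:

# edit the pi user's crontab with `crontab -e` and add a line like:
@reboot sleep 30 && /home/pi/.virtualenvs/py3cv4/bin/python /home/pi/client.py --server-ip 192.168.1.5 >> /home/pi/client.log 2>&1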

Automagically, your server will start bringing in frames from each of your Pis. Each frame that comes in is passed through the MobileNet SSD. Here's a quick demo of the result:

A full video demo can be seen below:


Summary

In this tutorial, you learned how to stream video over a network using OpenCV and the ImageZMQ library.

Instead of relying on IP cameras or FFMPEG/GStreamer, we used a simple webcam and a Raspberry Pi to capture input frames and then stream them to a more powerful machine for additional processing, using a distributed system concept called message passing.

Thanks to the hard work of Jeff Bass (the creator of ImageZMQ), our implementation required only a few lines of code.

If you are ever in a situation where you need to stream live video over a network, definitely give ImageZMQ a try — I think you'll find it super intuitive and easy to use.

I'll be back in a few days with an interview with Jeff Bass as well!

To download the source code to this post, and be notified when future tutorials are published here on PyImageSearch, just enter your email address in the form below!


Source: https://pyimagesearch.com/2019/04/15/live-video-streaming-over-network-with-opencv-and-imagezmq/

