
Image processing with Processing.py

To each their own little Warhol

Processing, and thus Processing.py as well, has a whole arsenal of filters for image manipulation. At the beginning I would like to pick out two of them and use them to create a small program, the result of which should be reminiscent of the famous screen prints by Pop Art artist Andy Warhol.

I took a photo of our Sheltie Joey and me, shot by Stefanie Radon about four years ago (it adorns my Facebook profile), and converted it into a pure black-and-white drawing with the THRESHOLD filter. THRESHOLD accepts parameters between 0 and 1; the smaller the value, the less is displayed. After a few experiments I settled on the value that, in my opinion, gave the most useful result for this photo.

In the draw() function I then drew the picture eight times, in two rows of four, and tinted each copy a different color with the tint() function. I had to experiment with the colors for a while until I got the result shown above, with which I am now happy.

The source code

The source code is simple and easy to understand. In setup() I loaded the image and converted it into a black-and-white version; in draw() I then created the eight differently colored versions by running a loop over the list of colors I had selected:
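A minimal sketch along these lines; the image file name, the colors, and the THRESHOLD value are my placeholders, not necessarily the original's:

```python
# Processing.py sketch -- runs inside the Processing IDE (Python mode),
# not as a standalone Python script.
colors = [color(255, 0, 0), color(0, 255, 0), color(0, 0, 255),
          color(255, 255, 0), color(255, 0, 255), color(0, 255, 255),
          color(255, 128, 0), color(128, 0, 255)]

def setup():
    global img
    size(800, 400)
    img = loadImage("joey.jpg")     # placeholder file name
    img.filter(THRESHOLD, 0.5)      # pure black-and-white version

def draw():
    for i, c in enumerate(colors):
        x = (i % 4) * 200           # four copies per row
        y = (i // 4) * 200          # two rows
        tint(c)                     # color each copy differently
        image(img, x, y, 200, 200)
    noLoop()
```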


Of course you can also use the photo of Joey and me for your own experiments; after all, it is on Flickr and on Facebook. But it would certainly be more in the spirit of Andy Warhol if you selected your own pictures to colorize and serialize.

Filters for image processing

Processing, and thus Processing.py, comes with a small collection of ready-made filters for image manipulation that can be applied to any image. The filters have the following syntax: either filter(MODE) or filter(MODE, param).


Whether a filter can receive an additional parameter depends on the filter. How the filters work and whether and how they receive a parameter can be seen in the following table:

  • Original image (no filter)
  • THRESHOLD: parameter (optional) between 0 and 1, default 0.5
  • GRAY: no parameter
  • INVERT: photographically speaking, the negative; no parameter
  • POSTERIZE: parameter between 2 and 255, but only low values have a real effect
  • BLUR: the larger the value, the blurrier the image; the parameter is optional, default 1
  • ERODE: no parameter
  • DILATE (the opposite of ERODE): no parameter
  • Filters can also be combined; here first GRAY, then POSTERIZE

With the following little sketch you can play with the various filters (the commented-out parts are what I used to produce the thumbnails in the table above):
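A sketch along these lines might look as follows; the file name and the concrete filter calls are placeholders for your own experiments:

```python
# Processing.py sketch (Python mode) -- runs inside the Processing IDE,
# not as a standalone script.
def setup():
    size(400, 400)
    img = loadImage("photo.jpg")   # placeholder file name
    image(img, 0, 0)
    filter(POSTERIZE, 4)           # swap in THRESHOLD, GRAY, INVERT, BLUR, ...
    # filter(GRAY)                 # filters can also be chained
    # save("thumbnail.png")        # the format is inferred from the extension
```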

Simply enter the desired filter and parameter value and then let the sketch run. You are of course invited to play with all the filters that accept parameters.

The last (commented out) line shows you how to save the result. Processing recognizes the format of the image by the extension.

Interactive filter

You can of course explore the effect of the various filter parameters interactively with the mouse. As an example, I have written two little sketches, one exploring POSTERIZE and the other THRESHOLD.

Since high values no longer produce any interesting effects, I limited the POSTERIZE parameter to a small range of low values.

THRESHOLD expects values between 0 and 1, so I simply divided the mouseX value by the width of the window. Because of Python 2.7's integer division I had to explicitly convert one of the values to a float to get the desired result (otherwise you always get zero). If the mouse is at the far left, the picture is completely white, while at the far right of the window it is almost completely black. The interesting results lie somewhere in between. You should try this out with various pictures to get a feeling for the expected effects.
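The division pitfall can be shown without Processing at all; in the snippet below, `mouse_x` and `w` stand in for Processing's `mouseX` and `width`:

```python
# Plain Python illustration of the Python 2.7 integer-division pitfall.
w = 400
mouse_x = 250

broken = mouse_x // w             # integer division: 0 for any mouse_x < w
threshold = float(mouse_x) / w    # explicit float conversion works

print(broken)     # 0
print(threshold)  # 0.625
```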


Pointillism describes a style of painting that had its heyday between 1889 and 1910. Pointillist images consist of small regular dabs of color in pure colors. The overall color impression of a surface only emerges in the eye of the beholder and from a certain distance.

When I was a little boy, we also had a variant of pointillism in art class at my school: using a hole punch, we punched confetti as colorful as possible out of the pages of discarded magazines and then glued colored pictures together onto templates. Of course we didn't have any nude photos as templates; after all, it was a Catholic elementary school, and the wild late 1960s were still to come.

Something like this can of course also easily be reproduced in Processing.py (although the colors in the example program only approximate pure colors, because the original image is a hand-colored photograph, probably also from the 19th century¹).

The program window shows the initial image on the left. On the right, the target image, composed of randomly sized circles, slowly emerges. The dots have a starting value of six, which is multiplied by a random factor between 0.2 and 1.5. (I use Python's random module in the program and not Processing's built-in random() function. I somehow like the Python version better, but that is probably a matter of taste.)

Each time the draw() loop runs, the color value of a random point in the original image is determined and then drawn as a circle (dot) in the target image. The result resembles the original image, except that it gives the impression of looking at it through a pane of textured glass, such as sometimes adorns shower or bathroom doors.

The source code

The source code is again nice and short and invites experimentation. If you change the dot-size constant, for example, the target image looks significantly more realistic. And you get a very strange result if you comment out one particular line.
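A sketch of the idea could look like this; the image name and exact sizes are placeholders, while the starting value of six and the random factor between 0.2 and 1.5 come from the description above:

```python
# Processing.py sketch (Python mode): pointillism by random sampling.
import random                      # Python's module, not Processing's random()

def setup():
    global img
    img = loadImage("photo.jpg")   # placeholder, e.g. 400 x 640 pixels
    size(img.width * 2, img.height)
    image(img, 0, 0)               # original image on the left
    noStroke()

def draw():
    # pick a random pixel, read its color, paint it as a dot on the right
    x = random.randint(0, img.width - 1)
    y = random.randint(0, img.height - 1)
    fill(img.get(x, y))
    d = 6 * random.uniform(0.2, 1.5)   # starting value 6 times a random factor
    ellipse(img.width + x, y, d, d)
```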

You don't necessarily have to draw circles, of course. A square or a triangle produces completely different effects. Just play around with it a little. Processing (.py) is designed to be played with.

More pointillism

If I am honest, the result of the program from the last section is not really convincing, either aesthetically or in the sense of pointillism. That is because the program samples every single pixel and displays it as an enlarged dot. In the end this produces something like a washed-out original, but not a raster. Therefore, based on an idea from the wonderful book »Generative Design« (unfortunately only available in English at the moment), I programmed a true raster version of the nude picture, and the result convinced me more:

To do this, I first reduced the image, originally 400 × 640 pixels, to 50 × 80 pixels and then scaled it back up to create a corresponding grid for the output window, which is still 400 × 640 pixels in size. I then converted the sampled colors to grayscale with a weighted formula. I took the weightings from the above-mentioned book "Generative Design"; Wikipedia, for example, lists other weightings, and even equally distributed weights are possible and common. So there is still room for experiment here.
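The weighted conversion can be sketched in plain Python. The concrete weights (red 0.222, green 0.707, blue 0.071) are the ones I believe "Generative Design" uses, so treat them as an assumption and check them against the book:

```python
# Weighted grayscale conversion. The weights sum to 1.0, so pure white
# maps to 255 and pure black to 0; Wikipedia lists alternatives such
# as 0.299/0.587/0.114.
def greyscale(r, g, b):
    return 0.222 * r + 0.707 * g + 0.071 * b

print(greyscale(255, 255, 255))  # pure white stays 255.0
print(greyscale(0, 0, 0))        # pure black stays 0.0
```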


I then determined the radius of the circles depending on the gray level: the darker the gray value, the larger the circle. I found the mapping values experimentally, and there is still room for experiment here, too; you get a nice result, for example, if you replace the line that computes the radius with a different mapping. The Processing source code from "Generative Design" also shows a few really nice possibilities of what you can do with such a grid.
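The grey-to-radius mapping can be rebuilt in plain Python as a linear interpolation (this is what Processing's map() does); the radius bounds 10.0 and 1.0 are my own choice, not the original sketch's values:

```python
# Linear mapping of a grey value (0 = black ... 255 = white) to a
# circle radius: the darker the grey, the larger the circle.
def map_value(value, in_min, in_max, out_min, out_max):
    span = float(in_max - in_min)
    return out_min + (out_max - out_min) * (value - in_min) / span

print(map_value(0, 0, 255, 10.0, 1.0))    # black -> 10.0
print(map_value(255, 0, 255, 10.0, 1.0))  # white -> 1.0
```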

The source code

Here is the complete source code; it is, as almost always, refreshingly short:
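A hedged reconstruction of such a raster sketch; the image name, the grayscale weights, and the radius range are my assumptions, not necessarily the original values:

```python
# Processing.py sketch (Python mode): draw the image as a 50 x 80 grid
# of circles whose size depends on the grey value of each cell.
TILES_X, TILES_Y = 50, 80

def setup():
    size(400, 640)
    img = loadImage("photo.jpg")       # placeholder file name
    img.resize(TILES_X, TILES_Y)       # shrink 400 x 640 down to 50 x 80
    tile_w = width / TILES_X           # 8-pixel grid cells
    tile_h = height / TILES_Y
    background(255)
    noStroke()
    fill(0)
    img.loadPixels()
    for gy in range(TILES_Y):
        for gx in range(TILES_X):
            c = img.pixels[gy * TILES_X + gx]
            grey = 0.222 * red(c) + 0.707 * green(c) + 0.071 * blue(c)
            # the darker the grey value, the larger the circle
            r = 1 + (tile_w - 1) * (255 - grey) / 255.0
            ellipse(gx * tile_w + tile_w / 2, gy * tile_h + tile_h / 2, r, r)
    noLoop()
```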

Videos are pictures too

So far, I've written little to nothing about Processing's video capabilities. That is probably because, with current versions of Processing, the video library is no longer part of the standard distribution; you have to download it separately. And this is where the first snag appeared.

My attempts to install the library via the menu ended each time with a timeout. I tried it what felt like a hundred times, and each time the attempt ended with the error message "Connection waiting time exceeded while downloading video". After I had almost given up, it worked on attempt 101: the library was finally installed.

The rest was easy (Daniel Shiffman has a nice series of video tutorials on the subject):

As already described here, you integrate the library into your sketch and then you can get started right away:
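A minimal video player might look like this; the movie file name is a placeholder:

```python
# Processing.py sketch (Python mode) using the Processing video library.
add_library('video')

def setup():
    global movie
    size(640, 360)
    movie = Movie(this, "movie.mp4")   # placeholder file name
    movie.loop()                       # play the video in an endless loop

def movieEvent(m):
    m.read()                           # called whenever a new frame is ready

def draw():
    image(movie, 0, 0)
```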

Those few lines are really enough to write a video player in Processing.py. Of course, the video library has a few more methods; the most important ones (in my eyes) are:

  • play() - plays the video only once (instead of loop())
  • pause() - stops the video at the current point
  • jump() - jumps to a certain point in the video (given in seconds, as a floating-point number, so 3.57 seconds is possible too)
  • duration() - returns the length of the movie (also in seconds).

The movieEvent() function in the above sketch is also important: it is called every time a new frame is ready and reads it, so that the video can be shown in the Processing window. Without it you won't see anything.

But the most interesting thing is that once a frame is loaded, it is an image (a PImage). All filters and image-processing functions available in Processing can therefore also be used on videos. As soon as I find a nice video on Archive.org, I will experiment with it a little over the next few days and then report here. Stay tuned!
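Because each frame is a PImage, applying a filter to a running video takes only one extra line; the movie file name and the filter choice below are placeholders:

```python
# Processing.py sketch: applying a Processing image filter to each
# video frame.
add_library('video')

def setup():
    global movie
    size(640, 360)
    movie = Movie(this, "movie.mp4")
    movie.loop()

def movieEvent(m):
    m.read()                  # fetch each new frame as it arrives

def draw():
    image(movie, 0, 0)
    filter(POSTERIZE, 4)      # the frame is a PImage, so any filter works
```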

The video library also has classes and methods for directly processing live video from a camera (the Capture class). But since the built-in camera of my old MacBook Pro didn't work (it never really worked, but I never needed it, since I don't use Skype for privacy reasons), I couldn't test it. In principle, though, it works the same as the video functions for saved videos. You can see what the differences are in Daniel Shiffman's video playlist linked above; he has experimented with it intensively.

OpenCV and Processing.py

OpenCV is a free program library with algorithms for image processing and machine vision. It is written for the programming languages C, C++, Python, and Java and is available as free software under the terms of the BSD license. The "CV" in the name stands for "Computer Vision". After recently watching some videos in which Daniel Shiffman implemented computer vision algorithms by hand in Processing (Java), I thought to myself that this ought to be easier. After all, OpenCV is available as a library for Processing, and it is based on the "official" OpenCV Java API.

Really, the hardest part of the whole thing was installing the library. As before, the repository for the Processing libraries was apparently overloaded, and it took several attempts, each aborted with a timeout, before the library finally downloaded.

The rest was simple: I stuck to this sample program in Processing (Java) from the project's GitHub page and ported it to Python. It looked like this:
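A sketch of such a port, following the library's examples; the file name and the threshold value are placeholders, and the exact call sequence is my assumption:

```python
# Processing.py sketch (Python mode) using the OpenCV for Processing
# library.
add_library('opencv_processing')

def setup():
    size(420, 420)
    img = loadImage("photo.jpg")      # a 420 x 420 pixel photo
    opencv = OpenCV(this, img)
    opencv.gray()                     # first convert to grayscale
    opencv.threshold(70)              # play around with this value
    image(opencv.getOutput(), 0, 0)
```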

The program converts a color photo to a black-and-white image and shows where the contour lines are, according to which OpenCV decides what is displayed in black and what in white. You can (and should, especially if you use a different photo) play around with the threshold value to see what exactly is going on.

The photo I used (© 2012 by Stefanie Radon) I had cropped to 420 × 420 pixels. If you use another photo with a different size, you must of course adjust the size of the output window accordingly.

Face recognition with OpenCV and Processing.py

One of the most cited applications of OpenCV is face recognition and I wanted to test how well this works with Processing.py and OpenCV:

OpenCV has several face-detection approaches, one of which is the Haar cascade classifier, based on a paper by Viola and Jones from 2001. OpenCV ships with some pre-trained Haar cascade classifiers, among others for recognizing the faces of people or cats. The algorithm is pretty fast, but, as you will see, not entirely error-free.

I initialized this classifier, namely the one that is supposed to recognize faces from the front, in the setup() function. As a test image I took this photo of mannequins, because mannequins are probably the only way to test face-recognition algorithms without running into data-protection problems. (Gabi took the photo; I also used a few other mannequin photos for tests, see below.)

The rest is straightforward: first the OpenCV library is loaded and the array for the faces is initialized. In setup(), all faces that the Haar cascade classifier recognizes are saved.

If you want to check whether any faces were recognized at all, you can display the number of recognized faces with the commented-out instruction.

The draw() function shows the photo and places a lime-green square around each recognized face:
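A sketch of the whole thing, following the face-detection examples of the OpenCV for Processing library; the image file name is a placeholder:

```python
# Processing.py sketch (Python mode): Haar cascade face detection with
# the OpenCV for Processing library.
add_library('opencv_processing')

def setup():
    global img, faces
    size(420, 420)
    img = loadImage("mannequins.jpg")   # placeholder file name
    opencv = OpenCV(this, img)
    # the pre-trained classifier for faces seen from the front
    opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE)
    faces = opencv.detect()
    # println(len(faces))               # how many faces were recognized?

def draw():
    image(img, 0, 0)
    noFill()
    stroke(0, 255, 0)                   # lime green
    strokeWeight(3)
    for face in faces:
        rect(face.x, face.y, face.width, face.height)
```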

That's all. As you can see from the screenshot, the two faces of the mannequins are recognized, but the classifier has its problems with the batik pattern of the mannequin on the right. And that is not an isolated case: as Oliver Moser reported in his fine "Introduction to Computer Vision with OpenCV and Python", the classifier also regularly recognizes the back of his chair as a face. So you either have to use another, more computationally intensive classifier, such as the HOG detector, or keep training the Haar cascade classifier. Both are very compute-heavy, so I have made my peace with the result for the time being.

Here are a few more pictures from Neukölln shop windows, with which the classifier coped better or worse:

The source code

Here again, for the curious, the complete source code for you to recreate: