This tutorial is from the book Learning Processing, 2nd Edition by Daniel Shiffman, published by Morgan Kaufmann, © 2015 Elsevier Inc. All rights reserved. If you see any errors or have comments, please let us know.
Daniel Shiffman
Live Video
Now that you’ve explored static images in Processing, you are ready to move on to moving images, specifically from a live camera (and later, from a recorded movie). I’ll begin by walking through the basic steps of importing the video library and using the Capture class to display live video.
Step 1. Import the Processing video library.
Although the video library is developed and maintained by the Processing Foundation, due to its size it must still be downloaded separately through the Contribution Manager. Select 'Add Library...' from the 'Import Library...' submenu within the Sketch menu, then find and install the Video library.
Once you’ve got the library installed, the next step is to import it in your code. This is done by selecting the menu option Sketch → Import Library → Video, or by typing the import statement manually; using the “Import Library” menu option does nothing other than automatically insert that line into your code, so manual typing is entirely equivalent. The line should go at the very top of your sketch:
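    import processing.video.*;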
Step 2. Declare a Capture object.
You’ve recently seen how to create objects from classes built into the Processing language such as PShape and PImage. Both of these classes, it should be noted, are part of the processing.core library and, therefore, no import statement was required. The processing.video library has two useful classes inside of it: Capture, for live video, and Movie, for recorded video. In this step, I’ll be declaring a Capture object.
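The declaration goes at the top of the sketch, outside of setup() and draw(). I’ll name the object video, the name used throughout this chapter:

    Capture video;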
Step 3. Initialize the Capture object.
The Capture object “video” is just like any other object — to construct an object, you use the new operator followed by the constructor. With a Capture object, this code typically appears in setup().
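    video = new Capture();  // incomplete: the constructor still needs arguments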
The above line of code is missing the appropriate arguments for the constructor. Remember, this is not a class you wrote yourself so there is no way to know what is required between the parentheses without consulting the online reference.
The reference will show there are several ways to call the Capture constructor. A typical way to call the constructor is with three arguments:
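    video = new Capture(this, 320, 240);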
Let’s walk through the arguments used in the Capture constructor.
- this: If you’re confused by what this means, you are not alone. Technically speaking, this refers to the instance of a class in which the word this appears. Unfortunately, such a definition is likely to induce head spinning. A nicer way to think of it is as a self-referential statement. After all, what if you needed to refer to your Processing program within your own code? You might try to say “me” or “I.” Well, these words are not available in Java, so instead you say this. The reason you pass this into the Capture object is you are telling it: “Hey listen, I want to do video capture and when the camera has a new image I want you to alert this sketch.”
- 320: Fortunately, the first argument, this, is the only confusing one. 320 refers to the width of the video captured by the camera.
- 240: The height of the video.
The function Capture.list() returns the cameras available on your machine as an array of text configurations, and you can use the text of one of these configurations to create a Capture object. On a Mac with a built-in camera, for example, this might look like:
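    // hypothetical name; use a string exactly as reported by Capture.list()
    video = new Capture(this, 320, 240, "FaceTime HD Camera (Built-in)");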
Capture.list() actually gives you an array so you can also simply refer to the index of the configuration you want.
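For example, a minimal sketch of that approach, picking whatever camera happens to be first in the list:

    String[] cameras = Capture.list();
    printArray(cameras);  // see what's available
    video = new Capture(this, 320, 240, cameras[0]);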
Step 4. Start the capture process.
Once the camera is ready, it’s up to you to tell Processing to start capturing images. In almost every case you want to begin capturing right in setup(). Nevertheless, start() is its own method, and you do have the option of, say, not starting capturing until some other time (such as when a button is pressed, etc.).
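    video.start();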
Step 5. Read the image from the camera.
There are two strategies for reading frames from the camera. I will briefly look at both and choose one for the remainder of the examples in this chapter. Both strategies, however, operate under the same fundamental principle: I only want to read an image from the camera when a new frame is available to be read.
In order to check if an image is available, you use the function available(), which returns true or false depending on whether something is there. If it is there, the function read() is called and the frame from the camera is read into memory. You can do this over and over again in the draw() loop, always checking to see if a new image is free to be read.
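In code, that looks like this:

    void draw() {
      // only read from the camera when a new frame is ready
      if (video.available()) {
        video.read();
      }
    }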
The second strategy, the “event” approach, requires a function that executes any time a certain event (in this case a camera event) occurs. The function mousePressed() is executed whenever the mouse is pressed. With video, you have the option to implement the function captureEvent(), which is invoked any time a capture event occurs, that is, a new frame is available from the camera. These event functions (mousePressed(), keyPressed(), captureEvent(), etc.) are sometimes referred to as “callbacks.” And as a brief aside, if you’re following closely, this is where this fits in. The Capture object, video, knows to notify this sketch by invoking captureEvent() because you passed it a reference to this sketch when creating the Capture object video.
captureEvent() is a function and therefore needs to live in its own block, outside of setup() and draw().
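It looks like this:

    void captureEvent(Capture video) {
      // a new frame is available; read it
      video.read();
    }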
You might notice something odd about captureEvent(): it includes an argument of type Capture in its definition. This might seem redundant to you; after all, in this example I already have a global variable video. Nevertheless, in the case where you might have more than one capture device, the same event function can be used for both, and the video library will make sure that the correct Capture object is passed in to captureEvent().
To summarize, I want to call the function read() whenever there is something to read, and I can do so either by checking manually with available() within draw() or by letting the callback captureEvent() handle it. The callback approach allows sketches to operate more efficiently by separating the logic for reading from the camera from the main animation loop.
Step 6. Display the video image.
This is, without a doubt, the easiest part. You can think of a Capture object as a PImage that changes over time and, in fact, a Capture object can be used in exactly the same manner as a PImage object.
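Displaying the image is one line of code:

    image(video, 0, 0);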
All of this is put together in the following code:
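    // a minimal version of the complete sketch, using the callback approach
    import processing.video.*;

    Capture video;

    void setup() {
      size(320, 240);
      video = new Capture(this, width, height);
      video.start();
    }

    // callback: a new frame is available
    void captureEvent(Capture video) {
      video.read();
    }

    void draw() {
      image(video, 0, 0);
    }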
Again, anything you can do with a PImage (resize, tint, move, etc.) you can do with a Capture object. As long as you read() from that object, the video image will update as you manipulate it. See the following example:
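    // one possibility: tint the video with colors mapped from the mouse
    // position (the specific mapping here is an arbitrary choice)
    import processing.video.*;

    Capture video;

    void setup() {
      size(320, 240);
      video = new Capture(this, width, height);
      video.start();
    }

    void captureEvent(Capture video) {
      video.read();
    }

    void draw() {
      // tint() works on a video frame just as it does on a PImage
      float r = map(mouseX, 0, width, 0, 255);
      float b = map(mouseY, 0, height, 0, 255);
      tint(r, 255, b);
      image(video, 0, 0);
    }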
Note that a video image can be tinted just like a PImage. It can also be moved, rotated, and sized just like a PImage. Following is the “adjusting brightness” example with a video image:
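    // a sketch of the idea: each pixel's brightness is scaled by its
    // distance to the mouse (the range of 100 pixels is arbitrary)
    import processing.video.*;

    Capture video;

    void setup() {
      size(320, 240);
      video = new Capture(this, width, height);
      video.start();
    }

    void captureEvent(Capture video) {
      video.read();
    }

    void draw() {
      video.loadPixels();
      loadPixels();
      for (int x = 0; x < video.width; x++) {
        for (int y = 0; y < video.height; y++) {
          int loc = x + y * video.width;
          float r = red(video.pixels[loc]);
          float g = green(video.pixels[loc]);
          float b = blue(video.pixels[loc]);
          // pixels closer to the mouse get brighter
          float adjust = map(dist(x, y, mouseX, mouseY), 0, 100, 2, 0);
          adjust = constrain(adjust, 0, 2);
          pixels[loc] = color(r * adjust, g * adjust, b * adjust);
        }
      }
      updatePixels();
    }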
Recorded video
Displaying recorded video follows much of the same structure as live video. Processing’s video library accepts most video file formats; for specifics, visit the Movie reference.
Step 1. Instead of a Capture object, declare a Movie object.
Step 2. Initialize the Movie object.
The only necessary arguments are this and the movie’s filename enclosed in quotes. The movie file should be stored in the sketch’s data directory.
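A sketch of these two steps (the filename "testmovie.mov" is a placeholder; substitute a movie file from your own data directory):

    Movie movie;

    void setup() {
      size(320, 240);
      movie = new Movie(this, "testmovie.mov");
    }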
Step 3. Start movie playing.
There are two options, play(), which plays the movie once, or loop(), which loops it continuously.
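    movie.loop();  // plays continuously; use movie.play() to play once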
Step 4. Read frame from movie.
Again, this is identical to capture. You can either check to see if a new frame is available, or use a callback function.
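Checking manually in draw():

    void draw() {
      if (movie.available()) {
        movie.read();
      }
    }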
Or:
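    void movieEvent(Movie movie) {
      movie.read();
    }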
Step 5. Display the movie.
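Just as with live video, this is one line of code:

    image(movie, 0, 0);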
The following code shows the program all together:
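    // the complete sketch, with a placeholder movie filename
    import processing.video.*;

    Movie movie;

    void setup() {
      size(320, 240);
      movie = new Movie(this, "testmovie.mov");
      movie.loop();
    }

    void movieEvent(Movie movie) {
      movie.read();
    }

    void draw() {
      image(movie, 0, 0);
    }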
Although Processing is by no means the most sophisticated environment for displaying and manipulating recorded video, there are some more advanced features available in the video library. There are functions for obtaining the duration (length measured in seconds) of a video, for speeding it up and slowing it down, and for jumping to a specific point in the video (among others). If you find that performance is sluggish and the video playback is choppy, I would suggest trying the P2D or P3D renderers.
Following is an example that makes use of jump() (jump immediately to a specific point of time within the video) and duration() (returns the total length of the movie in seconds). If mouseX equals 0, the video jumps to the beginning. If mouseX equals width, it jumps to the end. Any other value falls in between.
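A sketch of the draw() loop for that mapping, reusing the Movie setup and movieEvent() from above:

    void draw() {
      // map the mouse position to a time between 0 and the movie's duration
      float t = map(mouseX, 0, width, 0, movie.duration());
      movie.jump(t);
      image(movie, 0, 0);
    }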
Software mirrors
With small video cameras attached to more and more personal computers, developing software that manipulates an image in real-time is becoming increasingly popular. These types of applications are sometimes referred to as “mirrors,” as they provide a digital reflection of a viewer’s image. Processing’s extensive library of functions for graphics and its ability to capture from a camera in real-time make it an excellent environment for prototyping and experimenting with software mirrors.
You can apply basic image processing techniques to video images, reading and replacing the pixels one by one. Taking this idea one step further, you can read the pixels and apply the colors to shapes drawn onscreen.
I will begin with an example that captures a video at 80 × 60 pixels and renders it on a 640 × 480 window. For each pixel in the video, I will draw a rectangle eight pixels wide and eight pixels tall.
Let’s first write the program that displays the grid of rectangles. In the following example, the videoScale variable stores the ratio of the window’s pixel size to the grid’s size and for every column and row, a rectangle is drawn at an (x,y) location scaled and sized by videoScale.
Knowing that I want to have squares eight pixels wide by eight pixels high, I can calculate the number of columns as the width divided by eight and the number of rows as the height divided by eight.
- 640/8 = 80 columns
- 480/8 = 60 rows
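Here is a sketch of that grid (no video yet; every square is simply white):

    int videoScale = 8;  // size of each square in pixels
    int cols, rows;

    void setup() {
      size(640, 480);
      cols = width / videoScale;   // 80 columns
      rows = height / videoScale;  // 60 rows
    }

    void draw() {
      for (int i = 0; i < cols; i++) {
        for (int j = 0; j < rows; j++) {
          // scale the grid position up to pixel coordinates
          int x = i * videoScale;
          int y = j * videoScale;
          fill(255);
          stroke(0);
          rect(x, y, videoScale, videoScale);
        }
      }
    }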
For every square at column i and row j, I look up the color at pixel (i, j) in the video image and color it accordingly. See the following example, with the new parts marked by // NEW comments:
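    import processing.video.*;  // NEW

    int videoScale = 8;
    int cols, rows;
    Capture video;  // NEW

    void setup() {
      size(640, 480);
      cols = width / videoScale;
      rows = height / videoScale;
      // NEW: capture at the grid's resolution, 80 x 60
      // (assuming the camera can supply a frame at this small size)
      video = new Capture(this, cols, rows);
      video.start();
    }

    // NEW: read each new frame as it arrives
    void captureEvent(Capture video) {
      video.read();
    }

    void draw() {
      video.loadPixels();  // NEW
      for (int i = 0; i < cols; i++) {
        for (int j = 0; j < rows; j++) {
          int x = i * videoScale;
          int y = j * videoScale;
          // NEW: the square's color comes from pixel (i, j) of the video
          color c = video.pixels[i + j * video.width];
          fill(c);
          stroke(0);
          rect(x, y, videoScale, videoScale);
        }
      }
    }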
As you can see, expanding the simple grid system to include colors from video only requires a few additions. I have to declare and initialize the Capture object, read from it, and pull colors from the pixel array.
Less literal mappings of pixel colors to shapes in the grid can also be applied. In the following example, only the colors black and white are used. Squares are larger where brighter pixels in the video appear, and smaller for darker pixels.
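A sketch of that idea, mapping each pixel’s brightness to the size of a white square on a black background:

    import processing.video.*;

    int videoScale = 8;
    int cols, rows;
    Capture video;

    void setup() {
      size(640, 480);
      cols = width / videoScale;
      rows = height / videoScale;
      video = new Capture(this, cols, rows);
      video.start();
    }

    void captureEvent(Capture video) {
      video.read();
    }

    void draw() {
      background(0);
      video.loadPixels();
      rectMode(CENTER);
      noStroke();
      fill(255);
      for (int i = 0; i < cols; i++) {
        for (int j = 0; j < rows; j++) {
          int x = i * videoScale;
          int y = j * videoScale;
          // brighter pixels make bigger squares
          float b = brightness(video.pixels[i + j * video.width]);
          float sz = map(b, 0, 255, 0, videoScale);
          rect(x + videoScale/2, y + videoScale/2, sz, sz);
        }
      }
    }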
It’s often useful to think of developing software mirrors in two steps. This will also help you think beyond the more obvious mapping of pixels to shapes on a grid.
Step 1. Develop an interesting pattern that covers an entire window.
Step 2. Use a video’s pixels as a look-up table for coloring that pattern.
Say for Step 1, I write a program that scribbles a random line around the window. Here is my algorithm, written in pseudocode (a Processing version follows the list).
- Start with an (x,y) position at the center of the screen.
- Repeat forever the following:
—Pick a new (x,y), staying within the window.
—Draw a line from the old (x,y) to the new (x,y).
—Save the new (x,y).
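    // scribble: wander a line around the window
    float x, y;

    void setup() {
      size(320, 240);
      background(255);
      // start in the center
      x = width/2;
      y = height/2;
    }

    void draw() {
      // pick a new (x, y), staying within the window
      float newX = constrain(x + random(-20, 20), 0, width);
      float newY = constrain(y + random(-20, 20), 0, height);
      // draw a line from the old (x, y) to the new (x, y)
      stroke(0);
      line(x, y, newX, newY);
      // save the new (x, y)
      x = newX;
      y = newY;
    }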
Now that I have finished the pattern-generating sketch, I can change stroke() to set a color according to the video image. Note again the new lines of code, marked with // NEW comments, in the following:
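    import processing.video.*;  // NEW

    Capture video;  // NEW
    float x, y;

    void setup() {
      size(320, 240);
      background(255);
      x = width/2;
      y = height/2;
      video = new Capture(this, width, height);  // NEW
      video.start();                             // NEW
    }

    // NEW: read each frame as it arrives
    void captureEvent(Capture video) {
      video.read();
    }

    void draw() {
      float newX = constrain(x + random(-20, 20), 0, width);
      float newY = constrain(y + random(-20, 20), 0, height);
      // NEW: color the line with the video pixel at the new location
      // (indices are constrained to stay inside the pixel array)
      video.loadPixels();
      int loc = int(constrain(newX, 0, video.width - 1))
              + int(constrain(newY, 0, video.height - 1)) * video.width;
      stroke(video.pixels[loc]);
      line(x, y, newX, newY);
      x = newX;
      y = newY;
    }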