Video Frame Capture using the Video Experimenter Shield

Difficulty Level = 3

In addition to overlaying text and graphics onto a video signal, the Video Experimenter shield can capture image data from a video source and store it in the Arduino’s SRAM. The captured image can be displayed on a TV screen and can be overlaid onto the original video signal.

Believe it or not, the main loop of the program is this simple:

void loop() {
  tv.capture();  // grab a monochrome frame into the TVout frame buffer
  tv.resume();   // display the captured frame
}

NOTE: On Arduino IDE versions newer than 1.6, you may need to add a 1 ms delay at the end of loop(): just the line “delay(1);”.

For this project, we connect a video camera to the input of the Video Experimenter shield. The output select switch is set to “overlay” and the sync select jumper is set to “video input”. The video output is connected to an ordinary TV. When performing this experiment, turn the analog threshold potentiometer to the lowest value to start, then adjust it to select different brightness thresholds when capturing images.

When the output select switch is moved to “sync only”, the original video source is not included in the output; only the captured monochrome image is shown. You will need to adjust the threshold potentiometer (the one with the long shaft) to a higher value when the output switch is in this position. Experiment!

Monochrome video frame capture in Arduino memory

In the VideoFrameCapture.ino sketch below, we capture the image in memory by calling tv.capture(). When this method returns, a monochrome image is stored in the TVout frame buffer. The contents of the frame buffer are not displayed until we call tv.resume(). This project is the example “VideoFrameCapture” in the TVout library for Video Experimenter.

Here is the Arduino code. If you use a television with the PAL standard (that is, you are not in North America), change tv.begin(NTSC, W, H) to tv.begin(PAL, W, H).

#include <TVout.h>
#include <fontALL.h>
#define W 128
#define H 96

TVout tv;
unsigned char x,y;
char s[32];

void setup() {
  tv.begin(NTSC, W, H);
  initOverlay();
  initInputProcessing();
}

void initOverlay() {
  TCCR1A = 0;
  // Enable timer1.  ICES0 is set to 0 for falling edge detection on input capture pin.
  TCCR1B = _BV(CS10);

  // Enable input capture interrupt
  TIMSK1 |= _BV(ICIE1);

  // Enable external interrupt INT0 on pin 2 with falling edge.
  EIMSK = _BV(INT0);
  EICRA = _BV(ISC01);
}

void initInputProcessing() {
  // Analog Comparator setup
  ADCSRA &= ~_BV(ADEN);  // disable ADC
  ADCSRB |= _BV(ACME);   // enable ADC multiplexer
  ADMUX &= ~_BV(MUX0);   // select A2 for use as AIN1 (negative input of comparator)
  ADMUX |= _BV(MUX1);
  ADMUX &= ~_BV(MUX2);
  ACSR &= ~_BV(ACIE);    // disable analog comparator interrupts
  ACSR &= ~_BV(ACIC);    // disable analog comparator input capture
}

// Reset the scan line counter at the start of each frame (vertical sync on INT0).
ISR(INT0_vect) {
  display.scanLine = 0;
}

void loop() {
  tv.capture();  // grab a monochrome frame into the TVout frame buffer
  tv.resume();   // display the captured frame
  delay(1);      // needed on Arduino IDE versions newer than 1.6 (see note above)
}

Published by Michael, on March 20th, 2011 at 2:52 pm. Filed under: Arduino, Level 3, Video.

52 Responses to “Video Frame Capture using the Video Experimenter Shield”

  1. Great work!

    Comment by noonv on March 24, 2011 at 1:50 PM

  2. Hi, that’s interesting stuff. Can I use this shield for a line-following project? I just need to calculate the centre of gravity (COG) of a black line and make a robot (with an Arduino inside) follow that line, using a camera as the sensor. Please let me know.

    Comment by dzeus on March 25, 2011 at 8:59 AM

  3. @dzeus, yes that sounds like the kind of project you’d be able to do with the Video Experimenter.

    Comment by Michael on March 25, 2011 at 9:14 AM

  4. Amazing work Michael. I’ve had people tell me that this exact thing you did was not possible, and I always thought: why not? So great to see the success on video frame capture on the Arduino. I’m trying to cobble together (with various parts and shields) an open-source “Arduino-driven PixelVision Camera”. You got the central part, frame capture, done with this shield. Is there a little B/W small CMOS camera part that you’d recommend that could be easily connected to your shield? I know a number would work, but if you were to cherry-pick something from Digikey (or the like) so that there is an efficient (not overkill and relatively plug-and-play) camera component, would there be one you’d recommend? Thanks!

    Comment by Yury Gitman on May 5, 2011 at 8:11 AM

  5. Yuri, I have this simple CMOS camera (it’s color) and it works just fine with the Video Experimenter:
    The red wire connects to VIN (e.g. 9V) on the Arduino (because it uses less current if you power it with more than 5V), black wire to GND, and the yellow wire connects to the Video Experimenter “INPUT” pin on the breakout header at the right side of the board. The reason I included this breakout header on the board is so you could connect a small camera to input without needing the RCA jack connection.

    I’m sure there are plenty of other cameras that will work fine. As long as it is powered by 9V DC, and has a composite output, then you can use it as input to the Video Experimenter. Have fun!

    Comment by Michael on May 5, 2011 at 7:01 PM

  6. Hey,
    Amazing work. I’m curious if you think that one would be able to relay video information through the serial to the arduino using tvOut to display it to the screen without using your shield.

    Comment by grayson on October 17, 2011 at 7:13 PM

  7. @grayson, I’m not sure I understand the question. Video signal goes from where to where? What kind of video signal? Composite? How would you transmit that over a serial line?

    Comment by Michael on October 17, 2011 at 7:19 PM

  8. I guess what I mean is: if I can break an image down and send each pixel through serial, would the Arduino be able to process that fast enough? You say you have a “frame buffer” for your project here; can one accomplish this without the shield?

    Comment by grayson on November 7, 2011 at 8:36 PM

  9. If you are processing the video on a computer, then yes, you can try sending it over serial. At the 128×96 TVout resolution, there are 1536 bytes per frame, so you aren’t going to be able to send data fast enough for realtime video.

    The Video Experimenter is used when you are capturing the pixels using the Arduino, but it sounds like you want to do the video processing on a computer and send info to the Arduino to display via TVout.

    Comment by Michael on November 8, 2011 at 10:22 AM
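A quick arithmetic check of Michael’s point above, in plain C. The 115200 baud rate and the 8N1 framing (10 bits on the wire per data byte) are illustrative assumptions, not figures from the original.

```c
#include <stdint.h>

/* Bytes in one packed 1-bit-per-pixel frame. */
int bytes_per_frame(int w, int h) {
    return (w * h) / 8;
}

/* Upper bound on frames per second over a serial link, assuming
 * 8N1 framing: 10 bits on the wire for every data byte. */
double max_fps(long baud, int frame_bytes) {
    double bytes_per_sec = baud / 10.0;
    return bytes_per_sec / frame_bytes;
}
```

At 128×96 this gives the 1536 bytes per frame Michael mentions, and roughly 7.5 frames per second even at 115200 baud, which is why realtime transmission of every frame is out of reach.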

  10. what a great job the author did! Amazing!

    Comment by video snapshot on November 24, 2011 at 9:29 PM

  11. Hi I’m wondering if I can use the interpreted video data to detect movement in 5 vertical zones across the picture, and activate an led if movement is detected in that zone (sensing a change in a certain number of pixels). I have the shield and performed the above experiment (which was very cool btw), however I’m a newbie and don’t know to see/ interpret the data to write this program. Any suggestions?

    Comment by Casey on January 27, 2012 at 3:23 PM
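One way to approach the five-zone idea above is frame differencing: XOR the current captured frame against the previous one and bucket the changed pixels into five vertical zones. A plain-C sketch of just the counting logic, assuming TVout’s one-bit-per-pixel packing with the MSB as the leftmost pixel of each byte:

```c
#include <stdint.h>
#include <string.h>

#define W 128
#define H 96
#define ROW_BYTES (W / 8)   /* 16 bytes per row */
#define ZONES 5

/* Count changed pixels in each of five vertical zones by XORing the
 * current frame against the previous one, bit by bit. */
void zone_motion(const uint8_t *prev, const uint8_t *cur, int counts[ZONES]) {
    memset(counts, 0, ZONES * sizeof(int));
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            int idx = y * ROW_BYTES + x / 8;
            uint8_t mask = (uint8_t)(0x80 >> (x & 7));  /* MSB = leftmost pixel */
            if ((prev[idx] ^ cur[idx]) & mask) {
                counts[x * ZONES / W]++;                /* zone 0..4, left to right */
            }
        }
    }
}
```

On the shield you would copy the frame buffer after each tv.capture(), call this against the previous copy, and drive an LED when a zone’s count exceeds a chosen threshold.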

  12. Hi, I would like to ask if I can send the packets of the low-resolution captured video from the input of the Video Experimenter to a web server through an Arduino Uno rev2, instead of displaying it on a TV.


    Comment by Zuss on February 1, 2012 at 8:35 AM

  13. Hi, is it possible to do 640×480 color still images? I just need to take a picture of my hamster; I don’t need video.
    Would adding a memory shield with an xD card on it help?

    Comment by David d on May 29, 2012 at 2:11 PM

  14. No, not even close. Only low-res monochrome.

    Comment by Michael on May 29, 2012 at 2:39 PM

  15. Hi!
    Is it possible to capture the colors, even at a small resolution or low frame rate?
    It is not necessary to store the image; just set some outputs high or low depending on the color of a pixel while “scanning” the image (if I detect a blue pixel in the first row, I set the output high, for instance).
    In the worst case, I need only the “outer frame” of the image, that is, the first and last rows and the first and last columns.

    Thanks in advance.

    Comment by Luis Andrade on June 28, 2012 at 9:11 AM

  16. No, it can only detect brightness. No color detection is possible.

    Comment by Michael on June 28, 2012 at 9:29 AM

  17. Hi everyone,
    I’ve not used the video experimenter shield before, but I’ve purchased it for my project.

    With this shield, I understand I can overlay text and graphics onto video. My project gets data from a sensor connected to the Arduino and overlays the results as text onto video captured from a camera.

    Then I need to capture video frames of the overlaid video. Is this possible using the above program? Or can I capture video frames from the video source first and then do the overlaying of text?

    Comment by Justeen on August 27, 2012 at 1:29 AM

  18. First, I hope you understand that a captured frame is very low resolution and monochrome. To answer your question, though, I’d capture the frame and then manipulate the frame buffer directly to overlay the text. Capture, then use the TVout library to draw/print to the frame buffer.

    Comment by Michael on August 28, 2012 at 11:39 AM

  19. Good day! You are a great expert! I want to know: can you overlay an image saved in a separate file (for example, JPEG, GIF, PNG, etc.)? PS Sorry for my broken English, I’m a student from Russia)))

    Comment by Ruslan on September 2, 2012 at 12:49 PM

  20. Do you have any idea where can I get the Capture library files?
    error: ‘class TVout’ has no member named ‘capture’ :/ thanks.

    Comment by sy. on September 26, 2012 at 9:58 PM

  21. Download the Enhanced TVout library from here:

    Comment by Michael on September 26, 2012 at 10:22 PM

  22. I have downloaded the files but I still have the same error. And no other connection is required for the Video Experimenter beyond seating it on top of the Arduino Uno board, right?

    I tried to copy and paste the overlay program from the website, but the words and graphics don’t appear on the screen. Please advise.

    Comment by sy. on September 27, 2012 at 2:43 AM

  23. sy, if you properly install the library with the right structure, then start the Arduino IDE, you will get a clean compile and upload. Obviously the overlay program will have no effect if you have not successfully compiled it. Please use the support forum, not this blog, for technical problems.

    Comment by Michael on September 27, 2012 at 7:40 AM

  24. Hi Michael,
    I got the board, works great! Thanks. (Amazed at how well this works, especially the edge detection.) I am now trying to find the simplest way to stream the frame from the edge-detect program out via serial. Could you suggest an efficient way to do that?


    Comment by Em on October 16, 2012 at 5:15 PM

  25. Em, I think you should just try Serial communication. It’s not going to be fast enough for realtime transmission of all the frames, though, so maybe send some of the frames.

    Comment by Michael on October 17, 2012 at 7:36 AM

  26. thanks, that worked!

    Comment by Em on October 17, 2012 at 11:26 PM

  27. Is there a way to send video over say bluetooth?

    Comment by Rufus on December 15, 2012 at 9:06 PM

  28. Hi, I wanted to know if I could send the captured frames to a window on my PC… I thought it would be cool to see a monochrome image of my Xbox and record a window of it with my screen recorder…

    Comment by Mitch on December 31, 2012 at 12:44 PM

  29. This may be what I am looking for. I would love to display the monochrome video onto an LED matrix, maybe like the PeggyII or the 16×32 matrix from Adafruit. Do you know of any attempts at a similar project? At Evilmadscientists Jay Clegg shows a video Peggy modification to feed a video stream serially through the Arduino to the Peggy matrix. The video feed is from a PC, not from a camera.
    Any suggestions are greatly appreciated.

    Comment by martin on March 12, 2013 at 7:09 PM

  30. martin, did you see this project?

    Comment by Michael on March 12, 2013 at 7:32 PM

  31. Hi,
    I have a requirement to continuously capture images, process them to detect a growing circle, and trigger a 24-volt signal to an external device when the circle matches some predefined configuration. Is it possible with this board? Do I require a PC for this, or can it work as a standalone device able to send the required output signal?
    If yes, can you please suggest the exact hardware required and some more info?

    Comment by Anjum on September 10, 2013 at 7:27 AM

  32. hello

    Is there a way of triggering something in the sketch depending on the result of tv.capture()?
    I mean, depending on the average video level, get a warning that the aperture is not good, for example.
    Thanks for your work.


    Comment by boogui on February 24, 2014 at 4:32 AM

  33. Yes, you could count the number of “on” pixels in the capture array, or examine the captured frame however you’d like and then take action on it.

    Comment by Michael on February 24, 2014 at 8:19 AM

  34. hello mickael
    thanks for the reply.
    Sorry, but I didn’t find where I can get the capture array. Will it be 0x00 for black and 0xFF for white?
    Do you have a clue how to get a percentage of on and off pixels?
    Sorry for these maybe obvious questions.


    Comment by boogui on February 25, 2014 at 3:40 AM

  35. You can just call getPixel(x,y) for each pixel and compute a percentage based on total pixels (e.g. 12288 pixels for 128×96 resolution). See all the projects on the Video Experimenter web site.

    Comment by Michael on February 25, 2014 at 11:50 AM
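The percentage Michael describes is just a count of set bits over the frame buffer. A plain-C sketch of the same computation done directly on a packed buffer (the layout is assumed to match TVout’s one bit per pixel):

```c
#include <stdint.h>

/* Percentage of "on" pixels in a packed 1-bit-per-pixel frame buffer.
 * Counting set bits byte-by-byte gives the same answer as calling
 * getPixel(x,y) for every pixel, just without the per-pixel overhead. */
double on_percentage(const uint8_t *buf, int w, int h) {
    long on = 0;
    long total = (long)w * h;
    for (long i = 0; i < total / 8; i++) {
        uint8_t b = buf[i];
        while (b) {                 /* Kernighan's trick: clear lowest set bit */
            b &= (uint8_t)(b - 1);
            on++;
        }
    }
    return 100.0 * on / total;
}
```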

  36. thanks a lot for your help, i’ll give it a try.

    Comment by boogui on March 2, 2014 at 3:47 PM

  37. Hi Michael,
    I have the shield and it works great; I’m using it to track stars and pilot a telescope mount. Can you tell me if there is a way to “adjust the threshold potentiometer” in code?
    Maybe not. I’m trying to find a way to change this threshold, and I wanted a clever wired solution instead of using a servo to turn it remotely. I’m not an electronics person; I guess this potentiometer changes a voltage on the board, right? Could I use an analog output from the Arduino to provide a 0 to 5V input to the shield, or will I burn it all? ;-)
    thanks in advance for your answer

    Comment by parallax68 on August 8, 2014 at 5:12 PM

  38. You may be able to connect a digital potentiometer, an IC that allows you to control a pot digitally. But doing I2C or SPI while doing video is not going to work well unless the video is stopped while adjusting the digital pot.
    No, you can’t use an analog output, because the Arduino does not have a true analog output; it is just PWM output.

    Comment by Michael on August 8, 2014 at 5:52 PM

  39. Hi Michael,
    received an MCP4131 10K digital pot
    will try it soon

    Comment by parallax68 on December 19, 2014 at 8:20 AM
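For anyone following parallax68’s digital-pot route: the MCP4131 takes a 16-bit SPI frame. The framing below follows my reading of the MCP413X datasheet (register address in bits 15:12, command in bits 11:10, 00 = write data), so verify against the datasheet before wiring anything:

```c
#include <stdint.h>

/* Build the two SPI bytes that set wiper 0 of an MCP4131 to a value
 * in 0..128. Bits 15:12 = register address (0x0 = wiper 0),
 * bits 11:10 = command (00 = write data), bits 9:0 = data. */
void mcp4131_write_frame(uint8_t value, uint8_t out[2]) {
    uint16_t frame = (uint16_t)value;  /* address 0 and command 00 are all zero bits */
    out[0] = (uint8_t)(frame >> 8);    /* command byte */
    out[1] = (uint8_t)(frame & 0xFF);  /* data byte */
}
```

On the Arduino you would pull CS low, SPI.transfer() the two bytes, and raise CS again.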

  40. Hello Michael,
    I am having trouble with this — How do I retrieve the monochrome image
    from the SRAM frame buffer, and move it to Flash Memory for storage.
    I would like to compare it with subsequent monochrome images.

    Thank you.

    Comment by alamai on April 20, 2015 at 11:08 AM

  41. You cannot write to flash memory on the Arduino. It is read only.

    Comment by Michael on April 20, 2015 at 11:35 AM

  42. Thanks for the reply.
    My other question is, how do I read/retrieve the monochrome image
    from the frame buffer.

    Thank you.

    Comment by alamai on April 20, 2015 at 12:47 PM

  43. The frame buffer is addressable as display.screen[]. It is simply an array of bytes, one bit per pixel, starting at the upper left of the screen. A 128×96 resolution screen has rows that are 16 bytes wide.

    Comment by Michael on April 20, 2015 at 1:07 PM
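Michael’s byte/bit arithmetic can be written out explicitly. A plain-C sketch, assuming TVout’s MSB-first packing (bit 7 of each byte is the leftmost pixel):

```c
#include <stdint.h>

/* Read pixel (x, y) from a packed 1-bit-per-pixel screen buffer with
 * row_bytes bytes per row (16 for a 128-pixel-wide screen).
 * Returns 1 for a white pixel, 0 for black. */
int read_pixel(const uint8_t *screen, int x, int y, int row_bytes) {
    int idx = y * row_bytes + x / 8;         /* which byte */
    return (screen[idx] >> (7 - (x & 7))) & 1;  /* which bit, MSB first */
}
```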

  44. I have the snippet of code needed to implement Floyd-Steinberg dithering on the video output. It will give a better picture than basic thresholding. In order to do so, I would need to loop through the buffer array and modify the portion where the image undergoes thresholding. Which line of the .cpp file (TVout.cpp?) tells the pixel to either become black or white? I can post the changes if it works and shows an improved picture.

    Comment by Ryan on November 12, 2015 at 2:35 AM

  45. Low-res monochrome output is all that can be achieved with the Video Experimenter. Pixels can only be white or black and cannot be any smaller. Memory constraints and timing constraints prevent dithering, grayscale, etc. (Trust me.) The video output is the assembly code render_line6c in video_gen.cpp. This code is in assembly because there are only 6 CPU cycles available to read the analog comparator and output the pixel voltage. No time for anything else.

    Comment by Michael on November 12, 2015 at 9:08 AM

  46. Hi Fenx, do you mean to say you want to use the Arduino to connect to the school network via a wireless connection? What do you mean by connect: what do you want to do?

    Comment by Alex on December 13, 2015 at 4:14 PM

  47. Good morning! Great work!
    Is it possible to increase the resolution both vertically and horizontally? What is the maximum possible resolution if the CPU runs at its maximum frequency of 20 MHz for the 328p? Of course, assuming the SRAM is sufficient, for example by changing the data retention algorithm. If I understand correctly, you wrote that one pixel requires 6 CPU cycles to take from the comparator and write to SRAM. Then we get about 133 pixels horizontally at 16 MHz (50 µs line / 0.0625 µs per cycle / 6 cycles per pixel), or 166 pixels at 20 MHz. Is this calculation correct? Is it possible to reduce the number of cycles needed to obtain and record one pixel to 4 CPU cycles?


    Comment by Gennady on April 12, 2016 at 1:38 AM

  48. Memory is the constraint on resolution, not clock speed. Increasing clock speed will not allow greater resolution.

    Comment by Michael on April 12, 2016 at 8:59 AM

  49. Good afternoon!
    I understand that; it is what I wrote above. But if I have to store, for example, only one or five TV lines instead of the whole frame, there will be enough memory for more than 128 pixels per line. Therefore, I repeat my question: how many pixels per row can be captured at a 20 MHz CPU, given the speed of your code?

    Comment by Gennady on April 19, 2016 at 8:06 AM

  50. A 128×96 frame buffer takes 1536 bytes: each 128-pixel row is 16 bytes, and there are 96 rows. If you use only a few lines you will have plenty of memory. You will need to experiment yourself. I did not write the TVout library, so it’s not my code.

    Comment by Michael on April 19, 2016 at 8:55 AM

  51. Hello! Would it be possible to use this video shield to overlay an array of 16-bit-color pixels that were generated by the Arduino (not an Uno) onto the captured image frame? Memory and processing power shouldn’t be an issue.

    Comment by Zandman 1 on October 18, 2017 at 6:07 PM

  52. Oops, my bad – instead of the captured image frame I meant the actual video feed.

    Comment by Zandman 1 on October 18, 2017 at 6:11 PM
