Difficulty Level = 3
In addition to overlaying text and graphics onto a video signal, the Video Experimenter shield can also be used to capture image data from a video source and store it in the Arduino’s SRAM. The captured image can be displayed on a TV screen and can be overlaid onto the original video signal.
Believe it or not, the main loop of the program is this simple:
void loop() {
  tv.capture();
  tv.resume();
  tv.delay_frame(5);
}
NOTE: On Arduino IDE versions newer than 1.6, you may need to add a 1 ms delay at the end of loop(). Just add the line “delay(1);”.
For this project, we connect a video camera to the input of the Video Experimenter shield. The output select switch is set to “overlay” and the sync select jumper is set to “video input”. The video output is connected to an ordinary TV. When performing this experiment, turn the analog threshold potentiometer to the lowest value to start, then adjust it to select different brightness thresholds when capturing images.
By moving the output select switch to “sync only”, the original video source is not included in the output; only the captured monochrome image is displayed. You will need to adjust the threshold potentiometer (the one with the long shaft) to a higher value when the output switch is in this position. Experiment!
In the VideoFrameCapture.ino sketch below, we capture the image in memory by calling tv.capture(). When this method returns, a monochrome image is stored in the TVout frame buffer. The contents of the frame buffer are not displayed until we call tv.resume(). This project is the example “VideoFrameCapture” in the TVout library for Video Experimenter.
Here is the Arduino code. If you use a television with the PAL standard (that is, you are not in North America), change tv.begin(NTSC, W, H) to tv.begin(PAL, W, H).
#include <TVout.h>
#include <fontALL.h>

#define W 128
#define H 96

TVout tv;
unsigned char x, y;
char s[32];

void setup() {
  tv.begin(NTSC, W, H);
  initOverlay();
  initInputProcessing();
  tv.select_font(font4x6);
  tv.fill(0);
}

void initOverlay() {
  TCCR1A = 0;
  // Enable timer1. ICES0 is set to 0 for falling edge detection on input capture pin.
  TCCR1B = _BV(CS10);

  // Enable input capture interrupt
  TIMSK1 |= _BV(ICIE1);

  // Enable external interrupt INT0 on pin 2 with falling edge.
  EIMSK = _BV(INT0);
  EICRA = _BV(ISC01);
}

void initInputProcessing() {
  // Analog Comparator setup
  ADCSRA &= ~_BV(ADEN);  // disable ADC
  ADCSRB |= _BV(ACME);   // enable ADC multiplexer
  ADMUX &= ~_BV(MUX0);   // select A2 for use as AIN1 (negative voltage of comparator)
  ADMUX |= _BV(MUX1);
  ADMUX &= ~_BV(MUX2);
  ACSR &= ~_BV(ACIE);    // disable analog comparator interrupts
  ACSR &= ~_BV(ACIC);    // disable analog comparator input capture
}

ISR(INT0_vect) {
  display.scanLine = 0;
}

void loop() {
  tv.capture();
  //tv.fill(INVERT);
  tv.resume();
  tv.delay_frame(5);
  delay(1);
}
Great work!
Hi, that’s interesting stuff. Can I use this shield for a line-following project? I just need to calculate the centre of gravity (COG) of a black line and make a robot (with an Arduino inside) follow that line using a camera as the sensor. Please let me know.
Thanks
@dzeus, yes that sounds like the kind of project you’d be able to do with the Video Experimenter.
Amazing work Michael. I’ve had people tell me that this exact thing you did was not possible, and I always wondered why not. So great to see the success of video frame capture on Arduino. I’m trying to cobble together (with various parts and shields) an open-source “Arduino driven PixelVision Camera”. You got the central part, frame capture, done with this shield. Is there a small B/W CMOS camera that you’d recommend that could be easily connected to your shield? I know a number would work, but if you were to cherry-pick something from Digikey (or the like) so that there is an efficient (not overkill and relatively plug-and-play) camera component, would there be one you’d recommend? Thanks!
Yuri, I have this simple CMOS camera (it’s color) and it works just fine with the Video Experimenter: http://www.sparkfun.com/products/8739.
The red wire connects to VIN (e.g. 9V) on the Arduino (because it uses less current if you power it with more than 5V), black wire to GND, and the yellow wire connects to the Video Experimenter “INPUT” pin on the breakout header at the right side of the board. The reason I included this breakout header on the board is so you could connect a small camera to input without needing the RCA jack connection.
I’m sure there are plenty of other cameras that will work fine. As long as it is powered by 9V DC, and has a composite output, then you can use it as input to the Video Experimenter. Have fun!
Hey,
Amazing work. I’m curious whether you think one would be able to relay video information through serial to the Arduino, using TVout to display it on the screen, without using your shield.
@grayson, I’m not sure I understand the question. Video signal goes from where to where? What kind of video signal? Composite? How would you transmit that over a serial line?
I guess what I mean is: if I can break an image down and send each pixel through serial, would the Arduino be able to process that fast enough? You say you have a “frame buffer” for your project here; can one accomplish this without the shield?
If you are processing the video on a computer, then yes, you can try sending it over serial. At the 128×96 TVout resolution, there are 1536 bytes per frame, so you aren’t going to be able to send data fast enough for realtime video.
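To put rough numbers on that: at 115200 baud, serial carries about 11,520 bytes per second, so one 1536-byte frame takes roughly 133 ms to send. That caps you at around 7 frames per second before any processing overhead, well short of NTSC’s roughly 30 frames per second.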
The Video Experimenter is used when you are capturing the pixels with the Arduino, but it sounds like you want to do the video processing on a computer and send info to the Arduino to display via TVout.
What a great job the author did! Amazing!
Hi, I’m wondering if I can use the interpreted video data to detect movement in 5 vertical zones across the picture, and activate an LED if movement is detected in that zone (sensing a change in a certain number of pixels). I have the shield and performed the above experiment (which was very cool, btw), but I’m a newbie and don’t know how to see/interpret the data to write this program. Any suggestions?
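One rough way to approach this (a sketch, not tested code): count the bright pixels in each vertical zone of the captured frame and compare the counts with the previous frame. This assumes the same setup as the sketch above, plus direct access to the display.screen frame buffer described later in this thread; the LED pins and the CHANGE_THRESHOLD value are hypothetical and would need tuning.

#define ZONES 5
#define CHANGE_THRESHOLD 40                   // hypothetical; tune experimentally

unsigned int prevCount[ZONES];
const byte ledPin[ZONES] = {3, 4, 5, 8, 10};  // hypothetical pins; avoid pins used by TVout and the shield

void loop() {
  tv.capture();                               // grab a monochrome frame
  unsigned int count[ZONES] = {0};
  for (int y = 0; y < H; y++) {
    for (int x = 0; x < W; x++) {
      // 16 bytes per row; bit 7 of each byte is the leftmost pixel
      if (display.screen[y * (W / 8) + (x >> 3)] & (0x80 >> (x & 7))) {
        count[(x * ZONES) / W]++;             // bucket the pixel into its vertical zone
      }
    }
  }
  for (int z = 0; z < ZONES; z++) {
    unsigned int diff = (count[z] > prevCount[z]) ? count[z] - prevCount[z]
                                                  : prevCount[z] - count[z];
    digitalWrite(ledPin[z], diff > CHANGE_THRESHOLD ? HIGH : LOW);  // movement in this zone
    prevCount[z] = count[z];
  }
  tv.resume();
  tv.delay_frame(5);
}

setup() would be the same as in the sketch above, with a pinMode(ledPin[z], OUTPUT) added for each LED.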
Hi, I would like to ask if I can send packets of the low-resolution captured video from the input of the Video Experimenter to a web server through an Arduino Uno rev2, instead of displaying it on a TV.
Thanks
Zuss
Hi, is it possible to do 640×480 color still images? I just need to take a picture of my hamster; I don’t need video.
Would adding a memory shield with an xD card on it help?
No, not even close. Only low-res monochrome.
Hi!
Is it possible to capture the colors? Even at a small resolution or a low frame rate?
It is not necessary to store the image; I would just set some outputs high or low depending on the color of a pixel (serially, while “scanning” the image: if I detect a blue pixel in the first row, I set the output high, for instance).
In the worst case, I need only the “outer frame” of the image, that is, the first and last rows and the first and last columns.
Thanks in advance.
Regards,
Luís
No, it can only detect brightness. No color detection is possible.
Hi everyone,
I’ve not used the video experimenter shield before, but I’ve purchased it for my project.
With this shield, I understand I can overlay text and graphics onto video. My project gets data from a sensor connected to the Arduino and overlays the results as text onto video captured from a camera.
Then I need to capture video frames of the overlaid video. Is this possible using the above program? Or can I capture video frames from the video source first and then do the overlaying of text?
First, I hope you understand that a captured frame is very low resolution and monochrome. To answer your question, though, I’d capture the frame and then manipulate the frame buffer directly to overlay the text. Capture, then use the TVout library to draw/print to the frame buffer.
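A minimal sketch of that order of operations, reusing the globals from the sketch above (the analogRead(A0) sensor is purely a placeholder for whatever sensor you have):

void loop() {
  tv.capture();                                // freeze a monochrome frame in the buffer
  tv.resume();                                 // display the frame buffer again
  sprintf(s, "SENSOR: %d", analogRead(A0));    // hypothetical sensor reading
  tv.print(0, 0, s);                           // draw text directly onto the captured frame
  tv.delay_frame(60);                          // hold the annotated frame for about a second
}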
Good day! You are a great expert! I want to know: can I overlay an image saved in a separate file (for example JPEG, GIF, PNG, etc.)? P.S. Sorry for my broken English, I’m a student from Russia )))
Do you have any idea where I can get the capture library files?
I get: error: ‘class TVout’ has no member named ‘capture’ :/ Thanks.
Download the Enhanced TVout library from here:
http://nootropicdesign.com/ve/
I have downloaded the files but I still have the same error. And no other connection is required for the Video Experimenter, it just rests on top of the Arduino Uno board, right?
I tried to copy and paste the overlay program from the website, but the words and graphics don’t appear on the screen. Please advise.
sy, if you properly install the library with the right structure and then start the Arduino IDE, you will get a clean compile and upload. Obviously the overlay program will have no effect if you have not successfully compiled it. Please use the support forum, not this blog, for technical problems.
Hi Michael,
I got the board and it works great, thanks! (Amazed at how well this works, especially the edge detection.) I am now trying to find the simplest way to stream the frame from the edge-detect program out via serial. Could you suggest an efficient way to do that?
thanks!
Em
Em, I think you should just try serial communication. It’s not going to be fast enough for realtime transmission of all the frames, though, so maybe send some of the frames.
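A very rough sketch of the idea, assuming Serial.begin(115200) in setup() and direct access to the display.screen buffer (one bit per pixel, 1536 bytes at 128×96). The 0xA5 start-of-frame marker is an arbitrary choice, and serial traffic while video is active may glitch the picture, so expect to experiment:

void loop() {
  tv.capture();
  tv.resume();
  Serial.write(0xA5);                        // arbitrary start-of-frame marker
  Serial.write(display.screen, W * H / 8);   // 1536 bytes, one bit per pixel
  tv.delay_frame(30);                        // skip frames; serial can't keep up with 30 fps
}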
thanks, that worked!
Is there a way to send video over, say, Bluetooth?
Hi, I wanted to know if I could send the captured frames to a window on my PC… I thought it would be cool to see a monochrome image of my Xbox and record a window of it with my screen recorder…
This may be what I am looking for. I would love to display the monochrome video on an LED matrix, maybe like the Peggy II or the 16×32 matrix from Adafruit. Do you know of any attempts at a similar project? At Evil Mad Scientist, Jay Clegg shows a video of a Peggy modification that feeds a video stream serially through the Arduino to the Peggy matrix. The video feed is from a PC, not from a camera.
Any suggestions are greatly appreciated.
martin, did you see this project?
http://nootropicdesign.com/projectlab/2012/01/22/displaying-android-video-on-led-matrix/
Hi,
I have a requirement to continuously capture images, process each image to detect a growing circle, and trigger a 24-volt signal to an external device when the circle matches some predefined configuration. Is this possible with this board? Do I require a PC, or can it work as a standalone device able to send the required output signal?
If yes, can you please suggest the exact hardware required and some more info?
Hello,
Is there a way of triggering something in the sketch depending on the result of tv.capture()?
I mean, depending on the average video level, getting a warning that the aperture is not good, for example.
Thanks for your work.
Regards.
Yes, you could count the number of “on” pixels in the capture array, or examine the captured frame however you’d like and then take action on it.
Hello Michael,
Thanks for the reply.
Sorry, but I didn’t find where I can get the capture array. Will it be 0x00 for black and 0xFF for white?
Do you have a clue how to get a percentage of on and off pixels?
Sorry for these maybe obvious questions.
Regards
You can just call getPixel(x,y) for each pixel and compute a percentage based on the total number of pixels (e.g. 12288 pixels at 128×96 resolution). See all the projects on the Video Experimenter web site: http://nootropicdesign.com/ve
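A minimal sketch of that calculation, reading the frame buffer directly instead of calling getPixel for every pixel (this assumes the display.screen layout described further down this thread: 1536 bytes, one bit per pixel):

// returns the percentage (0-100) of "on" pixels in a captured 128x96 frame
int percentOn() {
  unsigned int on = 0;
  for (unsigned int i = 0; i < (W * H) / 8; i++) {
    byte b = display.screen[i];
    while (b) {              // count the set bits in this byte
      on += b & 1;
      b >>= 1;
    }
  }
  return (on * 100UL) / ((unsigned long)W * H);
}

Call it between tv.capture() and tv.resume(), then compare the result against a threshold to trigger the aperture warning.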
Thanks a lot for your help, I’ll give it a try.
Hi Michael,
I have the shield and it works great. I’m using it to track stars and pilot a telescope mount. Can you tell me if there is a way to “adjust the threshold potentiometer” in code?
Maybe not. I’m trying to find a way to change this threshold, and I wanted a clever wired solution instead of using a servo to turn the pot remotely. I’m not an electronics person; I guess this potentiometer changes a voltage on the board, right? Could I use an analog output from the Arduino to provide a 0 to 5V input on the shield, or will I burn it all? ;-)
Thanks in advance for your answer
Seb
You may be able to connect a digital potentiometer, an IC that lets you control a pot digitally. But doing I2C or SPI while generating video is not going to work well unless the video is stopped while adjusting the digital pot.
No, you can’t use an analog output, because the Arduino does not have a true analog output; it is just PWM.
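For reference, here is a minimal, untested sketch of driving an MCP41xx-series SPI digital pot (such as the MCP4131 mentioned below), assuming chip select wired to pin 10 and, per the caveat above, that the wiper is only adjusted while video is stopped:

#include <SPI.h>

const int csPin = 10;             // hypothetical chip-select wiring

void setupPot() {
  pinMode(csPin, OUTPUT);
  digitalWrite(csPin, HIGH);
  SPI.begin();
}

void setWiper(byte value) {       // 0-128 on the MCP4131's 7-bit wiper
  digitalWrite(csPin, LOW);
  SPI.transfer(0x00);             // command byte: write to volatile wiper register 0
  SPI.transfer(value);
  digitalWrite(csPin, HIGH);
}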
Hi Michael,
received an MCP4131 10K digital pot
will try it soon
brgs
Seb
Hello Michael,
I am having trouble with this: how do I retrieve the monochrome image from the SRAM frame buffer and move it to flash memory for storage?
I would like to compare it with subsequent monochrome images.
Thank you.
You cannot write to flash memory on the Arduino. It is read only.
Thanks for the reply.
My other question is: how do I read/retrieve the monochrome image from the frame buffer?
Thank you.
The frame buffer is addressable as display.screen[]. It is simply an array of bytes, one bit per pixel, starting at the upper left of the screen. A 128×96 resolution screen has rows that are 16 bytes wide.
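In code, that layout means a single-pixel test looks something like this (a sketch for 128×96, where each row is 16 bytes):

// read pixel (x, y) straight from the TVout frame buffer
boolean pixelAt(byte x, byte y) {
  // bit 7 of each byte is the leftmost pixel of its group of 8
  return display.screen[y * 16 + (x >> 3)] & (0x80 >> (x & 7));
}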
I have the snippet of code needed to implement Floyd-Steinberg dithering on the video output. It will give a better picture than basic thresholding. In order to do so, I would need to loop through the buffer array and modify the portion where the image undergoes thresholding. Which line of the .cpp file (TVout.cpp?) tells the pixel to either become black or white? I can post the changes if it works and shows an improved picture.
Low-res monochrome output is all that can be achieved with the Video Experimenter. Pixels can only be white or black and cannot be any smaller. Memory constraints and timing constraints prevent dithering, grayscale, etc. (Trust me.) The video output is the assembly code render_line6c in video_gen.cpp. This code is in assembly because there are only 6 CPU cycles available to read the analog comparator and output the pixel voltage. No time for anything else.
Hi Fenx, do you mean to say you want to use the Arduino to connect to the social network via a wireless connection? What do you mean by connect: what do you want to do?
Good morning! Great work!
Is it possible to increase the resolution both vertically and horizontally? What is the maximum resolution if you run the CPU at its maximum frequency of 20 MHz on the 328p? Of course, only if the SRAM is sufficient, for example by changing the data retention algorithm. If I understood correctly, you wrote that one pixel requires 6 CPU cycles to read from the comparator and write to SRAM. Then we get about 133 pixels horizontally at 16 MHz (50 µs line / 0.0625 µs per cycle / 6 cycles per pixel), or 166 pixels at 20 MHz. Is this calculation correct? Is it possible to reduce the number of cycles needed to acquire and store one pixel to 4 CPU cycles?
Thanks!
Memory is the constraint on resolution, not clock speed. Increasing clock speed will not allow greater resolution.
Good afternoon!
I understand that; it is what I wrote above. But if I only need to store, for example, one or five TV lines of the whole frame, the memory will be enough to hold more than 128 pixels per line. Therefore, I repeat my question: how many pixels per row can be captured at a 20 MHz CPU clock, given the speed of your code?
Thanks!
A 128×96 frame buffer takes 1536 bytes: 128 pixels across is 16 bytes per row, times 96 rows vertically. If you use only a few lines you will have plenty of memory. You will need to experiment yourself. I did not write the TVout library, so it’s not my code.
Hello! Would it be possible to use this video shield to overlay an array of 16-bit-color pixels that were generated by the Arduino (not an Uno) onto the captured image frame? Memory and processing power shouldn’t be an issue.
Oops, my bad – instead of the captured image frame I meant the actual video feed.