Difficulty Level = 5
The Video Experimenter shield can give your Arduino the gift of sight. In the Video Frame Capture project, I showed how to capture images from a composite video source and display them on a TV. We can take this concept further by processing the contents of the captured image to implement object tracking and edge detection.
The setup is the same as when capturing video frames: a video source like a camera is connected to the video input. The output select switch is set to “overlay”, and the sync select jumper is set to “video input”. Set the analog threshold potentiometer to the lowest setting.
Object Tracking
Here is an Arduino sketch that captures a video frame and then computes the bounding box of the brightest region in the image.
This project is the example “ObjectTracking” in the TVout library for Video Experimenter. The code first calls tv.capture() to capture a frame. Then it computes a simple bounding box for the brightest spot in the image. After computing the location of the brightest area, a box is drawn and the coordinates of the box are printed to the TVout frame buffer. Finally, tv.resume() is called to resume the output and display the box and coordinates on the screen.
Keep in mind that there is no need to display any output at all — we just do this so we can see what’s going on. If you have a robot with a camera on it, you can detect/track objects with Arduino code, and the output of the Video Experimenter doesn’t need to be connected to anything (although the analog threshold potentiometer would probably need some adjustment).
If you use a television with the PAL standard (that is, you are not in North America), change tv.begin(NTSC, W, H) to tv.begin(PAL, W, H).
#include <TVout.h>
#include <fontALL.h>

#define W 128
#define H 96

TVout tv;
unsigned char x, y;
unsigned char c;
unsigned char minX, minY, maxX, maxY;
char s[32];

void setup() {
  tv.begin(NTSC, W, H);
  initOverlay();
  initInputProcessing();
  tv.select_font(font4x6);
  tv.fill(0);
}

void initOverlay() {
  TCCR1A = 0;
  // Enable timer1. ICES0 is set to 0 for falling edge detection on input capture pin.
  TCCR1B = _BV(CS10);

  // Enable input capture interrupt
  TIMSK1 |= _BV(ICIE1);

  // Enable external interrupt INT0 on pin 2 with falling edge.
  EIMSK = _BV(INT0);
  EICRA = _BV(ISC01);
}

void initInputProcessing() {
  // Analog Comparator setup
  ADCSRA &= ~_BV(ADEN);  // disable ADC
  ADCSRB |= _BV(ACME);   // enable ADC multiplexer
  ADMUX &= ~_BV(MUX0);   // select A2 for use as AIN1 (negative voltage of comparator)
  ADMUX |= _BV(MUX1);
  ADMUX &= ~_BV(MUX2);
  ACSR &= ~_BV(ACIE);    // disable analog comparator interrupts
  ACSR &= ~_BV(ACIC);    // disable analog comparator input capture
}

// Required
ISR(INT0_vect) {
  display.scanLine = 0;
}

void loop() {
  tv.capture();

  // uncomment if tracking dark objects
  //tv.fill(INVERT);

  // compute bounding box
  minX = W;
  minY = H;
  maxX = 0;
  maxY = 0;
  boolean found = false;
  for (int y = 0; y < H; y++) {
    for (int x = 0; x < W; x++) {
      c = tv.get_pixel(x, y);
      if (c == 1) {
        found = true;
        if (x < minX) { minX = x; }
        if (x > maxX) { maxX = x; }
        if (y < minY) { minY = y; }
        if (y > maxY) { maxY = y; }
      }
    }
  }

  // draw bounding box
  tv.fill(0);
  if (found) {
    tv.draw_line(minX, minY, maxX, minY, 1);
    tv.draw_line(minX, minY, minX, maxY, 1);
    tv.draw_line(maxX, minY, maxX, maxY, 1);
    tv.draw_line(minX, maxY, maxX, maxY, 1);
    sprintf(s, "%d, %d", ((maxX + minX) / 2), ((maxY + minY) / 2));
    tv.print(0, 0, s);
  } else {
    tv.print(0, 0, "not found");
  }
  tv.resume();
  tv.delay_frame(5);
}
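The bounding-box scan itself does not depend on the TVout hardware setup, so it can be exercised on a desktop with a plain array standing in for the frame buffer. This extracted sketch (the function name and one-byte-per-pixel layout are mine, not the library's) mirrors the loop above:

```cpp
const int W = 128;
const int H = 96;

// Scan a W x H monochrome frame (one byte per pixel here, standing in
// for tv.get_pixel) and compute the bounding box of all "on" pixels.
// Returns false if no pixel is set, mirroring the "not found" case.
bool boundingBox(unsigned char frame[H][W],
                 int &minX, int &minY, int &maxX, int &maxY) {
  minX = W; minY = H; maxX = 0; maxY = 0;
  bool found = false;
  for (int y = 0; y < H; y++) {
    for (int x = 0; x < W; x++) {
      if (frame[y][x] == 1) {
        found = true;
        if (x < minX) { minX = x; }
        if (x > maxX) { maxX = x; }
        if (y < minY) { minY = y; }
        if (y > maxY) { maxY = y; }
      }
    }
  }
  return found;
}
```

The center coordinate printed by the sketch is then just ((maxX + minX) / 2, (maxY + minY) / 2).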
What if you want to find the darkest area in an image instead of the brightest? That’s easy — just invert the captured image before processing it. Simply call tv.fill(INVERT).
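Under the hood the TVout frame buffer packs eight 1-bit pixels per byte, so inverting the whole image amounts to XORing every byte with 0xFF. A minimal stand-alone sketch of that idea (the function name and buffer are mine, for illustration, not the library's internals):

```cpp
// Invert a packed 1-bit-per-pixel buffer: every 0 bit becomes 1 and
// vice versa, so the darkest region becomes the brightest.
void invertBuffer(unsigned char *buf, int len) {
  for (int i = 0; i < len; i++) {
    buf[i] ^= 0xFF;
  }
}
```

After the inversion, the same brightest-region bounding-box code finds what was originally the darkest region.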
Edge Detection
The Arduino is powerful enough to do more sophisticated image processing. The following sketch captures a frame, then performs an edge detection algorithm on the image. The result is the outline of the brightest (or darkest) parts of the image. This could be useful in object recognition applications or robotics. The algorithm is quite simple, especially with a monochrome image, and is described in this survey of edge detection algorithms as “Local Threshold and Boolean Function Based Edge Detection”.
This project is the example “EdgeDetection” in the TVout library for Video Experimenter.
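The boolean-function algorithm referenced above is more involved, but the core idea of edge extraction on a 1-bit image can be illustrated with a much simpler rule: keep a pixel only if it is set and at least one of its four neighbours is clear. This is a rough stand-in for the EdgeDetection example, not its actual code:

```cpp
const int W = 128;
const int H = 96;

// Keep only pixels on a light/dark boundary: a pixel survives if it is
// set but at least one 4-connected neighbour (or the image border) is
// not. Everything inside a solid bright region is cleared, leaving an
// outline.
void edges(unsigned char in[H][W], unsigned char out[H][W]) {
  for (int y = 0; y < H; y++) {
    for (int x = 0; x < W; x++) {
      out[y][x] = 0;
      if (in[y][x] != 1) continue;
      bool boundary = (x == 0 || y == 0 || x == W - 1 || y == H - 1);
      if (!boundary) {
        boundary = in[y][x - 1] == 0 || in[y][x + 1] == 0 ||
                   in[y - 1][x] == 0 || in[y + 1][x] == 0;
      }
      out[y][x] = boundary ? 1 : 0;
    }
  }
}
```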
Simplest OpenCV for Arduino :)
what resolution and frame rate in object detected case?
The scan rate of TVout is 60 fps. So when you capture an image, the scanning/rendering stops while you process it. The logic to compute the bounding box doesn’t seem to take much time, but then my code displays the result for 2 frames. If you were building a robot or something, you wouldn’t necessarily have any output, so you would not delay 2 frames to display the result (or print the coordinates to the screen, etc.). Bottom line, you should be able to track objects at 20fps or better, but that’s just an estimate.
Thanks! Great work!
Hi;
Could this be used to track a star (or maybe even a comet!) – through a telescope, natch – then use the central coordinates it returns to ‘steer’ (drive) a pair of stepper motors coupled to the telescope mount, to keep the object centred?
Wodgereckon?
@Dave,
Well, you’d have to experiment. Keep in mind this is LOW resolution video.
Hi @5,
This is what I had in mind, to track the moon for moonbouncing (AKA EME among ham radio amateurs).
-ben
Thanks man, this topic has led me to learn a lot more about object tracking using Arduino, and I’m glad I now have a basic understanding!
Thx a million!
I noticed that it tracks either light or dark objects, but would it be possible to track complicated images, such as human faces? The intent is for the Arduino to move a couple of servos to point the camera at the person.
Well, it depends on the code you write. I provided a simple program that looks for the lightest or darkest part of an image. If you write code that can recognize a face in a 1-bit bitmap, then yes you could accomplish what you describe. I don’t know if such a sophisticated bit of code could run on the Arduino, though….this type of thing is typically done using something like OpenCV.
Thanks for telling me about OpenCV.
Upon meditation I just realized that the answer was right in front of me: use IR illumination. The idea is that many digital cameras can see into the near infrared, so use a relatively strong source of IR light to illuminate the target. The software would choose the geometric center of the brightest shape it sees and run the servos to point the camera at it. This would necessitate two cameras: one to see the infrared spot, the other to see in normal visible light. Using one camera for both would not be practical because the images of people's faces would have the IR wash effect. On the sensing camera one could add an IR-admitting filter to reject other visible light sources, therefore improving accuracy.
Really interesting project. What I'd like to do is get the X/Y coordinates of a series of dots on an image that correspond to reflective dots on a human face, then use the variance between the dots to control an animatronic mask.
So long as the speed approximates real time I’ll be very happy.
Many thanks for this.
Marc
Great job! I was wondering what type of low-resolution camera you would recommend? I want to track a black dot on a white background, but I would like to use a very small camera. Maybe you have some experience with that? Thanks.
Kind regards
Blitz
@Blitz,
I have this camera module and it works pretty well. It’s small and has 640×480 color resolution.
http://www.sparkfun.com/products/8739
Could I track my retina with this? Thanks.
I meant my eye's iris, sorry, that is more accurate.
That would depend on many factors, such as the quality/size of the image, the sophistication of the software you write to identify the eye, etc.
thanks for the response,
just one more question, do you think this camera will do?
http://www.lextronic.fr/P186-camera-noir-et-blanc-miniature.html
thanks again for the tips!
That camera will probably work — the yellow wire is probably a composite output. That’s the only requirement of the VE board.
I'm sorry for all these questions, but I can't tell whether a camera has the right output…
http://www.amazon.co.uk/Security-Colour-Wireless-Internal-Adjustable/dp/B001F5FZ2M/ref=pd_rhf_se_shvl15
Do you think a camera like this would work? How can I know if a camera will work with my video shield? I tried to look for the PAL specification but can't find anything. Sorry, but I hope this helps others with the same doubt.
thanks again michael
Yes, I think that will work. It looks like it has a composite output. A composite video signal is a single connection, usually yellow.
To track more than one light (5 laser dots), should I write code using X1, X2, … Y1, Y2, …? Do I have to declare those variables as unsigned characters? Sorry, I'm really interested in this project but a beginner.
Hi,
I was wondering how to get the x,y coordinates of a bright spot. I am not sure if it is with this (c = tv.get_pixel(x,y);) that the coordinates are acquired. If it is, I wanted to know where (0,0) is with respect to the coordinates, i.e. which corner. I just got my shield but am not sure how to use the TVout library.
Upper left is (0,0). get_pixel returns the value of a pixel.
Thanks for the help, Michael. Do the coordinates printed on the screen refer to the center point of the box being displayed, or to minX, minY? I will try to look up the TVout library's command descriptions to understand it a bit more. Thanks again for your comments.
Did you look at the code? It prints the center of the bounding box.
Hey there,
I was looking to set up a video capture that would find a can or other object with light reflecting off of it, and was wondering what the output of your capture device is. Can you set the Arduino to move a motor or two to attempt to center the bright spot in its field of view? Also, what camera would you recommend? Lastly, I was hoping to use the Arduino Motor Shield R3: http://www.robotshop.com/arduino-motor-shield-v3-2.html. I noticed the brake is using some of the same pins your shield is using. Is it possible to reroute the pins? Do you happen to know if these two shields can work together? I am very new to robotics and regretfully don’t know enough to know what questions to ask. Thank you for your time and expertise.
After capturing a frame, you have a monochrome frame buffer of pixels. This allows you to process the frame, looking for things of interest (like a bright spot). It’s all very crude, but you can have the Arduino do whatever you want as a reaction to this processing.
A shield that uses the same pins as the Video Experimenter shield is incompatible. That’s just how it goes with shields.
When finding the brightest part of the image, is it possible to set a minimum brightness so that with no flashlight or laser there is no detection?
Brightness detection threshold is adjustable with the potentiometer with the long stem.
Thank you for your reply. I had another follow-up question if you have a moment. With this being a very low-res image, does it matter if the camera taking the image is higher res? Or will the bitmap that is generated have its resolution reduced for processing, with pixels in the processed image corresponding proportionally to the higher-res camera image? I.e., 50% of the X axis of the low-res (processed) image would map to 50% of the higher-res (raw) X axis? I am just trying to figure out how I can search for an object that isn't in view up front, but after scanning with a turret, find a point (laser) and then move to that point.
It’s independent of the signal’s resolution. Overlay/capture resolution is not based on the incoming signal; it’s based on NTSC timing.
Is it possible to use the Video Experimenter with the Arduino Mega 1280?
It works with the ATmega328, but not with the ATmega1280.
No, the Video Experimenter does not work with the Arduino Mega. This is clearly stated in the documentation.
O.K., I read the differences. If I modify the Arduino Mega according to the Seeeduino doc, the Video Experimenter should work. I mean, install the missing data lines directly on the PCB.
You’d need to solder directly to the ATmega1280 chip. Very small!
That is easy for me. Thank you for the answer.
I made 2 connections to PE2 and PD4 and it works, but how do I correct the frame scrolling up?
Wiring details for Mega here: http://nootropicdesign.com/projectlab/2011/07/13/ve-on-the-seeeduino-mega/
Yes, it works!!! (I had made a mistake on pin 21) :)
Thanks for your tutorial Michael, it’s new idea and interesting for me !!! :)
I have some question for you..
1. Can that device (Arduino + Video Experimenter) detect a line pattern if I take video of the road (for example a zebra crossing/vertical lines)?
2. How can I get the real distance between the camera and the object? Can you explain?
@red:
The Arduino and Video Experimenter can capture low-res monochrome image info in memory. You can write software to look for certain patterns, etc., but it would be difficult to do anything sophisticated.
You can’t measure distance using a camera…!
Thanks for your reply Michael :)
Did you ever use a photodiode/LED to replace the light from the flashlight?
And then, in your video, the light from the flashlight is detected with the Video Experimenter and the TV/screen shows the coordinates of the flash. Can the Arduino control a servo motor based on the movement of the coordinates? Maybe you can explain.
Thanks for your reply. :D
Yes, in theory you could use the Arduino to control other devices while it detects the location of a bright spot in the captured image.
Ok Michael.
I have the MAX7456 IC for OSD. Can that IC replace the LM1881? Because I have searched and asked my friends, and the reply is that the LM1881 is difficult to get. My plan is to build the Video Experimenter without the LM1881 and use the MAX7456 instead.
Hope the TVout library becomes more powerful with the MAX7456. Thanks to everyone who created TVout, and maybe it can be updated. :)
Of course you can’t replace the LM1881 with a different chip. The Video Experimenter is designed for the LM1881.
The limitations of resolution and such are due to the Arduino, not due to the LM1881.
The LM1881 is not hard to get — why don’t you just buy a Video Experimenter shield? We ship everywhere in the world.
Could this be used in conjunction with a laser/camera for a simple rangefinder type device?
Or perhaps as a sort of low res mapping and navigation using a camera?
@Jonothan,
You could use this to make the shields compatible by rewiring and recoding:
https://www.sparkfun.com/products/11002
If you are looking for a dark object when computing the bounding box, wouldn't you be looking for when c == 0, with found = true when a black pixel is detected? Any help is appreciated.
PD, yes that is one way to find dark objects. Or invert the image and look for white.
I am currently having problems with this code. As of now I am using a nested for loop like in the above code, and I then read the pixel at that location. Every pixel I read is being interpreted as a ‘1’, or white. I verified that the image is correct by outputting the video to the TV, and the black disc shows in the picture, but at the same time every pixel is being interpreted as a ‘1’, which is white. Has anybody had similar problems, or do you have any suggestions for me? Any help would be appreciated.
There are black pixels on the output but get_pixel returns a value of 1 for the black pixels? I don’t see how that could happen, since they reference the same memory.
I even put my code to the side, used this code exactly, and uncommented the tv.fill(INVERT) line. It sees everything as a white pixel. I tested this by doing a Serial.println(“found black pixel”) inside the if (c == 1) statement and it never gets there. Could it be my camera? I am currently laying a black disc made of Delrin on a white background; the disc is the object my robot has to pick up. I’m using a black and white camera, but I don’t see why that would cause any problems.
Also, is the video out supposed to be a monochrome image of either ‘1’ or ‘0’ pixels? The image showing up on my TV looks just as it would without the Video Experimenter and has gray scale to it. This makes me think maybe I made a soldering error when assembling the Experimenter.
Serial comm won’t work with the VE. You need to use pollserial. Have you been able to output any serial data? And the output of the VE is only monochrome. No grayscale.
I was using serial comm and got it to work when I changed the code to look for a white pixel. I had it print the x and y when a white pixel was first found, and it was always (0,0). So if I’m not getting the monochrome image from the output of the VE, you’re saying there was an assembly error on my part? The reason I ask is because even in the above video, when the guy is outputting his video to the computer, it is not monochrome, and it seems he is doing it from the output of his VE. I want to thank you for going over this with me and helping.
The output of the Arduino program is monochrome. It can be overlaid onto the original image as in the video above. If you set your switch to SYNC ONLY, then you will only see the monochrome captured frame. What do you see? Are all pixels white? Did you adjust the threshold pot? Forget about image processing for now and just try the Video Frame Capture project with the switch set to SYNC ONLY.
Okay, I am now getting a monochrome video output with the switch set to SYNC ONLY. After adjusting the pot I am no longer getting an all-white screen, so thank you. Can you explain pollserial communication to me, or show me a link to the library and an explanation? The program I am trying to write will be difficult to troubleshoot without the ability to write to the serial window. Also, if I invert the image in the code, should the image also show up inverted in the output, or is it only inverted internally?
See File->Examples->TVout for pollserial examples. And Google. Inverting inverts the memory which is directly reflected in output.
Hey, I'm doing this light-tracking project for school, but when I connect this to my computer and try to upload the code I get these errors:
sketch_apr05a:6: error: ‘TVout’ does not name a type
sketch_apr05a.ino: In function ‘void setup()’:
sketch_apr05a:14: error: ‘tv’ was not declared in this scope
sketch_apr05a:14: error: ‘NTSC’ was not declared in this scope
sketch_apr05a:18: error: ‘font4x6’ was not declared in this scope
sketch_apr05a.ino: In function ‘void __vector_1()’:
sketch_apr05a:49: error: ‘display’ was not declared in this scope
sketch_apr05a.ino: In function ‘void loop()’:
sketch_apr05a:54: error: ‘tv’ was not declared in this scope
———————————————————————————————–
And I'm just wondering if this is normal. When I connect my camera and turn the TV on, will it work, or did I miss something? I'm a beginner, so I don't know much about Arduino programming.
I would be very grateful if you could reply to my email, because I really need this up and running.
I appreciate a response.
Please do not comment to ask for help. See full product details or use the support forum to get help.
Hello! I just bought the kit and now I'm thinking about measurement. Could I sample (hmx – hmin) and then display it on the Digit Shield, or are the pins I need for the Digit Shield not available?
I’m afraid the Video Experimenter shield isn’t going to play nice with the Digit Shield. The VE shield is using a lot of pins, some in conflict with Digit Shield.
Hi there, I'm wondering how it would be possible to use this board to track an object. I mean, physically make a robot go to the detected object. Is this possible?