
Difficulty Level = 5

The Video Experimenter shield can give your Arduino the gift of sight. In the Video Frame Capture project, I showed how to capture images from a composite video source and display them on a TV. We can take this concept further by processing the contents of the captured image to implement object tracking and edge detection.

The setup is the same as when capturing video frames: a video source like a camera is connected to the video input. The output select switch is set to “overlay”, and the sync select jumper is set to “video input”. Set the analog threshold potentiometer to the lowest setting.

Object Tracking

Here is an Arduino sketch that captures a video frame and then computes the bounding box of the brightest region in the image.

This project is the example “ObjectTracking” in the TVout library for Video Experimenter. The code first calls tv.capture() to capture a frame. Then it computes a simple bounding box for the brightest spot in the image. After computing the location of the brightest area, a box is drawn and the coordinates of the box are printed to the TVout frame buffer. Finally, tv.resume() is called to resume the output and display the box and coordinates on the screen.

Keep in mind that there is no need to display any output at all — we just do this so we can see what’s going on. If you have a robot with a camera on it, you can detect/track objects with Arduino code, and the output of the Video Experimenter doesn’t need to be connected to anything (although the analog threshold potentiometer would probably need some adjustment).

If you use a television with the PAL standard (that is, you are not in North America), change tv.begin(NTSC, W, H) to tv.begin(PAL, W, H).

#include <TVout.h>
#include <fontALL.h>
#define W 128
#define H 96

TVout tv;
unsigned char x, y;
unsigned char c;
unsigned char minX, minY, maxX, maxY;
char s[32];


void setup()  {
  tv.begin(NTSC, W, H);
  initOverlay();
  initInputProcessing();

  tv.select_font(font4x6);
  tv.fill(0);
}


void initOverlay() {
  TCCR1A = 0;
  // Enable timer1 with no prescaling.  ICES1 is left at 0 for falling edge detection on the input capture pin.
  TCCR1B = _BV(CS10);

  // Enable input capture interrupt
  TIMSK1 |= _BV(ICIE1);

  // Enable external interrupt INT0 on pin 2 with falling edge.
  EIMSK = _BV(INT0);
  EICRA = _BV(ISC01);
}

void initInputProcessing() {
  // Analog Comparator setup
  ADCSRA &= ~_BV(ADEN); // disable ADC
  ADCSRB |= _BV(ACME); // enable ADC multiplexer
  ADMUX &= ~_BV(MUX0);  // select A2 for use as AIN1 (negative input of the comparator)
  ADMUX |= _BV(MUX1);
  ADMUX &= ~_BV(MUX2);
  ACSR &= ~_BV(ACIE);  // disable analog comparator interrupts
  ACSR &= ~_BV(ACIC);  // disable analog comparator input capture
}

// Required: the vertical sync pulse on INT0 resets the scan line counter at the start of each field
ISR(INT0_vect) {
  display.scanLine = 0;
}


void loop() {
  tv.capture();  // freeze a frame of the incoming video in the TVout frame buffer

  // uncomment if tracking dark objects
  //tv.fill(INVERT);

  // compute bounding box
  minX = W;
  minY = H;
  maxX = 0;
  maxY = 0;
  boolean found = false;
  for (int y = 0; y < H; y++) {
    for (int x = 0; x < W; x++) {
      c = tv.get_pixel(x, y);
      if (c == 1) {
        found = true;
        if (x < minX) {
          minX = x;
        }
        if (x > maxX) {
          maxX = x;
        }
        if (y < minY) {
          minY = y;
        }
        if (y > maxY) {
          maxY = y;
        }
      }
    }
  }

  // draw bounding box
  tv.fill(0);  // clear the captured image so only the box and coordinates are displayed
  if (found) {
    tv.draw_line(minX, minY, maxX, minY, 1);
    tv.draw_line(minX, minY, minX, maxY, 1);
    tv.draw_line(maxX, minY, maxX, maxY, 1);
    tv.draw_line(minX, maxY, maxX, maxY, 1);
    sprintf(s, "%d, %d", ((maxX + minX) / 2), ((maxY + minY) / 2));
    tv.print(0, 0, s);
  } else {
    tv.print(0, 0, "not found");
  }


  tv.resume();        // resume output so the box and coordinates appear on screen
  tv.delay_frame(5);  // wait 5 frames before the next capture
}

What if you want to find the darkest area in an image instead of the brightest? That’s easy: just invert the captured image before processing it by calling tv.fill(INVERT) right after tv.capture(), as shown below.
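Here is a minimal variation of loop() illustrating the placement (this line already appears in the sketch above, commented out; the bounding box computation and drawing are unchanged):

void loop() {
  tv.capture();
  tv.fill(INVERT);   // dark areas of the image now become set pixels

  // ...compute and draw the bounding box exactly as in the sketch above...

  tv.resume();
  tv.delay_frame(5);
}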

Edge Detection

The Arduino is powerful enough to do more sophisticated image processing. The following sketch captures a frame, then performs an edge detection algorithm on the image. The result is the outline of the brightest (or darkest) parts of the image. This could be useful in object recognition applications or robotics. The algorithm is quite simple, especially with a monochrome image, and is described in this survey of edge detection algorithms as “Local Threshold and Boolean Function Based Edge Detection”.

This project is the example “EdgeDetection” in the TVout library for Video Experimenter.
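To make the approach concrete, here is a simplified sketch of the idea. This is not the library’s EdgeDetection example; the helper markEdges(), the row buffers, and the exact neighbor rule are illustrative assumptions. A set pixel is kept only if at least one of its four neighbors is clear, so only the outline of each bright region survives. It reuses W, H, tv, setup(), initOverlay(), and initInputProcessing() from the ObjectTracking sketch above.

// Two row buffers hold original pixel values so that neighbor tests are not
// corrupted as the frame buffer is overwritten. They add 256 bytes of RAM on
// top of TVout's frame buffer, which is tight but workable on an ATmega328.
unsigned char prevRow[W];  // original pixels of the row above the current one
unsigned char currRow[W];  // original pixels of the current row

void markEdges() {
  // Treat the area above the first row as clear.
  for (int x = 0; x < W; x++) {
    prevRow[x] = 0;
  }
  for (int y = 0; y < H; y++) {
    // Save the original pixels of this row before overwriting any of them.
    for (int x = 0; x < W; x++) {
      currRow[x] = tv.get_pixel(x, y);
    }
    for (int x = 0; x < W; x++) {
      unsigned char p = currRow[x];
      if (p) {
        unsigned char left  = (x > 0)     ? currRow[x - 1] : 0;
        unsigned char right = (x < W - 1) ? currRow[x + 1] : 0;
        unsigned char up    = prevRow[x];
        // The row below has not been modified yet, so read it directly.
        unsigned char down  = (y < H - 1) ? tv.get_pixel(x, y + 1) : 0;
        // Interior pixels (all four neighbors set) are cleared; only the
        // boundary of each bright region is kept.
        if (left && right && up && down) {
          p = 0;
        }
      }
      tv.set_pixel(x, y, p);
    }
    // The current row's originals become "the row above" for the next pass.
    for (int x = 0; x < W; x++) {
      prevRow[x] = currRow[x];
    }
  }
}

void loop() {
  tv.capture();

  // uncomment if outlining dark regions instead of bright ones
  //tv.fill(INVERT);

  markEdges();

  tv.resume();
  tv.delay_frame(5);
}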