Arduino Computer Vision


Difficulty Level = 5

The Video Experimenter shield can give your Arduino the gift of sight. In the Video Frame Capture project, I showed how to capture images from a composite video source and display them on a TV. We can take this concept further by processing the contents of the captured image to implement object tracking and edge detection.

The setup is the same as for capturing video frames: a video source like a camera is connected to the video input, the output select switch is set to “overlay”, and the sync select jumper is set to “video input”. Set the analog threshold potentiometer to the lowest setting.

Object Tracking

Here is an Arduino sketch that captures a video frame and then computes the bounding box of the brightest region in the image.




This project is the example “ObjectTracking” in the TVout library for Video Experimenter. The code first calls tv.capture() to capture a frame. Then it computes a simple bounding box for the brightest spot in the image. After computing the location of the brightest area, a box is drawn around it and the coordinates of its center are printed to the TVout frame buffer. Finally, tv.resume() is called to resume the output and display the box and coordinates on the screen.

Keep in mind that there is no need to display any output at all — we just do this so we can see what’s going on. If you have a robot with a camera on it, you can detect/track objects with Arduino code, and the output of the Video Experimenter doesn’t need to be connected to anything (although the analog threshold potentiometer would probably need some adjustment).

If you use a television with the PAL standard (that is, you are not in North America), change tv.begin(NTSC, W, H) to tv.begin(PAL, W, H).

#include <TVout.h>
#include <fontALL.h>
#define W 128
#define H 96

TVout tv;
unsigned char x, y;
unsigned char c;
unsigned char minX, minY, maxX, maxY;
char s[32];


void setup()  {
  tv.begin(NTSC, W, H);
  initOverlay();
  initInputProcessing();

  tv.select_font(font4x6);
  tv.fill(0);
}


void initOverlay() {
  TCCR1A = 0;
  // Enable timer1.  ICES1 is set to 0 for falling edge detection on the input capture pin.
  TCCR1B = _BV(CS10);

  // Enable input capture interrupt
  TIMSK1 |= _BV(ICIE1);

  // Enable external interrupt INT0 on pin 2 with falling edge.
  EIMSK = _BV(INT0);
  EICRA = _BV(ISC01);
}

void initInputProcessing() {
  // Analog Comparator setup
  ADCSRA &= ~_BV(ADEN); // disable ADC
  ADCSRB |= _BV(ACME); // enable ADC multiplexer
  ADMUX &= ~_BV(MUX0);  // select A2 for use as AIN1 (negative voltage of comparator)
  ADMUX |= _BV(MUX1);
  ADMUX &= ~_BV(MUX2);
  ACSR &= ~_BV(ACIE);  // disable analog comparator interrupts
  ACSR &= ~_BV(ACIC);  // disable analog comparator input capture
}

// Required to reset the scan line when the vertical sync occurs
ISR(INT0_vect) {
  display.scanLine = 0;
}


void loop() {
  tv.capture();

  // uncomment if tracking dark objects
  //tv.fill(INVERT);

  // compute bounding box
  minX = W;
  minY = H;
  maxX = 0;
  maxY = 0;
  boolean found = false;
  for (int y = 0; y < H; y++) {
    for (int x = 0; x < W; x++) {
      c = tv.get_pixel(x, y);
      if (c == 1) {
        found = true;
        if (x < minX) {
          minX = x;
        }
        if (x > maxX) {
          maxX = x;
        }
        if (y < minY) {
          minY = y;
        }
        if (y > maxY) {
          maxY = y;
        }
      }
    }
  }

  // draw bounding box
  tv.fill(0);
  if (found) {
    tv.draw_line(minX, minY, maxX, minY, 1);
    tv.draw_line(minX, minY, minX, maxY, 1);
    tv.draw_line(maxX, minY, maxX, maxY, 1);
    tv.draw_line(minX, maxY, maxX, maxY, 1);
    sprintf(s, "%d, %d", ((maxX + minX) / 2), ((maxY + minY) / 2));
    tv.print(0, 0, s);
  } else {
    tv.print(0, 0, "not found");
  }


  tv.resume();
  tv.delay_frame(5);
}

What if you want to find the darkest area in an image instead of the brightest? Just invert the captured image before processing it by calling tv.fill(INVERT) right after tv.capture(), as shown in the commented-out line above.

Edge Detection

The Arduino is powerful enough to do more sophisticated image processing. The following sketch captures a frame and then performs an edge detection algorithm on the image. The result is the outline of the brightest (or darkest) parts of the image. This could be useful in object recognition applications or robotics. The algorithm is quite simple, especially on a monochrome image, and is described in this survey of edge detection algorithms as “Local Threshold and Boolean Function Based Edge Detection”.

This project is the example “EdgeDetection” in the TVout library for Video Experimenter.
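
The library example itself isn’t reproduced here, but as a rough sketch of the idea (this is not the library’s “EdgeDetection” code; the helper names rowGet, rowSet, and edgeDetect are just illustrative), the outline of the bright regions can be computed by keeping a set pixel only if at least one of its four neighbors is unset, so interior pixels are cleared and only boundaries remain. The fragment below assumes the same W, H, setup(), initOverlay(), initInputProcessing(), and ISR as the object tracking sketch above. The two 16-byte row buffers hold copies of the original pixel rows so that rows already thinned in the frame buffer are not re-read as neighbors.

uint8_t prevRow[W / 8];  // original pixels of the previous row (packed bits)
uint8_t curRow[W / 8];   // original pixels of the current row (packed bits)

uint8_t rowGet(uint8_t *row, uint8_t x) {
  return (row[x >> 3] >> (7 - (x & 7))) & 1;
}

void rowSet(uint8_t *row, uint8_t x, uint8_t v) {
  if (v) {
    row[x >> 3] |= (1 << (7 - (x & 7)));
  } else {
    row[x >> 3] &= ~(1 << (7 - (x & 7)));
  }
}

void edgeDetect() {
  memset(prevRow, 0, sizeof(prevRow));  // treat everything above row 0 as background
  for (int y = 0; y < H; y++) {
    // Snapshot the original pixels of this row before modifying it.
    for (int x = 0; x < W; x++) {
      rowSet(curRow, x, tv.get_pixel(x, y));
    }
    for (int x = 0; x < W; x++) {
      if (rowGet(curRow, x) == 0) {
        continue;  // background pixel stays background
      }
      uint8_t up    = rowGet(prevRow, x);                        // original pixel above
      uint8_t left  = (x > 0)     ? rowGet(curRow, x - 1) : 0;
      uint8_t right = (x < W - 1) ? rowGet(curRow, x + 1) : 0;
      uint8_t down  = (y < H - 1) ? tv.get_pixel(x, y + 1) : 0;  // row below is still original
      if (up && down && left && right) {
        tv.set_pixel(x, y, 0);  // interior pixel: clear it, leaving only the outline
      }
    }
    memcpy(prevRow, curRow, sizeof(prevRow));
  }
}

void loop() {
  tv.capture();
  // tv.fill(INVERT);  // uncomment to outline dark regions instead
  edgeDetect();
  tv.resume();
  tv.delay_frame(5);
}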






Published by Michael on March 20th, 2011 at 3:02 pm. Filed under: Arduino, Level 5, Video.





Video Frame Capture using the Video Experimenter Shield


Difficulty Level = 3

In addition to overlaying text and graphics onto a video signal, the Video Experimenter shield can also be used to capture image data from a video source and store it in the Arduino’s SRAM memory. The image data can be displayed on a TV screen and can be overlaid onto the original video signal.

Believe it or not, the main loop of the program is this simple:

void loop() {
  tv.capture();
  tv.resume();
  tv.delay_frame(5);
}

NOTE: On versions of the Arduino IDE newer than 1.6, you may need to add a 1 ms delay at the end of loop(). Just add the line “delay(1);” as shown in the sketch below.

 
For this project, we connect a video camera to the input of the Video Experimenter shield. The output select switch is set to “overlay” and the sync select jumper is set to “video input”. The video output is connected to an ordinary TV. When performing this experiment, turn the analog threshold potentiometer to the lowest value to start, then adjust it to select different brightness thresholds when capturing images.

By moving the output select switch to “sync only”, the original video source is not included in the output, only the captured monochrome image. You will need to adjust the threshold potentiometer (the one with the long shaft) to a higher value when the output switch is in this position. Experiment!

Monochrome video frame capture in Arduino memory




In the VideoFrameCapture.ino sketch below, we capture the image in memory by calling tv.capture(). When this method returns, a monochrome image is stored in the TVout frame buffer. The contents of the frame buffer are not displayed until we call tv.resume(). This project is the example “VideoFrameCapture” in the TVout library for Video Experimenter.

Here is the Arduino code. If you use a television with the PAL standard (that is, you are not in North America), change tv.begin(NTSC, W, H) to tv.begin(PAL, W, H).

#include <TVout.h>
#include <fontALL.h>
#define W 128
#define H 96

TVout tv;
unsigned char x,y;
char s[32];


void setup()  {
  tv.begin(NTSC, W, H);
  initOverlay();
  initInputProcessing();

  tv.select_font(font4x6);
  tv.fill(0);
}


void initOverlay() {
  TCCR1A = 0;
  // Enable timer1.  ICES1 is set to 0 for falling edge detection on the input capture pin.
  TCCR1B = _BV(CS10);

  // Enable input capture interrupt
  TIMSK1 |= _BV(ICIE1);

  // Enable external interrupt INT0 on pin 2 with falling edge.
  EIMSK = _BV(INT0);
  EICRA = _BV(ISC01);
}

void initInputProcessing() {
  // Analog Comparator setup
  ADCSRA &= ~_BV(ADEN); // disable ADC
  ADCSRB |= _BV(ACME); // enable ADC multiplexer
  ADMUX &= ~_BV(MUX0);  // select A2 for use as AIN1 (negative voltage of comparator)
  ADMUX |= _BV(MUX1);
  ADMUX &= ~_BV(MUX2);
  ACSR &= ~_BV(ACIE);  // disable analog comparator interrupts
  ACSR &= ~_BV(ACIC);  // disable analog comparator input capture
}

ISR(INT0_vect) {
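  // Required to reset the scan line when the vertical sync occurs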
  display.scanLine = 0;
}

void loop() {
  tv.capture();
  //tv.fill(INVERT);
  tv.resume();
  tv.delay_frame(5);
  delay(1);
}



Published by Michael on March 20th, 2011 at 2:52 pm. Filed under: Arduino, Level 3, Video.





TV Blaster


Difficulty Level = 3

The Video Experimenter shield gives you new ways to interact with your TV. How many times have you wished you could blast someone or something on your TV screen? Now you can! This project lets you control a laser sight using a Wii nunchuk controller and fire an imaginary laser at the screen by pulling the trigger. Have fun.




A Wii nunchuk can be connected to your Arduino using the I2C pins (analog pins 4 and 5). I use a little Wiichuck PCB to make this easy (these are available in the nootropic design store because we sell them for use with Hackvision). Don’t plug the Wiichuck PCB directly onto Arduino analog pins 2-5, because analog pin 2 is used by the Video Experimenter.

Connect a nunchuk to your Arduino on analog pins 4 and 5

 

Data from the nunchuk is read using the Hackvision Controllers library.

The code is in the example “TVBlaster” in the TVout library for Video Experimenter.
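
The TVBlaster example itself isn’t reproduced here. As a rough standalone illustration of what happens on the I2C bus (this is not the TVBlaster code or the Controllers library API; the function names are just illustrative), the sketch below reads a nunchuk directly with the Wire library, assuming a nunchuk that accepts the common “unencrypted” initialization sequence (0xF0 0x55 followed by 0xFB 0x00). It prints the joystick position and the Z button state to the serial monitor; since it doesn’t use TVout, it’s only meant as a wiring and protocol test.

#include <Wire.h>

#define NUNCHUK_ADDR 0x52

uint8_t nunchukData[6];

void nunchukInit() {
  Wire.begin();
  // Initialization sequence that disables data encryption on most nunchuks.
  Wire.beginTransmission(NUNCHUK_ADDR);
  Wire.write(0xF0);
  Wire.write(0x55);
  Wire.endTransmission();
  Wire.beginTransmission(NUNCHUK_ADDR);
  Wire.write(0xFB);
  Wire.write((uint8_t) 0x00);
  Wire.endTransmission();
}

boolean nunchukRead() {
  Wire.requestFrom(NUNCHUK_ADDR, 6);
  for (uint8_t i = 0; i < 6; i++) {
    if (!Wire.available()) {
      return false;
    }
    nunchukData[i] = Wire.read();
  }
  // Ask the nunchuk to prepare the next sample.
  Wire.beginTransmission(NUNCHUK_ADDR);
  Wire.write((uint8_t) 0x00);
  Wire.endTransmission();
  return true;
}

void setup() {
  Serial.begin(9600);
  nunchukInit();
}

void loop() {
  if (nunchukRead()) {
    uint8_t joyX = nunchukData[0];                // joystick X, roughly 0-255
    uint8_t joyY = nunchukData[1];                // joystick Y, roughly 0-255
    boolean zPressed = !(nunchukData[5] & 0x01);  // Z button is active low
    Serial.print(joyX);
    Serial.print(",");
    Serial.print(joyY);
    if (zPressed) {
      Serial.print(" FIRE");
    }
    Serial.println();
  }
  delay(50);
}

Note that this test sketch uses only analog pins 4 and 5 for I2C, so analog pin 2 stays free for the Video Experimenter as described above.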




Published by Michael on March 20th, 2011 at 1:48 pm. Filed under: Arduino, Level 3, Video.





Text and Graphics Overlay on Video


Difficulty Level = 2

The Video Experimenter shield makes it easy to overlay text and graphics onto any composite video signal. Any source of composite video should work — video camera, VCR, DVD player, DVR, cable box, etc.

Text and graphics overlaid onto a TV signal.

 

Video Experimenter projects require an enhanced version of the TVout library which can be downloaded here. All of the usual TVout drawing primitives can be used to add text or graphics to the screen. Here’s a video of a demo where I had the output of a VCR connected to the Video Experimenter input, then the Video Experimenter output connected to my TV.

 
The OverlayDemo sketch is in the TVout examples folder.

Here’s the OverlayDemo sketch source code. If you use a television with the PAL standard (that is, you are not in North America), change tv.begin(NTSC, W, H) to tv.begin(PAL, W, H).

#include <TVout.h>
#include <fontALL.h>

#define W 136
#define H 96

TVout tv;
unsigned char x,y;
unsigned char originx = 5;
unsigned char originy = 80;
unsigned char plotx = originx;
unsigned char ploty = 40;
char s[32];
int index = 0;
int messageLen = 32;
char message[] = "...OVERLAY TEXT AND GRAPHICS ON A VIDEO SIGNAL...OVERLAY TEXT AND GRAPHICS ON A VIDEO SIGNAL";
char saveChar;
byte ledState = LOW;

void setup()  {
  tv.begin(NTSC, W, H);
  initOverlay();
  tv.select_font(font6x8);
  tv.fill(0);
  drawGraph();
  randomSeed(analogRead(0));
}

// Initialize ATMega registers for video overlay capability.
// Must be called after tv.begin().
void initOverlay() {
  TCCR1A = 0;
  // Enable timer1.  ICES1 is set to 0 for falling edge detection on the input capture pin.
  TCCR1B = _BV(CS10);

  // Enable input capture interrupt
  TIMSK1 |= _BV(ICIE1);

  // Enable external interrupt INT0 on pin 2 with falling edge.
  EIMSK = _BV(INT0);
  EICRA = _BV(ISC01);
}

// Required to reset the scan line when the vertical sync occurs
ISR(INT0_vect) {
  display.scanLine = 0;
}


void loop() {
  // Print a 22-character window of the scrolling message by temporarily
  // terminating the string 22 characters past 'index'.
  saveChar = message[index+22];
  message[index+22] = '\0';

  // Shift the window one character width (6 pixels) to the left, one pixel per frame.
  for(int x=6;x>=0;x--) {
    if (x<6) {
      tv.delay_frame(1);
    }
    tv.print(x, 87, message+index);

    // Erase the partial characters at the left and right edges of the ticker.
    for(byte y=87;y<96;y++) {
      tv.draw_line(0, y, 5, y, 0);
      tv.draw_line(128, y, 134, y, 0);
    }

  }

  // Restore the saved character and advance the window by one character.
  message[index+22] = saveChar;
  index++;
  if (index > 45) {
    index = 0;
  }

  // Show elapsed milliseconds in the top left corner.
  sprintf(s, "%lums", millis());
  tv.print(0, 0, s);


  if (plotx++ > 120) {
    tv.fill(0);
    drawGraph();
    plotx = originx + 1;
    return;
  }
  byte newploty = ploty + random(0, 7) - 3;
  newploty = constrain(newploty, 15, originy);
  tv.draw_line(plotx-1, ploty, plotx, newploty, 1);
  ploty = newploty;
}


void drawGraph() {
  tv.draw_line(originx, 15, originx, originy, 1);
  tv.draw_line(originx, originy, 120, originy, 1);
  for(byte y=originy;y>15;y -= 4) {
    tv.set_pixel(originx-1, y, 1);
    tv.set_pixel(originx-2, y, 1);
  }
  for(byte x=originx;x<120;x += 4) {
    tv.set_pixel(x, originy+1, 1);
    tv.set_pixel(x, originy+2, 1);
  }
}





Published by Michael on March 20th, 2011 at 12:45 pm. Filed under: Arduino, Level 2, Video.