I look at one of the examples and see that it calls tv.capture().
What I’m curious about is whether there are API calls (or some other method) to get the results of the capture, perhaps as a bitmap, that can then be operated on by the program. So, for example, could I store a bitmap of the frame in Arduino memory, then do another capture, and then do some programmatic analysis on which bits changed?
Anyone know if that is possible? I didn’t see anything in the TVout library for addressing the data in the frame.
If it is possible, anyone know how fast it would be to get the frame data?
Thanks for the reply. I want to start hacking around with something for a project. I’ve been reading bits and pieces and trying to decide whether Arduino or Raspberry Pi would be better. I get the impression that Arduino is easier for simple device stuff, but if I need more power, I’d have to use a Raspberry Pi (at the expense of dealing with a whole operating system and such).
Another thing to think about is that I don’t need to hold the whole frame in memory. Probably collapsing it to 64×48 would do, either by sampling every fifth pixel or averaging five pixels.
I need to read more about both devices to see if either is powerful enough to do what I want.
How about just changing the letters to black, without using invert (2) or white (1) anywhere on the screen?
I haven’t gotten far enough to know what you were talking about in the previous messages, but it sounds interesting. Another avenue to explore someday.