The first piece of the developed software was dedicated to reading the original data. Fig.A presents an example of its output: two dynamically updating images, the original video stream on the left and the raw difference between two frames on the right. Five control buttons allow scrolling the data back and forth in order to find the range of frames for further processing. The ultimate goal of the project is to reliably find the frame coordinates of multiple pop-up objects (when they occur) with respect to multiple surveyed reference points. While these reference points (appearing as the white dots in the left plot of Fig.A) are relatively easy to find, the pop-up objects themselves have little contrast, and of course their location is not known a priori. That is why the difference between two frames, shown in the right plot of Fig.A, was thought to be an important tool for detecting new objects more easily. In the snapshot presented in Fig.A, the difference is taken over the intensity of one of the channels (RGB - red, green, or blue) between frame 70 and frame 40 (30 frames, or 1 second, apart). Two popped-up points are visible near the x=150, y=325 area. The idea is that if we could correlate two consecutive frames, so that everything in them appears in the same place except any newly introduced objects, then we could simply subtract one frame from the other and obtain a reliable detection tool.
Figure A. The developed Playback Tool.
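The channel-wise frame subtraction described above can be sketched as follows. This is a minimal illustration, not the project's actual code; the function name `detect_new_objects` and the threshold value are assumptions introduced here for clarity.

```python
import numpy as np

def detect_new_objects(frame_a, frame_b, channel=0, threshold=40):
    """Subtract one frame's chosen color channel from another and flag
    pixels whose intensity change exceeds the threshold.
    (Hypothetical helper; names and threshold are illustrative.)"""
    # Cast to a signed type so the subtraction does not wrap around.
    diff = frame_b[..., channel].astype(np.int16) - frame_a[..., channel].astype(np.int16)
    mask = np.abs(diff) > threshold
    ys, xs = np.nonzero(mask)
    return diff, list(zip(xs.tolist(), ys.tolist()))

# Synthetic example: two identical frames except one new bright point,
# placed near the x=150, y=325 area mentioned in the text.
a = np.full((480, 640, 3), 100, dtype=np.uint8)
b = a.copy()
b[325, 150, 0] = 250  # a "popped-up" point
diff, hits = detect_new_objects(a, b)
print(hits)  # [(150, 325)]
```

With perfectly aligned frames this isolates the new point exactly; the next paragraph explains why real camera footage is not so cooperative.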
As seen from Fig.A, simply subtracting one frame from another still leaves many objects whose intensity is much higher than that of the popped-up points. The reason is that the observer camera is not still: it moves, and therefore disturbs the image (rotating and translating it, at the very least). Hence, the images have to be preprocessed, or normalized.
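One common way to normalize for camera motion before subtracting, sketched here as an assumption rather than as the project's chosen method, is to estimate the inter-frame translation by phase correlation and re-align one frame to the other. This toy version handles only integer pixel shifts on a single channel; real footage would also need rotation and sub-pixel handling.

```python
import numpy as np

def estimate_shift(ref, moved):
    """Estimate the integer (dy, dx) translation between two grayscale
    frames via FFT-based phase correlation (illustrative sketch)."""
    F = np.fft.fft2(ref)
    G = np.fft.fft2(moved)
    cross = F * np.conj(G)
    cross /= np.abs(cross) + 1e-12        # keep only the phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the frame into negative offsets.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

def aligned_difference(ref, moved):
    """Undo the estimated translation, then subtract the frames."""
    dy, dx = estimate_shift(ref, moved)
    realigned = np.roll(moved, shift=(dy, dx), axis=(0, 1))
    return realigned.astype(np.int16) - ref.astype(np.int16)

# Synthetic check: a frame shifted by (3, -5) pixels differences to zero
# after re-alignment, whereas a raw subtraction would not.
rng = np.random.default_rng(0)
ref = (rng.random((64, 64)) * 255).astype(np.uint8)
moved = np.roll(ref, (3, -5), axis=(0, 1))
print(estimate_shift(ref, moved))          # (-3, 5)
print(np.abs(aligned_difference(ref, moved)).max())  # 0
```

Once the frames are brought into registration this way, the subtraction from the previous step suppresses the static scene and leaves only genuinely new objects.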