Although Figures 10 and 11 above show the results in color, the color segmentation process actually produces five monochrome (1-bit) images, one for each region defined previously (Doc's hat, suit, shoes, skin, and beard), as shown in Figure 12. Each of these images contains groups of pixels, called blobs, that have been classified as belonging to the set of colors defining one or more of Doc's regions.
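For concreteness in the sketches that follow, one simple way to hold these five region images in memory is as binary masks keyed by region name. The layout below (each mask a list of rows of 0/1 values, indexed as mask[y][x]) and the example dimensions are assumptions made for illustration, not the representation used in the actual system.

```python
# Hypothetical in-memory layout for the five 1-bit region images.
# Each mask is a list of rows of 0/1 values, indexed as mask[y][x]
# (x grows to the right, y grows downward). Assumed for illustration only.
REGIONS = ("hat", "suit", "shoes", "skin", "beard")

def empty_mask(width, height):
    """Create an all-background (all-zero) binary mask."""
    return [[0] * width for _ in range(height)]

# One binary mask per region of Doc's appearance (example dimensions).
region_masks = {name: empty_mask(320, 240) for name in REGIONS}
```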
The next task is to identify the location of each blob in each of these region images. This is accomplished with two algorithms. First, contour extraction finds the bounding contour of each blob and, in doing so, yields the minimum and maximum X and Y coordinates of the pixels belonging to that blob. Then a simple bounding box algorithm forms a box covering this area, so that each blob is represented internally as a rectangle in a list. Contour extraction is defined algorithmically as follows:
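In outline, the trace starts at a boundary pixel of a blob and walks clockwise around its edge until it returns to its starting point. A minimal sketch of one standard formulation of this idea, Moore-neighbor tracing, is given below; the function names (trace_contour, clockwise_neighbors), the clockwise neighbor ordering, and the simple stop-on-return-to-start criterion are assumptions for this sketch rather than a reproduction of the exact listing used in this work.

```python
# Clockwise Moore-neighborhood offsets, starting due west:
# W, NW, N, NE, E, SE, S, SW (x grows right, y grows down).
MOORE = [(-1, 0), (-1, -1), (0, -1), (1, -1),
         (1, 0), (1, 1), (0, 1), (-1, 1)]

def clockwise_neighbors(p, b):
    """Yield the 8 neighbors of p clockwise, starting just after backtrack b."""
    px, py = p
    start = MOORE.index((b[0] - px, b[1] - py))
    for k in range(1, 9):
        dx, dy = MOORE[(start + k) % 8]
        yield (px + dx, py + dy)

def trace_contour(mask, start):
    """Trace the outer contour of the blob whose first foreground pixel,
    in row-major scan order, is `start` (so its west neighbor is background)."""
    h, w = len(mask), len(mask[0])
    contour = [start]
    p = start
    b = (start[0] - 1, start[1])          # entered from the west (background)
    while True:
        prev = b
        advanced = False
        for c in clockwise_neighbors(p, b):
            cx, cy = c
            if 0 <= cx < w and 0 <= cy < h and mask[cy][cx]:
                if c == start:            # simple stopping criterion
                    return contour
                contour.append(c)
                p, b = c, prev            # backtrack = last background neighbor
                advanced = True
                break
            prev = c
        if not advanced:                  # isolated single-pixel blob
            return contour
```

A more robust stopping rule (Jacob's criterion, which also checks the direction of entry into the start pixel) is often preferred to avoid terminating early on blobs whose boundary passes through the start pixel more than once.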
Figure 13 illustrates the contour extraction process in five panels: the original blobs, the extracted contour of blob 1, the minimum enclosing box for blob 1, the extracted contour of blob 2, and the minimum enclosing box for blob 2.
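The minimum enclosing box shown in those panels can be read directly off the traced contour as the extrema of its coordinates. The sketch below continues the assumptions of the previous listing (it reuses MOORE and trace_contour) and adds an illustrative driver, extract_blob_boxes, that scans a region mask, traces each blob, records its box in a list, and then clears the blob with a flood fill so it is not reported twice; that clearing step is bookkeeping assumed for the sketch, not necessarily how the original system tracks processed blobs.

```python
from collections import namedtuple

# A blob's internal representation: an axis-aligned rectangle.
Box = namedtuple("Box", "x_min y_min x_max y_max")

def bounding_box(contour):
    """Minimum enclosing box of a traced contour (extrema of its coordinates)."""
    xs = [x for x, _ in contour]
    ys = [y for _, y in contour]
    return Box(min(xs), min(ys), max(xs), max(ys))

def clear_blob(mask, seed):
    """Erase the 8-connected blob containing `seed` (illustrative bookkeeping
    so the scan below does not report the same blob twice)."""
    h, w = len(mask), len(mask[0])
    stack = [seed]
    while stack:
        x, y = stack.pop()
        if 0 <= x < w and 0 <= y < h and mask[y][x]:
            mask[y][x] = 0
            stack.extend((x + dx, y + dy) for dx, dy in MOORE)

def extract_blob_boxes(mask):
    """Return one bounding box per blob in a single region mask."""
    boxes = []
    for y, row in enumerate(mask):
        for x, value in enumerate(row):
            if value:                         # first pixel of an unvisited blob
                contour = trace_contour(mask, (x, y))
                boxes.append(bounding_box(contour))
                clear_blob(mask, (x, y))      # destructive; copy the mask first
    return boxes                              # if the original image is needed
```

Applying extract_blob_boxes to each of the five region masks yields the list of rectangles that serves as the internal representation of the blobs.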
Figure 14 shows the blobs extracted by the above method for our familiar test scene. Additionally, Figures 15 and 16 show results when Doc is in more realistic test scenes.