The skeleton of a region u(m, n) can be obtained by the following two-step algorithm:

1. Distance transform

$$u_k(m, n) = u_0(m, n) + \min\bigl\{u_{k-1}(i, j) : \Delta(m, n; i, j) \le 1\bigr\}, \qquad u_0(m, n) = u(m, n), \quad k = 1, 2, \ldots \tag{9.84}$$

where $\Delta(m, n; i, j)$ is the distance between $(m, n)$ and $(i, j)$. The transform is done when k equals the maximum thickness of the region.

2. The skeleton is the set of points:

$$\bigl\{(m, n) : u_k(m, n) \ge u_k(i, j), \ \Delta(m, n; i, j) \le 1\bigr\} \tag{9.85}$$

Figure 9.31 shows an example of the preceding algorithm when $\Delta(m, n; i, j)$ represents the Euclidean distance. It is possible to recover the original image given its skeleton and the distance of each skeleton point to its contour. It is simply obtained by taking the union of the circular neighborhoods centered on the skeleton points and having radii equal to the associated contour distances.
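The two-step procedure maps directly onto array operations. The following is a minimal sketch, assuming a binary numpy array u (1 inside the region, 0 outside) and the city-block distance, so that the neighborhood with Δ(m, n; i, j) ≤ 1 is the pixel itself plus its four nearest neighbors; the function and variable names are illustrative, not from the text.

```python
import numpy as np

def skeleton_distance_transform(u, max_iter=1000):
    """Sketch of the skeleton algorithm of (9.84)-(9.85) for a binary image u."""
    u = u.astype(int)
    uk = u.copy()                          # u_0(m, n) = u(m, n)
    for _ in range(max_iter):
        padded = np.pad(uk, 1, constant_values=0)
        # min of u_{k-1} over the city-block neighborhood (the point and its 4 neighbors)
        neigh_min = np.minimum.reduce([
            padded[1:-1, 1:-1],            # the point itself (distance 0)
            padded[:-2, 1:-1],             # up
            padded[2:, 1:-1],              # down
            padded[1:-1, :-2],             # left
            padded[1:-1, 2:],              # right
        ])
        uk_next = u + neigh_min            # (9.84)
        if np.array_equal(uk_next, uk):    # done when k reaches the maximum thickness
            break
        uk = uk_next

    # (9.85): object points whose u_k value is a local maximum over the same neighborhood
    padded = np.pad(uk, 1, constant_values=0)
    neigh_max = np.maximum.reduce([
        padded[:-2, 1:-1], padded[2:, 1:-1],
        padded[1:-1, :-2], padded[1:-1, 2:],
    ])
    skeleton = (uk >= neigh_max) & (u > 0)
    return skeleton, uk                    # uk holds the contour distances used for recovery
```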

Thus the skeleton is a regenerative representation of an object.

Thinning algorithms. Thinning algorithms transform an object to a set of simple digital arcs, which lie roughly along their medial axes.

Figure 9.31 Skeleton examples.

Figure 9.32 A thinning algorithm. (a) Labeling of point P1 and its neighbors. (b) Examples where P1 is not deletable (P1 = 1): (i) deleting P1 will tend to split the region; (ii) deleting P1 will shorten arc ends; (iii) 2 ≤ NZ(P1) ≤ 6, but P1 is not deletable. (c) Example of thinning: (i) original; (ii) thinned.

The structure obtained is not influenced by small contour inflections that may be present on the initial contour. The basic approach [42] is to delete from the object X simple border points that have more than one neighbor in X and whose deletion does not locally disconnect X. Here a connected region is defined as one in which any two points in the region can be connected by a curve that lies entirely within the region. In this way, endpoints of thin arcs are not deleted. A simple algorithm that yields connected arcs while being insensitive to contour noise is as follows [43].

Referring to Figure 9.32a, let ZO(P1) be the number of zero-to-nonzero transitions in the ordered set P2, P3, P4, ..., P9, P2. Let NZ(P1) be the number of nonzero neighbors of P1. Then P1 is deleted if (Fig. 9.32b)

$$\begin{aligned}
& 2 \le NZ(P_1) \le 6 \\
\text{and}\quad & ZO(P_1) = 1 \\
\text{and}\quad & P_2 \cdot P_4 \cdot P_8 = 0 \ \text{ or } \ ZO(P_2) \ne 1 \\
\text{and}\quad & P_2 \cdot P_4 \cdot P_6 = 0 \ \text{ or } \ ZO(P_4) \ne 1
\end{aligned} \tag{9.86}$$

The procedure is repeated until no further changes occur in the image. Figure 9.32c gives an example of applying this algorithm. Note that at each location such as P1 we end up examining pixels from a 5 × 5 neighborhood.
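A direct implementation of one deletion pass of conditions (9.86) might look like the sketch below. It assumes a binary numpy array with the neighbors P2, ..., P9 taken clockwise starting from the pixel above P1 (the labeling of Fig. 9.32a is not reproduced here, so this ordering is an assumption); the tests ZO(P2) and ZO(P4) are what force the 5 × 5 support noted above.

```python
import numpy as np

def _neighbors(img, r, c):
    # P2..P9, clockwise starting from the pixel above P1 (assumed labeling)
    return [img[r-1, c], img[r-1, c+1], img[r, c+1], img[r+1, c+1],
            img[r+1, c], img[r+1, c-1], img[r, c-1], img[r-1, c-1]]

def _ZO(nb):
    # number of 0 -> 1 transitions in the ordered set P2, P3, ..., P9, P2
    seq = nb + [nb[0]]
    return sum(1 for a, b in zip(seq, seq[1:]) if a == 0 and b == 1)

def thin_once(img):
    """One pass of the deletion test (9.86) on a binary image (1 = object)."""
    out = img.copy()
    # margins of 2 keep the 5 x 5 support inside the image
    for r in range(2, img.shape[0] - 2):
        for c in range(2, img.shape[1] - 2):
            if img[r, c] == 0:
                continue
            nb = _neighbors(img, r, c)
            P2, P4, P6, P8 = nb[0], nb[2], nb[4], nb[6]
            NZ = sum(nb)
            if (2 <= NZ <= 6 and _ZO(nb) == 1
                    and (P2 * P4 * P8 == 0 or _ZO(_neighbors(img, r - 1, c)) != 1)
                    and (P2 * P4 * P6 == 0 or _ZO(_neighbors(img, r, c + 1)) != 1)):
                out[r, c] = 0
    return out

# Repeat until no further changes occur:
# while True:
#     nxt = thin_once(img)
#     if np.array_equal(nxt, img):
#         break
#     img = nxt
```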

Morphological Processing

The term morphology originally comes from the study of the forms of plants and animals. In our context it means the study of the topology or structure of objects from their images. Morphological processing refers to certain operations in which an object is hit with a structuring element and thereby reduced to a more revealing shape.

Basic operations. Most morphological operations can be defined in terms of two basic operations, erosion and dilation [44]. Suppose the object X and the structuring element B are represented as sets in two-dimensional Euclidean space.

Let $B_x$ denote the translation of B so that its origin is located at x. Then the erosion of X by B is defined as the set of all points x such that $B_x$ is included in X, that is,

$$\text{Erosion:}\qquad X \ominus B \triangleq \{x : B_x \subset X\} \tag{9.87}$$

Similarly, the dilation of X by B is defined as the set of all points x such that $B_x$ hits X, that is, they have a nonempty intersection:

$$\text{Dilation:}\qquad X \oplus B \triangleq \{x : B_x \cap X \ne \emptyset\} \tag{9.88}$$

Figure 9.33 shows examples of erosion and dilation. Clearly, erosion is a shrinking operation, whereas dilation is an expansion operation. It is also obvious that erosion of an object is accompanied by enlargement, or dilation, of the background.
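For binary images on a grid, (9.87) and (9.88) reduce to logical AND/OR tests over the translated structuring element. The following is a minimal numpy sketch, with the structuring element given as a list of offsets relative to its origin; the names are illustrative, and borders are treated cyclically for brevity (a real implementation would pad instead).

```python
import numpy as np

def erode(X, offsets):
    """X erosion B, (9.87): points x such that every element of B placed at x lies in X."""
    out = np.ones_like(X, dtype=bool)
    for di, dj in offsets:
        # X[x + b] for every x, via a shift by -b (np.roll wraps at the borders)
        out &= np.roll(np.roll(X.astype(bool), -di, axis=0), -dj, axis=1)
    return out

def dilate(X, offsets):
    """X dilation B, (9.88): points x such that B placed at x intersects X."""
    out = np.zeros_like(X, dtype=bool)
    for di, dj in offsets:
        out |= np.roll(np.roll(X.astype(bool), -di, axis=0), -dj, axis=1)
    return out

# 3 x 3 square structuring element G, origin at its center
G = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)]
```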

Properties. The erosion and dilation operations have the following properties:

1. They are translation invariant, that is, a translation of the object causes the same shift in the result.

2. They are not inverses of each other.

3. Distributivity:

$$X \oplus (B \cup B') = (X \oplus B) \cup (X \oplus B') \tag{9.89}$$

$$X \ominus (B \cup B') = (X \ominus B) \cap (X \ominus B')$$

4. Local knowledge:

$$(X \cap Z) \ominus B = (X \ominus B) \cap (Z \ominus B) \tag{9.90}$$

5. Iteration:

$$(X \ominus B) \ominus B' = X \ominus (B \oplus B'), \qquad \forall B \tag{9.91}$$

6. Increasing:

$$\text{If } X \subset X', \text{ then } X \ominus B \subset X' \ominus B \text{ and } X \oplus B \subset X' \oplus B, \qquad \forall B \tag{9.92a}$$

$$\text{If } B \subset B', \text{ then } X \ominus B \supset X \ominus B', \qquad \forall X \tag{9.92b}$$


Figure 9.33 Examples of some morphological operations: erosion, dilation, hit-miss (searching for corners), boundary extraction, convex hull, opening (breaks small islands and narrow isthmuses), closing (blocks up channels), skeletonizing (the skeleton may be disconnected on a digital grid even if the object is connected), pruning (removes noisy branches), and thickening.

7. Duality: Let $X^c$ denote the complement of X. Then

$$X^c \oplus B = (X \ominus B)^c \tag{9.93}$$

This means erosion and dilation are duals with respect to the complementation operation.

Morphological Transforms

The medial axis transform and thinning operations are just two examples of morphological transforms. Table 9.10 lists several useful morphological transforms that are derived from the basic erosion and dilation operations. The hit-miss transform tests whether or not the structure $B_{ob}$ belongs to X and $B_{bk}$ belongs to $X^c$. The opening of X with respect to B, denoted by $X_B$, defines the domain swept by all translates of B that are included in X. Closing is the dual of opening. Boundary gives the boundary pixels of the object, but they are not ordered along its contour. The table also shows how the morphological operations can be used to obtain the previously defined skeletonizing and thinning transformations. Thickening is the dual of thinning. The pruning operation smooths skeletons or thinned objects by removing parasitic branches.
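Using the erode and dilate helpers sketched earlier, the first few rows of Table 9.10 can be written down directly. This is a rough illustration only; the function names are chosen for readability and the border handling of the helpers is inherited as-is.

```python
# Assumes the erode/dilate helpers (numpy-based) from the earlier sketch.

def open_(X, B):
    # X_B = (X erosion B) dilation B : erosion followed by dilation
    return dilate(erode(X, B), B)

def close_(X, B):
    # (X dilation B) erosion B : dilation followed by erosion
    return erode(dilate(X, B), B)

def boundary(X, G):
    # boundary(X) = X / (X erosion G) : object pixels lost under erosion by G
    return X.astype(bool) & ~erode(X, G)

def hit_miss(X, B_ob, B_bk):
    # x is in the output if B_ob fits inside X and B_bk fits inside the complement of X
    return erode(X, B_ob) & erode(~X.astype(bool), B_bk)
```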

Figure 9.33 shows examples of morphological transforms. Figure 9.34 shows an application of morphological processing to printed circuit board inspection. The observed image is binarized by thresholding and is reduced to a single-pixel-wide contour image by the thinning transform. The result is pruned to obtain clean line segments, which can be used for inspection of faults such as cuts (open circuits), short circuits, and the like.

We now give the development of skeleton and thinning algorithms in the context of the basic morphological operations.

Skeletons. Let $rD_x$ denote a disc of radius r at point x. Let $S_r(X)$ denote the set of centers of the maximal discs $rD_x$ that are contained in X and intersect the boundary of X at two or more locations. Then the skeleton S(X) is the set of all such centers:

$$S(X) = \bigcup_{r>0} S_r(X) = \bigcup_{r>0} \bigl[(X \ominus rD)\, /\, (X \ominus rD)_{dD}\bigr] \tag{9.94}$$

where $\cup$ and / represent the set union and set difference operations, respectively, and the subscript dD denotes opening with respect to an infinitesimal disc. To recover the original object from its skeleton, we take the union of the circular neighborhoods centered on the skeleton points and having radii equal to the associated contour distances.

We can find the skeleton on a digitized grid by replacing the disc rD in (9.94) by the 3 × 3 square grid G, which gives the algorithm summarized in Table 9.10. Here the operation $(X \ominus nG)$ denotes the nth iteration $(X \ominus G) \ominus G \ominus \cdots \ominus G$, and $(X \ominus nG)_G$ is the opening of $(X \ominus nG)$ with respect to G.
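On the digitized grid, this skeleton can be sketched with the same helpers. Here G is the 3 × 3 square element defined earlier, and the loop stops when the eroded set becomes empty, which is the n_max condition of Table 9.10; this is an illustrative sketch, not the book's implementation.

```python
import numpy as np
# Assumes the erode/dilate helpers and the offset list G from the earlier sketches.

def morphological_skeleton(X, G):
    """S(X) = union over n of (X erosion nG) / (X erosion nG)_G, cf. (9.94) and Table 9.10."""
    S = np.zeros_like(X, dtype=bool)
    eroded = X.astype(bool)                   # (X erosion nG), starting with n = 0
    while eroded.any():                       # stop at n_max, when X erodes to the empty set
        opened = dilate(erode(eroded, G), G)  # opening of (X erosion nG) with respect to G
        S |= eroded & ~opened                 # set difference gives S_n(X)
        eroded = erode(eroded, G)             # next erosion, (X erosion (n+1)G)
    return S
```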


TABLE 9.10 Some Useful Morphological Transforms (operation: definition; properties and usage)

Hit-Miss: $X \circledast B = (X \ominus B_{ob}) \cap (X^c \ominus B_{bk})$. Searching for a match or a specific configuration. $B_{ob}$: set formed from the pixels of B that should belong to the object; $B_{bk}$: set formed from the pixels of B that should belong to the background.

Open: $X_B = (X \ominus B) \oplus B$. Smooths contours, suppresses small islands and sharp caps of X. Ideal for object size-distribution study.

Close: $(X \oplus B) \ominus B$. Blocks up narrow channels and thin lakes. Ideal for the study of inter-object distance.

Boundary: $\partial X = X / (X \ominus G)$. Gives the set of boundary points.

Convex Hull: $X_0^i = X$; $X_{k+1}^i = (X_k^i \circledast B^i) \cup X$; $CH(X) = \bigcup_{i=1}^{4} X_\infty^i$. $B^1, B^2, \ldots$ are rotated versions of the structuring element B; C is an appropriate structuring-element choice for B.

Skeleton: $S(X) = \bigcup_{n=0}^{n_{max}} S_n(X)$, with $S_n(X) = (X \ominus nG) / (X \ominus nG)_G$. $n_{max}$ is the maximum size after which X erodes down to an empty set. The skeleton is a regenerative representation of the object.

Thin: $X \otimes B = X / (X \circledast B)$; $X \otimes \{B\} = ((\ldots((X \otimes B_1) \otimes B_2)\ldots) \otimes B_n)$. To symmetrically thin X, a sequence of structuring elements $\{B\} = \{B_i, 1 \le i \le n\}$ is used in cascade, where $B_i$ is a rotated version of $B_{i-1}$. A widely used element is L.

Thick: $X \odot B = X \cup (X \circledast B)$. Dual of thinning.

Prune: $X_1 = X \otimes \{B\}$; $X_2 = \bigcup_{i=1}^{8} (X_1 \circledast E_i)$: end points; $X_{pn} = X_1 \cup [(X_2 \oplus \{G\}) \cap X]$. E is a suitable structuring element; $X_{pn}$ is the pruned object with parasitic branches suppressed.

The symbols "/" and "$\cup$" represent the set difference and set union operations, respectively. In a structuring element, 1, 0, and d signify the object, background, and "don't care" states, respectively; G denotes the 3 × 3 square element of all 1s.

Figure 9.34 Morphological processing for printed circuit board inspection. (a) Original; (b) preprocessed (thresholded); (c) thinned; (d) pruned.

Thinning. In the context of morphological operations, thinning can be defined as

$$X \otimes B = X / (X \circledast B) \tag{9.96}$$

where B is the structuring element chosen for the thinning and $\circledast$ denotes the hit-miss operation defined in Table 9.10.

To thin X symmetrically, a sequence of structuring elements, $\{B\} \triangleq \{B_i, 1 \le i \le n\}$, is used in cascade, where $B_i$ is a rotated version of $B_{i-1}$:

$$X \otimes \{B\} = ((\ldots((X \otimes B_1) \otimes B_2)\ldots) \otimes B_n) \tag{9.97}$$

A suitable structuring element for the thinning operation is the L-structuring element shown in Table 9.10.

The thinning process is usually followed by a pruning operation to trim the resulting arcs (Table 9.10). In general, the original objects are likely to have noisy boundaries, which result in unwanted parasitic branches in the thinned version. It is the job of the pruning step to clean up these branches without disconnecting the arcs.
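A sketch of (9.96) and (9.97): thinning removes the hit-miss response from X, cycling through rotated versions of a structuring element; repeating the cycle until nothing changes is the usual practice. The L-shaped element below is only illustrative (it is not copied from Table 9.10), and only 90-degree rotations are used here.

```python
import numpy as np

def hit_miss_mask(X, B):
    """Hit-miss with a 3 x 3 mask B: 1 = object, 0 = background, -1 = don't care."""
    X = X.astype(bool)
    out = np.zeros_like(X)
    for r in range(1, X.shape[0] - 1):
        for c in range(1, X.shape[1] - 1):
            win = X[r-1:r+2, c-1:c+2]
            out[r, c] = np.all((B == -1) | (win == (B == 1)))
    return out

def thin(X, B):
    # X thin B = X / (X hit-miss B), eq. (9.96)
    return X.astype(bool) & ~hit_miss_mask(X, B)

def thin_cascade(X, B0):
    """Eq. (9.97): cycle through rotations of B0 until X stops changing."""
    Bs = [np.rot90(B0, k) for k in range(4)]
    X = X.astype(bool)
    while True:
        prev = X.copy()
        for B in Bs:
            X = thin(X, B)
        if np.array_equal(X, prev):
            return X

# An illustrative L-shaped element (assumed, not the book's exact element)
L = np.array([[ 0,  0,  0],
              [-1,  1, -1],
              [ 1,  1,  1]])
```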

Syntactic Representation [45]

The foregoing techniques reduce an object to a set of structural elements, or primitives. By adding a syntax, such as connectivity rules, it is possible to obtain a syntactic representation, which is simply a string of symbols, each representing a primitive (Figure 9.35). The syntax allows a unique representation and interpretation of the string. The design of a syntax that transforms the symbolic and syntactic representations back and forth is a difficult task. It requires the specification of a complete and unambiguous set of rules, which have to be derived from an understanding of the scene under study.

Figure 9.35 Syntactic representation: an object structure is expressed as a string of primitive structural symbols.

9.10 SHAPE FEATURES

The shape of an object refers to its profile and physical structure. These characteristics can be represented by the previously discussed boundary, region, moment, and structural representations. These representations can be used for matching shapes, recognizing objects, or making measurements of shapes. Figure 9.36 lists several useful features of shape.

Figure 9.36 Shape representation: regenerative features (boundaries, regions, moments, structural and syntactic) and measurement features (geometry: perimeter, area, center of mass, orientation, max-min radii and eccentricity, bounding rectangle, best-fit ellipse, corners, roundness, bending energy, holes, Euler number, symmetry; moments).

Geometry Features

In many image analysis problems the ultimate aim is to measure certain geometric attributes of the object, such as the following (a rough computational sketch is given after the list):

1. Perimeter

$$T = \oint \sqrt{\dot{x}^2(t) + \dot{y}^2(t)}\, dt \tag{9.98}$$

where t is the boundary parameter but not necessarily its arc length.

2. Area

$$A = \iint_{\mathcal{R}} dx\, dy = \oint_{\partial\mathcal{R}} x(t)\,\frac{dy}{dt}\, dt = -\oint_{\partial\mathcal{R}} y(t)\,\frac{dx}{dt}\, dt \tag{9.99}$$

where $\mathcal{R}$ and $\partial\mathcal{R}$ denote the object region and its boundary, respectively.

3. Radii  $R_{min}$ and $R_{max}$ are the minimum and maximum distances, respectively, from the center of mass to the boundary (Fig. 9.37a). Sometimes the ratio $R_{max}/R_{min}$ is used as a measure of eccentricity or elongation of the object.

Figure 9.37 Geometry features. (a) Maximum and minimum radii. (b) Curvature functions for corner detection. (c) Types of symmetry: the square A has 4-fold symmetry, the circle B is rotationally symmetric, the small circles C1, ..., C4 have 4-fold symmetry, and the triangles have 2-fold symmetry.

4. Number of holes  $n_h$

5. Euler number

$$\mathcal{E} \triangleq \text{(number of connected regions)} - n_h \tag{9.100}$$

6. Corners  These are locations on the boundary where the curvature $\kappa(t)$ becomes unbounded. When t represents distance along the boundary, then from (9.57) and (9.58) we can obtain

$$|\kappa(t)|^2 = \left(\frac{d^2 x}{dt^2}\right)^2 + \left(\frac{d^2 y}{dt^2}\right)^2 \tag{9.101}$$

In practice, a corner is declared whenever $|\kappa(t)|$ assumes a large value (Fig. 9.37b).

7. Bending energy  This is another attribute associated with the curvature,

$$E = \frac{1}{T} \oint |\kappa(t)|^2\, dt \tag{9.102}$$

In terms of {a(k)}, the FDs of u(t), this is given by

$$E = \sum_{k} |a(k)|^2 \left(\frac{2\pi k}{T}\right)^4 \tag{9.103}$$

8. Roundness, or compactness

$$\gamma = \frac{(\text{perimeter})^2}{4\pi(\text{area})} \tag{9.104}$$

For a disc, $\gamma$ is minimum and equals 1.

9. Symmetry There are two common types of symmetry of shapes, rotational and mirror. Other forms of symmetry are twofold, fourfold, eightfold, and so on (Fig. 9.37c). Distances from the center of mass to different points on the boundary can be used to analyze symmetry of shapes. Corner locations are also useful in determining object symmetry.
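Several of the features above can be computed directly from a binary mask. The sketch below (numpy only, names illustrative) estimates area, perimeter, center of mass, minimum and maximum radii, elongation, and the roundness γ of (9.104), using pixel counting rather than the continuous integrals (9.98) and (9.99), so the values are approximations.

```python
import numpy as np

def geometry_features(mask):
    """Approximate geometry features of a binary region (True = object)."""
    mask = mask.astype(bool)
    ys, xs = np.nonzero(mask)
    area = mask.sum()

    # Boundary pixels: object pixels with at least one 4-neighbor outside the region
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    boundary = mask & ~interior
    perimeter = boundary.sum()          # crude perimeter estimate (boundary pixel count)

    # Center of mass, cf. (9.105)
    m_bar, n_bar = ys.mean(), xs.mean()

    # Minimum and maximum radii from the center of mass to the boundary
    by, bx = np.nonzero(boundary)
    radii = np.hypot(by - m_bar, bx - n_bar)
    r_min, r_max = radii.min(), radii.max()

    roundness = perimeter ** 2 / (4 * np.pi * area)        # gamma of (9.104)
    elongation = r_max / r_min if r_min > 0 else np.inf    # R_max / R_min

    return dict(area=int(area), perimeter=int(perimeter), center=(m_bar, n_bar),
                r_min=float(r_min), r_max=float(r_max),
                roundness=float(roundness), elongation=float(elongation))
```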

Moment-Based Features

Many shape features can be conveniently represented in terms of moments. For a shape represented by a region $\mathcal{R}$ containing N pixels, we have the following (a short computational sketch follows the list):

1. Center of mass

$$\bar{m} = \frac{1}{N}\sum_{(m,n)\in\mathcal{R}} m, \qquad \bar{n} = \frac{1}{N}\sum_{(m,n)\in\mathcal{R}} n \tag{9.105}$$

The (p, q)-order central moments become

$$\mu_{p,q} = \sum_{(m,n)\in\mathcal{R}} (m - \bar{m})^p (n - \bar{n})^q \tag{9.106}$$

2. Orientation  Orientation is defined as the angle of the axis of least moment of inertia. It is obtained by minimizing with respect to θ (Fig. 9.38a) the sum

Figure 9.38 Moment-based features. (a) Orientation. (b) Bounding rectangle. (c) Best-fit ellipse.

$$I(\theta) = \sum_{(m,n)\in\mathcal{R}} D^2(m, n) = \sum_{(m,n)\in\mathcal{R}} \bigl[(n - \bar{n})\cos\theta - (m - \bar{m})\sin\theta\bigr]^2 \tag{9.107}$$

The result is

$$\theta = \frac{1}{2}\tan^{-1}\left[\frac{2\mu_{1,1}}{\mu_{2,0} - \mu_{0,2}}\right] \tag{9.108}$$

3. Bounding rectangle  The bounding rectangle is the smallest rectangle enclosing the object that is also aligned with its orientation (Fig. 9.38b). Once θ is known, we apply the transformation

$$\alpha = x\cos\theta + y\sin\theta, \qquad \beta = -x\sin\theta + y\cos\theta \tag{9.109}$$

to the boundary points and search for $\alpha_{min}$, $\alpha_{max}$, $\beta_{min}$, and $\beta_{max}$. These give the locations of the extreme points (labeled $A_1$ through $A_4$ in Fig. 9.38b). From these, the length and width of the bounding rectangle are

$$l_b = \alpha_{max} - \alpha_{min}, \qquad w_b = \beta_{max} - \beta_{min}$$

The ratio $l_b\, w_b/\text{area}$ is also a useful shape feature.

4. Best-fit ellipse  The best-fit ellipse is the ellipse whose second moments equal those of the object. Let a and b denote the lengths of the semimajor and semiminor axes, respectively, of the best-fit ellipse (Fig. 9.38c). The least and the greatest moments of inertia for an ellipse are

$$I_{min} = \frac{\pi}{4}\, a\, b^3, \qquad I_{max} = \frac{\pi}{4}\, a^3\, b \tag{9.110}$$

For orientation θ, the above moments can be calculated from the region as

$$I'_{min} = \sum_{(m,n)\in\mathcal{R}} \bigl[(n - \bar{n})\cos\theta - (m - \bar{m})\sin\theta\bigr]^2, \qquad I'_{max} = \sum_{(m,n)\in\mathcal{R}} \bigl[(n - \bar{n})\sin\theta + (m - \bar{m})\cos\theta\bigr]^2 \tag{9.111}$$

For the best-fit ellipse we want $I_{min} = I'_{min}$ and $I_{max} = I'_{max}$, which gives

$$a = \left(\frac{4}{\pi}\right)^{1/4}\left[\frac{(I'_{max})^3}{I'_{min}}\right]^{1/8}, \qquad b = \left(\frac{4}{\pi}\right)^{1/4}\left[\frac{(I'_{min})^3}{I'_{max}}\right]^{1/8} \tag{9.112}$$

5. Eccentricity

$$\epsilon = \frac{(\mu_{2,0} - \mu_{0,2})^2 + 4\mu_{1,1}^2}{\text{area}}$$

Other representations of eccentricity are $R_{max}/R_{min}$, $a/b$, and $I'_{max}/I'_{min}$.
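The moment-based quantities above follow the same pattern. The compact sketch below (assuming a binary mask, angles in radians, names illustrative) computes the central moments of (9.106), the orientation of (9.108), the semi-axes of the best-fit ellipse from (9.111) and (9.112), and the eccentricity measure.

```python
import numpy as np

def moment_features(mask):
    mask = mask.astype(bool)
    m, n = np.nonzero(mask)                      # row (m) and column (n) coordinates
    m_bar, n_bar = m.mean(), n.mean()            # center of mass, (9.105)

    def mu(p, q):                                # central moments, (9.106)
        return np.sum((m - m_bar) ** p * (n - n_bar) ** q)

    theta = 0.5 * np.arctan2(2 * mu(1, 1), mu(2, 0) - mu(0, 2))   # orientation, (9.108)

    # Moments of inertia about and across the orientation axis, (9.111)
    I_min = np.sum(((n - n_bar) * np.cos(theta) - (m - m_bar) * np.sin(theta)) ** 2)
    I_max = np.sum(((n - n_bar) * np.sin(theta) + (m - m_bar) * np.cos(theta)) ** 2)
    if I_min > I_max:                            # keep the labels consistent
        I_min, I_max = I_max, I_min

    # Semimajor / semiminor axes of the best-fit ellipse, (9.112)
    a = (4 / np.pi) ** 0.25 * (I_max ** 3 / I_min) ** 0.125
    b = (4 / np.pi) ** 0.25 * (I_min ** 3 / I_max) ** 0.125

    ecc = ((mu(2, 0) - mu(0, 2)) ** 2 + 4 * mu(1, 1) ** 2) / mask.sum()
    return dict(center=(m_bar, n_bar), theta=float(theta),
                a=float(a), b=float(b), eccentricity=float(ecc))
```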

The foregoing shape features are very useful in the design of vision systems for object recognition.

9.11 TEXTURE

Texture is observed in the structural patterns of surfaces of objects such as wood, grain, sand, grass, and cloth. Figure 9.39 shows some examples of textures [46]. The term texture generally refers to repetition of basic texture elements called texels. A texel contains several pixels, whose placement could be periodic, quasi-periodic, or random. Natural textures are generally random, whereas artificial textures are often deterministic or periodic. Texture may be coarse, fine, smooth, granulated, rippled, regular, irregular, or linear. In image analysis, texture is broadly classified into two main categories, statistical and structural [47].

Statistical Approaches

Textures that are random in nature are well suited for statistical characterization, for example, as realizations of random fields. Figure 9.40 lists several statistical measures of texture. We discuss these briefly next.


Figure 9.39 Brodatz textures.

Figure 9.40 Classification of texture measures: statistical (ACF, transforms, edge-ness via edge density, extreme density, and run lengths, concurrence matrix, random field models, mosaic models) and structural (primitives defined by gray levels, shape, or homogeneity, with placement rules such as period, adjacency, and closest distances), with periodic or random placement; texture transforms form a further category.

The autocorrelation function (ACF). The spatial size of the tonal primitives (i.e., texels) in a texture can be represented by the width of the spatial ACF $r(k, l) = m_2(k, l)/m_2(0, 0)$ [see (9.7)]. The coarseness of the texture is expected to be proportional to the width of the ACF, which can be represented by distances $x_0, y_0$ such that $r(x_0, 0) = r(0, y_0) = \frac{1}{2}$. Other measures of spread of the ACF are obtained via the moment-generating function

$$M(p, q) \triangleq \sum_m \sum_n (m - \mu_1)^p (n - \mu_2)^q\, r(m, n), \qquad \mu_1 \triangleq \sum_m \sum_n m\, r(m, n), \quad \mu_2 \triangleq \sum_m \sum_n n\, r(m, n) \tag{9.113}$$

Features of special interest are the profile spreads M(2, 0) and M(0, 2), the cross-relation M(1, 1), and the second-degree spread M(2, 2). The calibration of the ACF spread on a fine-coarse texture scale depends on the resolution of the image. This is because a seemingly flat region (no texture) at a given resolution could appear as fine texture at higher resolution and coarse texture at lower resolution. The ACF by itself is not sufficient to distinguish among several texture fields because many different image ensembles can have the same ACF.
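A rough sketch of these features: estimate r(k, l) from a zero-mean texture patch and form the spread moments of (9.113). The circular FFT-based ACF estimator, the lag window, and the centering details below are assumptions made for illustration, not prescriptions from the text; the patch is assumed larger than the lag window.

```python
import numpy as np

def acf_spread_features(patch, max_lag=8):
    """Estimate r(k, l) on a small lag window and the spread moments M(p, q), cf. (9.113)."""
    x = patch.astype(float) - patch.mean()
    H, W = x.shape
    # Circular autocorrelation estimate via the FFT (a convenient, assumed estimator)
    F = np.fft.fft2(x)
    acf = np.real(np.fft.ifft2(F * np.conj(F))) / (H * W)
    acf = np.fft.fftshift(acf)
    c0, c1 = H // 2, W // 2
    win = acf[c0 - max_lag:c0 + max_lag + 1, c1 - max_lag:c1 + max_lag + 1]
    r = win / win[max_lag, max_lag]              # r(k, l) = m2(k, l) / m2(0, 0)

    k = np.arange(-max_lag, max_lag + 1)
    K, L = np.meshgrid(k, k, indexing="ij")
    mu1, mu2 = (K * r).sum(), (L * r).sum()      # close to zero for a symmetric ACF

    def M(p, q):                                 # spread moments of (9.113)
        return float(((K - mu1) ** p * (L - mu2) ** q * r).sum())

    return {(2, 0): M(2, 0), (0, 2): M(0, 2), (1, 1): M(1, 1), (2, 2): M(2, 2)}
```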

Image transforms. Texture features such as coarseness, fineness, or orientation can be estimated by generalized linear filtering techniques utilizing image transforms (Fig. 9.4). A two-dimensional transform $v(k, l)$ of the input image is passed through several band-pass filters or masks $g_i(k, l)$, $i = 1, 2, 3, \ldots$, as

$$z_i(k, l) = v(k, l)\, g_i(k, l) \tag{9.114}$$

Then the energy in $z_i(k, l)$ represents a transform feature. Different types of masks appropriate for texture analysis are shown in Fig. 9.5a. With circular slits we measure energy in different spatial frequency or sequency bands. Angular slits are useful in detecting orientation features. Combinations of angular and circular slits are useful for periodic or quasi-periodic textures. Image transforms have been applied for discrimination of terrain types, for example, deserts, farms, mountains, riverbeds, urban areas, and clouds [48]. Fourier spectral analysis has been found useful in the detection and classification of black lung disease by comparing the textural patterns of diseased and normal areas [49].

Edge density. The coarseness of a random texture can also be represented by the density of edge pixels. Given an edge map [see (9.17)], the edge density is measured by the average number of edge pixels per unit area.

Histogram features. The two-dimensional histogram discussed in Section 9.2 has proven to be quite useful for texture analysis. For two pixels $u_1$ and $u_2$ at relative distance r and orientation θ, the distribution function [see (9.9)] can be written explicitly as

$$p_u(x_1, x_2) = f(r, \theta; x_1, x_2) \tag{9.115}$$

Some useful texture features based on this function are

$$\text{Inertia:}\qquad I(r, \theta) \triangleq \sum_{x_1, x_2} |x_1 - x_2|^2\, f(r, \theta; x_1, x_2) \tag{9.116}$$

$$\text{Mean distribution:}^{\dagger}\qquad \mu(r; x_1, x_2) = \frac{1}{N_\theta}\sum_{\theta} f(r, \theta; x_1, x_2) \tag{9.117}$$

$$\text{Variance distribution:}\qquad \sigma^2(r; x_1, x_2) = \frac{1}{N_\theta}\sum_{\theta} \bigl[f(r, \theta; x_1, x_2) - \mu(r; x_1, x_2)\bigr]^2 \tag{9.118}$$

$$\text{Spread distribution:}\qquad \eta(r; x_1, x_2) = \max_{\theta}\{f(r, \theta; x_1, x_2)\} - \min_{\theta}\{f(r, \theta; x_1, x_2)\} \tag{9.119}$$

† The symbol $N_\theta$ represents the total number of orientations.

The inertia is useful in representing the spread of the function $f(r, \theta; x_1, x_2)$ for a given set of (r, θ) values; $I(r, \theta)$ is proportional to the coarseness of the texture at different distances and orientations. The mean distribution $\mu(r; x_1, x_2)$ is useful when the angular variations in textural properties are unimportant. The variance $\sigma^2(r; x_1, x_2)$ indicates the angular fluctuations of textural properties. The function $\eta(r; x_1, x_2)$ gives a measure of orientation-independent spread.
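These features can be estimated from empirical co-occurrence counts. The sketch below builds f(r, θ; x1, x2) for a fixed distance over the four principal orientations and evaluates the inertia of (9.116) and the angular mean, variance, and spread of (9.117) through (9.119); the quantization to a small number of gray levels and the choice of unit distance are assumptions for illustration.

```python
import numpy as np

def cooccurrence(img, d, levels):
    """Empirical f(r, theta; x1, x2) for one displacement d = (dr, dc), normalized to sum 1."""
    H, W = img.shape
    dr, dc = d
    f = np.zeros((levels, levels))
    for r in range(H):
        rr = r + dr
        if not (0 <= rr < H):
            continue
        for c in range(W):
            cc = c + dc
            if 0 <= cc < W:
                f[img[r, c], img[rr, cc]] += 1
    return f / f.sum()

def second_order_features(img, levels=16):
    """Inertia per orientation, plus the angular mean, variance, and spread distributions."""
    img = (img.astype(float) / img.max() * (levels - 1)).astype(int)  # assumes a nonconstant image
    displacements = [(0, 1), (-1, 1), (-1, 0), (-1, -1)]   # theta = 0, 45, 90, 135 degrees
    x1, x2 = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")

    fs = [cooccurrence(img, d, levels) for d in displacements]
    inertia = [float(((x1 - x2) ** 2 * f).sum()) for f in fs]   # I(r, theta), (9.116)

    stack = np.stack(fs)                              # shape (N_theta, levels, levels)
    mu = stack.mean(axis=0)                           # mean distribution, (9.117)
    var = stack.var(axis=0)                           # variance distribution, (9.118)
    spread = stack.max(axis=0) - stack.min(axis=0)    # spread distribution, (9.119)
    return inertia, mu, var, spread
```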

Random texture models. It has been suggested that visual perception of random texture fields may be unique only up to second-order densities [50]. It was observed that two textured fields with the same second-order probability distributions appeared to be indistinguishable. Although not always true, this conjecture has proven useful for the synthesis and analysis of many types of textures. Thus two different textures can often be discriminated by comparing their second-order histograms.

A simple model for texture analysis is shown in Fig. 9.41a [51]. The texture field is first decorrelated by a filter a(m, n), which can be designed from knowledge of the ACF. Thus, if r(m, n) is the ACF, then

$$u(m, n) * a(m, n) \triangleq \epsilon(m, n) \tag{9.120}$$

is an uncorrelated random field. From Chapter 6 (see Section 6.6) this means that any WNDR of u(m, n) would give an admissible whitening (or decorrelating) filter.

Figure 9.41 Random texture models. (a) Texture analysis by decorrelation: the ACF of the texture pattern u(m, n) is measured, a decorrelating filter A(z1, z2) is applied, and features are extracted from the decorrelated output. (b) Texture synthesis using linear filters: white noise of known distribution driving the filter produces the texture u(m, n).

Such a filter is not unique, and it could have a causal, semicausal, or noncausal structure. Since edge extraction operators have a tendency to decorrelate images, these have been used [51] as alternatives to true whitening filters. The ACF features such as M(0, 2), M(2, 0), M(1, 1), and M(2, 2) [see (9.113)] and the features of the first-order histogram of $\epsilon(m, n)$, such as the average $m_1$, deviation $\sqrt{\mu_2}$, skewness $\mu_3$, and kurtosis $\mu_4 - 3$, have been used as the elements of the texture feature vector x in Fig. 9.41a.

3, have been used as the elements of the texture feature vector x in Fig. 9.41a. Random field representations of texture have been considered using one­ dimensional time series as well as two-dimensional random field models (see [52],

[53] and bibliography of Chapter 6). Following Chapter 6, such models can be identified from the given data. The model coefficients are then used as features for texture discrimination. Moreover these random field models can synthesize random

texture fields when driven by the uncorrelated random field e (m, n) of known probability density (Fig. 9.4lb).
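A minimal sketch of the analysis path of Fig. 9.41a follows, with a simple difference operator standing in for the whitening filter a(m, n); the text notes that edge extraction operators have been used as substitutes for a true ACF-designed whitening filter, so this stand-in is an assumption, as are the exact normalizations of the statistics.

```python
import numpy as np

def texture_feature_vector(u):
    """Decorrelate a texture patch and return first-order statistics of the residual."""
    u = u.astype(float)
    # Crude decorrelating (whitening-like) filter: subtract half the upper and left neighbors.
    # The text designs a(m, n) from the ACF; this simple stand-in is an assumption.
    eps = u - 0.5 * (np.roll(u, 1, axis=0) + np.roll(u, 1, axis=1))

    e = eps.ravel()
    m1 = e.mean()                                 # average
    mu2 = ((e - m1) ** 2).mean()                  # variance; the deviation is sqrt(mu2)
    skew = ((e - m1) ** 3).mean() / mu2 ** 1.5    # normalized skewness (normalization assumed)
    kurt = ((e - m1) ** 4).mean() / mu2 ** 2 - 3  # excess kurtosis, cf. mu4 - 3
    return np.array([m1, np.sqrt(mu2), skew, kurt])
```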

Example 9.6 Texture synthesis via causal and semicausal models

Figure 9.42a shows a given 256 × 256 grass texture. Using estimated covariances, a (p, q) = (3, 4)-order white Gaussian noise-driven causal model was designed and used to synthesize the texture of Fig. 9.42b. Figure 9.42c shows the texture synthesized via a (p, q) = (3, 4) semicausal white-noise-driven model. This model was designed via the Wiener-Doob homomorphic factorization method of Section 6.8.

Structural Approaches [4, 47, 54]

Purely structural textures are deterministic texels, which repeat according to some placement rules, deterministic or random. A texel is isolated by identifying a group of pixels having certain invariant properties, which repeat in the given image. The texel may be defined by its gray level, shape, or homogeneity of some local property, such as size, orientation, or second-order histogram (concurrence matrix). The placement rules define the spatial relationships between the texels. These spatial relationships may be expressed in terms of adjacency, closest distance, periodicity, and so on.

Figure 9.42 Texture synthesis using causal and semicausal models. (a) Original grass texture; (b) texture synthesized by the causal model; (c) texture synthesized by the semicausal model.

For randomly placed texels, the associated texture is called weak and the placement rules may be expressed in terms of measures such as the following:

1. Edge density

2. Run lengths of maximally connected texels

3. Relative extrema density, which is the number of pixels per unit area whose gray levels are locally maxima or minima relative to their neighbors. For example, a pixel u(m, n) is a relative minimum or a relative maximum if it is, respectively, less than or greater than its nearest four neighbors. (In a region of constant gray level, which may be a plateau or a valley, each pixel counts as an extremum.) This definition does not distinguish between images having a few large plateaus and those having many single extrema. An alternative is to count each plateau as one extremum. The height and the area of each extremum may also be considered as features describing the texels.

Example 9.7 Synthesis for quasiperiodic textures

The raffia texture (Fig. 9.43a) can be viewed as a quasiperiodic repetition of a deterministic pattern. The spatial covariance function of a small portion of the image was analyzed to estimate the periodicity and the randomness in the periodic rate. A 17 × 17 primitive was extracted from the parent texture and repeated according to the quasiperiodic placement rule to give the image of Fig. 9.43b.

Other Approaches

A method that combines the statistical and the structural approaches is based on what have been called mosaic models [55]. These models represent random geometrical processes. For example, regular or random tessellations of a plane into bounded convex polygons give rise to cell-structured textures, which can be described by such a mosaic model.

Figure 9.43 Texture synthesis by the structural approach. (a) Original 256 × 256 raffia; (b) synthesized raffia by quasi-periodic placement of a primitive.


Texture grammars have also been developed [3]. Such grammars give a few rules for combining certain primitive shapes or symbols to generate several complex patterns.