ICTS 2005 Proceedings
ISSN 1858-1633
Proceedings of the Annual International Conference
Information and Communication Technology Seminar
Volume 1, Number 1, August 2005
Executive Board
Rector of ITS
Dean of Information Technology Faculty (FTIF) ITS
Head of Informatics Department FTIF ITS
Editorial Board
Achmad Benny Mutiara
Gunadharma University, Indonesia
Agus Zainal
Sepuluh Nopember Institute of Technology,
Indonesia
Akira Asano
Hiroshima University, Japan
Archi Delphinanto
Eindhoven University of Technology, The
Netherlands
Arif Djunaidy
Sepuluh Nopember Institute of Technology,
Indonesia
Daniel Siahaan
Sepuluh Nopember Institute of Technology,
Indonesia
Handayani Tjandrasa
Sepuluh Nopember Institute of Technology, Indonesia
Happy Tobing
Cendrawasih University, Indonesia
Hideto Ikeda
Ritsumeikan University, Japan
Johny Moningka
University of Indonesia, Indonesia
Joko Lianto
Sepuluh Nopember Institute of Technology,
Indonesia
Kridanto Surendro
Bandung Institute of Technology, Indonesia
Marco J Patrick
V-SAT Company, Portugal
Mauridhi Hery Purnomo
Sepuluh Nopember Institute of Technology,
Indonesia
Muchammad Husni
Sepuluh Nopember Institute of Technology,
Indonesia
Nanik Suciati
Sepuluh Nopember Institute of Technology,
Indonesia
Riyanarto Sarno
Sepuluh Nopember Institute of Technology,
Indonesia
L.J.M. Rothkrantz
Delft University of Technology, The Netherlands
Retantyo Wardoyo
Gajah Mada University, Indonesia
Siska Fitriana
Delft University of Technology, The
Netherlands
Supeno Djanali
Sepuluh Nopember Institute of Technology,
Indonesia
Zainal Hasibuan
University of Indonesia, Indonesia
Yudhi Purwananto
Sepuluh Nopember Institute of Technology,
Indonesia
Editor-in-Chief
Umi Laili Yuhana
Sepuluh Nopember Institute of Technology, Indonesia
Contact Address
Informatics Department FTIF, ITS
Gedung Teknik Informatika ITS, Jl. Raya ITS, Sukolilo
Surabaya 60111, INDONESIA
Telp. (031) 5939214
Fax (031) 5913804
Homepage: http://if.its.ac.id/icts
email:icts@if.its.ac.id
PREFACE
These proceedings contain selected papers from the Information and Communication Technology
Seminar (ICTS) 2005. ICTS 2005 is the first annual international event of the Informatics
Department, Faculty of Information Technology, ITS. The event is a forum for the computer
science and information and communication technology community to discuss and exchange
information and knowledge in their areas of interest. It aims to promote activities in research,
development, and application of computer science and information and communication technology.
This year, the seminar is held to celebrate the 20th anniversary of the Informatics Department,
Faculty of Information Technology, ITS.
Out of the 41 accepted papers, 36 were finally found fit and proper to be presented. The topics
of these papers are: (1) Artificial Intelligence, (2) Image Processing, (3) Computing, (4) Computer
Network and Security, (5) Software Engineering, and (6) Mobile Computing.
We would like to thank the keynote speakers, the authors, the participants, and all parties who
contributed to the success of ICTS 2005.
Proceedings of the Annual International Conference
Information and Communication Technology Seminar
Volume 1, Number 1, August 2005
Table of Contents
Mathematical Morphology And Its Applications
...1-9
Akira Asano
Molecular Dynamics Simulation On A Metallic Glass-System: Non-Ergodicity Parameter
...10-16
Achmad Benny Mutiara
Tomographic Imaging Using Infra Red Sensors
...17-19
Dr. Sallehuddin Ibrahim and Md. Amri Md. Yunus
Mammographic Density Classification Using Multiresolution Histogram Technique
...20-23
Izzati Muhimmah, Erika R.E. Denton, and Reyer Zwiggelaar
ANN Soft Sensor To Predict Quality Of Product Based On Temperature Or Flow Rate Correlation
...24-28
Totok R. Biyanto
Application Of Soft Classification Techniques For Forest Cover Mapping
...29-36
Arief Wijaya
Managing Internet Bandwidth: Experience In Faculty Of Industrial Technology, Islamic University Of Indonesia
...37-40
Mukhammad Andri Setiawan
MUE: Multi User UML Editor
...41-45
Suhadi Lili, Sutarsa, and Siti Rochimah
Designing Secure Communication Protocol For Smart Card System, Study Case: E-Purse
Application
...46-48
Daniel Siahaan, and I Made Agus
Fuzzy Logics Incorporated To Extended Weighted-Tree Similarity Algorithm For Agent
Matching In Virtual Market
...49-54
Sholeh Hadi Setyawan and Riyanarto Sarno
Shape Matching Using Thin-Plate Splines Incorporated To Extended Weighted-Tree
Similarity Algorithm For Agent Matching In Virtual Market
...55-61
Budianto and Riyanarto Sarno
Text-To-Video: Text To Facial Animation Video Conversion
...62-67
Hamdani Winoto, Hadi Suwastio, and Iwan Iwut T.
Share-It: A UPnP Application For Content Sharing
...68-71
Daniel Siahaan
Modified Bayesian Optimization Algorithm For Nurse Scheduling
...72-75
I N. Sutapa, I. H. Sahputra, and V. M. Kuswanto
Politeness In Phoning By Using WAP And Web
...76-80
Amaliah Bilqis, and Husni Muhammad
Implementation Of Hierarchy Color Image Segmentation For Content Based Image Retrieval System
...81-85
Nanik Suciati and Shanti Dewi
Decision Support System For Stock Investment On Mobile Device
...86-90
Ivan Satria and Dedi Trisnawarman
Fuzzy Logic Approach To Quantify Preference Type Based On Myers Briggs Type
Indicator (MBTI)
...91-93
Hindriyanto Dwi Purnomo, Srie Yulianto Joko Prasetyo
Security Concern Refactoring
...94-100
Putu Ashintya Widhiartha, and Katsuhisa Maruyama
A Parallel Road Traffic Simulator Core
...101-104
Dwi Handoko, Wahju Sediono, and Made Gunawan
C/I Performance Comparison Of An Adaptive And Switched Beam In The Gsm Systems
Employing Smart Antenna
...105-110
Tito Yuwono, Mahamod Ismail, and Zuraidah bt Zainuddin
Identification Of Solvent Vapors Using Neural Network Coupled SiO2 Resonator Array
...111-114
Muhammad Rivai, Ami Suwandi JS, and Mauridhi Hery Purnomo
Comfortable Dialog For Object Detection
...115-122
Rahmadi Kurnia
A Social Informatics Overview Of E-Government Implementation: Its Social Economics
And Restructuring Impact
...123-129
Irwan Sembiring and Krismiyati
Agent Based Programming For Computer Network Monitoring
...130-134
Adang Suhendra
Computer Assisted Diagnosis System Using Morphology Watershed For Breast Carcinoma
Tumor
...135-139
Sri Yulianto and Hindriyanto
Evaluation Of Information Distribution Algorithms Of A Mobile Agent-Based
Demand-Oriented Information Service System
...140-144
I. Ahmed and M J. Sadiq
Online Mobile Tracking On Geographic Information System Using Pocket PC
...145-150
M. Endi Nugroho and Riyanarto Sarno
A Simple Queuing System To Model The Traffic Flow At The Toll-Gate: Preliminary
Results
...151-153
Wahju Sediono, Dwi Handoko
Multimodal-Eliza Perceives And Responds To Emotion
...154-158
S. Fitrianie and L.J.M. Rothkrantz
Motor DC Position Control Based On Moving Speed Controlled By Set Point Changing Using Fuzzy Logic Control System
...159-166
Andino Maseleno, Fajar Hayyin, Hendra, Rahmawati Lestari, Slamet Fardyanto, and Yuddy
Krisna Sudirman
A Variable-Centered Intelligent Rule System
...167-174
Irfan Subakti
Multiple Null Values Estimating In Generating Weighted Fuzzy Rules Using Genetic
Simulated Annealing
...175-180
Irfan Subakti
Genetic Simulated Annealing For Null Values Estimating In Generating Weighted Fuzzy
Rules From Relational Database Systems
...181-188
Irfan Subakti
Image Thresholding By Measuring The Fuzzy Sets Similarity
...189-194
Agus Zainal Arifin and Akira Asano
Development Of Scheduler For Linux Virtual Server In IPv6 Platform Using Round Robin
Adaptive Algorithm, Study Case: Web Server
...195-198
Royyana Muslim Ijtihadie and Febriliyan Samopa
MATHEMATICAL MORPHOLOGY AND ITS APPLICATIONS
Akira Asano
Division of Mathematical and Information Sciences, Faculty of Integrated Arts and Sciences, Hiroshima University
Kagamiyama 1-7-1, Higashi-Hiroshima, Hiroshima 739-8521, JAPAN email: asano@mis.hiroshima-u.ac.jp
ABSTRACT
This invited talk presents the concept of mathematical morphology, which is a mathematical framework of quantitative image manipulations. The basic operations of mathematical morphology, the relationship to image processing filters, the idea of size distribution and its application to texture analysis are explained.
1. INTRODUCTION
Mathematical morphology treats an effect on an image as an effect on the shape and size of objects contained in the image. Mathematical morphology is a mathematical system to handle such effects quantitatively based on set operations [1–5]. The word stem “morpho-” originates in a Greek word meaning “shape,” and it appears in the word “morphing,” which is a technique of modifying an image into another image smoothly.
The founders of mathematical morphology, G. Matheron and J. Serra, were researchers of l'École Nationale Supérieure des Mines de Paris in France, and had the idea of mathematical morphology as a method of evaluating geometrical characteristics of minerals in ores [6]. Matheron is also the founder of the random closed set theory, which is a fundamental theory of treating random shapes, and of kriging, which is a statistical method of estimating a spatial distribution of mineral deposits from trial diggings. Mathematical morphology has relationships to these theories and has been developed as a theoretical framework for treating the spatial shapes and sizes of objects. The International Symposium on Mathematical Morphology (ISMM), the topical international symposium focusing on mathematical morphology only, has been organized almost every two years, and its seventh symposium was held in April 2005 in Paris as a celebration of the 40th anniversary of mathematical morphology [7].
The paper explains the framework of mathematical morphology, especially opening, which is the fundamental operation describing operations on the shapes and sizes of objects quantitatively, in Sec. 2. Section 3 proves the “filter theorem,” which guarantees that all practical image processing filters can be constructed by combinations of morphological operations. Examples of expressing median filters and average filters by combinations of morphological operations are also shown in this section. Section 4 explains granulometry, which is a method of measuring the distribution of sizes of objects in an image, and shows an application to texture analysis by the author.
2. BASIC OPERATIONS OF MATHEMATICAL MORPHOLOGY
The fundamental operation of mathematical morphology is “opening,” which discriminates and extracts object shapes with respect to the size of objects. We first explain opening on binary images, and the basic operations used to describe opening.

2.1. Opening
In the context of mathematical morphology, an object in a binary image is regarded as a set of vectors corresponding to the points composing the object. In the case of usual digital images, a binary image is expressed as a set of white pixels or pixels of value one. Another image set expressing an effect to the above image set is considered, and called structuring element. The structuring element corresponds to the window of an image processing filter, and is considered to be much smaller than the target image to be processed.
Let the target image set be X, and the structuring element be B. The opening of X by B, denoted $X_B$, has the following property:

$X_B = \bigcup \{ B_z \mid B_z \subseteq X \}$ (1)

where $B_z$ indicates the translation of B by z, defined as follows:

$B_z = \{ b + z \mid b \in B \}$ (2)

This property indicates that the opening of X with respect to B is the locus of B itself sweeping all the interior of X; it removes white regions smaller than the structuring element, as illustrated in Fig. 1. Since opening eliminates structures and bright peaks smaller than the structuring element, it has a quantitative smoothing ability.
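To make the sweeping characterization of opening concrete, it can be sketched directly on a small point-set image (a toy Python illustration; the image, the structuring element, and the grid size are invented for the example):

```python
def translate(B, z):
    """B_z = {b + z | b in B}: translate a set of pixel coordinates by z."""
    return {(b[0] + z[0], b[1] + z[1]) for b in B}

def opening_by_sweep(X, B, shape):
    """Opening as the union of every translate B_z contained entirely in X.
    The scan over z covers the image grid; translates reaching outside
    cannot be contained in X anyway for this example."""
    out = set()
    for z0 in range(shape[0]):
        for z1 in range(shape[1]):
            Bz = translate(B, (z0, z1))
            if Bz <= X:       # B_z fits entirely inside X ...
                out |= Bz     # ... so the whole translated copy survives
    return out

B = {(0, 0), (0, 1), (0, 2)}               # a 3-pixel horizontal bar
X = {(0, 0), (0, 1), (0, 2), (3, 3)}       # a 3-pixel run plus one isolated pixel
print(sorted(opening_by_sweep(X, B, (6, 6))))
# the isolated pixel at (3, 3), smaller than B, is removed
```

The isolated pixel cannot contain any translated copy of the bar, so it disappears, while the 3-pixel run survives unchanged.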
Fig. 1. Effect of opening.
2.2. Fundamental Operations of Mathematical Morphology
Although the property of opening in Eq. (1) is intuitively understandable, it is not a pixelwise operation. Opening is therefore defined by a composition of simpler pixelwise operations. In order to define opening, Minkowski set subtraction and addition are defined as the fundamental operations of mathematical morphology:

$X \ominus B = \bigcap_{b \in B} X_b$ (3)

$X \oplus B = \bigcup_{b \in B} X_b$ (4)

Minkowski set subtraction has the following property: it follows from $x \in X_b$ that $x - b \in X$. Thus the definition of Minkowski set subtraction in Eq. (3) can be rewritten as the following pixelwise operation:

$X \ominus B = \{ x \mid x - b \in X \text{ for all } b \in B \}$ (5)

The reflection of B, denoted $\check{B}$, is defined as follows:

$\check{B} = \{ -b \mid b \in B \}$ (6)

Using the above expressions, Minkowski set subtraction is expressed as follows:

$X \ominus B = \{ x \mid \check{B}_x \subseteq X \}$ (7)

Since we get from the definition of reflection in Eq. (6) that $b \in B$ if and only if $-b \in \check{B}$, it follows that $\{ x \mid x - b \in X \text{ for all } b \in B \} = \{ x \mid x + b' \in X \text{ for all } b' \in \check{B} \}$. We get the relationship in Eq. (7) by substituting this into Eq. (5). This relationship indicates that $X \ominus B$ is the locus of the origin of $\check{B}$ when $\check{B}$ sweeps all the interior of X.

For Minkowski set addition, it follows that

$X \oplus B = \{ x + b \mid x \in X,\ b \in B \}$ (8)

Thus we get

$X \oplus B = \bigcup_{x \in X} B_x$ (9)

It indicates that $X \oplus B$ is composed by pasting a copy of B at every point within X.

Using the above operations, the erosion and dilation of X with respect to B are defined as $X \ominus \check{B}$ and $X \oplus \check{B}$, respectively. We get from Eq. (7) that

$X \ominus \check{B} = \{ x \mid B_x \subseteq X \}$ (10)

It indicates that the erosion $X \ominus \check{B}$ is the locus of the origin of B when B sweeps all the interior of X. The opening $X_B$ is then defined using the above fundamental operations as follows:

$X_B = (X \ominus \check{B}) \oplus B$ (11)

Fig. 2. Opening composed of fundamental operations.

The above definition of opening is illustrated in Fig. 2. A black dot indicates a pixel composing an image object in this figure. As shown above, the erosion of X by B is the locus of the origin of B when B sweeps all the inside of X. Thus the erosion in the first step of opening produces every point where a copy of B included in X can be located. The Minkowski addition in the second step pastes a copy of B at every point within $X \ominus \check{B}$. Thus the opening of X with respect to B is the locus of B itself sweeping all the interior of X, as described at the beginning of this section. In other words, opening removes the regions of X that are too small to include a copy of B and preserves the others.
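The two-step composition of opening can be sketched in a few lines of NumPy (a toy illustration, not code from the talk; zero padding outside the image border is an assumption, since border handling is not discussed in the text):

```python
import numpy as np

def shift(X, di, dj):
    """Translate a binary image: out[i + di, j + dj] = X[i, j], zero padded."""
    H, W = X.shape
    out = np.zeros_like(X)
    out[max(di, 0):H + min(di, 0), max(dj, 0):W + min(dj, 0)] = \
        X[max(-di, 0):H + min(-di, 0), max(-dj, 0):W + min(-dj, 0)]
    return out

def erosion(X, B):
    """Erosion of X by B: the pixels z such that B_z fits entirely inside X."""
    out = np.ones_like(X)
    for di, dj in B:
        out &= shift(X, -di, -dj)
    return out

def dilation(X, B):
    """Minkowski addition: paste a copy of B at every foreground pixel of X."""
    out = np.zeros_like(X)
    for di, dj in B:
        out |= shift(X, di, dj)
    return out

def opening(X, B):
    """Erosion followed by Minkowski addition."""
    return dilation(erosion(X, B), B)

X = np.zeros((6, 6), dtype=bool)
X[1:4, 1:4] = True     # a 3x3 block
X[5, 5] = True         # an isolated pixel
B = [(0, 0), (0, 1), (1, 0), (1, 1)]   # a 2x2 structuring element
print(opening(X, B).astype(int))
# the 3x3 block survives intact; the isolated pixel is removed
```

The erosion first shrinks the 3x3 block to the 2x2 set of positions where a copy of B fits, and the subsequent Minkowski addition restores the full block; the isolated pixel never admits a copy of B and vanishes.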
The counterpart of opening is called closing, defined as follows:

$X^B = (X \oplus \check{B}) \ominus B$ (12)

The closing of X with respect to B is equivalent to the opening of the background, and removes spots smaller than the structuring element within image objects. This is because the following relationship between opening and closing holds:

$(X^B)^c = (X^c)_{\check{B}}$ (13)

where $X^c$ indicates the complement of X, defined as $X^c = \{ x \mid x \notin X \}$. The relationship of Eq. (13) is called the duality of opening and closing1.
1There is another notation system which denotes
Figure 3 summarizes the illustration of the effects of basic morphological operations2.
Fig. 3. Effects of erosion, dilation, opening, and closing
Fig. 4. Umbra. The spatial axis x is illustrated one-dimensional for simplicity.
2.3. In the Case of Gray Scale Images
In the case of gray scale images, an image object is defined by its umbra set. If the pixel value distribution of an image object is denoted as f(x), where x is a pixel position, its umbra U[f(x)] is defined as follows:

$U[f(x)] = \{ (x, t) \mid t \leq f(x) \}$ (14)

Consequently, when we assume a “solid” whose support is the same as that of a gray scale image object and whose height at each pixel position is the same as the pixel value at that position, the umbra is equivalent to this solid together with the whole volume below it within the support, as illustrated in Fig. 4.
A gray scale structuring element is defined in the same manner. Let f(x) be the gray scale pixel value at pixel position x and g(y) be that of the structuring element. Erosion of f by g is defined for the umbrae similarly to the binary case, and reduces to the following operation [4, 5]:

$[f \ominus g](x) = \min_{y \in w(g)} \{ f(x + y) - g(y) \}$ (15)

Dilation is also reduced to

$[f \oplus g](x) = \max_{y \in w(g)} \{ f(x - y) + g(y) \}$ (16)

where w(g) is the support of g. These equations indicate that the logical AND and OR operations in the definitions for binary images are replaced with the infimum and supremum operations (equivalent to minimum and maximum in the case of digital images), respectively.
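For a flat, symmetric structuring element these reduced operations are just running minima and maxima, which can be sketched on a 1-D signal (a toy Python illustration; the window is clipped at the signal borders, which is an assumption, and for a non-symmetric g the reflection of the structuring function would additionally have to be taken into account):

```python
import numpy as np

def gray_erosion(f, g, support):
    """1-D gray scale erosion: min over y in the support of f(x+y) - g(y)."""
    n = len(f)
    out = np.empty(n)
    for x in range(n):
        out[x] = min(f[x + y] - g[i]
                     for i, y in enumerate(support) if 0 <= x + y < n)
    return out

def gray_dilation(f, g, support):
    """1-D gray scale dilation: max over y in the support of f(x-y) + g(y)."""
    n = len(f)
    out = np.empty(n)
    for x in range(n):
        out[x] = max(f[x - y] + g[i]
                     for i, y in enumerate(support) if 0 <= x - y < n)
    return out

f = np.array([0., 0., 5., 0., 0.])    # one bright single-pixel peak
support = [-1, 0, 1]                  # symmetric 3-pixel window w(g)
g = np.array([0., 0., 0.])            # flat structuring function on w(g)

opened = gray_dilation(gray_erosion(f, g, support), g, support)
print(opened)   # the single-pixel peak is flattened to the background level
```

The erosion pulls the peak down to the surrounding minimum, and the dilation cannot restore it, illustrating the quantitative smoothing of a gray scale opening by a flat element.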
Expanding the idea, mathematical morphological operations can be defined for sets where the infimum and supremum among the elements are defined in some sense. For example, morphological operations for color images cannot be defined straightforwardly from the above classical definitions,
since a color pixel value is defined by a vector, and the infimum and supremum are not trivially defined. The operations can be defined if the infimum and supremum among colors are defined [8, 9]. Such a set is called a lattice, and mathematical morphology is generally defined as operations on a lattice [10].
3. MATHEMATICAL MORPHOLOGY AND IMAGE PROCESSING FILTER
3.1. Morphological filter
An image processing filter is generally an operation performed at each pixel: a calculation is applied to the pixel and the pixels in its neighborhood, and the pixel value is replaced with the result of the calculation, for purposes such as noise removal. A morphological filter in the broader sense is restricted to translation-invariant and increasing operations. An operation Ψ on a set (image) X is translation-invariant if

$\Psi(X_z) = [\Psi(X)]_z \text{ for all } z$ (17)

In other words, the effect of the operation is invariant wherever the operation is applied. An operation Ψ is increasing if

$X \subseteq Y \Rightarrow \Psi(X) \subseteq \Psi(Y)$ (18)

In other words, the relationship of inclusion is preserved before and after applying the operation.

Let us consider a noise removing filter as an example. Since noise objects in an image should be removed wherever they are located, translation-invariance is naturally required for noise removing filters. An increasing operation can express an operation that removes smaller objects and preserves larger objects, but cannot express an operation that removes larger objects and preserves smaller ones. Noise objects are, however, usually smaller than meaningful objects. Thus it is also natural to consider increasing operations only3.
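Both requirements can be verified numerically for opening itself, using a set-based opening in the spirit of the sweeping definition (a toy Python check on made-up point sets):

```python
import random

def translate(B, z):
    """Translate a set of pixel coordinates by z."""
    return {(b[0] + z[0], b[1] + z[1]) for b in B}

def opening(X, B):
    """Opening as the union of all translates of B contained in X."""
    out = set()
    for (x0, x1) in X:
        for (b0, b1) in B:
            Bz = translate(B, (x0 - b0, x1 - b1))
            if Bz <= X:
                out |= Bz
    return out

random.seed(1)
X = {(random.randrange(8), random.randrange(8)) for _ in range(20)}
Y = X | {(0, 7), (7, 0)}     # a superset of X
B = {(0, 0), (0, 1)}

# increasing: X contained in Y implies opening(X) contained in opening(Y)
assert opening(X, B) <= opening(Y, B)
# translation-invariant: opening a shifted image equals shifting the opening
z = (3, -2)
assert opening(translate(X, z), B) == translate(opening(X, B), z)
print("opening is increasing and translation-invariant on this example")
```

On point sets the translation-invariance holds exactly, since no image border is involved; every translate of B contained in X is also contained in any superset Y, which gives the increasing property.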
A morphological filter in the narrower sense is an operation that is translation-invariant, increasing, and idempotent. The filter Ψ is idempotent if

$\Psi(\Psi(X)) = \Psi(X)$ (19)
2There is another definition of morphological operations which denotes the erosion in the text as $X \ominus B$ and calls the Minkowski set addition in the text “dilation.” The erosion and dilation are not dual in this definition.
3An edge detecting filter, for example, is not increasing, since it removes the interior parts of objects.
Consequently, iterated applications of Ψ are equivalent to a single application of Ψ. Opening and closing are the most basic morphological filters in the narrower sense.
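Idempotence is easy to confirm numerically for opening (a toy set-based Python check; the random image and the structuring element are invented for the example):

```python
import random

def translate(B, z):
    """Translate a set of pixel coordinates by z."""
    return {(b[0] + z[0], b[1] + z[1]) for b in B}

def opening(X, B):
    """Opening as the union of all translates of B contained in X."""
    out = set()
    for (x0, x1) in X:
        for (b0, b1) in B:
            Bz = translate(B, (x0 - b0, x1 - b1))
            if Bz <= X:
                out |= Bz
    return out

random.seed(7)
X = {(random.randrange(10), random.randrange(10)) for _ in range(40)}
B = {(0, 0), (0, 1), (1, 0)}

once = opening(X, B)
twice = opening(once, B)
assert once == twice      # applying the opening a second time changes nothing
print("opening is idempotent on this example")
```

This is exactly what idempotence demands: the first opening already keeps every translate of B that fits, so a second pass finds the same translates and reproduces the same image.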
3.2. Filter theorem
The filter theorem states that all morphological filters (in the broader sense) can be expressed as an OR (union) of erosions and as an AND (intersection) of dilations. It guarantees that almost all practical filters can be expressed by morphological operations, i.e. mathematical morphology is really a fundamental operation set for image object manipulations. Let Ψ(X) be a filter on the image X. The theorem states that there exists for every Ψ a set family Ker[Ψ] satisfying the following:

$\Psi(X) = \bigcup_{B \in \mathrm{Ker}[\Psi]} (X \ominus \check{B})$ (20)

It also states that Ψ(X) can dually be expressed as an intersection of dilations over the kernel of the dual filter $\Psi^*(X) = [\Psi(X^c)]^c$:

$\Psi(X) = \bigcap_{B \in \mathrm{Ker}[\Psi^*]} (X \oplus \check{B})$ (21)

Here the set family Ker[Ψ] is called the kernel of the filter Ψ, defined as follows:

$\mathrm{Ker}[\Psi] = \{ B \mid 0 \in \Psi(B) \}$ (22)
where “0” indicates the origin. The proof of the filter theorem in Eq. (20) is presented in the following; a more general proof is found in Chap. 4 of [10].

Let us consider an arbitrary vector (pixel) $h \in X \ominus \check{B}$ for a structuring element $B \in \mathrm{Ker}[\Psi]$. From the definition of erosion, $B_h \subseteq X$, i.e. $B \subseteq X_{-h}$. Since Ψ is increasing, the inclusion $B \subseteq X_{-h}$ is preserved by the filter Ψ; thus we get $\Psi(B) \subseteq \Psi(X_{-h})$. Since Ψ is translation-invariant, we get $\Psi(X_{-h}) = [\Psi(X)]_{-h}$. Because $B \in \mathrm{Ker}[\Psi]$ means $0 \in \Psi(B)$, it follows that $0 \in [\Psi(X)]_{-h}$, i.e. $h \in \Psi(X)$. From the above discussion, $X \ominus \check{B} \subseteq \Psi(X)$ for every structuring element $B \in \mathrm{Ker}[\Psi]$; thus $\bigcup_{B \in \mathrm{Ker}[\Psi]} (X \ominus \check{B}) \subseteq \Psi(X)$.

Conversely, let us consider an arbitrary vector $h \in \Psi(X)$. Since Ψ is translation-invariant, $0 \in [\Psi(X)]_{-h} = \Psi(X_{-h})$, and hence $X_{-h} \in \mathrm{Ker}[\Psi]$. Denoting $X_{-h}$ by B, we get $B_h = X$, so that $B_h \subseteq X$, i.e. $h \in X \ominus \check{B}$. Consequently, there exists a structuring element $B \in \mathrm{Ker}[\Psi]$ such that $h \in X \ominus \check{B}$, i.e. any pixel in Ψ(X) is included in $X \ominus \check{B}$ for a certain structuring element B. Thus $\Psi(X) \subseteq \bigcup_{B \in \mathrm{Ker}[\Psi]} (X \ominus \check{B})$.

From the above discussion, the equality of Eq. (20) holds.
3.3. Morphological expressions of median filter and average filter

The filter theorem guarantees that all translation-invariant increasing filters can be constructed by morphological operations. However, the kernel is generally redundant, and each practical filter can often be expressed by morphological operations with a smaller number of structuring elements. In this subsection, morphological expressions of the median filter and the average filter are shown with examples. Since these filters are usually applied to gray scale images, the morphological and logical operations reduce to minimum and maximum operations. Details of the proofs are found in [11, 12].
3.3.1. Median filter: The median filter whose window size is n pixels (n odd) is expressed as the minimum of maxima (or, dually, the maximum of minima) over every possible subwindow of ⌊n/2⌋ + 1 pixels within the window.

The operations deriving the maximum and minimum in each subwindow at every pixel are the Minkowski set addition and set subtraction using the subwindow as the structuring element, respectively. Since the maximum and minimum operations are extensions of the logical OR and AND operations based on fuzzy logic, respectively, the median filter is expressed by a combination of morphological and logical operations, as shown in Figs. 5 and 6.
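This min-of-maxima characterization can be checked directly against an ordinary median (a small Python illustration using `itertools.combinations` to enumerate the subwindows):

```python
import itertools
import numpy as np

def median_by_minmax(window):
    """Median of an odd-length window as the minimum, over every subwindow
    of floor(n/2) + 1 samples, of that subwindow's maximum."""
    n = len(window)
    k = n // 2 + 1
    return min(max(sub) for sub in itertools.combinations(window, k))

rng = np.random.default_rng(0)
for _ in range(100):
    w = list(rng.integers(0, 256, size=5))
    assert median_by_minmax(w) == int(np.median(w))
print("min-of-maxima over subwindows reproduces the median")
```

The identity holds because the minimum over all k-element subwindows of the subwindow maximum is exactly the k-th smallest value, and k = ⌊n/2⌋ + 1 picks out the middle one.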
Fig. 5. Subwindows of [n/2 + 1] pixels.
3.3.2. Average filter: The simplest average filter operation, namely the average of two pixel values x and y, is expressed by the minimum and the maximum, as follows:

$\frac{x + y}{2} = \min(x, y) + \frac{1}{2}(\max(x, y) - \min(x, y))$

or

$\frac{x + y}{2} = \max(x, y) - \frac{1}{2}(\max(x, y) - \min(x, y))$

as shown in Fig. 7.
Fig. 6. Median expressed by the maximum and minimum.
Fig. 7. Average expressed by the maximum and minimum.
4. GRANULOMETRY AND TEXTURE ANALYSIS
Texture is an image composed of the repetitive appearance of small structures, for example surfaces of textiles, microscopic images of ores, etc. Texture analysis is a fundamental application of mathematical morphology, since the theory was developed for the analysis of minerals in ores. In this section, the concept of size in the sense of mathematical morphology and the idea of granulometry for measuring the granularity of image objects are explained. An example of texture analysis applying granulometry by the author is also presented.
4.1. Granulometry and size distribution
Opening of an image X with respect to a structuring element B yields the residue of X obtained by removing structures smaller than B. It indicates that opening works as a filter distinguishing object structures by their sizes. Let 2B, 3B, . . . , be homothetic magnifications of the basic structuring element B. We then perform opening of X with respect to the homothetic structuring elements, and obtain the image sequence XB, X2B, X3B, . . . . In this sequence, XB is obtained by removing the regions smaller than B, X2B is obtained by removing the regions smaller than 2B, X3B is obtained by removing the regions smaller than 3B, and so on. If B is convex, it holds that X ⊇ XB ⊇ X2B ⊇ X3B ⊇ . . . . The size of rB is defined as r, and this sequence of openings is called granulometry [10]. We then calculate the ratio of the area (for the binary case) or the sum of pixel values (for the gray scale case) of XrB to that of the original X at each r. The area of an image is defined as the area occupied by an image object, i.e. the number of pixels composing an image object in the case of discrete images. The function from a size r to the corresponding ratio is monotonically decreasing, and equals unity when the size is zero. This function is called the size distribution function. The value of the size distribution function at size r indicates the area ratio of the regions whose sizes are greater than or equal to r.
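The granulometry and the size distribution function can be sketched with a toy set-based opening (the image X, consisting of two runs, and the 2-pixel bar B are invented for the example):

```python
def translate(B, z):
    return {(b[0] + z[0], b[1] + z[1]) for b in B}

def opening(X, B):
    """Opening as the union of all translates of B contained in X."""
    out = set()
    for (x0, x1) in X:
        for (b0, b1) in B:
            Bz = translate(B, (x0 - b0, x1 - b1))
            if Bz <= X:
                out |= Bz
    return out

def magnify(B, r):
    """rB: (r-1)-fold Minkowski self-addition of B; 0B = {(0, 0)}."""
    M = {(0, 0)}
    for _ in range(r):
        M = {(a0 + b0, a1 + b1) for (a0, a1) in M for (b0, b1) in B}
    return M

# X: two horizontal runs of lengths 2 and 5 (total area 7)
X = {(0, j) for j in range(2)} | {(2, j) for j in range(5)}
B = {(0, 0), (0, 1)}   # a 2-pixel bar; rB is a bar of r + 1 pixels

# size distribution function: area ratio of X_{rB} to X for each size r
F = [len(opening(X, magnify(B, r))) / len(X) for r in range(6)]
print(F)   # starts at 1.0 and decreases monotonically to 0.0
```

The short run drops out as soon as the magnified bar no longer fits into it, so F steps down exactly at the sizes present in the image.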
Fig. 8. Granulometry and size density function
The r-times magnification of B, denoted rB, is usually defined in the context of mathematical morphology as follows:

$rB = \underbrace{B \oplus B \oplus \cdots \oplus B}_{r-1 \text{ additions}}, \qquad 0B = \{0\}$ (23)

where {0} denotes a single dot at the origin. Let us consider a differentiation of the size distribution function. In the case of discrete sizes, it is equivalent to the area differences of the image pairs corresponding to adjacent sizes in XB, X2B, X3B, . . . . For example, the area difference between X2B and X3B corresponds to the part included in X2B but excluded from X3B, that is, the part whose size is exactly 2. The sequence of the areas corresponding exactly to each size, derived as above, is called the pattern spectrum [13], and the sequence of these areas relative to the area of the original object is called the size density function [14]. An example of granulometry and the size density function is illustrated in Fig. 8.
The size distribution function and the size density function have properties similar to a probability distribution function and a probability density function, respectively, which is why these names are given. Similarly to probability distributions, the average and the variance of the sizes of objects in an image can be considered. Higher moments of a size distribution can also be defined; these are called granulometric moments, and image objects can be characterized using them [14–16].
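The size density function and a granulometric moment (the mean size) can be computed from the opening sequence itself. A toy set-based sketch with an invented two-run image:

```python
def translate(B, z):
    return {(b[0] + z[0], b[1] + z[1]) for b in B}

def opening(X, B):
    """Opening as the union of all translates of B contained in X."""
    out = set()
    for (x0, x1) in X:
        for (b0, b1) in B:
            Bz = translate(B, (x0 - b0, x1 - b1))
            if Bz <= X:
                out |= Bz
    return out

def magnify(B, r):
    """rB: (r-1)-fold Minkowski self-addition of B; 0B = {(0, 0)}."""
    M = {(0, 0)}
    for _ in range(r):
        M = {(a0 + b0, a1 + b1) for (a0, a1) in M for (b0, b1) in B}
    return M

# two horizontal runs of lengths 2 and 5; with a 2-pixel bar B (rB is a bar
# of r + 1 pixels), a run of length L has granulometric size L - 1
X = {(0, j) for j in range(2)} | {(2, j) for j in range(5)}
B = {(0, 0), (0, 1)}

A = [len(opening(X, magnify(B, r))) for r in range(6)]   # areas of X_{rB}
p = [(A[r] - A[r + 1]) / A[0] for r in range(5)]         # size density function
mean_size = sum(r * p[r] for r in range(5))              # a granulometric moment
print(p, mean_size)
```

Here p[1] = 2/7 comes from the 2-pixel run (size 1) and p[4] = 5/7 from the 5-pixel run (size 4); the densities sum to one, mirroring a probability density, and the mean size is 22/7.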
4.2. Application to texture analysis
As described in Sec. 2., morphological opening is a regeneration of an image by arranging the structuring element: it removes white regions (in the binary case) or regions brighter than their neighborhood (in the gray scale case) that are smaller than the structuring element. Thus opening is effective for eliminating noisy pixels that are brighter than their neighborhood.

Since opening generates the resultant image by an arrangement of the structuring element, the shape and pixel value distribution of the structuring element directly appear in the resultant image. This causes artifacts if the shape and pixel value distribution are not related to the original image. The artifacts can be suppressed by using a structuring element resembling the shapes and pixel value distributions contained in the original image. Such a structuring element cannot be defined in general, but it can be estimated for texture images, since a texture is composed of an arrangement of small objects appearing repetitively.
We explain in this subsection a method of deriving an optimal artifact-free opening for noise removal in texture images [17]. This method estimates a structuring element which resembles the small objects appearing repetitively in the target texture. This is achieved based on the Primitive, Grain, and Point Configuration (PGPC) texture model, which we have proposed to describe textures, and on an optimization with respect to the size distribution function. The optimal opening suppressing the artifacts is achieved by using the estimated structuring element.

In the case of noise removal, the primitive cannot be estimated from the target image itself, since the original uncorrupted image corresponding to the target image is unknown. This problem is similar to that of image processing by learning, which estimates the optimal filter parameters by giving an example of a corrupted image and its ideal output to a learning mechanism [18–20]. In the case of a texture image, however, if an uncorrupted sample of a texture similar to the target corrupted image is available, the primitive can be estimated from this sample, since the sample and the target image are different realizations sharing common textural characteristics.
4.2.1. PGPC texture model and estimation of the optimal structuring element: The PGPC texture model regards a texture as an image composed of a regular or irregular arrangement of objects that are much smaller than the image and resemble each other. The objects arranged in a texture are called grains, and the grains are regarded as derived from one or a few typical objects called primitives.

We assume here that the grains are derived from one primitive by homothetic magnification. We also assume that the primitive is expressed by a structuring element B, and let X be the target texture image. In this case, XrB is regarded as the texture image composed of the arrangement of rB only. It follows that XrB − X(r+1)B indicates the region included in the arrangement of rB but not included in that of (r+1)B; consequently, XrB − X(r+1)B is the region where the grains of size exactly r are arranged. The sequence X − XB, XB − X2B, . . . , XrB − X(r+1)B, . . . , is thus the decomposition of the target texture into the arrangements of the grains of each size, and a good primitive expresses the texture by an arrangement of grains which are preferably large magnifications of the primitive.
Since the sequence can be derived using any structuring element, it is necessary to estimate an appropriate primitive that is a truly typical representative of the grains. We employ the idea that the structuring element yielding the simplest grain arrangement is the best estimate of the primitive, similarly to the principle of minimum description length (MDL). A simple arrangement locates a small number of large magnifications to express a large part of the texture image, in contrast to an arrangement of a large number of small magnifications. We derive the estimate by finding the structuring element minimizing the integral of 1 − F(r), where F(r) is the size distribution function with respect to size r. The function 1 − F(r) is 0 for r = 0, monotonically increasing, and 1 from the maximum size required to compose the texture by magnifications onward. Consequently, if the integral of 1 − F(r) is minimized, as illustrated in Fig. 9, the sizes of the employed magnifications concentrate on relatively large sizes, and the structuring element in this case expresses the texture using the largest possible magnifications. We regard this structuring element as the estimate of the primitive.
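The selection of a primitive by minimizing the (discrete) integral of 1 − F(r) can be sketched as a comparison of candidate structuring elements of equal pixel count (a toy Python illustration; the texture and the candidates are invented, and the actual method of [17] searches over many more shapes):

```python
def translate(B, z):
    return {(b[0] + z[0], b[1] + z[1]) for b in B}

def opening(X, B):
    """Opening as the union of all translates of B contained in X."""
    out = set()
    for (x0, x1) in X:
        for (b0, b1) in B:
            Bz = translate(B, (x0 - b0, x1 - b1))
            if Bz <= X:
                out |= Bz
    return out

def magnify(B, r):
    """rB: (r-1)-fold Minkowski self-addition of B; 0B = {(0, 0)}."""
    M = {(0, 0)}
    for _ in range(r):
        M = {(a0 + b0, a1 + b1) for (a0, a1) in M for (b0, b1) in B}
    return M

def score(X, B, rmax):
    """Discrete stand-in for the integral of 1 - F(r): lower means the
    texture is expressed by larger magnifications of B."""
    area = len(X)
    return sum(1.0 - len(opening(X, magnify(B, r))) / area
               for r in range(1, rmax + 1))

# toy "texture": two horizontal 6-pixel bars
X = {(0, j) for j in range(6)} | {(3, j) for j in range(6)}

# candidate primitives with the same number of pixels
candidates = {
    "horizontal bar": {(0, 0), (0, 1), (0, 2)},
    "vertical bar": {(0, 0), (1, 0), (2, 0)},
}
best = min(candidates, key=lambda name: score(X, candidates[name], 3))
print(best)   # the horizontal bar matches the texture and scores lower
```

The horizontal bar keeps F(r) at 1 up to the largest magnification that still fits the 6-pixel grains, while the vertical bar never fits at all, so its 1 − F(r) is 1 everywhere; the minimizer is therefore the candidate resembling the grains.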
We estimate a gray scale structuring element in two steps: the shape of the structuring element is estimated by the above method in the first step, and the gray scale value at each pixel of the estimated shape is then estimated in the second step. However, if the above method is applied directly to the gray scale estimation, the estimate often has a small number of high-value pixels while the other pixel values are almost zero. This is because the umbra of any object can be composed by arranging the umbra of a one-pixel structuring element, as illustrated in Fig. 10, which is clearly not a desired estimate. Thus we modify the method for the second step and minimize 1 − F(1), i.e. the residual area of XB. In this case, the composition by this structuring element and its magnifications is the most admissible when the residual area is minimal, since the residual region cannot be composed of even the smallest magnification.
The exploration of the structuring element can be performed by simulated annealing, which iterates modifications of the structuring element and finds the best estimate minimizing the evaluation function described above [21].
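A generic simulated annealing loop over binary masks can serve as a sketch of such an exploration (everything here, including the toy texture, the 3×3 mask grid, the cooling schedule, and the simplified evaluation function based on the residual area after opening, is invented for illustration and is not the exact procedure of [21]):

```python
import math
import random

def translate(B, z):
    return {(b[0] + z[0], b[1] + z[1]) for b in B}

def opening(X, B):
    """Opening as the union of all translates of B contained in X."""
    out = set()
    for (x0, x1) in X:
        for (b0, b1) in B:
            Bz = translate(B, (x0 - b0, x1 - b1))
            if Bz <= X:
                out |= Bz
    return out

def residual(X, B):
    """Evaluation function: area of X not covered by the opening X_B."""
    return len(X) - len(opening(X, B))

random.seed(0)
X = {(i, j) for i in range(0, 9, 3) for j in range(6)}   # rows of 6-pixel bars

cells = [(i, j) for i in range(3) for j in range(3)]     # SE drawn on a 3x3 grid
B = {(0, 0), (1, 1), (2, 2)}                             # a poor initial guess
best, best_cost = set(B), residual(X, B)
T = 2.0
for step in range(400):
    cand = set(B)
    c = random.choice(cells)                  # toggle one cell of the mask
    cand.symmetric_difference_update([c])
    if not cand:
        continue                              # keep the mask non-empty
    d = residual(X, cand) - residual(X, B)
    if d <= 0 or random.random() < math.exp(-d / T):     # Metropolis rule
        B = cand
        if residual(X, B) < best_cost:
            best, best_cost = set(B), residual(X, B)
    T *= 0.99                                            # cooling schedule
print(best_cost, sorted(best))
```

Starting from a diagonal mask that fits nowhere in the texture, the random toggles quickly reach a mask contained in the horizontal bars, driving the residual area down; tracking the best visited state makes the loop robust to the occasional uphill acceptance.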
Fig. 9. Function 1 − F(r). Size r is actually discrete for digital images. (a) Function and its integral. (b) Minimization of the integral.
Fig. 10. Any object can be composed by arranging one-pixel structuring element.
4.2.2. Experimental results: Figures 11 and 12 show example experimental results of noise removal using the estimated primitives as the structuring elements. All images contain 64×64 8-bit gray scale pixels. In each example, the gray scale primitive shown in (b) is estimated from the example image (a). Each small square in (b) corresponds to one pixel of the primitive, and the shape is expressed by the arrangement of white squares. The primitive is explored among connected figures of nine pixels within a 5 × 5-pixel square. The gray scale values are explored by setting the initial pixel value to 50 and modifying the values in the range of 0 to 100.
The opening using the primitive (b) as the structuring element is performed on the corrupted image (c). This image is generated by adding a uniformly distributed random value in the range between 0 and 255 to 1000 randomly selected pixels of an image which is a different realization of the same texture as (a). Opening eliminates brighter peaks of small extent, so this kind of noise is employed for the experiment.
The result using the estimated primitive is shown in (d), that using the flat structuring element of the same shape as (b) in (e), and that using the 3 × 3-pixel square flat structuring element in (f). The “MSE” attached to each resultant image is defined as the sum of pixelwise differences between that image and the original uncorrupted image underlying (c), which is not shown here, divided by the number of pixels in the image.
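With that definition (squaring the pixelwise differences is assumed here, as is conventional for an "MSE"; the text itself only says "difference"), the figure of merit is simply:

```python
import numpy as np

def mse(image, reference):
    """Mean squared pixelwise difference; the squaring is assumed,
    as is usual for an 'MSE'."""
    a = np.asarray(image, dtype=float)
    b = np.asarray(reference, dtype=float)
    return float(np.mean((a - b) ** 2))
```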
Fig. 11. Experimental results (1).
The results (d) show the high effectiveness of our method in noise removal and detail preservation. The results using the square structuring element contain artifacts, since the square shape appears directly in the results, and the results using the binary primitives yield regions of unnaturally uniform pixel values. The comparison of (d) and (e) indicates that the optimization of binary structuring elements alone is insufficient and the gray scale optimization is necessary.
Fig. 12. Experimental results (2).
In these examples, the assumptions that the grains are derived from one primitive by homothetic magnification and that the primitive is expressed by one structuring element are not exactly satisfied. However, the results indicate that our method is applicable to such cases in practice.
5. CONCLUSIONS
This invited talk has explained the fundamental concept of mathematical morphology, the filter theorem and its relationship to image processing filters, and the concept of size distribution and its application to texture analysis, which is one of the author’s research topics. The importance of mathematical morphology is that it gives a “mathematical” framework, based on set operations, to operations on the shapes and sizes of image objects. Mathematical morphology has its origin in the research of minerals; if its researchers had concentrated on practical problems only and had not made the effort of mathematical formalization, mathematical morphology could not have been extended to general image processing or spatial statistics. This suggests that research considering general frameworks is always important, whatever the topic.
Acknowledgements
The author would like to thank Dr. Daniel Siahaan, the Organizing Committee Chairman, and all the Committee members, for this opportunity of the invited talk in ICTS2005.
REFERENCES
[1] J. Serra, Image analysis and mathematical morphology, Academic Press, 1982. ISBN 0-12-637242-X.
[2] J. Serra, ed., Image analysis and mathematical morphology, Volume 2: Technical advances, Academic Press, 1988. ISBN 0-12-637241-1.
[3] P. Soille, Morphological Image Analysis, 2nd Ed., Springer, 2003.
[4] P. Maragos, Tutorial on advances in morphological image processing and analysis, Optical Engineering, 26, 1987, 623–632.
[5] R. M. Haralick, S. R. Sternberg, and X. Zhuang, Image Analysis Using Mathematical Morphology, IEEE Trans. Pattern Anal. Machine Intell., PAMI-9, 1987, 532–550.
[6] G. Matheron and J. Serra, The birth of mathematical morphology, Proc. 6th International Symposium on Mathematical Morphology, 1–16, CSIRO Publishing, 2002. ISBN 0-643-06804-X.
[7] International Symposium on Mathematical Morphology, 40 years on (http://ismm05.esiee.fr).
Mathematical Morphology and Its Application – Akira Asano
[8] M. L. Comer and E. J. Delp, Morphological operations for color image processing, Journal of Electronic Imaging, 8(3), 1999, 279–289.
[9] G. Louverdis, M. I. Vardavoulia, I. Andreadis, and Ph. Tsalides, A new approach to morphological color image processing, Pattern Recognition, 35, 2002, 1733–1741.
[10] H. J. A. M. Heijmans, Morphological Image Operators, Academic Press, 1994. ISBN 0-12-014599-5.
[11] P. Maragos and R. W. Schafer, Morphological Filters – Part I: Their Set-Theoretic Analysis and Relations to Linear Shift-Invariant Filters, IEEE Trans. Acoust., Speech, Signal Processing, ASSP-35(8), 1987, 1153–1169.
[12] P. Maragos and R. W. Schafer, Morphological Filters – Part II: Their Relations to Median, Order-Statistic, and Stack Filters, IEEE Trans. Acoust., Speech, Signal Processing, ASSP-35(8), 1987, 1170–1184.
[13] P. Maragos, Pattern spectrum and multiscale shape representation, IEEE Trans. Pattern Anal. Machine Intell., 11, 1989, 701–716.
[14] E. R. Dougherty, J. T. Newell, and J. B. Pelz, Morphological texture-based maximum-likelihood pixel classification based on local granulometric moments, Pattern Recognition, 25, 1992, 1181–1198.
[15] F. Sand and E. R. Dougherty, Asymptotic granulometric mixing theorem: morphological estimation of sizing parameters and mixture proportions, Pattern Recognition, 31, 1998, 53–61.
[16] F. Sand and E. R. Dougherty, Robustness of granulometric moments, Pattern Recognition, 32, 1999, 1657–1665.
[17] A. Asano, Y. Kobayashi, C. Muraki, and M. Muneyasu, Optimization of gray scale morphological opening for noise removal in texture images, Proc. 47th IEEE International Midwest Symposium on Circuits and Systems, 1, 2004, 313–316.
[18] A. Asano, T. Yamashita, and S. Yokozeki, Learning optimization of morphological filters with grayscale structuring elements, Optical Engineering, 35(8), 1996, 2203–2213.
[19] N. R. Harvey and S. Marshall, The use of genetic algorithms in morphological filter design, Signal Processing: Image Communication, 8, 1996, 55–71.
[20] N. S. T. Hirata, E. R. Dougherty, and J. Barrera, Iterative Design of Morphological Binary Image Operators, Optical Engineering, 39(12), 2000, 3106–3123.
[21] A. Asano, T. Ohkubo, M. Muneyasu, and T. Hinamoto, Primitive and Point Configuration texture model and primitive estimation using mathematical morphology, Proc. 13th Scandinavian Conf. on Image Analysis, Göteborg, Sweden; Springer LNCS 2749, 2003, 178–185.
MOLECULAR DYNAMICS SIMULATION ON A METALLIC
GLASS-SYSTEM: NON-ERGODICITY PARAMETER
Achmad Benny Mutiara
Dept. of Informatics Engineering, Faculty of Industrial Technology, Gunadarma University Jl.Margonda Raya No.100, Depok 16424, West-Java Indonesia
E-mail: amutiara@staff.gunadarma.ac.id
ABSTRACT
In the present paper we compute the non-ergodicity parameter from Molecular Dynamics (MD) simulation data following the mode-coupling theory (MCT) of the glass transition. The MCT of dense liquids marks the dynamic glass transition through a critical temperature Tc that is reflected in the temperature dependence of various physical quantities.
Here, molecular dynamics simulation data of a model adapted to Ni0.2Zr0.8 are analyzed to deduce Tc from the temperature dependence of the corresponding quantities and to check the consistency of the statements; in particular, the diffusion coefficients are analyzed. The resulting values agree well with the critical temperature of the non-vanishing non-ergodicity parameter determined from the structure factors in the asymptotic solution of the mode-coupling theory with memory kernels in “one-loop” approximation.
Keywords: Glass Transition, Molecular Dynamics Simulation, MCT
1. INTRODUCTION
The transition from a liquid to an amorphous solid that sometimes occurs upon cooling remains one of the largely unresolved problems of statistical physics [1,2]. At the experimental level, the so-called glass transition is generally associated with a sharp increase in the characteristic relaxation times of the system, and a concomitant departure of laboratory measurements from equilibrium. At the theoretical level, it has been proposed that the transition from a liquid to a glassy state is triggered by an underlying thermodynamic (equilibrium) transition [3]; in that view, an “ideal” glass transition is believed to occur at the so-called Kauzmann temperature, TK. At TK, it is
proposed that only one minimum-energy basin of attraction is accessible to the system. One of the first arguments of this type is due to Gibbs and DiMarzio [4], but more recent studies using replica methods have yielded evidence in support of such a transition in Lennard-Jones glass formers [3,5,6]. These observations have been called into question by experimental data and by recent results of simulations of polydisperse hard-core disks, which have failed to detect any evidence of a thermodynamic transition up to extremely high packing fractions [7]. One of the questions that arises is therefore whether the discrepancies between the reported simulated behavior of hard-disk and soft-sphere systems are due to fundamental differences in the models, or whether they are a consequence of inappropriate sampling at low temperatures and high densities.
Different, alternative theoretical considerations have attempted to establish a connection between glass transition phenomena and the rapid increase in relaxation times that arises in the vicinity of a theoretical critical temperature (the so-called “mode-coupling” temperature, Tc), thereby giving rise to a
“kinetic” or “dynamic” transition [8]. In recent years, both viewpoints have received some support from molecular simulations. Many of these simulations have been conducted in the context of models introduced by Stillinger and Weber and by Kob and Andersen [9]; such models have been employed in a number of studies that have helped shape our current views about the glass transition [5,10–14].
In the full MCT, the remainders of the transition and the value of Tc have to be evaluated, e.g., from the
approach of the undercooled melt towards the idealized arrested state, either by analyzing the time and temperature dependence in the β-regime of the structural fluctuation dynamics [15–17] or by evaluating the temperature dependence of the so-called gm-parameter [18,19]. There are further
possibilities to estimate Tc, e.g., from the temperature dependence of the diffusion coefficients or of the relaxation time of the final α-decay in the melt, as these quantities for T > Tc display a critical behaviour proportional to |T − Tc|^γ. However, only crude estimates of Tc can be obtained from these quantities, since near Tc the critical behaviour is masked by the effects of transversal currents and thermally activated matter transport, as mentioned above.
On the other hand, as emphasized and applied in [20–22], the value of Tc predicted by the idealized
MCT can be calculated once the partial structure factors of the system and their temperature dependence are sufficiently well known. Besides temperature and particle concentration, the partial structure factors are the only significant quantities which enter the equations of the so-called nonergodicity parameters of the system. The latter vanish identically for temperatures above Tc and their
calculation thus allows a rather precise determination of the critical temperature predicted by the idealized theory.
At this stage it is tempting to consider how well the estimates of Tc from different approaches fit
together and whether the Tc estimate from the non-ergodicity parameters of the idealized MCT compares to the values from the full MCT. Regarding this, we here investigate a molecular dynamics (MD) simulation model adapted to the glass-forming Ni0.2Zr0.8 transition metal system. The NixZr1−x system is well studied by experiments [23,24] and by MD simulations [25–29], as it is a rather interesting system whose components are important constituents of a number of multi-component ‘massive’ metallic glasses. In the present contribution we consider, in particular, the x = 0.2 composition and concentrate on the determination of Tc from evaluating and analyzing the non-ergodicity parameters and the diffusion coefficients.
In the literature, similar comparisons of Tc estimates already exist [20–22] for two systems. The studies come, however, to rather different conclusions. From MD simulations for a soft-sphere model, Barrat et al. [20] find agreement between the different Tc estimates within about 15%. On the other hand, for a binary Lennard-Jones system, Nauroth and Kob [22] obtain from their MD simulations a significant deviation between the Tc estimates, by about a factor of 2.
Regarding this, the present investigation is aimed at clarifying the situation for at least one of the important metallic glass systems. Our paper is organized as follows: in Section 2 we present the model and give some details of the computations.
Section 3 gives a brief discussion of some aspects of the mode-coupling theory as used here. Results of our MD simulations and their analysis are then presented and discussed in Section 4.
2. SIMULATIONS
The present simulations are carried out as state-of-the-art isothermal-isobaric (N, T, p) calculations. The Newtonian equations of N = 648 atoms (130 Ni and 518 Zr) are numerically integrated by a fifth-order predictor-corrector algorithm with time step Δt = 2.5 × 10⁻¹⁵ s in a cubic volume with periodic boundary conditions and variable box length L. With regard to the electron theoretical description of the interatomic potentials in transition metal alloys by Hausleitner and Hafner [30], we model the interatomic couplings as in [26] by a volume-dependent electron-gas term Evol(V) and pair potentials φ(r) adapted to the equilibrium distance, depth, width, and zero of the Hausleitner-Hafner potentials [30] for Ni0.2Zr0.8 [31]. For this model, simulations were started by heating a starting configuration up to 2000 K, which leads to a homogeneous liquid state. The system then is cooled continuously to various annealing temperatures with cooling rate −∂tT = 1.5 × 10¹² K/s. Afterwards the obtained configurations at the various annealing temperatures (here 1500–800 K) are relaxed by carrying out additional isothermal annealing runs at the selected temperature. Finally the time evolution of these relaxed configurations is modelled and analyzed. More details of the simulations are given in [31].
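The paper integrates the equations of motion with a fifth-order predictor-corrector; as a self-contained illustration of such time stepping, here is the simpler and widely used velocity-Verlet scheme applied to a toy force field (a single particle in a harmonic well; all parameter values are arbitrary, and the real simulation of course uses the alloy pair potentials):

```python
import numpy as np

def velocity_verlet(r, v, force, m, dt, steps):
    """Velocity-Verlet time stepping (a simpler stand-in for the
    fifth-order predictor-corrector used in the paper)."""
    f = force(r)
    for _ in range(steps):
        v_half = v + 0.5 * dt * f / m   # half-step velocity update
        r = r + dt * v_half             # full-step position update
        f = force(r)                    # force at the new positions
        v = v_half + 0.5 * dt * f / m   # second half-step for velocity
    return r, v

# toy demo: harmonic well, F = -k r; total energy should stay conserved
k, m, dt = 1.0, 1.0, 1.0e-3
force = lambda r: -k * r
r0, v0 = np.array([1.0, 0.0, 0.0]), np.zeros(3)
e0 = 0.5 * m * v0 @ v0 + 0.5 * k * r0 @ r0
r1, v1 = velocity_verlet(r0, v0, force, m, dt, steps=10_000)
e1 = 0.5 * m * v1 @ v1 + 0.5 * k * r1 @ r1
```

The long-time energy conservation of such symplectic schemes is what makes the isothermal annealing runs and the subsequent analysis of the time evolution meaningful.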
3. THEORY
In this section we provide some basic formulae that permit the calculation of Tc and the non-ergodicity parameters fij(q) for our system. A more detailed presentation may be found in Refs. [20–22,32,33]. The central objects of the MCT are the partial intermediate scattering functions, which are defined for a binary system by [34]
where
is a Fourier component of the microscopic density of species i. The diagonal terms α = β are denoted as the incoherent intermediate scattering function
The normalized partial- and incoherent intermediate
scattering functions are given by
where the Sij(q) = Fij(q, t = 0) are the partial static
structure factors. The basic equations of the MCT are the set of nonlinear matrix integrodifferential equations given by
where F is the 2 × 2 matrix consisting of the partial intermediate scattering functions Fij(q, t), and the frequency matrix Ω² is given by
S(q) denotes the 2 × 2 matrix of the partial structure factors Sij(q), xi = Ni/N, and mi is the atomic mass of species i. The MCT for the idealized glass transition predicts [8] that the memory kernel M can be expressed at long times by
where ρ = N/V is the particle density and the vertex Viα(q, k) is given by
and the matrix of the direct correlation function is defined by
The equation of motion for F^s_i(q, t) has a form similar to eq. (6), but the memory function for the incoherent intermediate scattering function is given by:
In order to characterize the long-time behaviour of the intermediate scattering functions, the non-ergodicity parameters f(q) are introduced as
These parameters are the solutions of eqs. (6)–(10) at long times. The meaning of these parameters is the following:
if fij(q) = 0, then the system is in a liquid state, with density fluctuation correlations decaying at long times. If fij(q) > 0, the system is in an arrested, non-ergodic state, where density fluctuation correlations are stable for all times. In order to compute fij(q), one can use the following iterative procedure [22]:
where the matrices A(q), B(q), C(q), D(q), and N(q) are given by
This iterative procedure has two types of solutions: nontrivial ones with f(q) > 0 and trivial ones with f(q) = 0. The incoherent non-ergodicity parameter f^s_i(q) can be evaluated by the following iterative procedure:
As indicated by eq. (20), the computation of the incoherent non-ergodicity parameter f^s_i(q) requires that the coherent non-ergodicity parameters be determined in advance.
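The character of this iteration (only the trivial solution f = 0 on the liquid side, a non-trivial one on the arrested side) can be seen already in the schematic one-component F2 model of Götze [8], where the memory kernel is m(f) = v f² and the long-time limit obeys f/(1 − f) = m(f); the binary case [22] iterates 2 × 2 matrices per wavenumber in exactly the same way. A sketch, with the coupling v playing the role of the temperature dependence (its critical value is v_c = 4):

```python
def f2_nonergodicity(v, n_iter=200):
    """Fixed-point iteration f -> m/(1 + m) with m(f) = v * f**2
    (schematic F2 model); returns the long-time plateau f."""
    f = 1.0                      # start from the fully arrested limit
    for _ in range(n_iter):
        m = v * f * f
        f = m / (1.0 + m)
    return f

f_liquid = f2_nonergodicity(3.0)   # v < 4: only the trivial solution f = 0
f_glass = f2_nonergodicity(5.0)    # v > 4: non-trivial (arrested) solution
```

For v > 4 the analytic solution is f = (1 + sqrt(1 − 4/v))/2, so f ≈ 0.724 at v = 5, and the iteration converges to it from above; below v_c it collapses to zero, mirroring the T > Tc behaviour reported in Section 4.2.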
4. RESULTS AND DISCUSSIONS
4.1 Partial structure factors and intermediate scattering functions
First we show the results of our simulations concerning the static properties of the system in terms of the partial structure factors Sij(q) and the partial pair correlation functions gij(r). To compute the partial structure factors Sij(q) for a binary system we use the following definition [35]
where
are the partial pair correlation functions. The MD simulations yield a periodic repetition of the atomic distributions with periodicity length L. Truncation of the Fourier integral in Eq. (21) leads to an oscillatory behavior of the partial structure factors at small q. In order to reduce the effects of this truncation, we compute from Eq. (22) the partial pair correlation functions for distances r up to Rc = 3/2 L. For the numerical evaluation of Eq. (21), a Gaussian-type damping term is included
FIG. 1. Partial structure factors at T = 1400 K, 1300 K, 1200 K, 1100 K, 1000 K, 900 K and 800 K (from top to bottom); a) Ni-Ni-part, the curves are vertically shifted by 0.05 relative to each other; b) Ni-Zr-part, the curves are vertically shifted by 0.1 relative to each
other; and c) Zr-Zr-part, the curves are vertically shifted by 0.5 relative to each other.
FIG. 2. Comparison between our MD simulations and experimental results [23] for the total Faber-Ziman structure factor S^FZ_tot(q) and the partial Faber-Ziman structure factors aij(q) for Ni0.2Zr0.8.
with R = Rc/3. Fig. 1 shows the partial structure factors Sij(q) versus q for all temperatures investigated. The figure indicates that the shape of Sij(q) depends only weakly on temperature and that, in particular, the positions of the first maximum and the first minimum in Sij(q) are more or less temperature independent. In order to compare our calculated structure factors with experimental ones, we have determined the Faber-Ziman partial structure factors
aij(q) [37]
and the Faber-Ziman total structure factor S^FZ_tot(q) [36]. For a binary system with coherent scattering length bi of species i the following relationship holds:
In the evaluation of aij(q), we applied the same algorithm as for Sij(q). By using aij(q) and the scattering lengths b one can compute the total structure factor. Here we take bi from the experimental data of
Kuschke [23]: b for natural Ni is 1.03 × 10⁻¹² cm and for Zr 0.716 × 10⁻¹² cm. Fig. 2 compares the results of our simulations with the experimental results of Kuschke [23] for the same alloy system at 1000 K. There is good agreement between the experimental and the simulation results, which demonstrates that our model is able to reproduce the steric relations of the considered system and the chemical order, as far as visible in the partial structure factors.
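In one common (Ashcroft-Langreth) convention for Eqs. (21)-(22), and with the Gaussian damping term exp(−(r/R)²) mentioned above, the numerical transform may be sketched as follows (the convention and all numbers in the demo are assumptions, not the paper's data):

```python
import numpy as np

def partial_structure_factor(q, r, g_ij, rho, x_i, x_j, delta_ij, R):
    """S_ij(q) from g_ij(r) in the Ashcroft-Langreth convention,
    with a Gaussian damping exp(-(r/R)^2) suppressing the ripples
    caused by truncating the Fourier integral at finite r."""
    damp = np.exp(-(r / R) ** 2)
    dr = r[1] - r[0]
    # np.sinc(x) = sin(pi x)/(pi x), hence sinc(q r / pi) = sin(q r)/(q r)
    integrand = r * r * (g_ij - 1.0) * np.sinc(q * r / np.pi) * damp
    return x_i * delta_ij + 4.0 * np.pi * rho * x_i * x_j * integrand.sum() * dr

# demo: g(r) = 1 (no correlations) must give S_ii(q) = x_i exactly
r = np.linspace(1e-6, 12.0, 2000)
s = partial_structure_factor(q=2.0, r=r, g_ij=np.ones_like(r), rho=0.05,
                             x_i=0.2, x_j=0.2, delta_ij=1.0, R=4.0)
```

The damping width R = Rc/3 trades a small smoothing of S(q) for the removal of the low-q truncation oscillations described in the text.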
4.2 Non-ergodicity parameters
The non-ergodicity parameters are defined by Eq. (13) as the non-vanishing asymptotic solutions of the MCT eq. (6). Phenomenologically, they can be estimated by creating a master curve from the intermediate scattering functions with fixed scattering vector q at different temperatures. The master curves are obtained by plotting the scattering functions Φ(q, t) as functions of the normalized time t/τα. Fig. 3 presents the estimated q-dependent non-ergodicity parameters from the coherent scattering functions of Ni and Zr, and Fig. 4 those from the incoherent scattering functions. Figs. 3 and 4 also include the deduced Kohlrausch-Williams-Watts amplitudes A(q) from the master curves and from the intermediate scattering functions at T = 1100 K. (The further fit parameters can be found in [31].) In order to compute the non-ergodicity parameters fij(q) analytically, we followed for our binary system the self-consistent method as formulated by Nauroth and Kob [22] and as sketched in Section 3. Input data for our iterative determination of fij(q) = Fij(q, ∞) are the temperature-dependent partial structure factors Sij(q) from the previous subsection. The iteration is started by arbitrarily setting F_NiNi(q, ∞)^(0) = 0.5 S_NiNi(q), F_ZrZr(q, ∞)^(0) = 0.5 S_ZrZr(q), F_NiZr(q, ∞)^(0) = 0.
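The Kohlrausch-Williams-Watts amplitudes A(q) quoted above come from fitting the α-decay with a stretched exponential A exp(−(t/τ)^β); schematically (synthetic data, scipy for the fit, all parameter values invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def kww(t, A, tau, beta):
    """Kohlrausch-Williams-Watts stretched exponential."""
    return A * np.exp(-(t / tau) ** beta)

# synthetic "master curve" with known parameters
t = np.logspace(-2, 2, 200)
phi = kww(t, A=0.75, tau=1.0, beta=0.6)

popt, _ = curve_fit(kww, t, phi, p0=(0.8, 1.0, 0.7))
A_fit, tau_fit, beta_fit = popt
```

The fitted amplitude A(q) serves as the phenomenological estimate of the plateau value that is compared with the computed f_ij(q) in Figs. 3 and 4.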
FIG. 3. Non-ergodicity parameters f^c_ij(q) for the coherent intermediate scattering functions as solutions of eqs. (7) and (8) (solid line), KWW parameter A(q) of the master curves (diamond), von Schweidler parameter f^c(q) of the master curves (square), and KWW parameter A(q) for Φij(q) at 1100 K (triangle up); a)
FIG. 4. The same as Fig. 3 but for the incoherent intermediate scattering functions; a) Ni-part and b) Zr-part.
For T > 1200 K we always obtain the trivial solution fij(q) = 0, while at T = 1100 K and below we get stable non-vanishing solutions fij(q) > 0. The stability of the non-vanishing solutions was tested for more than 3000 iteration steps. From these results we expect that Tc for our system lies between 1100 and 1200 K. To estimate Tc more precisely, we interpolated Sij(q) from our MD data for temperatures between 1100 and 1200 K by use of the algorithm of Press et al. [39]. We observe that at T = 1102 K a non-trivial solution of fij(q) can be found, but not at T = 1105 K and above. This means that the critical temperature Tc of our system is around 1102 K. The non-trivial solutions fij(q) at this temperature are denoted the critical non-ergodicity parameters f^c_ij(q). They are included in Fig. 3. As can be seen from Fig. 3, the absolute values and the q-dependence of the calculated f^c_ij(q) agree rather well with the estimates from the scattering-function master curves and, in particular, with the deduced Kohlrausch-Williams-Watts amplitudes A(q) at 1100 K. By use of the critical non-ergodicity parameters f^c_ij(q), the computational procedure was run to determine the critical non-ergodicity parameters f^s_ci(q) for the incoherent scattering functions at T = 1102 K. Fig. 4 presents our results for the so-calculated f^s_ci(q). Like Fig. 3 for the coherent non-ergodicity parameters, Fig. 4 demonstrates that the f^s_ci(q) agree well with the estimates from the incoherent scattering-function master curves and, in particular, with the deduced Kohlrausch-Williams-Watts amplitudes A(q) at 1100 K.
4.3 Diffusion coefficients
From the simulated atomic motions in the computer experiments, the diffusion coefficients of the Ni and Zr species can be determined as the slope of the atomic mean square displacements in the asymptotic long-time limit
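In three dimensions the Einstein relation gives D_i as one sixth of the long-time slope of the mean square displacement; a sketch of the evaluation (the trajectory array layout and the synthetic demo data are assumptions):

```python
import numpy as np

def diffusion_coefficient(positions, dt):
    """D from the long-time slope of the mean square displacement,
    MSD(t) ~ 6 D t; positions has shape (n_steps, n_atoms, 3) and
    must hold unwrapped coordinates."""
    disp = positions - positions[0]                     # displacement from t = 0
    msd = np.mean(np.sum(disp ** 2, axis=2), axis=1)    # average over atoms
    t = np.arange(len(msd)) * dt
    half = len(t) // 2            # fit only the late, assumed-diffusive part
    slope = np.polyfit(t[half:], msd[half:], 1)[0]
    return slope / 6.0

# demo: synthetic trajectory built so that MSD = 6 D t with D = 0.25
D_true, dt, n = 0.25, 1.0, 100
x = np.sqrt(2.0 * D_true * np.arange(n) * dt)   # per-coordinate displacement
pos = np.stack([np.stack([x, x, x], axis=1)] * 4, axis=1)  # 4 identical atoms
D_est = diffusion_coefficient(pos, dt)
```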
FIG. 5. Diffusion coefficients Di as a function of 1000/T. Symbols are MD results for Ni (square) and Zr (diamond); the full lines are power-law approximations for Ni and Zr, respectively.
Fig. 5 shows the thus calculated diffusion coefficients of our Ni0.2Zr0.8 model for the temperature range between 800 and 2000 K. At temperatures above approximately 1250 K, the diffusion coefficients of both species run parallel with temperature in the Arrhenius plot, indicating a fixed ratio DNi/DZr ≈ 2.5 in this temperature regime. At lower temperatures, the Zr atoms have a lower mobility than the Ni atoms, yielding around 900 K a value of about 10 for DNi/DZr. This means that here the Ni atoms carry out a rather rapid motion within a relatively immobile Zr matrix. According to the MCT, above Tc the diffusion coefficients follow a critical power law
with non-universal exponent γ [9,38]. In order to estimate Tc from this relationship, we have fitted the simulated diffusion data for 1050 K and above. The results of the fit are included in Fig. 5 by dashed lines. According to this fit, the system has a critical temperature of 950 K. The exponents γ turn out as 1.8 for the Ni subsystem and 2.0 for the Zr subsystem.
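With Tc held fixed, the exponent γ follows from a log-log linear fit of D against T − Tc; a sketch with synthetic data built from the values quoted above (Tc = 950 K, γ = 1.8; the prefactor and temperature grid are invented):

```python
import numpy as np

def fit_power_law(T, D, Tc):
    """Fit D = D0 * (T - Tc)**gamma for fixed Tc by linear regression
    of log D against log(T - Tc); returns (D0, gamma)."""
    gamma, log_d0 = np.polyfit(np.log(T - Tc), np.log(D), 1)
    return np.exp(log_d0), gamma

T = np.array([1050.0, 1100.0, 1200.0, 1400.0, 1700.0, 2000.0])
D = 1.0e-9 * (T - 950.0) ** 1.8          # synthetic Ni-like data
D0, gamma = fit_power_law(T, D, Tc=950.0)
```

In practice Tc itself is varied as well and the pair (Tc, γ) giving the best fit is reported, which is how the 950 K estimate above is obtained.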
5. CONCLUSION
The results of our MD simulations show that our system behaves as predicted by MCT in the sense that the diffusion coefficients follow the critical power law. After analyzing these coefficients we found that the system has a critical temperature of 950 K for the diffusion processes. Our analyses of the ergodic region (T > Tc) and of the non-ergodic region (T < Tc) lead to Tc estimates which agree with each other within 10%. These Tc estimates are also in acceptable agreement with the Tc estimate from the dynamical phenomena. Within the precision of our analysis, the critical temperature Tc of our system is about 1000 K.
REFERENCES
[1] W. Götze and M. Sperl, J. Phys.: Condens. Matter 16, 4807 (2004); W. Götze and M. Sperl, Phys. Rev. Lett. 92, 105701 (2004).
[2] P. G. Debenedetti and F. H. Stillinger, Nature 410(6825), 259 (2001).
[3] M. Mezard and G. Parisi, Phys. Rev. Lett. 82(4), 747 (1999).
[4] J. H. Gibbs and E. A. DiMarzio, J. Chem. Phys. 28(3), 373 (1958).
[5] B. Colluzi, G. Parisi, and P. Verrochio, Phys. Rev. Lett. 84(2), 306 (2000).
[6] T. S. Grigera and G. Parisi, Phys. Rev. E 63, 045102(R) (2001).
[7] L. Santen and W. Krauth, Nature 405(6786), 550 (2000).
[8] W. Götze and L. Sjögren, Rep. Prog. Phys. 55(3), 241 (1992).
[9] W. Kob and H. C. Andersen, Phys. Rev. E 51(5), 4626 (1995).
[10] S. Sastry, P. G. Debenedetti, and F. H. Stillinger, Nature 393(6685), 554 (1998).
[11] F. Sciortino, W. Kob, and P. Tartaglia, Phys. Rev. Lett. 83(16), 3214 (1999).
[12] C. Donati, S. C. Glotzer, P. H. Poole, W. Kob, and S. J. Plimpton, Phys. Rev. E 60(3), 3107 (1999).
[13] B. Colluzi, G. Parisi, and P. Verrochio, J. Chem. Phys. 112(6), 2933 (2000).
[14] R. Yamamoto and W. Kob, Phys. Rev. E 61(5), 5473 (2000).
[15] T. Gleim and W. Kob, Eur. Phys. J. B 13, 83 (2000).
[16] A. Meyer, R. Busch, and H. Schober, Phys. Rev. Lett. 83, 5027 (1999); A. Meyer, J. Wuttke, W. Petry, O. G. Randl, and H. Schober, Phys. Rev. Lett. 80, 4454 (1998).
[17] H. Z. Cummins, J. Phys.: Condens. Matter 11, A95 (1999).
[18] H. Teichler, Phys. Rev. Lett. 76, 62 (1996).
[19] H. Teichler, Phys. Rev. E 53, 4287 (1996).
[20] J. L. Barrat and A. Latz, J. Phys.: Condens. Matter 2, 4289 (1990).
[21] M. Fuchs, Thesis, TU München (1993); M. Fuchs and A. Latz, Physica A 201, 1 (1993).
[22] M. Nauroth and W. Kob, Phys. Rev. E 55, 657 (1997).
[23] M. Kuschke, Thesis, Universität Stuttgart (1991).
[24] Yan Yu, W. B. Muir, and Z. Altounian, Phys. Rev. B 50, 9098 (1994).
[25] B. Böddekker, Thesis, Universität Göttingen (1999); B. Böddekker and H. Teichler, Phys. Rev. E 59, 1948 (1999).
[26] H. Teichler, phys. stat. sol. (b) 172, 325 (1992).
[27] H. Teichler, Defect and Diffusion Forum 143–147, 717 (1997).
[28] H. Teichler, in: Simulationstechniken in der Materialwissenschaft, edited by P. Klimanek and M. Seefeldt (TU Bergakademie, Freiberg, 1999).
[29] H. Teichler, Phys. Rev. B 59, 8473 (1999).
[30] Ch. Hausleitner and J. Hafner, Phys. Rev. B 45, 128 (1992).
[31] A. B. Mutiara, Thesis, Universität Göttingen (2000).
[32] W. Götze, Z. Phys. B 60, 195 (1985).
[33] J. Bosse and J. S. Thakur, Phys. Rev. Lett. 59, 998 (1987).
[34] B. Bernu, J.-P. Hansen, G. Pastore, and Y. Hiwatari, Phys. Rev. A 36, 4891 (1987); ibid. 38, 454 (1988).
[35] J. P. Hansen and I. R. McDonald, Theory of Simple Liquids, 2nd Ed. (Academic Press, London, 1986).
[36] T. E. Faber and J. M. Ziman, Phil. Mag. 11, 153 (1965).
[37] Y. Waseda, The Structure of Non-Crystalline Materials (McGraw-Hill, New York, 1980).
[38] J.-P. Hansen and S. Yip, Transp. Theory Stat. Phys. 24, 1149 (1995).
[39] W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, Numerical Recipes, 2nd Ed. (Cambridge University Press, Cambridge, 1992).
Information and Communication Technology Seminar, Vol. 1 No. 1, August 2005
TOMOGRAPHIC IMAGING USING INFRA RED SENSORS
Sallehuddin Ibrahim & Md. Amri Md. Yunus
Department of Control and Instrumentation Engineering, Faculty of Electrical Engineering, Universiti Teknologi Malaysia
81310 UTM Skudai, Johor, Malaysia salleh@fke.utm.my
ABSTRACT
This paper is concerned with the development of a tomographic imaging system to measure a two-phase flow of solid particles flowing in air. The general principle underlying a tomography system is to attach a number of non-intrusive sensors in a plane around the vessel to be investigated and to recover from those sensors an image of the corresponding cross section through the vessel. The method makes use of infra red sensors, configured as a combination of two orthogonal and two rectilinear light projection systems and installed around a vertical transparent flow pipe. The imaging is carried out on-line without invading the fluid, and the system is able to accurately visualize concentration profiles in a pipeline. Several results are presented in this paper showing the capability of the system to visualize the concentration profiles of solids flowing in air.
Keywords: Infra red, imaging, tomography, sensor.
1. INTRODUCTION
Flow imaging is gaining importance in process industries, and suitable systems must be developed for this purpose. Flow measurements are among the most difficult because the medium being measured can occur in various physical states, which complicates the measuring procedure. Complicating factors include temperature, density, viscosity, pressure, multi-component media (liquid-gas, solid-gas), the type of flow, etc. The choice of method is further directed by specific requirements for the flowmeter, e.g. the measuring range, minimum loss of pressure, the shortest possible recovery section, a sensor without moving parts, continuous operation of the sensor, etc.
Tomography methods have been developed rapidly for visualizing two-phase flow of various industrial processes, e.g. gas/oil flows in oil pipelines [1], gas/solid flows in pneumatic conveyors [2], and separation/mixing processes in chemical vessels [3]. Infra red tomography involves the measurement of light attenuation detected by various infra red sensors installed around a flow pipe, and the reconstruction of cross-sectional images using the measured data and a suitable algorithm.
Tomography began to be considered seriously because it can directly analyze the internal characteristics of a process flow, so that resources can be utilized more efficiently and the demand and regulations for product quality can be met. The use of tomography to analyze flow regimes began in the late 1980s. In addition, concern about environmental pollution enhanced the need to find alternative methods of reducing industrial emissions and waste. In such applications, the system must be robust and must not disturb the flow in the pipeline; it should be able to operate in aggressive and fast-moving fluids. This is where tomography can play an important role, as it can unravel the complexities of a flow without invading it.
Tomography can be combined with the characteristics of infra red sensors to explore the internal characteristics of a process flow. Infra red tomography is conceptually straightforward and inexpensive. It has a fast dynamic response and can be more portable than other types of radiation-based tomography systems. The image reconstructed by an infra red tomography system should correspond directly to the visual images observed in transparent sections of the pipeline. It has other advantages such as negligible response time relative to process variations, high resolution, and immunity to electrical noise and interference. This paper explains how such a system can be designed and constructed to measure the distribution of solids in air flowing in a pipeline.
2. IMAGE RECONSTRUCTION
The projection of an infra red beam from the emitter towards the detector can be illustrated mathematically as in Figure 1. The coordinate system is used to describe the line integrals and projections. The object of interest is represented by a two-dimensional function f(x, y), and each line integral is represented by the parameters (φ, x'). Line AB can be expressed as

\[ x\cos\phi + y\sin\phi = x' \qquad (1) \]

where

\[ \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\phi & \sin\phi \\ -\sin\phi & \cos\phi \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} \qquad (2) \]

The ray sum p_φ(x') is the integral of f(x, y) along the line:

\[ p_\phi(x') = \int_{\text{line}} f(x, y)\, dx \qquad (3) \]
Using a delta function, this can be rewritten as

\[ p_\phi(x') = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} f(x, y)\, \delta(x\cos\phi + y\sin\phi - x')\, \Delta x \qquad (4) \]

where
N = total number of horizontal cells/pixels
M = total number of vertical cells/pixels

The reconstruction algorithm approximates the density at a point by summing the ray sums of all rays passing through that point. This has been termed the discrete back projection method and can be formulated mathematically as

\[ \hat{f}(x, y) = \frac{1}{M} \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} p_\phi(x')\, \delta(x\cos\phi + y\sin\phi - x')\, \Delta x' \qquad (5) \]
Figure 1. Infra red path from emitter to detector
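The discrete back projection of equation (5) can be sketched as follows. This is a minimal illustration, not the authors' Visual C++ implementation: the grid size, the normalised [-1, 1] pipe coordinates, and the nearest-ray lookup are all assumptions.

```python
import math

def back_project(projections, angles, n=32, m=32):
    """Discrete linear back projection: approximate the density at each
    pixel by summing the ray sums of the rays passing through it.

    projections[k] holds the ray sums p_phi(x') for angle angles[k],
    with x' sampled uniformly over [-1, 1] across the pipe section.
    """
    image = [[0.0] * n for _ in range(m)]
    for k, phi in enumerate(angles):
        rays = projections[k]
        for i in range(m):
            for j in range(n):
                # Pixel centre in normalised [-1, 1] coordinates.
                x = -1.0 + (2.0 * j + 1.0) / n
                y = -1.0 + (2.0 * i + 1.0) / m
                # x' = x cos(phi) + y sin(phi), as in equation (1);
                # pick the nearest measured ray for this x'.
                xp = x * math.cos(phi) + y * math.sin(phi)
                r = int(round((xp + 1.0) / 2.0 * (len(rays) - 1)))
                r = max(0, min(len(rays) - 1, r))
                image[i][j] += rays[r] / len(angles)
    return image
```

A uniform phantom (all ray sums equal) back-projects to a uniform image; a real reconstruction would also apply the sensor calibration discussed in section 3.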
3.
SYSTEM DESIGN
The measurement system is composed of four subsystems: (1) sensors, (2) signal conditioning, (3) data acquisition, and (4) image reconstruction and display.
Figure 2 depicts the measurement section around the flow pipe, which contains 64 pairs of infra red sensors configured in a combination of orthogonal and rectilinear projections. A similar number of sensors is installed downstream; their output should be a replica of the output from the upstream sensors, but subject to a time delay, so the two signals can be cross-correlated to obtain the velocity of the flow. The flow pipe has an external diameter of 80 mm and an internal diameter of 78 mm. Since the infra red sensors are the critical part, the infra red transmitters and receivers were selected carefully. The sensors should be arranged such that they cover the whole pipe; in tomography, using more sensors yields higher resolution. The second set of sensors was installed 100 mm downstream to measure velocity using the cross-correlation method. The sensors must be of high performance, compact, require minimum maintenance or calibration, and be intrinsically safe. For this purpose, the SFH 4510 emitter and the SFH 2500 detector were chosen due to their low cost and fast switching time.
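The transit-time principle behind the cross-correlation velocity measurement can be sketched as follows; the brute-force correlation loop and the sample signals are illustrative assumptions, not the system's actual signal processing.

```python
def cross_correlation_velocity(upstream, downstream, dt, spacing=0.1):
    """Estimate flow velocity from the delay that maximises the
    cross-correlation of the upstream and downstream sensor signals.

    dt is the sampling interval in seconds; spacing is the 100 mm
    sensor separation expressed in metres.
    """
    n = len(upstream)
    best_lag, best_corr = 0, float("-inf")
    for lag in range(n):
        # Correlation of upstream with downstream delayed by `lag` samples.
        c = sum(upstream[i] * downstream[i + lag] for i in range(n - lag))
        if c > best_corr:
            best_corr, best_lag = c, lag
    delay = best_lag * dt
    return spacing / delay if delay > 0 else float("inf")
```

For example, a pulse that reaches the downstream sensor two samples later than the upstream sensor, at dt = 0.01 s, gives a transit time of 0.02 s and hence a velocity of 5 m/s over the 100 mm spacing.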
The infra red emitters and detectors are arranged in pairs and linked to the measurement section through optical fibers. Each receiver circuit consists of a photodiode, pre-amplification, amplification and a filter. The receiver circuits are connected to a Keithley DAS-1800 data acquisition card. Light generated by the emitters passes into the flow pipe and is attenuated if it hits an object. The light that reaches a receiver is converted into a current by the photodiode, processed by the signal conditioning circuit, and then digitized by the data acquisition system before entering the computer. A linear back projection algorithm, programmed in Visual C++, processes the digitized signals and displays the concentration profile of the solids flowing in air.
Figure 2. Tomographic measurement section
The solid particles, consisting of plastic beads, were dropped into the gravity flow rig shown in Figure 3. The rig, costing about RM100,000, was supplied by Optosensor. The beads were filled into a hopper, and a rotary valve controls the amount of beads flowing into the rig, so the user can set various flow rates. The measurement section was installed around the flow rig.
Tomographic Imaging Using Infra Red Sensors – Sallehuddin Ibrahim & Md. Amri Md. Yunus
Figure 3. Gravity flow rig
Figure 4. The regression line of sensors output versus measured mass flow rates
Figure 5(a). Concentration profile at a flow rate of 27 g/s
Figure 5(b). Concentration profile at a flow rate of 49 g/s
Figure 5(c). Concentration profile at a flow rate of 71 g/s
4. RESULTS
Various experiments were carried out using various algorithms and flow rates. The regression graph is shown in Figure 4, and the concentration profiles at various flow rates are shown in Figures 5(a) to 5(c).
Figure 4 shows that the output from the upstream sensors has values similar to that of the downstream sensors, and that the sensor output is proportional to the mass flow rate. Figures 5(a) to 5(c) show that the infra red tomographic system is able to locate the position of the beads inside the flow pipe. At higher flow rates, the flow rig released more plastic beads than at lower flow rates; as such, more pixels are occupied in Figures 5(b) and 5(c) than in Figure 5(a).
5. CONCLUSION
A tomographic imaging system using infra red sensors has been designed. The system is capable of producing tomographic images of two-phase flow. The spatial resolution and measurement accuracy can be enhanced by adding more sensors; however, there must be a compromise between the spatial resolution, accuracy and real-time capability of the system. The system has the potential to be applied in various process industries.
REFERENCES
[1] S. Ibrahim and M. A. Md. Yunus, Preliminary Result for Infrared Tomography, Elektrika, 6(1), 2004, 1-4.
[2] S. McKee, T. Dyakowski, R.A. Williams, T.A. Bell and T. Allen, Solids flow imaging and attrition studies in a pneumatic conveyor, Powder Technology, 82, 1995, 105-113.
[3] F. Dickin, Electrical resistance tomography for process applications, Meas. Sci. Technol., 7(3), 1996, 247-260.
Information and Communication Technology Seminar, Vol. 1 No. 1, August 2005 194
DEVELOPMENT OF SCHEDULER FOR LINUX VIRTUAL SERVER IN IPV6 PLATFORM USING ROUND ROBIN ADAPTIVE ALGORITHM, CASE STUDY: WEB SERVER
Royyana Muslim Ijtihadie, Febriliyan Samopa
Faculty Of Information Technology, Sepuluh Nopember Institute of Technology Kampus FTIF, Keputih, Sukolilo Surabaya
email : roy@its-sby.edu, iyan@its-sby.edu
ABSTRACT
With the explosive growth of the Internet, the potential number of clients that a server must support has dramatically increased; some hot sites have already received hundreds of thousands of simultaneous client connections, and a server will sometimes become overloaded. One solution is to use an LVS server cluster to reduce and balance the load across servers. One of the scheduling algorithms used by LVS is the round-robin algorithm.
This paper discusses the implementation of an adaptive algorithm used with the round-robin algorithm. The adaptive algorithm calculates server weights from the server load and applies them to the currently running round-robin scheduler of LVS.
The results of this research still show a loss of throughput performance when using adaptive round-robin instead of regular LVS, but the adaptive round-robin algorithm performs well in requests served per second and in time used per request.
Keywords: Linux Virtual Server, IPv6, performance, round robin.
1. INTRODUCTION
With the explosive growth of the Internet, the potential number of clients that a server must support has dramatically increased; some hot sites have already received hundreds of thousands of simultaneous client connections, and a server will sometimes become overloaded.
There are two solutions. The first is to upgrade the hardware. This seems fine for a short period of time, but in the future the same problem happens again and the server needs to be upgraded again.
The second solution is to use multiple servers. When a server starts to become overloaded, we can add another server running the same services to reduce the load on the first server. This is possible because the requests are divided by a load balancer and distributed among the servers. In this solution, users do not need to know that there are many servers behind it; they only see a single server they can connect to [1].
A virtual server is a server with high scalability and high availability, built from a group of servers joined together in a cluster. The architecture of the cluster should be transparent to the end user, who sees only the existence of one machine [1].
The virtual server distributes the load using a scheduling algorithm; the most used is round-robin. Round-robin divides the load exactly evenly among the cluster member machines, but this algorithm is not adaptive to the system load of the machines.
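Plain round robin can be sketched in a few lines; the server names are hypothetical, and this generic illustration stands in for, rather than reproduces, the LVS kernel scheduler.

```python
from itertools import cycle

# Plain round robin: each request goes to the next server in turn,
# regardless of how loaded that server currently is.
servers = ["server-a", "server-b", "server-c"]
rr = cycle(servers)

assigned = [next(rr) for _ in range(6)]
# Every server receives exactly two of the six requests, even if one
# of them is already far more loaded than the others.
```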
IP version 6 (IPv6) is a new version of the Internet Protocol, designed as the successor to IP version 4 (IPv4) [5].
IPv6 will be widely used in the future because it has expanded addressing capabilities that increase the scope of IP addresses. Its simplified header format reduces the common processing cost of packet handling. Improved support for extensions and options makes it flexible to introduce new options in the future. Its flow labeling capability allows the packets of particular traffic flows to be labeled. Another important issue is authentication and privacy: IPv6 has extensions to support authentication, integrity and data confidentiality [5].
The main focus of this research is to find the right modification to the round-robin scheduling algorithm so that the responsiveness of each server is taken into account by the round-robin scheduler.
2. DESIGN AND IMPLEMENTATION
The development of the adaptive round-robin algorithm consists of three parts: the system load collector, the server, and the monitor.
2.1. System Load Collector
The system load collector acts as a system monitor. It collects CPU time, memory usage, and file handle usage from the /proc filesystem and then writes them to a file.
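The parsing such a collector needs can be sketched as follows; the helper names are hypothetical, and the field layouts follow the standard /proc/stat and /proc/meminfo formats. In the collector itself these helpers would be fed from open("/proc/stat") and open("/proc/meminfo").

```python
def parse_cpu_times(stat_line):
    """Split the aggregate 'cpu' line of /proc/stat into (busy, total)
    jiffies; the fields are user, nice, system, idle, iowait, ...
    """
    fields = [int(v) for v in stat_line.split()[1:]]
    idle = fields[3] + (fields[4] if len(fields) > 4 else 0)
    return sum(fields) - idle, sum(fields)

def parse_mem_usage(meminfo_text):
    """Return the used-memory fraction from /proc/meminfo content
    (the values are reported in kB)."""
    info = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        info[key] = int(rest.split()[0])
    return 1.0 - info["MemFree"] / info["MemTotal"]
```

Two consecutive samples of the CPU counters give the busy fraction over the sampling interval, which is what the collector would write out.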
Figure 1 Flowchart of system load collector
2.2. Server
This application is actually a service listening on port 7888. Its aim is to serve requests asking for data about CPU time, memory usage, and file handle usage. It is needed by the monitor (section 2.3) to obtain the data used in the adaptive round-robin algorithm.
Figure 2 Flowchart of Server
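Such a status service can be sketched as below; the report format and function names are hypothetical, and port 0 is used so the example binds any free port, where the real service would listen on port 7888.

```python
import socket

def start_report_server(host="127.0.0.1", port=0):
    """Bind the load-report service and return (socket, bound port).
    port=0 lets the OS pick a free port for this demonstration; the
    service described in the paper listens on port 7888."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    return srv, srv.getsockname()[1]

def serve_one(srv, get_report):
    """Answer a single status request: send the current CPU time,
    memory usage and file handle report, then close the connection."""
    conn, _ = srv.accept()
    conn.sendall(get_report().encode())
    conn.close()
```

The real service would call serve_one in a loop; the monitor of section 2.3 connects, reads the report, and drops the connection.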
2.3. Monitor
The monitor is the implementation of the adaptive round-robin algorithm. It collects information about CPU time, memory usage, and file handle usage, together with the weight assigned to each monitored value (section 2.4). After collecting these data, the monitor calculates the load of each server and then updates the weight value on the LVS director.
Figure 3 Flowchart of monitor
2.4. Adaptive Round-Robin Algorithm
Basically, the algorithm does not directly reimplement round robin itself. It monitors each server's system load, calculates a value representing the overall load of the server, and applies the resulting weight through the ip6vsadm command (provided by LVS6). Each monitored value (CPU time, memory usage, file handle usage) has its own weight, which represents how strongly that quantity affects the server load and is defined by the system administrator. The result is the server load value, calculated according to formula (1).
\[ B = \frac{(BobotCPU \times BebanCPU) + (BobotMem \times BebanMem) + (BobotFHAN \times BebanFHAN)}{BobotCPU + BobotMem + BobotFHAN} \qquad (1) \]

where:
B : load of the server
BobotCPU : weight assigned to CPU time
BobotMem : weight assigned to memory usage
BobotFHAN : weight assigned to file handle usage
BebanCPU : CPU time value collected from the server
BebanMem : memory usage value collected from the server
BebanFHAN : file handle value collected from the server

The weight of server i is obtained from formula (2):

\[ W_i = 100 - \left( \frac{B_i}{\sum_{n=1}^{nserver} B[n]} \right) \times 100 \qquad (2) \]

where:
W_i : weight of server i
B_i : load of server i
B[n] : monitored load of server n, summed over all servers
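Formulas (1) and (2) can be read in code as follows; the function and variable names mirror the Bobot (weight) and Beban (load) symbols, and this is an illustrative reading of the formulas, not the monitor's actual implementation.

```python
def server_load(beban_cpu, beban_mem, beban_fhan,
                bobot_cpu, bobot_mem, bobot_fhan):
    """Formula (1): the server load B is the weighted average of the
    monitored CPU time, memory usage and file handle usage."""
    total_bobot = bobot_cpu + bobot_mem + bobot_fhan
    return (bobot_cpu * beban_cpu + bobot_mem * beban_mem
            + bobot_fhan * beban_fhan) / total_bobot

def server_weight(loads, i):
    """Formula (2): the smaller server i's share of the summed load,
    the larger the weight handed to the LVS director."""
    return 100.0 - loads[i] / sum(loads) * 100.0
```

A lightly loaded server thus receives a weight close to 100 and attracts a larger share of new connections.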
3. RESULTS
This research uses the time per request, the number of requests per second, and the throughput as performance indicators. A traffic simulator was used to perform the benchmark and experiment process. The experiment parameter is the number of requests performed: 1000, 3000, and 5000. All of the benchmarks use the IPv6 platform.
The results compare benchmarking using regular LVS and using LVS with the adaptive round-robin algorithm, both against benchmarking with a single server.
The results show that LVS with the adaptive round-robin algorithm decreases the time used for processing a request (Table 1). Compared with a single server, adaptive LVS decreases the time per request by about 90%; compared with regular LVS, by about 83%.
The number of requests served per second is increased (Table 2). Compared with a single server, adaptive LVS increases the number of requests served per second by about 941%; compared with regular LVS, by about 509%.
The throughput may be the most important figure to watch, because throughput is the general way to measure the performance of a server. As Table 3 shows, compared with a single server, adaptive LVS decreased the throughput by about 81%, and compared with regular LVS by about 88%.
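The percentage figures quoted for time per request follow from the row averages in Table 1; a quick arithmetic check, using a hypothetical pct_change helper, is:

```python
def pct_change(new, old):
    """Percentage change of `new` relative to the baseline `old`."""
    return (new - old) / old * 100.0

# Average time per request (Table 1): 16.84 for adaptive LVS versus
# 175.27 for a single server and 102.57 for regular LVS.
single_cut = pct_change(16.84, 175.27)   # about -90.4%
lvs_cut = pct_change(16.84, 102.57)      # about -83.6%
```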
4. CONCLUSION
Using the adaptive algorithm shows a loss of throughput performance, but good performance in the number of requests per second and the time used per request. The decreased throughput may be caused by server overhead from handling requests from the monitor.
REFERENCES
[1] LVS, 2004, Linux Virtual Server, http://www.linuxvirtualserver.org
[2] Stevens, W. Richard, 1994, TCP/IP Illustrated Volume I, Addison Wesley.
[3] Stevens, W. Richard, 1994, TCP/IP Illustrated Volume II, Addison Wesley.
[4] Stevens, W. Richard, 1994, TCP/IP Illustrated Volume III, Addison Wesley.
Table 1. Comparison of time per request

#req      single     LVS      LVS adaptive   +/- single %   +/- LVS %
1000      182.08     104.10   15.91          -91.26         -84.7166
3000      172.65     102.00   17.54          -89.84         -82.8039
5000      171.08     101.60   17.08          -90.02         -83.189
Average   175.27     102.57   16.84          -90.39         -83.5782
Table 2. Comparison of number of requests per second

#req      single   LVS     LVS adaptive   +/- single %   +/- LVS %
1000      5.49     9.61    62.83          1043.92        553.7981
3000      5.79     9.80    57.01          884.63         481.7347
5000      5.85     9.84    58.55          901.71         495.0203
Average   5.71     9.75    59.46          941.54         509.8803
Table 3. Comparison of throughput

#req      single    LVS      LVS adaptive   +/- single %   +/- LVS %
1000      582.53    982.90   115.1          -80.24         -88.2898
3000      594.35    989.10   104.4          -82.43         -89.4449
5000      593.83    989.90   107.3          -81.93         -89.1605
[5] S. Deering and R. Hinden, 1995, RFC 1883 - Internet Protocol, Version 6 (IPv6) Specification.
[6] Peterson, Larry L. and Davie, Bruce S., 1999, Computer Networks: A Systems Approach, Morgan-Kaufmann.
[7] Avi Silberschatz, Peter, 2000, Applied Operating System Concepts, John Wiley & Sons.
[8] James Aweya, Michel Ouellette, Delfin Y. Montuno, Bernard Doray and Kent Felske, 2002, An adaptive load balancing scheme for web servers, International Journal of Network Management, 12(1).
[9] Hemant M. Chaskar and Upamanyu Madhow, 2003, Fair scheduling with tunable latency: a round-robin approach, IEEE/ACM Transactions on Networking, 11.