Design of Automated Process in Supply Chain Application based on Supplier's Rank and Quota.


Sr. No. — Topic
1. Scope of the Journal
2. The Model
3. The Advisory and Editorial Board
4. Papers

First Published in the United States of America. Copyright © 2012 Foundation of Computer Science Inc.



International Journal of Computer Applications (IJCA) offers a venue for papers covering frontier issues in Computer Science and Engineering and their applications, work that will define new waves of breakthroughs. The journal is an initiative to recognize the efforts of the scientific community worldwide toward inventing new-age technologies. Our mission, as part of the research community, is to bring the highest-quality research to the widest possible audience. International Journal of Computer Applications is a global effort to consolidate dispersed knowledge and aggregate it in a searchable and indexable form.

The perspectives presented in the journal range from big-picture analyses that address global and universal concerns to detailed case studies of localized applications of the principles and practices of computational algorithms. The journal is relevant for academics in computer science, the applied sciences, the professions and education, research students, public administrators in local and state government, representatives of the private sector, trainers and industry consultants.

Indexing

International Journal of Computer Applications (IJCA) is covered by high-quality indexing services such as Google Scholar, CiteSeer, UlrichsWeb, DOAJ (Directory of Open Access Journals) and the Scientific Commons Index (University of St. Gallen, Switzerland). Articles are also indexed in the SAO/NASA ADS Physics Abstract Service supported by Harvard University and NASA, Informatics, and the ProQuest CSA Technology Research Database. IJCA constantly works toward expanding its content worldwide for the betterment of the scientific, research and academic communities.

Topics

International Journal of Computer Applications (IJCA) supports a wide range of topics in computer science and its applications, such as: Embedded Systems, Pattern Recognition, Signal Processing, Robotics and Micro-Robotics, Theoretical Informatics, Quantum Computing, Software Testing, Computer Vision, Digital Systems, Pervasive Computing, etc.



Open Review

International Journal of Computer Applications' approach to peer review is open and inclusive, while remaining based on rigorous, merit-based 'blind' peer-review processes. Our referee processes are criterion-referenced, and referees are selected on the basis of subject-matter and disciplinary expertise. Ranking is based on clearly articulated criteria. The result is a refereeing process that is scrupulously fair in its assessments while offering a carefully structured and constructive contribution to the shape of the published paper.

Intellectual Excellence

The result is a publishing process that is without prejudice to institutional affiliation, stage of career, national origin or disciplinary perspective. If a paper is excellent, and has been systematically and independently assessed as such, it will be published. This is why International Journal of Computer Applications has so much exciting new material: much of it originates from well-known research institutions, but a considerable amount of brilliantly insightful and innovative material also comes from academics in lesser-known institutions in the developing world, emerging researchers, people working in hard-to-classify interdisciplinary spaces, and researchers in liberal arts colleges and teaching universities.



The Advisory and Editorial Board includes research center heads, faculty deans, department heads, professors, research scientists, and experienced software development directors and engineers.

Dr. T. T. Al Shemmeri, Staffordshire University, UK
Bhalaji N, Vels University
Dr. A. K. Banerjee, NIT Trichy
Dr. Pabitra Mohan Khilar, NIT Rourkela
Amos Omondi, Teesside University
Dr. Anil Upadhyay, UPTU
Dr. Amr Ahmed, University of Lincoln
Cheng Luo, Coppin State University
Dr. Keith Leonard Mannock, University of London
Harminder S. Bindra, PTU
Dr. Alexandra I. Cristea, University of Warwick
Santosh K. Pandey, The Institute of CA of India
Dr. V. K. Anand, Punjab University
Dr. S. Abdul Khader Jilani, University of Tabuk
Dr. Rakesh Mishra, University of Huddersfield
Kamaljit I. Lakhtaria, Saurashtra University
Dr. S. Karthik, Anna University
Dr. Anirban Kundu, West Bengal University of Technology
Amol D. Potgantwar, University of Pune
Dr. Pramod B. Patil, RTM Nagpur University
Dr. Neeraj Kumar Nehra, SMVD University
Dr. Debasis Giri, WBUT
Dr. Rajesh Kumar, National University of Singapore
Deo Prakash, Shri Mata Vaishno Devi University
Dr. Sabnam Sengupta, WBUT
Rakesh Lingappa, VTU
D. Jude Hemanth, Karunya University
P. Vasant, Universiti Teknologi Petronas
Dr. A. Govardhan, JNTU
Yuanfeng Jin, YanBian University
Dr. R. Ponnusamy, Vinayaga Missions University
Rajesh K. Shukla, RGPV
Dr. Yogeshwar Kosta, CHARUSAT
Dr. S. Radha Rammohan, D.G. of Technological Education
T. N. Shankar, JNTU
Prof. Hari Mohan Pandey, NMIMS University
Dayashankar Singh, UPTU
Prof. Kanchan Sharma, GGS Indraprastha Vishwavidyalaya
Bidyadhar Subudhi, NIT Rourkela
Dr. S. Poornachandra, Anna University
Dr. Nitin S. Choubey, NMIMS
Dr. R. Uma Rani, University of Madras
Rongrong Ji, Harbin Institute of Technology, China
Dr. V. B. Singh, University of Delhi
Anand Kumar, VTU
Hemant Kumar Mahala, RGPV
Prof. S. K. Nanda, BPUT
Prof. Debnath Bhattacharyya, Hannam University
Dr. A. K. Sharma, Uttar Pradesh Technical University
Dr. A. S. Prasad, Andhra University
Rajeshree D. Raut, RTM Nagpur University
Deepak Joshi, Hannam University
Dr. Vijay H. Mankar, Nagpur University
Dr. P. K. Singh, U.P. Technical University
Atul Sajjanhar, Deakin University
R. K. Tiwari, U.P. Technical University
Navneet Tiwari, RGPV
Dr. Himanshu Aggarwal, Punjabi University
Ashraf Bany Mohammed, Petra University
Dr. K. D. Verma, S.V. College of PG Studies & Research
Totok R. Biyanto, Sepuluh Nopember
R. Amirtharajan, SASTRA University
Sheti Mahendra A, Dr. B. A. Marathwada University
Md. Rajibul Islam, University Technology Malaysia
Koushik Majumder, WBUT
S. Hariharan, B.S. Abdur Rahman University
Dr. R. Geetharamani, Anna University
Dr. S. Sasikumar, HCET
Rupali Bhardwaj, UPTU
Dakshina Ranjan Kisku, WBUT
Gaurav Kumar, Punjab Technical University
A. K. Verma, TERI
Prof. B. Nagarajan, Anna University
Vikas Singla, PTU
Dr. H. N. Suma, VTU
Dr. Udai Shanker, UPTU
Anu Suneja, Maharshi Markandeshwar University
Prof. Rachit Garg, GNDU
Aung Kyaw Oo, DSA, Myanmar
Dr. Lefteris Gortzis, University of Patras, Greece
Suhas J. Manangi, Microsoft
Mahdi Jampour, Kerman Institute of Higher Education
Prof. D. S. Suresh, Pune University
Prof. M. V. Deshpande, University of Mumbai



Prof. Surendra Rahamatkar, VIT
Prof. Shishir K. Shandilya, RGPV
M. Azath, Anna University
Liladhar R. Rewatkar, RTM Nagpur University
R. Jagadeesh K, Anna University
Amit Rathi, Jaypee University
Dr. Dilip Mali, Mekelle University, Ethiopia
Dr. Paresh Virparia, Sardar Patel University
Morteza S. Kamarposhti, Islamic Azad University of Firoozkuh, Iran
Dr. D. Gunaseelan, Directorate of Technological Education, Oman
Dr. M. Azzouzi, ZA University of Djelfa, Algeria
Dr. Dhananjay Kumar, Anna University
Jayant Shukla, RGPV
Prof. Yuvaraju B. N., VTU
Dr. Ananya Kanjilal, WBUT
Daminni Grover, IILM Institute for Higher Education
Vishal Gour, Govt. Engineering College
Monit Kapoor, M.M. University
Dr. Binod Kumar, ISTAR
Amit Kumar, Nanjing Forestry University, China
Dr. Mallikarjun Hangarge, Gulbarga University
Gursharanjeet Singh, LPU
Dr. R. Muthuraj, PSNACET
Mohd. Muqeem, Integral University
Dr. Chitra A. Dhawale, Symbiosis Institute of Computer Studies and Research
Dr. Abdul Jalil M. Khalaf, University of Kufa, Iraq
Dr. Rizwan Beg, UPTU
R. Indra Gandhi, Anna University
V. B. Kirubanand, Bharathiar University
Mohammad Ghulam Ali, IIT Kharagpur
Dr. D. I. George A., Jamal Mohamed College
Kunjal B. Mankad, ISTAR
Raman Kumar, PTU
Lei Wu, University of Houston – Clear Lake, Texas
G. Appasami, Anna University
S. Vijayalakshmi, VIT University
Dr. Gurpreet Singh Josan, PTU
Dr. Seema Shah, IIIT Allahabad
Dr. Wichian Sittiprapaporn, Mahasarakham University, Thailand
Chakresh Kumar, MRI University, India
Dr. Vishal Goyal, Punjabi University, India
Dr. A. V. Senthil Kumar, Bharathiar University, India
R. C. Tripathi, IIIT-Allahabad, India
Prof. R. K. Narayan, B.I.T. Mesra, India



Optimization of Function by using a New MATLAB based Genetic Algorithm Procedure
Authors: G.N. Purohit, Arun Mohan Sherry, Manish Saraswat
1-5

An Early Screening System for the Detection of Diabetic Retinopathy using Image Processing
Authors: B. Ramasubramanian, G. Prabhakar
6-10

Automatic Generation Control of an Interconnected Power System Before and After Deregulation
Authors: Pardeep Nain, K. P. Singh Parmar, A. K. Singh
11-16

Design of Automated Process in Supply Chain Application based on Supplier's Rank and Quota
Authors: Putu Angelina Widya G., I Made Sukarsa, I Nyoman Piarsa
17-23

Design and Comparison of Advanced Color based Image CAPTCHAs
Authors: Mandeep Kumar, Renu Dhir
24-29

An Analysis of Scan Converting a Line with Multi Symmetry
Authors: Md. Khairullah
30-33

Signal Strength based Scanning Considering Free Capacity for Handover Execution in WiMAX Networks
Authors: P.P. Edwin Winston, K.S. Shaji
34-37

An Intelligent Tender Evaluation System using Evidential Reasoning Approach
Authors: Md. Shahadat Hossain, Md. Salah Uddin Chowdury, Smita Sarker
38-43



Optimization of Function by using a New MATLAB based Genetic Algorithm Procedure

G.N Purohit
Banasthali University, Rajasthan, India

Arun Mohan Sherry
Institute of Management Technology, Ghaziabad (U.P.), India

Manish Saraswat
Geetanjali Institute of Technical Studies, Udaipur (Raj.), India

ABSTRACT

As systems find applications in ever more aspects of our daily life, their complexity grows both in software design (program response to the environment) and in hardware components (caches, branch-predicting pipelines).

Within the past couple of years, test engineers have developed a new procedure for testing the correctness of systems: the evolutionary test. The test is interpreted as an optimization problem and employs evolutionary computation to find test data with extreme execution times.

Evolutionary testing denotes the use of evolutionary algorithms, e.g., Genetic Algorithms (GAs), to support various test automation tasks. Since evolutionary algorithms are heuristics, their performance and output can vary across multiple runs, so there is a strong need for an environment that can handle these complexities; nowadays MATLAB is widely used for this purpose.

This paper explores the power of the Genetic Algorithm for optimization using a new MATLAB-based implementation of Rastrigin's function. Throughout the paper we use this function as the optimization problem to explain key genetic operations such as selection, crossover and mutation.

General Terms:

Software testing, Evolutionary algorithm.

Keywords:

Rastrigin’s function, Genetic Algorithm (GA).

1. INTRODUCTION

Genetic algorithms are an approach to optimization and learning based loosely on principles of biological evolution. They are simple to construct, and their implementation does not require a large amount of storage, making them a suitable choice for many optimization problems.

Optimal scheduling is a nonlinear problem that cannot be solved easily, yet a GA can find a decent solution in a limited amount of time. Genetic algorithms are inspired by Darwin's theory of evolution, "survival of the fittest": they search the solution space of a function through simulated evolution. Generally, the fittest individuals of any population have a greater chance to reproduce and survive into the next generation, thus contributing to the improvement of successive generations, although inferior individuals can by chance also survive and reproduce. Genetic algorithms have been shown to solve linear and nonlinear problems by exploring all regions of the state space and exponentially exploiting promising areas through mutation, crossover and selection operations applied to individuals in the population.

The development of new software technology and new software environments (e.g., MATLAB) provides a platform for solving difficult problems in real time. MATLAB integrates numerical analysis, matrix computation and graphics in an easy-to-use environment.

MATLAB functions are simple text files of interpreted instructions; therefore, these functions can be moved from one hardware architecture to another without even a recompilation step.

MATLAB (Matrix Laboratory), a product of MathWorks, is a scientific software package developed to provide an integrated environment for numeric computation and graphics visualization in a high-level programming language. It was originally written by Dr. Cleve Moler, chief scientist at MathWorks, Inc., to provide easy access to the matrix software developed in the LINPACK and EISPACK projects [2]. MATLAB has a wide collection of functions useful to the genetic algorithm practitioner and to those wishing to experiment with genetic algorithms for the first time.

In MATLAB's high-level language, problems can be coded in m-files in a fraction of the time that it would take to create C or FORTRAN programs for the same purpose. MATLAB also provides advanced data analysis, visualization tools and special-purpose application-domain toolboxes.

This paper is organized into three parts. Part I describes the usefulness of GAs and the features of MATLAB. Part II discusses the implementation issues of GAs in various available languages, tools and software; finally, a GA is implemented using MATLAB with Rastrigin's function as a case study for optimization. Part III concludes the paper.

2. OVERVIEW OF PROGRAMMING LANGUAGES USED IN IMPLEMENTATION OF GA

The implementation of genetic algorithms on high-performance computers is a difficult and time-consuming task. The implementation language should be as close as possible to the mathematical description of the problem, simple and easy to use. C/C++ and FORTRAN are lower-level compiled programming languages (sometimes classified as third-generation languages) that are widely used in academia, industry and commerce, and GAs have also been implemented in this category of languages.

The main advantage of using compiled low-level languages is their execution speed and efficiency (for example, in embedded systems).



MATLAB, by contrast, is widely used in research and industry and is an example of a high-level "scripting" or "4th-generation" language.

The most prominent difference between compiled languages and interpreted (4th-generation) languages is that the interpreter reads the source code and translates it into machine instructions on the fly, i.e., no compilation is required. This decreases execution speed, but it frees the programmer from memory management, allows dynamic typing and enables interactive sessions.

It is also important that the programs written in scripting languages are usually significantly shorter [3] than equivalent programs written in compiled languages and also take significantly less time to code and debug. In short, there is a trade-off between the execution time (small for compiled languages) and the development time (small for interpreted languages).

Another important feature of MATLAB (and other interpreted languages like Python) is the ability to have interactive sessions. The user can type one or several commands at the command prompt, and after pressing Return these commands are executed immediately. This allows the programmer to interactively test small parts of the code (without any delay stemming from compilation) and encourages experimentation [9].

The MATLAB package comes with sophisticated libraries for matrix operations, general numeric methods and plotting of data; therefore, MATLAB has become a first choice for implementing scientific, graphical and mathematical applications. For GA implementation, MATLAB comes with a special tool, the GA tool (Optimtool).

2.1 Things to Consider for Genetic Algorithm Implementation in MATLAB

The first thing one must do in order to use a GA is to decide whether it is possible to automatically build solutions to the problem. For example, in the Traveling Salesman Problem, every route that passes through the cities in question is potentially a solution, although probably not the optimal one. This is necessary because a GA requires an initial population P of solutions. Then one must decide which "gene" representation to use; the alternatives include binary, integer, double and permutation encodings, with binary and double the most commonly used since they are the most flexible. After selecting the gene representation, one must decide:

the method for selecting parents from the population P (Cost Roulette Wheel, Stochastic Universal Sampling, Rank Roulette Wheel, Tournament Selection, etc.), the way these parents will "mate" to create descendants, the mutation method (optional but useful), the method used to populate the next generation, and the algorithm's termination condition (number of generations, time limit, acceptable quality threshold).
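These design decisions can be assembled into a minimal GA loop. The following Python sketch is a hypothetical analogue of the process just described (not the MATLAB toolbox itself), assuming binary genes, tournament selection, single-point crossover and bit-flip mutation:

```python
import random

def tournament(pop, fitness, k=2):
    # pick the best of k randomly sampled individuals (we minimize)
    return min(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # single-point crossover on two equal-length bit lists
    p = random.randrange(1, len(a))
    return a[:p] + b[p:]

def mutate(ind, rate=0.05):
    # flip each bit independently with probability `rate`
    return [1 - g if random.random() < rate else g for g in ind]

def ga(fitness, n_bits=16, pop_size=20, generations=50):
    # random initial population P, then repeated select/mate/mutate
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(crossover(tournament(pop, fitness),
                                tournament(pop, fitness)))
               for _ in range(pop_size)]
    return min(pop, key=fitness)

# toy problem: minimize the number of 1-bits (optimum: all-zero string)
best = ga(sum)
```

The termination condition here is simply a fixed generation count; the toolbox additionally supports time limits and stall criteria.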

The second requirement is a processor and operating system capable of running the program; here the algorithm is coded in MATLAB.

MATLAB provides an optimization toolbox that includes a GA-based solver. The toolbox can be started by typing optimtool at MATLAB's command line and pressing Enter. As soon as the optimization window appears, we can select the solver "ga - Genetic Algorithm", and MATLAB is ready to go. The user should program (by writing m-files) any extended functionality required.

Rastrigin's function is entered in the proper field with the number of variables set to 2, and the population type is double vector (Figure 1). The equation of this function and the MATLAB (m-file) code are given below:

Ras(x) = 20 + x1^2 + x2^2 − 10(cos 2πx1 + cos 2πx2)

Figure 1: GAs in MATLAB's Optimization Toolbox

MATLAB Code:

function y = rast(x)
% Rastrigin's function; the default number of variables is n = 2
n = 2;
s = 0;
for j = 1:n
    s = s + (x(j)^2 - 10*cos(2*pi*x(j)));
end
y = 10*n + s;
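For readers without MATLAB, the same function can be written as a short Python sketch (an assumed re-implementation, not part of the paper), which makes it easy to check that the global minimum Ras(0, 0) = 0:

```python
import math

def rastrigin(x):
    # Ras(x) = 10*n + sum(x_j^2 - 10*cos(2*pi*x_j)), with n = len(x)
    n = len(x)
    return 10 * n + sum(xj**2 - 10 * math.cos(2 * math.pi * xj) for xj in x)

print(rastrigin([0.0, 0.0]))  # global minimum: 0.0
```

The many local minima away from the origin are what make this a standard stress test for optimizers.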

Now everything is ready (the default settings for everything else are adequate). Press the Start button. The algorithm starts, the plots pop up and soon the results are displayed as in Figure 2.

The best fitness function value (the smallest one, since we minimize) and the termination condition met are printed, together with the solution (Final Point - it is very close to (0, 0)). Since the method is stochastic, do not expect to reproduce a result exactly in a different run. Now check the two plots on the left.

It is obvious that the population converges, since the average distance between individuals (solutions), in terms of fitness value, is reduced as the generations pass.

This distance is a measure of the diversity of a population. Convergence is hard to avoid, but keeping it low or postponing its appearance is better: having diversity in the population allows the GA to search the solution space more thoroughly.



Figure 2: Rastrigin's function optimization with default settings

It is seen from Figure 2 that the fitness value gradually gets smaller. This indicates that optimization is taking place: not only was the fitness value of the best individual reduced, but the mean (average) fitness of the population was reduced as well (that is, in terms of fitness value the whole population improved; we have better solutions in the population at the end).

Population diversity, size, range and fitness scaling

The performance of a GA is affected by the diversity of the initial population. If the average distance between individuals is large, diversity is high; if the average distance is small, diversity is low.

If the diversity is too high or too low, the genetic algorithm might not perform well. We illustrate this as follows. By default, the Optimization Tool creates a random initial population using a creation function. We can limit this by setting the Initial range field in Population options. Set it to [1; 1.1]. By this we actually make it harder for the GA to search equally well across the whole solution space. Leave the rest of the settings as before (Figure 1), except Options > Stopping Criteria > Stall Generations, which should be set to 100. This allows the algorithm to run for 100 generations, providing better results (and plots). Now click the Start button. The GA returns a best fitness value of approximately 2 and displays the plots in Figure 3.

Figure 3: Rastrigin's function optimization with default settings, except Stall Generations set to 100 and initial range set to [1; 1.1]

The upper plot, which displays the best fitness at each generation, shows little progress in lowering the fitness value (black dots). The lower plot shows the average distance between individuals at each generation, which is a good measure of population diversity. For this setting of the initial range, there is too little diversity for the algorithm to make progress; the algorithm was trapped in a local minimum due to the initial-range restriction.

Next, set Initial range to [1; 100] and run the algorithm again. The GA returns a best fitness value of approximately 3.3 and displays the plots in Figure 4.

Figure 4: Rastrigin's function optimization with default settings, except Stall Generations set to 100 and initial range set to [1; 100]

This time the genetic algorithm makes progress, but because the average distance between individuals is so large, the best individuals are far from the optimal solution. Note, though, that if we let the GA run for more generations (by setting Generations and Stall Generations in Stopping Criteria to 200), it will eventually find a better solution.

Set Initial range to [1; 2] and run the GA. This returns a best fitness value of approximately 0.012 and displays the plots in Figure 5.

Figure 5: Rastrigin's function optimization with default settings, except Stall Generations set to 100 and initial range set to [1; 2]



The diversity in this case is better suited to the problem, so the genetic algorithm returns a much better result than in the previous two cases.

In all the examples above, we had the Population Size (Options-Population) set to 20 (the default). This value determines the size of the population at each generation. Increasing the population size enables the genetic algorithm to search more points and thereby obtain a better result. However, the larger the population size, the longer the genetic algorithm takes to compute each generation.

It is important that the Population Size be at least the value of Number of variables, so that the individuals in each population span the space being searched.

Finally, another parameter that affects the diversity of the population is Fitness Scaling. If the fitness values vary too widely (Figure 6), the individuals with the lowest values (recall that we minimize) reproduce too rapidly, taking over the population pool too quickly and preventing the GA from searching other areas of the solution space.

On the other hand, if the values vary only a little, all individuals have approximately the same chance of reproduction and the search will progress very slowly.

Fig 6: Raw fitness values (lower is better)

Fig 7: Scaled fitness values (higher is better)

It is clear from Figure 6 that the raw fitness values (lower is better) vary too widely. The scaled values (Figure 7) do not alter the selection advantage of the good individuals (except that now bigger is better); they just reduce the diversity seen above. This prevents the GA from converging too early.

Fitness Scaling adjusts the fitness values (scaled values) before the selection step of the GA. This is done without changing the ranking order; that is, the best individual based on the raw fitness value remains the best in the scaled rank as well. Only the values change, and thus the probability of an individual being selected for mating by the selection procedure. This prevents the GA from converging too fast, allowing the algorithm to search the solution space better.
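The effect of scaling can be sketched in Python. The sketch below is a hedged approximation of rank-based scaling (assuming a scaled value proportional to 1/sqrt(rank), with rank 1 the best); it preserves the ranking order while compressing extreme raw values:

```python
import math

def rank_scale(raw):
    # raw fitness values, lower is better (we minimize);
    # rank 1 = best individual, scaled value = 1/sqrt(rank)
    order = sorted(range(len(raw)), key=lambda i: raw[i])
    scaled = [0.0] * len(raw)
    for rank, i in enumerate(order, start=1):
        scaled[i] = 1.0 / math.sqrt(rank)
    return scaled

# wildly varying raw values are compressed, but the best stays best
print(rank_scale([0.01, 5.0, 1000.0]))
```

Note how the huge gap between 5.0 and 1000.0 disappears after scaling, while the ordering of the three individuals is unchanged.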

Now continue the Rastrigin's function implementation in MATLAB, using the following settings and leaving everything else at its default value (Fitness function: @rastriginsfcn, Number of variables: 2, Initial range: [1; 20], Plots: Best fitness, Distance).

The Selection panel in Options controls the Selection Function, that is, how individuals are selected to become parents. Note that this mechanism works on the scaled values, as described previously.

Most well-known methods are presented (uniform, roulette and tournament). An individual can be selected more than once as a parent, in which case it contributes its genes to more than one child.

Figure 8: Stochastic uniform selection method. For 6 parents we step along the selection line in steps of 15/6.

The default selection option, Stochastic Uniform, lays out a line (Figure 8) in which each parent corresponds to a section of the line of length proportional to its scaled value. For example, assume a population of 4 individuals with scaled values 7, 4, 3 and 1. The individual with the scaled value of 7 is the best and should contribute its genes more than the rest. We create a line of length 1+3+4+7=15. Now, say we need to select 6 individuals as parents: we step over this line in steps of 15/6 and select the individual under each step for crossover.

The Reproduction panel in Options controls how the GA creates the next generation. Here you specify the amount of elitism and the fraction of the population of the next generation that is generated through mating (the rest is generated by mutation). The options are:
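The worked example above (scaled values 7, 4, 3 and 1, a line of length 15, six parents chosen with step 15/6) can be reproduced with a small Python sketch of stochastic universal sampling; the start offset, normally a random value in [0, step), is exposed as a parameter so the arithmetic is easy to follow:

```python
def sus_select(scaled, n_parents, start=0.0):
    # lay the individuals on a line, each occupying a section
    # proportional to its scaled value, then step along the line;
    # `start` should lie in [0, total/n_parents)
    total = sum(scaled)
    step = total / n_parents
    picks, pointer, cum, i = [], start, scaled[0], 0
    for _ in range(n_parents):
        while pointer >= cum:
            i += 1
            cum += scaled[i]
        picks.append(i)
        pointer += step
    return picks

# values 7, 4, 3, 1: the best individual (index 0) is picked most often
print(sus_select([7, 4, 3, 1], 6))  # [0, 0, 0, 1, 1, 2] with start=0
```

With start=0 the best individual is selected three times, matching the intuition that a section of length 7 out of 15 captures roughly half of the 6 equally spaced pointers.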

Elite Count: the number of individuals with the best fitness values in the current generation that are guaranteed to survive to the next generation. These individuals are called elite children.

The default value of Elite Count is 2. Try to solve the Rastrigin's problem by changing only this parameter, with values of 10, 3 and 1; we get results like those depicted in Figures 9-11. It is obvious that you should keep this value low, at 1 or 2 (depending on the population size).



Figure 9: Elite count 10

Figure 10: Elite count 3

Figure 11: Elite count 1

From Figures 9, 10 and 11 it is clear that too much elitism results in early convergence, which can make the search less effective.

Crossover Fraction: the fraction of individuals in the next generation, other than elite children, that are created by crossover (the remainder is generated by mutation). A crossover fraction of 1 means that all children other than elite individuals are crossover children; a crossover fraction of 0 means that all children are mutation children.

Two-point crossover: two crossover points are selected; the binary string from the beginning of the chromosome to the first crossover point is copied from the first parent, the part from the first to the second crossover point is copied from the second parent, and the rest is copied from the first parent again.

Mutation: a random change of one or more digits in the string representing an individual.
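Both operators are easy to sketch in Python for binary chromosomes (an illustrative sketch, not the toolbox code): two-point crossover splices the middle segment of the second parent into the first, and mutation flips random bits:

```python
import random

def two_point_crossover(p1, p2, c1, c2):
    # copy [0, c1) and [c2, end) from the first parent,
    # and [c1, c2) from the second parent
    return p1[:c1] + p2[c1:c2] + p1[c2:]

def mutate(chrom, rate=0.1):
    # flip each bit independently with probability `rate`
    return [1 - g if random.random() < rate else g for g in chrom]

child = two_point_crossover([0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1], 2, 4)
print(child)  # [0, 0, 1, 1, 0, 0]
```

In practice the crossover points c1 and c2 are themselves drawn at random for each mating.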

3. CONCLUSION

The main objective of this paper is to illustrate how MATLAB can be used to implement a genetic algorithm for optimization problems, using the power of genetic algorithms to generate fast and efficient solutions in real time.

The experimental results show that GATool can improve the fitness value by quickly providing a set of near-optimum solutions. Concerning the effect of different GA parameter configurations, we found that an increase in population size can improve the performance of the system, while the crossover rate does not seriously affect the quality of the solution.

Genetic algorithms are easy to apply to a wide range of optimization problems, such as the traveling salesperson problem, inductive concept learning, scheduling, and layout problems. The results show that the proposed GA configuration can find solutions of better quality in a shorter time. The developer can use this information to search for, locate, and debug the faults that caused failures. Each of these areas could be further investigated with respect to applicability for software testing, which is also an optimization problem whose objective is to minimize the effort consumed and maximize the number of faults detected.

Finally, it would be interesting for further research to test a series of different systems in order to see the correlation between genetic algorithm and system performances.

4. REFERENCES

[1] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, 1989.

[2] J. J. Grefenstette, "Genetic algorithms for changing environments," in R. Manner and B. Manderick (eds.), Parallel Problem Solving from Nature 2, pages 465-501, Elsevier Science Publishers.

[3] The MathWorks, Inc., MATLAB User Guide. Natick, Mass.: The MathWorks, Inc., 1994-1999. http://www.mathworks.com

[4] D. Henriksson, A. Cervin and K. E. Arzen, "TrueTime: Real-time control system simulation with MATLAB/Simulink," in Proceedings of the Nordic MATLAB Conference, Copenhagen, Denmark, 2005.

[5] A. J. Chipperfield, P. J. Fleming and H. Pohlheim, "A Genetic Algorithm Toolbox for MATLAB," Proc. International Conference on Systems Engineering, Coventry, UK, 6-8 Sept. 1998.

[6] S. Papadamou and G. Stephanides, "A New Matlab-Based Toolbox for Computer Aided Dynamic Technical Trading."

[7] K. Lakhotia, M. Harman and P. McMinn, "A multi-objective approach to search-based test data generation," in Proc. 9th Annual Conf. on Genetic and Evolutionary Computation (GECCO '07), pages 1098-1105, ACM, 2007.

[8] H. Pohlheim, Genetic and Evolutionary Algorithm Toolbox for use with Matlab - Documentation. Technical report, www.geatbx.com.

[9] H. Fangohr, "A Comparison of C, MATLAB, and Python as Teaching Languages in Engineering," University of Southampton, Southampton SO17 1BJ, UK.



An Early Screening System for the Detection of Diabetic Retinopathy using Image Processing

B. Ramasubramanian, G. Prabhakar

ABSTRACT

Diabetic Retinopathy (DR) is a leading cause of vision loss. Exudates are one of the significant signs of diabetic retinopathy, a major cause of blindness that could be prevented with an early screening process. In our method, digital image processing is used to diagnose exudates from images of the retina. An automatic system to detect and localize the presence of exudates in color fundus images with non-dilated pupils is proposed. First, the image is preprocessed and segmented in the CIE Lab color space, and the segmented regions, along with the Optic Disc (OD), are selected. Feature vectors based on color and texture are extracted from each selected segment using the Gray-Level Co-occurrence Matrix (GLCM). The feature vectors are then classified as exudates or non-exudates using a K-Nearest Neighbors classifier. Using a clinical reference model, images with exudates were detected with a 97% success rate. The proposed method performs well even when segmenting small areas of exudates.
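As a rough, hypothetical illustration of the texture features involved (not the authors' code; the single horizontal pixel offset and the contrast feature are assumptions for the sketch), a gray-level co-occurrence matrix and one classic GLCM feature can be computed in Python as:

```python
def glcm(img, levels):
    # co-occurrence counts for horizontally adjacent pixels (offset (0, 1))
    m = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    return m

def contrast(m):
    # sum over (i - j)^2 * p(i, j), with counts normalized to probabilities
    total = sum(sum(r) for r in m)
    return sum((i - j) ** 2 * v / total
               for i, r in enumerate(m) for j, v in enumerate(r))

# a tiny 4-level toy "image"; real fundus images would be quantized first
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3]]
print(contrast(glcm(img, 4)))
```

In a real pipeline several offsets and features (contrast, homogeneity, energy, correlation) would be combined with color statistics to form the feature vector fed to the k-NN classifier.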

Keywords

CIE Lab Color Space, CLAHE, Diabetic Retinopathy (DR), Exudates, GLCM, k-NN.

1. INTRODUCTION

Diabetic retinopathy is one of the major causes of legal blindness in the working-age population around the world. The International Diabetes Federation reports that over 50 million people in India have this disease and that it is growing rapidly (IDF 2009a) [2]. In [7], it is estimated that the number of people with diabetes is likely to increase from 171 million at the turn of the century to 366 million by the year 2030. By 2030 there will be 79 million people with diabetes in India, making it the diabetic capital of the world. Even though diabetic retinopathy is not a completely curable disease, laser photocoagulation can prevent major vision loss. Timely diagnosis and referral for management of diabetic retinopathy can therefore prevent 98% of severe visual loss. Diabetic retinopathy is mainly caused by changes in the blood vessels of the retina brought about by increased blood glucose levels. Exudates are one of the primary signs of diabetic retinopathy [3]. Exudates are yellow-white lesions with relatively distinct margins; they consist of lipids and proteins that deposit and leak from the damaged blood vessels within the retina. Detection of exudates by ophthalmologists is a laborious process, as they have to spend a great deal of time in manual analysis and diagnosis. Moreover, manual detection requires a chemical dilation material that takes time and has negative side effects on patients. Hence, automatic screening techniques for exudate detection have great significance in saving cost, time and labour, in addition to avoiding the side effects on patients.

Figure 1 depicts a typical retinal image labelled with the various feature components of diabetic retinopathy. Microaneurysms are small saccular pouches that appear as small red dots; these may lead to big blood clots called hemorrhages. The bright circular region from which the blood vessels emanate is called the optic disc (OD). The macula is the centre portion of the retina and has photoreceptors called cones that are highly sensitive to color and responsible for perceiving fine detail. It is situated at the posterior pole, temporal to the optic disc. The fovea defines the centre of the macula and is the region of highest visual acuity.

Fig 1: Colour fundus image with various typical components

2. STATE OF THE ART

A back-propagation multilayer neural network for vascular tree segmentation was proposed by Gardener et al. [19]. After histogram equalization, smoothing and edge detection, the image was divided into 20×20-pixel squares. The neural network was then fed with the values of these pixel windows to classify each pixel as vessel or non-vessel.

Akara Sopharak et al. [3] reported the results of automated detection of exudates from low-contrast digital images of retinopathy patients with non-dilated pupils using Fuzzy C-Means (FCM) clustering. Four features, namely intensity, standard deviation of intensity, hue and number of edge pixels, were extracted and applied as input to coarse segmentation using the FCM clustering method. The detected results were validated against hand-drawn ground truths from expert ophthalmologists. Sensitivity, specificity, positive predictive value (PPV), positive likelihood ratio (PLR) and accuracy were used to evaluate the overall performance of the system.

Doaa Youssef et al. [5] proposed a method to detect exudates using a segmentation process. Firstly, the optic disc and blood vessels are eliminated: the optic disc using the Hough transform, and the blood vessels by applying an edge-detection algorithm. The exudates are then segmented using a morphological reconstruction method. Wynne Hsu et al. [6] used K-Means clustering and a difference map for detecting exudates and blood vessels.

B. Ramasubramanian
Assistant Professor, Department of ECE, Syed Ammal Engineering College, Ramanathapuram, TamilNadu, India.

G. Prabhakar
Assistant Professor, Department of ECE, Syed Ammal Engineering College, Ramanathapuram, TamilNadu, India.

Sinthanayothin et al. [9] reported the results of automated detection of diabetic retinopathy using recursive region-growing techniques on a 10×10 window with selected threshold values. In the preprocessing step, adaptive local contrast enhancement is applied. The authors reported a sensitivity of 88.5% and a specificity of 99.7% for the detection of exudates on a small dataset comprising 21 abnormal and 9 normal retinal images.

Phillips et al. [10] identified exudates using global and local thresholding. The input images were preprocessed to eliminate photographic non-uniformities, and the contrast of the exudates was then enhanced. The lesion-based sensitivity of this technique was reported to be between 61% and 100% on 14 images. A drawback of this method is that other bright lesions (such as cotton wool spots) could be identified mistakenly.

Walter et al. [8] identified exudates in the green channel of retinal images according to their gray-level variation. The exudate contours were determined using mathematical morphology techniques. This method used three parameters: the size of the local window and two threshold values. Exudate regions were initially found using the first threshold. The second threshold represents the minimum value by which a candidate pixel must differ from its surrounding background to be classified as an exudate. The authors achieved a sensitivity of 92.8% and a predictivity of 92.4% on a set of 15 abnormal retinal images. However, they ignored some types of errors on the borders of the segmented exudates in their reported performance and did not discriminate exudates from cotton wool spots.

3. PROPOSED METHOD

3.1 Image Acquisition:

To evaluate the performance of this method, digital retinal images were acquired using a Topcon TRC-50 EX non-mydriatic camera with a 50˚ field of view at Aravind Eye Hospital, Coimbatore. The proposed algorithm was also tested and evaluated on the DRIVE and MESSIDOR databases. The image set contains both normal and abnormal (pathological) cases.

3.2 Pre-processing:

Color fundus images often show significant lighting variation, poor contrast and noise. Preprocessing is used to eliminate these imperfections and to generate an image that provides more information for the classification process. The preprocessing consists of the following steps: 1) RGB to HSV conversion, 2) median filtering, 3) Contrast Limited Adaptive Histogram Equalization (CLAHE).

3.2.1 RGB to HSV Conversion:

The input retinal images in the RGB color space are converted to the HSV color space. The noise in the images is due to the uneven distribution of the intensity (V) component.

3.2.2 3×3 Median Filtering:

In order to distribute the intensity uniformly throughout the image, the intensity component of the HSV color space is extracted and filtered through a 3×3 median filter.
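The 3×3 median filtering step can be sketched as follows; this is a pure-Python illustration on a small intensity grid, not the paper's implementation, and in practice it would run over the full V channel.

```python
# Hypothetical sketch of 3x3 median filtering on the intensity channel.
def median3x3(img):
    """Return a median-filtered copy of a 2D grid; borders are kept as-is."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            window.sort()
            out[y][x] = window[4]  # median of the 9 neighbours
    return out

v = [[10, 10, 10],
     [10, 255, 10],  # isolated noise spike
     [10, 10, 10]]
print(median3x3(v)[1][1])  # -> 10, the spike is replaced by the local median
```

The isolated bright pixel is suppressed while uniform regions are left unchanged, which is exactly why the median filter is preferred over averaging here.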

Fig 2: Input Image in HSV color space

3.2.3 Contrast Limited Adaptive Histogram Equalization (CLAHE):

Contrast limited adaptive histogram equalization is applied to the filtered intensity component of the image [12]. The histogram-equalized intensity component is recombined with the H and S components and transformed back to the original RGB color space.

3.3 Image Segmentation using L*a*b Color Space:

In this approach, we present a novel image segmentation based on the color features of the image. Firstly, the image is converted from the RGB color space to the L*a*b color space. Then the pixels are grouped into a set of five clusters using the nearest-neighbor rule. This process reduces the computational cost by avoiding feature calculation for every pixel in the image. The process is described by the following steps:

Step 1: Read the image. Figure 3 shows the example input retinal image with exudates.

Fig 3: Input color fundus image (after preprocessing)

Step 2: Convert the image from the RGB color space to the L*a*b* color space (Figure 4). The L*a*b* color space helps us to quantify color differences. It is derived from the CIE XYZ tristimulus values. The L*a*b* color space consists of a luminosity layer L*, a chromaticity layer a* indicating where the color falls along the red-green axis, and a chromaticity layer b* indicating where the color falls along the blue-yellow axis. All of the color information is in the a* and b* layers. The difference between two colors can be measured as the Euclidean distance between them.



Fig 4: Input Image in L*a*b color space.

Step 3: Segment the colors in the a*b* space using the nearest-neighbor rule. The image is segmented into five clusters using the Euclidean distance metric.

Step 4: Label every pixel in the image obtained from the above steps using the cluster index.

Step 5: Create images that segment the original image by color.

Step 6: Since the Optic Disc and exudates are homogeneous in their color properties, the cluster containing the Optic Disc is localized for further processing.
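Step 3's nearest-neighbour assignment can be sketched as follows; the cluster centres below are illustrative values, not ones from the paper:

```python
# Minimal sketch: assign each (a*, b*) pixel to the nearest of five cluster
# centres by Euclidean distance (one k-means-style assignment pass).
import math

def assign_clusters(pixels, centres):
    """Label every (a, b) pixel with the index of its nearest centre."""
    labels = []
    for a, b in pixels:
        d = [math.hypot(a - ca, b - cb) for ca, cb in centres]
        labels.append(d.index(min(d)))
    return labels

centres = [(0, 0), (20, 40), (-30, 10), (50, -20), (10, 60)]  # 5 clusters
pixels = [(1, 2), (19, 38), (-28, 9)]
print(assign_clusters(pixels, centres))  # -> [0, 1, 2]
```

In a full implementation the centres themselves would be re-estimated iteratively, but the labelling rule shown here is the core of Steps 3 and 4.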

3.4 Feature Extraction:

The set of features that provides the most meaningful information for classification is extracted from the selected cluster using the GLCM [14]. The features extracted from the selected cluster are contrast, correlation, cluster prominence, cluster shade, dissimilarity, entropy, energy, homogeneity and sum of squares [1].

The mean and the standard deviation of the GLCM p(i, j) are given by:

μx = Σi Σj i · p(i, j)   (1)

μy = Σi Σj j · p(i, j)   (2)

σx = [Σi Σj (i − μx)² p(i, j)]^1/2   (3)

σy = [Σi Σj (j − μy)² p(i, j)]^1/2   (4)

Entropy = −Σi Σj p(i, j) log p(i, j)   (5)

Homogeneity = Σi Σj p(i, j) / (1 + |i − j|)   (6)

Contrast = Σi Σj (i − j)² p(i, j)   (7)

where p(i, j) is the (i, j)th entry of the normalized gray-level co-occurrence matrix.
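As an illustration of these definitions, a minimal GLCM computation for a horizontal offset of one pixel might look like this; the tiny 4-level image is invented for the example, and homogeneity uses the |i − j| form given above:

```python
# Hedged sketch of GLCM texture features: build a normalized co-occurrence
# matrix for offset (0, 1), then evaluate contrast, homogeneity and entropy.
import math

def glcm(img, levels):
    """Normalized gray-level co-occurrence matrix for horizontal offset 1."""
    p = [[0.0] * levels for _ in range(levels)]
    n = 0
    for row in img:
        for x in range(len(row) - 1):
            p[row[x]][row[x + 1]] += 1
            n += 1
    return [[v / n for v in r] for r in p]

def contrast(p):
    return sum((i - j) ** 2 * p[i][j] for i in range(len(p)) for j in range(len(p)))

def homogeneity(p):
    return sum(p[i][j] / (1 + abs(i - j)) for i in range(len(p)) for j in range(len(p)))

def entropy(p):
    return -sum(v * math.log(v) for r in p for v in r if v > 0)

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
p = glcm(img, 4)
print(round(contrast(p), 3), round(homogeneity(p), 3))  # -> 0.583 0.819
```

A high-contrast texture spreads mass away from the diagonal of p, raising Contrast and lowering Homogeneity; the classifier in section 3.6 consumes these scalars as part of the feature vector.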

3.5 Feature Selection using Particle Swarm Optimization (PSO):

Particle Swarm Optimization is a stochastic optimization technique developed to simulate the social behavior of organisms. The initial swarm is usually created in such a way that the population of particles is distributed randomly over the search space. At each iteration, each particle is updated by following two best values, called pbest and gbest. Each particle keeps track of the coordinates in the problem space that are associated with the best solution it has achieved so far; this fitness value is stored and called pbest. When a particle takes the whole population as its topological neighbour, the best value is a global best and is called gbest [19].
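A compact PSO loop with pbest/gbest tracking can be sketched as follows; the parameters, the sphere test function and all names are our own illustration, not the paper's configuration:

```python
# Illustrative PSO: each particle tracks its own pbest, the swarm tracks
# gbest, and both pull the velocity update, as described above.
import random

def pso(f, dim, n_particles=10, iters=50, w=0.7, c1=2.0, c2=2.0, seed=1):
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pcost = [f(x) for x in xs]
    g = pcost.index(min(pcost))
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for k in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[k][d] = (w * vs[k][d]
                            + c1 * r1 * (pbest[k][d] - xs[k][d])
                            + c2 * r2 * (gbest[d] - xs[k][d]))
                xs[k][d] += vs[k][d]
            c = f(xs[k])
            if c < pcost[k]:       # update the particle's own best
                pbest[k], pcost[k] = xs[k][:], c
                if c < gcost:      # and the swarm's global best
                    gbest, gcost = xs[k][:], c
    return gbest, gcost

sphere = lambda x: sum(v * v for v in x)
best, cost = pso(sphere, dim=2)
print(cost)  # close to 0 for this convex test function
```

For feature selection the same loop would score each position by classifier accuracy on a feature subset instead of the sphere function used here.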

3.6 Classification using K-NN Classifier:

The feature vectors obtained above are classified as normal or abnormal using a K-Nearest Neighbor (KNN) classifier. It is one of the simplest but most widely used machine learning algorithms. An object is classified based on the distance from its neighbors. In our process, a set of 100 images was selected, of which 60 are normal and 40 are abnormal. For supervised classifiers, two sets are required: one for training and the other for testing. The training set contains 30 normal and 20 abnormal images. The feature vectors determined using the above procedure are given as input to the KNN classifier. The testing set contains the remaining 50 images, used to test the performance of the classifier.
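A minimal sketch of the classification step follows; the feature vectors and labels are invented stand-ins for the extracted GLCM features, not data from the paper:

```python
# Small k-NN classifier: majority vote among the k nearest training points.
import math
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbours."""
    dists = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

train = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15),
         (0.9, 0.8), (0.8, 0.9), (0.85, 0.85)]
labels = ["non-exudate"] * 3 + ["exudate"] * 3
print(knn_predict(train, labels, (0.88, 0.82)))  # -> exudate
```

With k = 3 the decision is robust to a single mislabelled neighbour, which is why odd k values are typically preferred for two-class problems like this one.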

4. RESULTS AND DISCUSSION

In this approach, we have proposed a method to automatically extract exudates from diabetic retinopathy images. The preprocessed color retinal image is segmented into five clusters by converting the RGB image into the L*a*b color space. The cluster containing the Optic Disc is selected and features are extracted from it.

Fig 5.a: Input color fundus Image.

Figure 5.a shows the input color fundus image obtained from the eye of a diabetic patient. The input image is preprocessed and converted to the L*a*b color space. The a and b components are separated, since the color information is present only in these components. The image is then segmented into five clusters according to the nearest-neighbor rule. Of these five clusters, the cluster containing the OD is selected for further processing. Figure 5.b shows the selected cluster along with the Optic Disc.

Fig 5.b: Selected cluster (obtained by applying segmentation in the L*a*b color space)



Table 1. Comparison of our proposed method with our existing method

Parameter                   Proposed Method   Existing Method
Success Rate                97%               95%
Time to Execute (approx.)   40 sec            2 min

5. CONCLUSION

The input retinal images were downloaded from the STARE and DRIVE databases. Exudates are among the earliest signs of diabetic retinopathy. The low-contrast digital images are enhanced using Contrast Limited Adaptive Histogram Equalization (CLAHE), and noise is removed using median filtering. The contrast-enhanced color image is segmented by converting the RGB color image into the L*a*b color space. To classify the segmented image into exudates and non-exudates, a set of features based on texture and color is extracted using the Gray Level Co-occurrence Matrix (GLCM). The feature set is optimized using the Particle Swarm Optimization (PSO) method. The images are then classified into exudates and non-exudates using a K-Nearest Neighbor (KNN) classifier. The method was evaluated on 70 abnormal and 30 normal images. Out of these 100 images, 97 were detected successfully, giving a success rate of 97%.

6. REFERENCES

[1] B. Ramasubramanian and G. Mahendran, "An efficient integrated approach for the detection of exudates and diabetic maculopathy in color fundus images", Advanced Computing: An International Journal.

[2] International Diabetes Federation (IDF), 2009a, "Latest diabetes figures paint grim global picture".

[3] Akara Sopharak, Bunyarit Uyyanonvara and Sarah Barman, "Automatic exudate detection from non-dilated diabetic retinopathy retinal images using Fuzzy C-Means clustering", Journal of Sensors, vol. 9, no. 3, pp. 2148-2161, March 2009.

[4] Saiprasad Ravishankar, Arpit Jain and Anurag Mittal, "Automated feature extraction for early detection of diabetic retinopathy in fundus images", IEEE Conference on Computer Vision and Pattern Recognition, pp. 210-217, August 2009.

[5] Doaa Youssef, Nahed Solouma, Amr El-dib and Mai Mabrouk, "New feature-based detection of blood vessels and exudates in color fundus images", IEEE Conference on Image Processing Theory, Tools and Applications, 2010, vol. 16, pp. 294-299.

[6] Wynne Hsu, P. M. D. S. Pallawala, Mong Li Lee and Kah-Guan Au Eong, "The role of domain knowledge in the detection of retinal hard exudates", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Kauai, Hawaii, 2001.

[7] Sarah Wild, Gojka R., Andres G., Richard S. and Hilary K., "Global prevalence of diabetes", Diabetes Care, vol. 27, no. 5, pp. 1047-1053, 2004.

[8] T. Walter, J. Klein, P. Massin and A. Erginary, "A contribution of image processing to the diagnosis of diabetic retinopathy: detection of exudates in color fundus images of the human retina", IEEE Trans. on Medical Imaging, vol. 21, no. 10, pp. 1236-1243, 2002.

[9] C. Sinthanayothin, "Image analysis for automatic diagnosis of diabetic retinopathy", Journal of Medical Science, vol. 35, no. 5, pp. 1491-1501, Jan 2011.

Fig 6: Screening system for the detection of exudates (developed using MATLAB GUI).



[10] Fleming, A. D., Philips, S., Goatman, K. A., Williams, G. J., Olson, J. A. and Sharp, P. F., "Automated detection of exudates for diabetic retinopathy screening", Physics in Medicine and Biology, vol. 52, no. 24, pp. 7385-7396, 2007.

[11] Guoliang Fang, Nan Yang, Huchuan Lu and Kaisong Li, "Automatic segmentation of hard exudates in fundus images based on boosted soft segmentation", International Conference on Intelligent Control and Information Processing, pp. 633-638, Sept 2010.

[12] Pizer, S. M., "The Medical Image Display and Analysis Group at the University of North Carolina: reminiscences and philosophy", IEEE Trans. on Medical Imaging, vol. 22, no. 1, pp. 2-10, April 2003.

[13] Plissiti, M. E., Nikar, C. and Charchanti, A., "Automated detection of cell nuclei in Pap smear images using morphological reconstruction and clustering", IEEE Trans. on Information Technology in Biomedicine, vol. 15, no. 2, pp. 233-241, March 2011.

[14] Seongijin Park, Bohyoung Kim and Jeongjin Loe, "GGO nodule volume-preserving non-rigid lung registration using GLCM texture analysis", IEEE Trans. on Biomedical Engineering, vol. 58, no. 10, pp. 2885-2894, Sept 2011.

[15] Kandaswamy, U., Adjerch, D. A. and Lee, M. C., "Efficient texture analysis of SAR imagery", IEEE Trans. on Geoscience and Remote Sensing, vol. 43, no. 9, pp. 2075-2083, August 2005.

[16] Tobin, K. N., Chaum, E. and Govindasamy, V. P., "Detection of anatomic structures in human retinal imagery", IEEE Transactions on Medical Imaging, vol. 26, no. 12, pp. 1729-1739, December 2007.

[17] Gwenole Quellec, Stephen R. Russell and Michael D. Abramoff, "Optimal filter framework for automated, instantaneous detection of lesions in retinal images", IEEE Trans. on Medical Imaging, vol. 30, no. 2, pp. 523-533, Feb 2011.

[18] Akara Sopharak, Bunyarit Uyyanonvara and Sarah Barman, "Comparative analysis of automatic exudate detection algorithms", Proceedings of the World Congress on Engineering, vol. I, Dec 2011.

[19] Farid Melgani and Yakoub Bazi, "Classification of electrocardiogram signals with support vector machines and particle swarm optimization", IEEE Transactions on Information Technology in Biomedicine, vol. 12, issue 5, September 2008.



Automatic Generation Control of an Interconnected Power System Before and After Deregulation

Pardeep Nain

Assistant Professor UIT, Hansi Haryana, India

K. P. Singh Parmar

Assistant Director (Technical) CAMPS, NPTI, Faridabad

Haryana, India, 121003

A K. Singh

Associate Professor DCRUST, Murthal

Haryana, India

ABSTRACT

This paper presents the particle swarm optimization (PSO) technique to optimize the integral controller gains for the automatic generation control (AGC) of the interconnected power system before and after deregulation. Each control area includes the dynamics of thermal systems. The AGC in conventional power system is studied by giving step load perturbation (SLP) in either area. The AGC in deregulated environment is studied for three different contract scenarios. To simulate bilateral contracts in deregulated system, the concept of DISCO participation matrix (DPM) is applied.

Keywords:

Automatic generation control, bilateral contract, deregulation, integral controller, particle swarm optimization.

1. INTRODUCTION

In a power system, a number of utilities are interconnected through tie-lines, by which power is exchanged between them [1]-[3]. Any sudden load perturbation in the power system can cause variations in the tie-line power interchange and the frequency. AGC is used in the power system to keep the frequency of the control areas at its nominal value and the tie-line power exchanges between control areas at their scheduled values [4]-[7]. In a conventional power system, utilities have their own generation, transmission and distribution. A deregulated environment can consist of a system operator (SO), distribution companies (DISCOs), generation companies (GENCOs) and transmission companies (TRANSCOs). There are some differences between AGC operation in the conventional and the deregulated environment. After deregulation, simulation, optimization and operation change, but the basic idea of AGC remains the same [8]-[9]. In the new environment, DISCOs may contract power from any GENCOs, and the SO has to supervise these contracts. The DISCO participation matrix (DPM) concept is used to represent the various contracts implemented between the GENCOs and DISCOs [10].
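The DPM idea can be illustrated with a small sketch. The matrix entries and loads below are invented for illustration; each entry cpf[i][j] (contract participation factor) is the fraction of DISCO j's load contracted from GENCO i, so each column must sum to one for the demand to be fully contracted.

```python
# Illustrative DPM for a two-area system with four GENCOs and four DISCOs
# (made-up numbers, not from the paper).
DPM = [
    [0.5, 0.25, 0.0, 0.3],
    [0.2, 0.25, 0.0, 0.0],
    [0.0, 0.25, 1.0, 0.7],
    [0.3, 0.25, 0.0, 0.0],
]
loads = [0.1, 0.1, 0.1, 0.1]  # demanded load of each DISCO (pu MW)

# Contracted generation of GENCO i: sum_j cpf[i][j] * dP_L[j]
gen = [sum(DPM[i][j] * loads[j] for j in range(4)) for i in range(4)]
print([round(g, 4) for g in gen])  # -> [0.105, 0.045, 0.195, 0.055]
```

Because every column sums to one, the total contracted generation equals the total DISCO demand, which is the book-keeping property the DPM provides for simulating bilateral contracts.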

The classical approach considers the integral square error (ISE) for optimization of the integral controller gains [11], [12]. This is a time-consuming, trial-and-error method. A number of approaches such as optimal control, classical control, artificial neural networks (ANN), fuzzy logic and genetic algorithms (GA) have been used for optimization of controller parameters [13]-[15]. Many authors [16], [17] have used genetic algorithms to design controllers more efficiently than controllers based on the classical approach. Recently, authors have found some drawbacks in the GA algorithm [18]; in particular, the premature convergence of GA degrades its searching ability. PSO is a powerful and more recent computational intelligence technique used to overcome this problem. In PSO, a larger number of particles is used in the search space compared to GA [18], [19]. PSO can generate stable convergence characteristics very quickly. Compared to other optimization techniques, PSO is faster, more robust and easier to use. PSO has been used in different areas: fuzzy system control, artificial neural network training, function optimization, and several other areas where GA is used.

2. OPTIMIZATION TECHNIQUE

PSO is one of the most used and well-known optimization techniques. Introduced by Eberhart and Kennedy [20] in 1995, PSO is a population-based optimization algorithm. In PSO, each particle is initialized with random values.

The cost function to be minimized is given by:

J = ∫0T {(Δf1)² + (Δf2)² + (ΔPtie)²} dt   (1)

where T is the simulation time and Δf1, Δf2 and ΔPtie are the frequency deviations of the two areas and the tie-line power deviation, respectively.

Let x represent a particle's position and u its flight velocity in the search space. The position of the kth particle in the d-dimensional space is denoted xk = (xk1, xk2, …, xkd). The previous best position of the kth particle is stored as pbestk = (pbestk1, pbestk2, …, pbestkd). The best particle among all particles is taken as the global best, gbest. The velocity of the kth particle is denoted uk = (uk1, uk2, …, ukd).

The updated velocity and position of each particle may be determined from its current velocity and its distances to pbestk and gbest, as given in the following equations [20]:

uk(t+1) = w · uk(t) + c1 · r1(t) · (pbestk − xk(t)) + c2 · r2(t) · (gbest − xk(t))   (2)

xk(t+1) = xk(t) + uk(t+1)   (3)

In equation (2), w is the inertia weight factor, and c1 and c2 are the acceleration coefficients that attract the particles towards the pbest and gbest positions. r1(t) and r2(t) are uniform random numbers in [0, 1]. The term c1 · r1(t) · (pbestk − xk(t)) is called the cognitive component, and the term c2 · r2(t) · (gbest − xk(t)) is called the social component. Low values of the acceleration coefficients let particles wander away from the target region, while high values cause sudden movement towards or past it. The acceleration coefficients c1 and c2 are mostly taken as 2 according to previous experience [20].
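A single hand-worked update of equations (2) and (3), with the random numbers frozen so the arithmetic is reproducible; all values are illustrative, not tuned gains from the paper:

```python
# One iteration of the PSO velocity and position update (eqs. (2)-(3)),
# with r1, r2 fixed instead of drawn at random so the result is exact.
w, c1, c2 = 0.7, 2.0, 2.0
x, v = 1.0, 0.1          # current position and velocity of particle k
pbest, gbest = 0.8, 0.5  # particle's own best and the swarm's best
r1, r2 = 0.4, 0.6        # uniform random numbers in [0, 1]

v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # eq. (2)
x_new = x + v_new                                              # eq. (3)
print(v_new, x_new)  # v_new = -0.69, x_new = 0.31 (up to float rounding)
```

Both the cognitive term (pulling towards pbest) and the social term (pulling towards gbest) are negative here, so the particle moves back towards the better regions it and the swarm have already found.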

A low inertia weight factor helps in local search, while a large inertia weight factor enhances global exploration. Hence, a suitable inertia weight should be selected to give a balance between local and global exploration and find a sufficiently optimal solution that



An Intelligent Tender Evaluation System using Evidential Reasoning Approach

Md. Shahadat Hossain

Professor Dept. of CSE University of Chittagong

Md. Salah Uddin Chowdury

Lecturer Dept. of CSE

BGC Trust University Bangladesh

Smita Sarker

Lecturer Dept. of CSE

BGC Trust University Bangladesh

ABSTRACT

Tender evaluation is a critical decision-making process that has a great impact on project performance with regard to time, cost and quality. The selection of the appropriate tender can ensure the smooth completion of a project and eliminate many problems during construction. In this paper, the evidential reasoning (ER) approach, which is capable of processing both quantitative and qualitative data, is applied as a means of addressing the tender evaluation process. The process of building a multiple-criteria decision model of the hierarchical structure of a tender is presented, in which both quantitative and qualitative information is represented in a unified manner. In the light of a case study from Bangladesh, the tender evaluation process is then fully investigated using the ER approach. The advantages of applying this model in practice are discussed and analysed.

Keywords

Construction, decision-maker, evidential reasoning, multiple criteria decision analysis, tender evaluation.

1. INTRODUCTION

Tendering is a critical activity in a capital works project and is normally the accepted means of obtaining a fair price and best value for undertaking construction works [1], [2], [13]. The tender process involves a principal seeking competitive bids for works and/or services that are set out in tender documents, which typically include contract conditions, specifications and drawings or a documented brief. Offers are made by a variety of bidders (e.g. contractors, consortia and/or consultants) who set out their offer in a submission in accordance with the tendering requirements. The introduction of quality into the evaluation of tender offers provides a viable means of managing the risk of non-conformance and failure to attain project outcomes, without violating the principles of fairness, transparency and value for money, particularly in respect of professional service contracts. Tendering falls under the oversight of a governance group. Local governments usually organize tenders where local companies bid for large-scale projects supported and financed by the government. Tenders involve large amounts of money. Since the government supports the projects, on one side the companies find it very prestigious to be part of them, and on the other side the public is very sensitive about how well the money is used. A multi-disciplinary committee is constituted in order to evaluate the participants. The evaluation process consists of two phases: first, a pre-qualification phase where tenders are scrutinized based on their legal and technical systems, and second, a final phase where tenders are evaluated based on a cost/performance analysis [1]. In the

first phase, participants submit general information about the company, their legal and technical systems, number of employees, etc. In the second phase, participants submit information on prices and product quality. The companies are then evaluated based on criteria such as price, product quality and technical competence [1], [2].

To assess tenders, a system of criteria intended to encapsulate the competence of the tendering organization to undertake a particular project is used to rate the tenderers' bids. Selection criteria are intended to assess the competence of the tendering organizations to achieve the required project outcome [1]. The criteria are usually selected from the following:

• Relevant experience;
• Appreciation of the task;
• Past performance;
• Management and technical skills;
• Resources;
• Management systems;
• Methodology; and
• Price.

The qualitative and quantitative criteria selected from the above should reflect the critical elements of the project, and each can be assigned a weighting to reflect its relative importance. Scores are then based on the information submitted with the tender bid, and the non-price criteria and the tender price are normalized before the weightings are applied, to allow for the true effect and advantage of the weighting system [1], [2], [13].
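The weighting-and-normalizing scheme described above can be sketched as follows; all weights and raw scores are invented for illustration, and price is normalized so that the lowest bid scores highest:

```python
# Hedged sketch of a weighted-score tender evaluation: normalize non-price
# criteria against the best submission, normalize price against the lowest
# bid, then apply the criterion weights.
weights = {"experience": 0.2, "skills": 0.3, "price": 0.5}
bids = {
    "A": {"experience": 8.0, "skills": 6.0, "price": 100.0},
    "B": {"experience": 6.0, "skills": 9.0, "price": 120.0},
}

best = {c: max(b[c] for b in bids.values()) for c in ("experience", "skills")}
low_price = min(b["price"] for b in bids.values())

def total(bid):
    s = sum(weights[c] * bid[c] / best[c] for c in ("experience", "skills"))
    return s + weights["price"] * low_price / bid["price"]  # lower price is better

scores = {name: round(total(b), 4) for name, b in bids.items()}
print(scores)
```

Normalizing before weighting keeps each criterion on the same 0-1 scale, so the weights alone determine the relative influence of price and non-price factors.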

The main objective of this paper is to select the best tender using the evidential reasoning approach by aggregating significant factors of the selected criteria. Finally, we show the ranking of the evaluated tenders.

In this paper, the ER approach will be applied to the factors considered in evaluating tenders by a government organization of Bangladesh, the Local Government Engineering Department (LGED). In evaluating a tender, LGED mainly considers factors such as relevant experience, past performance, technical skills, management systems of the bidding company, and price [14], [15], [16]. Each of the factors in turn consists of sub-factors, and hence they are organized in a hierarchical order, as illustrated in Figure 1. It can be seen that the factors mentioned are of both qualitative and quantitative nature. The qualitative factors are a source of uncertainty, which will be addressed using the ER approach, as elaborated in sections 3 and 4. Hence, the application of the ER approach to tender evaluation in the



context of Bangladesh will ensure transparency and hence will mitigate corruption and criticism significantly.

We organize the paper as follows. In section 2 we present the related works. The ER approach for tender evaluation is outlined and illustrated in section 3. The experimental results are presented in section 4. Finally, we conclude in section 5, where we show the outcomes of the evaluation together with a discussion and suggestions for future work.

2. RELATED WORKS

There are a variety of different methods that can be employed to select which contractor should be awarded a tender, but research conducted for this project has indicated that the most commonly used are:

1) Bespoke approaches, which are widely used in industry and are selection procedures developed by individual organisations; there are many variations, and they rely purely on yes/no criteria and the decision maker's judgement. This process is very subjective and is more susceptible to the biases of the decision maker [10], [11].

2) Multi-criteria selection methods, which use weighted non-price factors as well as price in either a single-stage or two-stage (i.e. prequalification) selection process. This approach reduces the impact of the biases of the decision maker by determining the weighting of each criterion prior to viewing any submissions [13].

However, the above processes do not handle the uncertainty of qualitative and quantitative data. Evidential reasoning is a strong method for handling this kind of uncertainty.

3. THE EVIDENTIAL REASONING APPROACH FOR TENDER EVALUATION

3.1 Identification of Evaluation Factors and Evaluation Grades

We apply the evidential reasoning approach to analyze the performance of four tenders: Tender 1, Tender 2, Tender 3 and Tender 4. Both qualitative and quantitative performance attributes are considered for demonstration purposes. The major performance attributes considered are relevant experience, past performance, technical skills, management systems and price. To facilitate the assessment, these attributes are further classified into basic attributes such as tender role, project cost, project duration, quality standard, target performance, extension of time granted, experience, technical personnel, professional ability, quality system, environmental management system and OHS&R management system, as shown in Figure 1.

3.2 Computational Steps of Aggregating Assessment

Firstly we show the full calculation for the aggregation of Relevant Experience for Tender 1. Relevant Experience (e1) is assessed by three basic attributes: tender role (e11), project cost (e12) and project duration (e13).

From Table 1, we have

β1,1 = 0, β2,1 = 1.0, β3,1 = 0, β4,1 = 0
β1,2 = 0, β2,2 = 0, β3,2 = 0.7, β4,2 = 0.3
β1,3 = 0, β2,3 = 0.2, β3,3 = 0.6, β4,3 = 0

Fig. 1. Evaluation hierarchy of the tender evaluation

On the basis of their importance in the tender evaluation, suppose the hypothetical weights for the three attributes are ω11 = 0.30, ω12 = 0.35 and ω13 = 0.35. Now using the expression

mn,i = ω1i βn,i, n = 1, …, N,

we get the basic probability masses (mn,i) as follows [4], [5], [6], [7], [8]:

m1,1 = 0; m2,1 = 0.30; m3,1 = 0; m4,1 = 0; m¯H,1 = 0.70, m˜H,1 = 0

m1,2 = 0; m2,2 = 0; m3,2 = 0.245; m4,2 = 0.105; m¯H,2 = 0.65, m˜H,2 = 0

m1,3 = 0; m2,3 = 0.07; m3,3 = 0.21; m4,3 = 0; m¯H,3 = 0.65, m˜H,3 = 0.07

By using the recursive equations we get the combined probability masses [4], [5], [6], [7], [8]. Since

KI(2) = [1 − Σt Σj≠t mt,1 mj,2]^-1 = [1 − (0.30 × 0.245 + 0.30 × 0.105)]^-1 = [1 − 0.105]^-1 = 1.1173

and mH,i = m¯H,i + m˜H,i (i = 1, 2, …), we have

m1,I(2) = KI(2)(m1,1 m1,2 + m1,1 mH,2 + m1,2 mH,1) = 0

m2,I(2) = KI(2)(m2,1 m2,2 + m2,1 mH,2 + m2,2 mH,1) = 1.1173 × (0 + 0.30 × 0.65 + 0) = 0.21787

m3,I(2) = KI(2)(m3,1 m3,2 + m3,1 mH,2 + m3,2 mH,1) = 1.1173 × (0 + 0 + 0.245 × 0.70) = 0.19162

m4,I(2) = KI(2)(m4,1 m4,2 + m4,1 mH,2 + m4,2 mH,1) = 1.1173 × (0 + 0 + 0.105 × 0.70) = 0.08212

m¯H,I(2) = KI(2) m¯H,1 m¯H,2 = 1.1173 × 0.70 × 0.65 = 0.50838

m˜H,I(2) = KI(2)(m˜H,1 m˜H,2 + m˜H,1 m¯H,2 + m˜H,2 m¯H,1) = 0

Similarly we get

m1,I(3) = 0, m2,I(3) = 0.226276, m3,I(3) = 0.310450, m4,I(3) = 0.06441,

m¯H,I(3) = 0.36001 and m˜H,I(3) = 0.03877

Now the combined degrees of belief are calculated as follows [4], [5], [6], [7], [8]:

β1 = m1,I(3) / (1 − m¯H,I(3)) = 0 / (1 − 0.36001) = 0

β2 = m2,I(3) / (1 − m¯H,I(3)) = 0.226276 / (1 − 0.36001) = 0.35356

β3 = m3,I(3) / (1 − m¯H,I(3)) = 0.31045 / (1 − 0.36001) = 0.48509

β4 = m4,I(3) / (1 − m¯H,I(3)) = 0.06441 / (1 − 0.36001) = 0.10064

βH = m˜H,I(3) / (1 − m¯H,I(3)) = 0.03877 / (1 − 0.36001) = 0.06058

Then the Relevant Experience of Tender 1 is assessed by

S(Relevant Experience) = {(average, 0.35356), (good, 0.48509), (excellent, 0.10064)}   (1)

From statement (1) we can say that the Relevant Experience of Tender 1 is assessed to the grade average with 35.356%, good with 48.509% and excellent with 10.064% degrees of belief. We also see that Relevant Experience carries a 6.058% unassigned degree of belief due to uncertainty.

After repeating the above procedure recursively, the other attributes such as past performance, technical skills, resources, management systems and price are aggregated; the results are shown in Table 2.
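The recursive aggregation used above can be sketched in Python; the weights and beliefs are those of the Relevant Experience example in Table 1, while the function and variable names are ours, not the paper's:

```python
# Sketch of the recursive ER aggregation: combine the basic probability
# masses of the three level-3 attributes, then recover degrees of belief.
def combine(m1, h1_bar, h1_tld, m2, h2_bar, h2_tld):
    """Combine two bodies of evidence per the ER recursion."""
    n = len(m1)
    H1, H2 = h1_bar + h1_tld, h2_bar + h2_tld
    conflict = sum(m1[t] * m2[j] for t in range(n) for j in range(n) if t != j)
    K = 1.0 / (1.0 - conflict)  # normalizing factor K_I
    m = [K * (m1[i] * m2[i] + m1[i] * H2 + m2[i] * H1) for i in range(n)]
    h_bar = K * h1_bar * h2_bar
    h_tld = K * (h1_tld * h2_tld + h1_tld * h2_bar + h2_tld * h1_bar)
    return m, h_bar, h_tld

w = [0.30, 0.35, 0.35]
beliefs = [
    [0.0, 1.0, 0.0, 0.0],  # tender role
    [0.0, 0.0, 0.7, 0.3],  # project cost
    [0.0, 0.2, 0.6, 0.0],  # project duration
]
masses = []
for wi, bi in zip(w, beliefs):
    # (m_n, m_bar_H, m_tilde_H) for each attribute
    masses.append(([wi * b for b in bi], 1.0 - wi, wi * (1.0 - sum(bi))))

m, h_bar, h_tld = masses[0]
for m2, b2, t2 in masses[1:]:
    m, h_bar, h_tld = combine(m, h_bar, h_tld, m2, b2, t2)

beta = [v / (1.0 - h_bar) for v in m]  # combined degrees of belief
print([round(b, 4) for b in beta])  # ~ [0.0, 0.3536, 0.4851, 0.1007]
```

The printed values reproduce the Relevant Experience assessment of Tender 1 to within rounding, which is a useful sanity check on the hand calculation.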

Table 1. Assigned weights, beliefs and calculated probability masses for the level-3 attributes

Attribute          ω1,i   β1,i  β2,i  β3,i  β4,i   m1,i  m2,i   m3,i   m4,i   mH,i  m¯H,i  m˜H,i
Tender Role        0.30   0     1.0   0     0      0     0.30   0      0      0.70  0.70   0
Project Cost       0.35   0     0     0.7   0.3    0     0      0.245  0.105  0.65  0.65   0
Project Duration   0.35   0     0.2   0.6   0      0     0.07   0.21   0      0.72  0.65   0.07

After aggregating the five criteria we find the assessment degrees for Tender1 as follows:

S(Tender1) = {(poor, 0.02563), (average, 0.51809), (good, 0.39628), (excellent, 0.02707)} (3a)

Similarly we can generate the overall assessments of the other three tenders, Tender2, Tender3 and Tender4:

S(Tender2) = {(poor, 0.12104), (average, 0.32976), (good, 0.46778), (excellent, 0.05192)} (3b)

S(Tender3) = {(poor, 0.12512), (average, 0.45748), (good, 0.30331), (excellent, 0.07598)} (3c)

S(Tender4) = {(poor, 0.20271), (average, 0.31205), (good, 0.45920), (excellent, 0)} (3d)

Table 3

Distributed overall belief for four tenders

Tender | Poor | Average | Good | Excellent | Unknown
Tender1 | 0.02563 | 0.51809 | 0.39628 | 0.02707 | 0.03293
Tender2 | 0.12104 | 0.32976 | 0.46778 | 0.05192 | 0.03022
Tender3 | 0.12512 | 0.45748 | 0.30331 | 0.07598 | 0.03811
Tender4 | 0.20271 | 0.31205 | 0.45920 | 0 | 0.02604

A notable strength of the ER approach used in our system is that it explicitly quantifies the unassigned belief for each of the four tenders, shown in the Unknown column of Table 3. These unassigned beliefs arise from the uncertainty present in the tender evaluation data.
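Since the combined beliefs and the unassigned belief sum to one, the Unknown column of Table 3 is simply one minus the sum of the graded beliefs. A quick check with the Tender1 row (values from Table 3):

```python
# Unassigned ("unknown") belief = 1 - sum of the beliefs assigned to grades
tender1 = [0.02563, 0.51809, 0.39628, 0.02707]  # poor..excellent, Table 3
unknown = 1.0 - sum(tender1)
# unknown is approximately 0.03293, matching Table 3
```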

Fig 2: Performance Evaluation for Tender2

Table 2

Degree of main criteria

4. EXPERIMENTAL RESULTS AND ANALYSIS

To rank the four tenders precisely, their utilities need to be estimated. To do so, the utilities of the four individual evaluation grades must be estimated first. The partial rankings of alternatives obtained above can be used to formulate regression models for estimating the utilities of the grades [4], [5], [6], [7], [8]. The maximum, minimum and average expected utilities on y are given by:

umax(y) = Σn=1..N−1 βn u(Hn) + (βN + βH) u(HN), (4a)

umin(y) = (β1 + βH) u(H1) + Σn=2..N βn u(Hn), (4b)

uavg(y) = (umax(y) + umin(y)) / 2. (4c)
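As a sketch, the utility interval for Tender1 can be computed from its distributed assessment in Table 3. The grade utilities u(Hn) are not stated explicitly in the text, so we assume equidistant utilities u(Hn) = (n−1)/(N−1), i.e. u(poor) = 0, u(average) = 1/3, u(good) = 2/3, u(excellent) = 1; this assumption is ours, but it reproduces the Table 4 figures for Tender1.

```python
def utility_interval(beta, beta_H, u):
    """Utility interval (4a)-(4c) for a distributed assessment.
    beta: beliefs in grades H1..HN (worst to best), beta_H: unassigned belief,
    u: grade utilities, u[0] for the worst grade, u[-1] for the best."""
    N = len(beta)
    # (4a): unassigned belief credited to the best grade
    u_max = sum(beta[n] * u[n] for n in range(N - 1)) + (beta[N - 1] + beta_H) * u[N - 1]
    # (4b): unassigned belief credited to the worst grade
    u_min = (beta[0] + beta_H) * u[0] + sum(beta[n] * u[n] for n in range(1, N))
    # (4c): midpoint of the interval
    u_avg = (u_max + u_min) / 2.0
    return u_min, u_max, u_avg

u = [0.0, 1 / 3, 2 / 3, 1.0]  # assumed equidistant grade utilities
# Tender1 beliefs (poor..excellent) and unknown, from Table 3
lo, hi, avg = utility_interval([0.02563, 0.51809, 0.39628, 0.02707], 0.03293, u)
```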

If all original assessments on y are complete, meaning βH = 0, then umax(y) = umin(y) = uavg(y) = u(y).

The ranking of two alternatives al and ak is based on their utility intervals. It is said that al is preferred over ak if and only if umin(y(al)) > umax(y(ak)). The alternatives are indifferent if and only if umin(y(al)) = umin(y(ak)) and umax(y(al)) = umax(y(ak)).

General attributes | Tender1 | Tender2 | Tender3 | Tender4
Relevant Experience | A(0.35356) G(0.48509) E(0.10064) | P(0.50200) A(0.13034) E(0.34157) | A(0.14383) G(0.69479) E(0.05904) | A(0.27570) G(0.66648)
Past Performance | P(0.06235) A(0.33184) G(0.51973) | P(0.02683) A(0.71938) G(0.25377) | A(0.34035) G(0.63103) | P(0.030103) A(0.27093) G(0.64187)
Technical Skills | P(0.11406) A(0.14257) G(0.71484) | P(0.23873) A(0.38789) G(0.31154) | P(0.50086) A(0.14291) G(0.22934) E(0.09828) | A(0.09612) G(0.90387)
Management System | A(0.65548) G(0.20036) E(0.08587) | A(0.27578) G(0.61322) | P(0.14675) A(0.25847) G(0.17778) E(0.32419) | P(0.51555) A(0.12419) G(0.29930)



Fig 3: Distributed Assessment of Tenders

In any other case the ranking is inconclusive and not reliable. To generate a reliable ranking, the quality of the original assessments needs to be improved by reducing the incompleteness associated with al and ak.
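These rules can be sketched as a small comparator over utility intervals (illustrative code, not part of the original system; the intervals are the umin/umax values from Table 4). Note that Tender1 strictly dominates Tender4, while the Tender2 and Tender1 intervals overlap, so the interval rule alone is inconclusive for that pair.

```python
def compare_intervals(interval_l, interval_k):
    """Rank two alternatives by their (u_min, u_max) utility intervals."""
    lo_l, hi_l = interval_l
    lo_k, hi_k = interval_k
    if lo_l > hi_k:
        return "l preferred"        # l's whole interval lies above k's
    if lo_k > hi_l:
        return "k preferred"
    if lo_l == lo_k and hi_l == hi_k:
        return "indifferent"
    return "inconclusive"           # overlapping intervals: refine assessments

# (u_min, u_max) intervals from Table 4
t1, t2, t4 = (0.4640, 0.4970), (0.4737, 0.5031), (0.4102, 0.4362)
r_14 = compare_intervals(t1, t4)  # Tender1 vs Tender4
r_21 = compare_intervals(t2, t1)  # Tender2 vs Tender1
```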

Using (4a)-(4c), we obtain the utilities shown in Table 4.

Table 4

Utilities on tender evaluation

Tender | umin | umax | uavg | Rank
Tender1 | 0.4640 | 0.4970 | 0.4805 | 2
Tender2 | 0.4737 | 0.5031 | 0.4884 | 1
Tender3 | 0.4307 | 0.4687 | 0.4497 | 3
Tender4 | 0.4102 | 0.4362 | 0.4232 | 4

The ranking of the four tenders is therefore: Tender2 > Tender1 > Tender3 > Tender4.
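Since several of the utility intervals in Table 4 overlap, this ordering is consistent with sorting by the average utilities uavg. A one-line sketch (values from Table 4):

```python
# Rank the tenders by average expected utility u_avg (Table 4)
u_avg = {"Tender1": 0.4805, "Tender2": 0.4884, "Tender3": 0.4497, "Tender4": 0.4232}
ranking = sorted(u_avg, key=u_avg.get, reverse=True)
# ranking reproduces Tender2 > Tender1 > Tender3 > Tender4
```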

Fig 4: Ranking of Four Tenders

5. CONCLUSION

Tender evaluation is complex and fragmented. Without a proper and accurate method for evaluating tenders, the performance of the project will suffer, denying the client value for money. To ensure that the project is completed successfully, the client must evaluate the tenders in an accurate and transparent way. The ER framework presented in this paper helps to improve the quality of the tender evaluation process, because the ER approach is capable of handling incomplete, imprecise and vague information, as shown in the previous sections. Ultimately, this helps DMs reach robust decisions even in the presence of incomplete data.

REFERENCES

[1] Guidelines on Tender Evaluation using Weighted Criteria for Building Works and Services, Tasmania, Department of Treasury and Finance, version 2.0, pp. 1-12, 2006.

[2] Standard Tender Documents, Procurement of Works User Guide, European Bank for Reconstruction and Development, pp. 1-122, August 2010.

[3] J. B. Yang and M. G. Singh, "An Evidential Reasoning Approach for Multiple Attribute Decision Making with Uncertainty," IEEE Trans. Syst., Man, Cybern., vol. 24, no. 1, pp. 1-4, 1994.

[4] J. B. Yang and D. L. Xu, “On the Evidential Reasoning Algorithm for Multiple Attribute Decision Analysis with

Uncertainty,” IEEE Trans. Syst., Man, Cybern. A, vol. 32, pp. 289–304, May 2002.

[5] J. B. Yang and D. L. Xu, "Nonlinear Information Aggregation via Evidential Reasoning in Multiattribute Decision Analysis Under Uncertainty," IEEE Trans. Syst., Man, Cybern. A, vol. 32, no. 4, pp. 376-393, May 2002.

[6] J. B. Yang, "Rule and Utility Based Evidential Reasoning Approach for Multiple Attribute Decision Analysis Under Uncertainty," Eur. J. Oper. Res., vol. 131, no. 1, pp. 31-61, 2001.

[7] D. L. Xu, "Assessment of Nuclear Waste Repository Options Using the ER Approach," Int. J. of Information Technology & Decision Making, vol. 8, no. 3, pp. 581-607, 2009.

[8] P. Gustafsson, R. Lagerström, P. Närman, and M. Simonsson, "The ICS Dempster-Shafer How To," unpublished.

[9] Y. Wang, J. B. Yang, and D. L. Xu, "Environmental Impact Assessment using the Evidential Reasoning Approach," Eur. J. Oper. Res., vol. 174, pp. 1885-1913, 2005.

[10] M. Soenmez, J. B. Yang and G. D. Holt, "Addressing the Contractor Selection Problem using an Evidential Reasoning Approach," Engineering, Construction and Architectural Management, vol. 8, no. 3, pp. 198-210, 2001.

[11] M. Soenmez, J. B. Yang, G. D. Holt and G. Graham, "Applying Evidential Reasoning to Prequalifying Construction Contractors," Journal of Management in Engineering, vol. 18, no. 3, pp. 111-119, July 2002.

[12] G. Graham and G. Hardakder, "Contractor Evaluation in the Aerospace Industry using the Evidential Reasoning Approach," Journal of Research in Marketing & Entrepreneurship, vol. 3, no. 3, pp. 162-173, 2001.




[13] L. E. Clarker, "Factors in the Selection of Contractors for," University of Southern Queensland, Faculty of Engineering and Surveying, pp. 1-172, November 2007.

[14] P. R. Schapper, "An Analytical Framework for the Management and Reform of Public Procurement," Journal of Public Procurement, vol. 6, no. 1/2, PrAcademics Press, 2006.

[15] M. Singer, "Does E-Procurement Save the State Money?," Journal of Public Procurement, vol. 9, no. 1, PrAcademics Press, 2009.

[16] Governing Principles of e-Government Procurement, Central Procurement Technical Unit (CPTU), Implementation Monitoring and Evaluation Division, Government of The People's Republic of