
• calculating the NN outputs and backpropagating the associated error,
• adjusting the connection weights to minimize the error.

After the training session is completed, the NN uses the final connection weights when recognizing the input patterns given to it. As is common for a multilayer NN, the BPN has three layers, namely an input layer, a hidden layer, and an output layer. The numbers of input and output units depend on the input pattern and the output target. The number of hidden layers depends on the particular application, but commonly one hidden layer is sufficient. The architecture of the BPN is depicted in Figure 3.

Figure 3: The architecture of the BPN model (Priyanto et al., 2008).

1) Training Algorithm. Refer to (Fausett, 1994) for the detailed BPN training algorithm; a minimal sketch of a single training step is given at the end of this subsection.

2) Activation Function. There are four common NN activation functions, namely:
• the identity function,
• the binary step function with threshold θ, which produces binary output 0 or 1,
• the binary sigmoid (logistic sigmoid),
• the bipolar sigmoid, which is closely related to the hyperbolic tangent.

The sigmoid functions and the hyperbolic tangent are the most common activation functions for training a NN with the backpropagation mechanism.
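The following Python sketch illustrates one forward/backward pass of a three-layer BPN with logistic (binary) sigmoid units. It is only an illustration of the two training steps listed above, under assumed simplifications (no bias terms, a single hypothetical learning rate lr); it is not the authors' exact formulation, for which (Fausett, 1994) should be consulted.

import numpy as np

def binary_sigmoid(x):
    # Logistic (binary) sigmoid: output in (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def bipolar_sigmoid(x):
    # Bipolar sigmoid: output in (-1, 1); closely related to tanh.
    return 2.0 / (1.0 + np.exp(-x)) - 1.0

def backprop_step(x, t, w_hidden, w_out, lr=0.1):
    # One forward/backward pass for a 3-layer BPN (bias terms omitted).
    # x: input vector, t: target vector, w_hidden/w_out: weight matrices.
    h = binary_sigmoid(w_hidden @ x)              # hidden activations
    y = binary_sigmoid(w_out @ h)                 # output activations
    # Backpropagate the error; f'(net) = f(net) * (1 - f(net)) for the sigmoid.
    delta_out = (t - y) * y * (1.0 - y)
    delta_hidden = (w_out.T @ delta_out) * h * (1.0 - h)
    # Adjust the connection weights to reduce the error.
    w_out = w_out + lr * np.outer(delta_out, h)
    w_hidden = w_hidden + lr * np.outer(delta_hidden, x)
    return w_hidden, w_out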

2.4. Unsupervised Neural Network – Adaptive Resonance Theory

The ART-NN, as presented in Figure 4, is designed to control the degree of similarity of the patterns placed in the same cluster and to overcome the stability-plasticity problem faced by other NNs. The ART1-NN is specifically designed to cluster binary input vectors. It has two layers: the F1 layer, which is divided into two sublayers, namely F1(a) as the input part and F1(b) as the interface part, and the F2 (cluster) layer, along with a reset unit that is used to control the degree of similarity of the patterns placed in the same cluster unit. The F1 and F2 layers are connected by two groups of weight paths, the bottom-up weights and the top-down weights. To control the learning process, some complementary units are also included in this NN. For performing pattern matching, ART is provided with a parameter called the vigilance parameter ρ, with a value in the range 0 < ρ ≤ 1. Higher values of ρ are applied during the training session, while lower values are applied during the operating session; a minimal sketch of this vigilance test is given at the end of this subsection. The ART1-NN architecture consists of two parts, and is depicted in Figure 4.

Figure 4: ART1-NN architecture (Sumari et al., 2008a).

• Computation Units. This part consists of the F1 layer (input part and interface part), the F2 layer, and the reset unit. The F2 layer is also called the competitive layer.
• Complementary Units. These units provide a mechanism so that the computation carried out by the ART1 algorithm can be done using NN principles. They are also called gain control units.

For the ART1-NN algorithm in detail, refer to (Skapura, 1991).
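As an illustration of the role of the vigilance parameter, the Python sketch below implements the match test performed by the reset unit: the winning F2 cluster is accepted only if the fraction of the input preserved by its top-down template is at least ρ. The function name and example vectors are hypothetical, and the sketch covers only this single test, not the full ART1 algorithm of (Skapura, 1991).

import numpy as np

def art1_vigilance_check(x, top_down_w, rho):
    # Reset-unit match test: accept the winning cluster only if
    # ||x AND w|| / ||x|| >= rho, where x is the binary input at F1(a)
    # and w is the top-down weight template of the winning F2 unit.
    x = np.asarray(x, dtype=int)
    w = np.asarray(top_down_w, dtype=int)
    match = np.logical_and(x, w).sum()
    return match / x.sum() >= rho

# Hypothetical example: a high rho (training) rejects the cluster and
# triggers a reset, while a lower rho (operation) accepts the same match.
x = [1, 1, 0, 1, 0]
template = [1, 0, 0, 1, 0]
print(art1_vigilance_check(x, template, rho=0.9))   # False -> reset
print(art1_vigilance_check(x, template, rho=0.6))   # True  -> accept, update weights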

2.5. Aircraft Radar Cross Section and Speed

Radar Cross Section (RCS) is the ratio of the power density reflected back toward the transmitting source to the power density incident on the detected target or object. Figure 5 shows an example of an aircraft RCS captured by radar. Every aircraft or air object has a sharply differentiated RCS, determined by the configuration elements that form it.

Figure 5: Common aircraft RCS (Nopriansyah et al., 2008).

The aircraft speed presented on the radar screen can be obtained by using the Doppler principle; Equation (3) shows how to calculate the object speed:

f_d = (2 v cos θ) / λ        (3)

where f_d is the Doppler shift, v is the aircraft speed, λ is the wavelength, and θ is the angle between the direction of the incoming signal propagation and the direction of the antenna movement (a small sketch of this calculation is given after Table 1). In this paper we use the RCS and speed data taken from the previous research done by Nopriansyah et al. (2008), as presented in Table 1.

Table 1: List of aircraft RCS and speed data (Nopriansyah et al., 2008).

No.  Aircraft Type          RCS   Speed (km/hour)
1.   Bell 47G                 3   168.532
2.   F-16 Fighting Falcon     5   1,470
3.   Hawk 200                 8   1,000.08
4.   Su-30 Sukhoi            15   2,878.75
5.   Cobra AH-1S             18   227.796
6.   Cassa C-212             27   364.844
7.   CN-235                  30   459.296
8.   A-310 Airbus           100   980
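As an illustration of Equation (3), the Python sketch below computes the Doppler shift for a given speed and, conversely, recovers the speed from a measured shift. The radar wavelength and angle used in the example are assumed values chosen for illustration only and are not taken from (Nopriansyah et al., 2008).

import math

def doppler_shift(speed_kmh, wavelength_m, theta_deg):
    # Equation (3): f_d = 2 v cos(theta) / lambda, with v in m/s.
    v = speed_kmh * 1000.0 / 3600.0                       # km/hour -> m/s
    return 2.0 * v * math.cos(math.radians(theta_deg)) / wavelength_m

def speed_from_doppler(f_d_hz, wavelength_m, theta_deg):
    # Invert Equation (3) to recover the speed in km/hour.
    v = f_d_hz * wavelength_m / (2.0 * math.cos(math.radians(theta_deg)))
    return v * 3600.0 / 1000.0

# Assumed example: a 3 cm wavelength radar and a target closing head-on
# (theta = 0) at 980 km/hour give a Doppler shift of roughly 18.1 kHz.
print(doppler_shift(980, 0.03, 0))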

2.6. A Brief Introduction to Information Fusion

In general, information fusion is a technique for combining physical or non-physical information from diverse sources into a single comprehensive piece of information that serves as a basis for the prediction or estimation of a phenomenon. The prediction or estimation is then used as the basis for making decisions or taking actions. Figure 6 illustrates the concept of information fusion.

Figure 6: The concept of information fusion (Ahmad Sumari, 2008).

The information sources can be as follows:
• observation data from distributed sensors,
• commands and data from an operator or user,
• a priori data from an existing database.

Referring to Hall (2001), as cited in (Ahmad Sumari, 2008), to obtain comprehensive information at the decision level we can select from several techniques: Boolean operator methods (AND, OR) or heuristic values such as M-of-N, maximum vote, or weighted sum for hard decisions, and the Bayes method, Dempster-Shafer theory, and fuzzy variables for soft decisions. In this paper we use the Boolean operator for all approaches; a minimal sketch of these hard-decision fusion rules is given below.
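As an illustration of the hard-decision options mentioned above, the Python sketch below implements the Boolean AND/OR operators and the M-of-N heuristic over a set of per-source True/False decisions. The function names and the example reports are hypothetical and are not taken from the cited works.

def fuse_and(decisions):
    # Boolean AND: declare a positive identification only if every source agrees.
    return all(decisions)

def fuse_or(decisions):
    # Boolean OR: declare a positive identification if any source agrees.
    return any(decisions)

def fuse_m_of_n(decisions, m):
    # M-of-N heuristic: positive if at least m of the n sources agree.
    return sum(bool(d) for d in decisions) >= m

# Hypothetical hard decisions reported by three sources:
reports = [True, True, False]
print(fuse_and(reports))         # False
print(fuse_or(reports))          # True
print(fuse_m_of_n(reports, 2))   # True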

3. A GENERIC MODEL OF NEURAL