The Model of Human Brain’s Knowledge-Growing Mechanism

3. The Method of Knowledge-Growing Mechanism

In previous research we investigated the application of a special case of probability theory, namely Bayes' theorem, which yielded three models of information processing as follows [8, 9].

• Many-to-Estimated-One (MEO) Probability. Processing several indications yields information with a necessary certainty, called the Degree of Certainty (DoC), which leads to an inference regarding the hypothesis being observed.

• One-to-Many-to-Estimated-One (OMEO) Probability. Processing a single indication yields information on the DoCs of all available hypotheses, which in turn points to the single hypothesis with the largest DoC.

• Many-to-Many-to-Estimated-One (MMEO) Probability. Processing multiple indications yields information on the DoCs of all available hypotheses, which in turn points to the single hypothesis with the largest DoC. This is also called the multi-hypothesis multi-indication problem [8].

Figure 6: The illustration of the MMEO technique [8, 9]

By investigating the KG mechanism described in the previous section, we can create a model of KG for the KGS. Observing Figure 5, if the information perceived by the sensors is regarded as indications (multi-indication) of a phenomenon viewed from the sensors' perspectives, and the possible answers stored as knowledge in the brain are regarded as hypotheses (multi-hypothesis), then we have a multi-hypothesis multi-indication problem which can be approached by the MMEO technique. Let $P(\vartheta_{ij})$ be the DoC of the posterior information resulting from processing the prior indications, where $i = 1, 2, \ldots, n$ indexes the hypotheses and $j = 1, 2, \ldots, m$ indexes the indications; the MMEO technique can then be laid out as a table, as shown in Table 1.
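The OMEO model above can be sketched in a few lines: a single processed indication produces, via Bayes' theorem, a DoC for every available hypothesis, and the hypothesis with the largest DoC is selected. The priors and likelihoods below are illustrative values chosen for the sketch, not figures from this paper.

```python
# OMEO sketch: one indication, DoCs for all hypotheses, pick the largest.
# Hypothesis names and all numbers are assumed for illustration only.
priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}       # P(hypothesis)
likelihoods = {"H1": 0.2, "H2": 0.6, "H3": 0.4}  # P(indication | hypothesis)

# Bayes' theorem: P(H | indication) = P(indication | H) * P(H) / P(indication)
evidence = sum(likelihoods[h] * priors[h] for h in priors)
doc = {h: likelihoods[h] * priors[h] / evidence for h in priors}

# The selected hypothesis is the one with the largest DoC
selected = max(doc, key=doc.get)
```

With these example numbers the DoCs sum to one and "H2" is selected, since its likelihood outweighs the larger prior of "H1".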
The fusion of the information represented by the multi-indication, to obtain the selected hypothesis with the highest DoC, is done by applying the A3S (Arwin-Adang-Aciek-Sembiring) information-inferencing fusion method [10], as depicted by the dashed red box in Table 1. The A3S method is given by Equation 1 and Equation 2:

$P(\psi_i)_t = \frac{1}{m} \sum_{j=1}^{m} P(\vartheta_{ij})$ ... (1)

$P(\psi)_{estimate} = \max_{i=1,\ldots,n} P(\psi_i)_t$ ... (2)

where $P(\psi_i)_t$ is the New-Knowledge Probability Distribution (NKPD) at time $t$, with $t = 1, 2, \ldots, \tau$, and $P(\vartheta_{ij})$ is the DoC of hypothesis $i$ at indication $j$. $P(\psi_i)_t$ represents the "fused probabilities" of all posterior probabilities belonging to the same hypothesis at a certain $t$, while "estimated" means the selected hypothesis is the most likely of all available hypotheses given the indications at that $t$. $P(\psi)_{estimate}$, the largest value of $P(\psi_i)_t$, is the DoC of the selected hypothesis and becomes the new knowledge at time $t$.

Table 1: The illustration of the MMEO technique with the A3S information-inferencing fusion method

| Multi-Indication $B_j$ | Multi-Hypothesis $A_i$: $1$ | ... | $i$ | ... | $n$ |
| --- | --- | --- | --- | --- | --- |
| $1$ | $P(\vartheta_{11})$ | ... | $P(\vartheta_{i1})$ | ... | $P(\vartheta_{n1})$ |
| $2$ | $P(\vartheta_{12})$ | ... | $P(\vartheta_{i2})$ | ... | $P(\vartheta_{n2})$ |
| ... | ... | ... | ... | ... | ... |
| $j$ | $P(\vartheta_{1j})$ | ... | $P(\vartheta_{ij})$ | ... | $P(\vartheta_{nj})$ |
| ... | ... | ... | ... | ... | ... |
| $m$ | $P(\vartheta_{1m})$ | ... | $P(\vartheta_{im})$ | ... | $P(\vartheta_{nm})$ |
| A3S: NKPD $P(\psi_i)_t$ | $\frac{1}{m}\sum_{j=1}^{m} P(\vartheta_{1j})$ | ... | $\frac{1}{m}\sum_{j=1}^{m} P(\vartheta_{ij})$ | ... | $\frac{1}{m}\sum_{j=1}^{m} P(\vartheta_{nj})$ |
| A3S + Maximum Score $P(\psi)_{estimate}$ | $\max_{i=1,\ldots,n} \frac{1}{m}\sum_{j=1}^{m} P(\vartheta_{ij})$ | | | | |
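The two A3S equations can be sketched directly from Table 1: Equation 1 averages each hypothesis's DoCs over the $m$ indications to form the NKPD, and Equation 2 takes the maximum over the $n$ hypotheses. The $n \times m$ table of posterior DoCs below is assumed already computed, and its numbers are illustrative only.

```python
# A3S fusion sketch over an assumed 3-hypothesis x 4-indication DoC table.
# P_theta[i][j] = DoC of hypothesis i at indication j (values are made up).
P_theta = [
    [0.2, 0.1, 0.3, 0.2],  # hypothesis 1
    [0.5, 0.7, 0.4, 0.6],  # hypothesis 2
    [0.3, 0.2, 0.3, 0.2],  # hypothesis 3
]
m = len(P_theta[0])  # number of indications

# Equation 1: NKPD -- average each hypothesis's DoCs over all m indications
P_psi = [sum(row) / m for row in P_theta]

# Equation 2: the selected hypothesis has the largest fused DoC
P_estimate = max(P_psi)
selected_hypothesis = P_psi.index(P_estimate) + 1  # 1-based, as in Table 1
```

For these numbers the fused DoCs are [0.2, 0.55, 0.25], so hypothesis 2 is selected with $P(\psi)_{estimate} = 0.55$, which becomes the new knowledge at this $t$.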