Design and Realization of a Motion Detection System as a Natural User Interface (NUI) Using the C# Language.


DESIGN AND REALIZATION OF A MOTION

DETECTION SYSTEM AS A NATURAL USER INTERFACE (NUI)

USING THE C# LANGUAGE

Prepared by:

Jeffry 0822023

Department of Electrical Engineering, Faculty of Engineering, Universitas Kristen Maranatha, Jl. Prof. Drg. Suria Sumantri, MPH, No. 65, Bandung, Indonesia

Email: jeffryajah27@gmail.com

ABSTRAK

The interactive motion detection system is a program designed to produce interactive effects from movements made by the user. It relies on two main components: a computer and a webcam. The webcam is set up to detect movement occurring in the interactive space between the computer screen and the webcam, and every movement in that space is represented on the computer screen.

The program is written in the C# programming language with the AForge.Net Framework library in Microsoft Visual Studio 2010. The AForge.Net Framework is used because it is designed specifically to provide image-processing filters for C#. The design starts with two programs, a motion detector and a water effect applied to the image, which are then merged into a single interactive motion detection application. Once the program was built, experiments were carried out to obtain optimal results.

The experiments examined the influence of lighting and of the distance spanned by the interactive space. The lighting experiments show that the application must be run under bright conditions. The distance experiments show that at a distance of 1.5 meters between the webcam and the laptop screen, the movement of the mouse cursor on the screen comes closest to the expected result. For the water-effect processing, several adjustments were made to obtain optimal results: the put-drop radius was set to 20 pixels and the timer interval to 100 ms.

Kata kunci: interactive space, webcam, C# language, motion detection, AForge.Net Framework, water effect.
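The motion detection pipeline described above (grayscale conversion, frame differencing, thresholding, and erosion with the AForge.Net filters) can be sketched as follows. This is an illustrative fragment, not the thesis listing: the method name `DetectMotion`, the frame variable names, and the threshold value of 15 are assumptions.

```csharp
using System.Drawing;
using AForge.Imaging.Filters;

public static class MotionPipeline
{
    // Returns a binary image whose white pixels mark motion between two frames.
    public static Bitmap DetectMotion(Bitmap backgroundFrame, Bitmap currentFrame)
    {
        // 1. convert both frames to grayscale (BT709 coefficients)
        Bitmap grayBackground = Grayscale.CommonAlgorithms.BT709.Apply(backgroundFrame);
        Bitmap grayCurrent = Grayscale.CommonAlgorithms.BT709.Apply(currentFrame);

        // 2. absolute per-pixel difference between current frame and background
        Difference difference = new Difference(grayBackground);
        Bitmap diff = difference.Apply(grayCurrent);

        // 3. binarize: pixels that changed enough become white
        Threshold threshold = new Threshold(15);   // threshold value is an assumption
        threshold.ApplyInPlace(diff);

        // 4. erode to suppress isolated noisy pixels
        Erosion erosion = new Erosion();
        return erosion.Apply(diff);
    }
}
```

The white pixels of the returned bitmap are what the application then counts and maps to the position of the mouse cursor.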



DESIGN AND REALIZATION OF A MOTION

DETECTION SYSTEM AS A NATURAL USER

INTERFACE (NUI) USING THE C# LANGUAGE

Written by: Jeffry 0822023

Department of Electrical Engineering, Faculty of Engineering, Maranatha Christian University, Jl. Prof. Drg. Suria Sumantri, MPH, No. 65, Bandung, Indonesia

Email: jeffryajah27@gmail.com

ABSTRACT

The interactive motion detection system is a program built to produce interactive effects while the user is moving. It uses two main components: a computer and a webcam. The webcam is designed to detect motion in the interactive space between the computer and the webcam, and every motion performed in that space is represented on the computer's monitor.

The program is written in the C# language using the AForge.Net Framework library in Microsoft Visual Studio 2010. The AForge.Net Framework is used because it was built to provide image filtering for the C# language. The first step of the design is to build two main programs: motion detection and a water effect on the image. The next step is to merge both programs into a single interactive motion detection application. After that, experiments were performed to obtain the optimal result.

The experiments investigated how lighting and the distance of the interactive space affect the program. Regarding lighting, the application must be run under bright conditions. Regarding the distance of the interactive space, at a distance of 1.5 metres from the webcam to the computer's monitor, the movement of the mouse cursor comes closest to the expected result. In the water-effect processing, several settings were tuned to obtain the optimal result: the put-drop radius is 20 pixels and the timer elapses every 100 ms.

Keywords: interactive space, webcam, C# language, motion detection, AForge.Net Framework, water effect
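The put drop and wave propagation mentioned above follow the classic two-buffer water-ripple scheme: each pixel's new height is the average of its four neighbours in the previous buffer minus its old value, with a small damping term, and the two buffers are swapped every timer tick. The sketch below is a simplified illustration under those assumptions; the class and buffer names, the damping shift, and the `PutDrop` signature are not taken from the thesis listing.

```csharp
public class WaterEffect
{
    private readonly int width, height;
    private int[,] current, previous;   // two height-field buffers, swapped each tick

    public WaterEffect(int width, int height)
    {
        this.width = width;
        this.height = height;
        current = new int[width, height];
        previous = new int[width, height];
    }

    // Disturb the surface where motion was detected (the thesis uses a 20-pixel radius).
    public void PutDrop(int cx, int cy, int radius, int strength)
    {
        for (int x = cx - radius; x <= cx + radius; x++)
            for (int y = cy - radius; y <= cy + radius; y++)
                if (x > 0 && x < width - 1 && y > 0 && y < height - 1 &&
                    (x - cx) * (x - cx) + (y - cy) * (y - cy) <= radius * radius)
                    previous[x, y] = strength;
    }

    // Called from a timer (every 100 ms in the thesis) to propagate the waves.
    public void Step()
    {
        for (int x = 1; x < width - 1; x++)
        {
            for (int y = 1; y < height - 1; y++)
            {
                // average of the four neighbours minus the old value...
                int v = (previous[x - 1, y] + previous[x + 1, y] +
                         previous[x, y - 1] + previous[x, y + 1]) / 2 - current[x, y];
                current[x, y] = v - (v >> 5);   // ...with a small damping term
            }
        }
        // swap buffers for the next tick
        int[,] tmp = previous;
        previous = current;
        current = tmp;
    }
}
```

The resulting height field is then used to displace the pixels of the background image, which produces the visible ripple.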


TABLE OF CONTENTS

Title Page

Approval Sheet

Statement of Originality of the Final Project

Statement of Publication of the Final Project

Preface

Abstrak

Abstract

Table of Contents

List of Tables

List of Figures

List of Appendices

CHAPTER I INTRODUCTION

1.1 Background

1.2 Problem Identification

1.3 Problem Formulation

1.4 Objectives and Benefits

1.5 Scope of the Problem

1.6 Writing Structure and Research Methodology

CHAPTER II THEORETICAL FOUNDATION

2.1 Computer Vision

2.2 Image Processing

2.3 Natural User Interface (NUI)

2.4 AForge.Net Framework

2.4.1 Grayscale Class

2.4.2 Difference Class

2.4.3 Threshold Class

2.4.4 Erosion Class

CHAPTER III DESIGN OF THE MOTION DETECTOR AND WATER EFFECT

3.1 Placement of the Motion Detector

3.2 Motion Detection on the Webcam

3.2.1 Image Capture by the Webcam

3.2.2 Motion Detection Process

3.2.3 Converting Motion Detection into Mouse Cursor Movement

3.3 Water Effect Design

CHAPTER IV OBSERVATIONS AND DATA ANALYSIS

4.1 Implementation and Analysis of the Motion Detector

4.1.1 Motion Detection on the Webcam

4.1.2 Finding White Points and Comparing Their Position with the Desired Point on the Screen

4.1.3 Water Effect Formation Process

CHAPTER V CONCLUSIONS AND SUGGESTIONS

5.1 Conclusions

5.2 Suggestions

BIBLIOGRAPHY

APPENDIX A


LIST OF TABLES

Table 4.1 Effect of Lighting

Table 4.2 Effect of the Interactive Space


LIST OF FIGURES

Figure 2.1 Computer Vision

Figure 2.2 24-bit RGB Representation

Figure 2.3 Pixels of an Image

Figure 2.4 Evolution of CLI, GUI, and NUI

Figure 2.5 Grayscale Filter

Figure 2.6 Difference Filter

Figure 2.7 Threshold Filter

Figure 2.8 Erosion Filter

Figure 3.1 Placement of the Motion Detector

Figure 3.2 Flowchart of Image Capture by the Webcam

Figure 3.3 Flowchart of the Capture Device

Figure 3.4 Capture Device Form

Figure 3.5 Webcam Application Window

Figure 3.6 Order of AForge.Net Filter Usage

Figure 3.7 Flowchart of the Motion Detector

Figure 3.8 Motion Menu

Figure 3.9 Flowchart of Motion Detection

Figure 3.10 Motion Detection Result

Figure 3.11 Flowchart of the White Pixel Counter

Figure 3.12 Result of Motion-to-White-Pixel Detection

Figure 3.13 Flowchart of Motion to White Pixel

Figure 3.14 Flowchart of Vertex

Figure 3.15 White Pixel Counter Application Window

Figure 3.16 Flowchart of Motion as Mouse Replacement

Figure 3.17 Buffer Process in the Water Effect

Figure 3.19 Occurrence of a Put Drop

Figure 3.20 Pixels of the Water Wave Effect

Figure 3.21 Flowchart of the Water Effect Process

Figure 4.1 Motion Detection on the Webcam

Figure 4.2 Comparison of Motion Position with the Desired Point on the Screen


LIST OF APPENDICES

APPENDIX A AFORGE.NET PROGRAM LISTING

APPENDIX B MOTION DETECTION WITH WATER EFFECT PROGRAM LISTING


APPENDIX A

A. AForge.Net Framework

A.1 Grayscale Class

namespace AForge.Imaging.Filters
{
    using System;
    using System.Collections.Generic;
    using System.Drawing;
    using System.Drawing.Imaging;

    /// <summary>
    /// Base class for image grayscaling.
    /// </summary>
    ///
    /// <remarks><para>This class is the base class for image grayscaling. Other
    /// classes should inherit from this class and specify <b>RGB</b>
    /// coefficients used for color image conversion to grayscale.</para>
    ///
    /// <para>The filter accepts 24, 32, 48 and 64 bpp color images and produces
    /// 8 (if source is 24 or 32 bpp image) or 16 (if source is 48 or 64 bpp image)
    /// bpp grayscale image.</para>
    ///
    /// <para>Sample usage:</para>
    /// <code>
    /// // create grayscale filter (BT709)
    /// Grayscale filter = new Grayscale( 0.2125, 0.7154, 0.0721 );
    /// // apply the filter
    /// Bitmap grayImage = filter.Apply( image );
    /// </code>
    ///
    /// <para><b>Initial image:</b></para>
    /// <img src="img/imaging/sample1.jpg" width="480" height="361" />
    /// <para><b>Result image:</b></para>
    /// <img src="img/imaging/grayscale.jpg" width="480" height="361" />
    /// </remarks>
    ///
    /// <seealso cref="GrayscaleBT709"/>
    /// <seealso cref="GrayscaleRMY"/>
    /// <seealso cref="GrayscaleY"/>
    ///
    public class Grayscale : BaseFilter
    {
        /// <summary>
        /// Set of predefined common grayscaling algorithms, which have already
        /// initialized grayscaling coefficients.
        /// </summary>
        public static class CommonAlgorithms
        {
            /// <summary>
            /// Grayscale image using BT709 algorithm.
            /// </summary>
            ///
            /// <remarks><para>The instance uses <b>BT709</b> algorithm to convert color image
            /// to grayscale. The conversion coefficients are:
            /// <list type="bullet">
            /// <item>Red: 0.2125;</item>
            /// <item>Green: 0.7154;</item>
            /// <item>Blue: 0.0721.</item>
            /// </list></para>
            ///
            /// <para>Sample usage:</para>
            /// <code>
            /// // apply the filter
            /// Bitmap grayImage = Grayscale.CommonAlgorithms.BT709.Apply( image );
            /// </code>
            /// </remarks>
            ///
            public static readonly Grayscale BT709 = new Grayscale(0.2125, 0.7154, 0.0721);

            /// <summary>
            /// Grayscale image using R-Y algorithm.
            /// </summary>
            ///
            /// <remarks><para>The instance uses <b>R-Y</b> algorithm to convert color image
            /// to grayscale. The conversion coefficients are:
            /// <list type="bullet">
            /// <item>Red: 0.5;</item>
            /// <item>Green: 0.419;</item>
            /// <item>Blue: 0.081.</item>
            /// </list></para>
            ///
            /// <para>Sample usage:</para>
            /// <code>
            /// // apply the filter
            /// Bitmap grayImage = Grayscale.CommonAlgorithms.RMY.Apply( image );
            /// </code>
            /// </remarks>
            ///
            public static readonly Grayscale RMY = new Grayscale(0.5000, 0.4190, 0.0810);

            /// <summary>
            /// Grayscale image using Y algorithm.
            /// </summary>
            ///
            /// <remarks><para>The instance uses <b>Y</b> algorithm to convert color image
            /// to grayscale. The conversion coefficients are:
            /// <list type="bullet">
            /// <item>Red: 0.299;</item>
            /// <item>Green: 0.587;</item>
            /// <item>Blue: 0.114.</item>
            /// </list></para>
            ///
            /// <para>Sample usage:</para>
            /// <code>
            /// // apply the filter
            /// Bitmap grayImage = Grayscale.CommonAlgorithms.Y.Apply( image );
            /// </code>
            /// </remarks>
            ///
            public static readonly Grayscale Y = new Grayscale(0.2990, 0.5870, 0.1140);
        }

        // RGB coefficients for grayscale transformation

        /// <summary>
        /// Portion of red channel's value to use during conversion from RGB to grayscale.
        /// </summary>
        public readonly double RedCoefficient;

        /// <summary>
        /// Portion of green channel's value to use during conversion from RGB to grayscale.
        /// </summary>
        public readonly double GreenCoefficient;

        /// <summary>
        /// Portion of blue channel's value to use during conversion from RGB to grayscale.
        /// </summary>
        public readonly double BlueCoefficient;

        // private format translation dictionary
        private Dictionary<PixelFormat, PixelFormat> formatTranslations = new Dictionary<PixelFormat, PixelFormat>();

        /// <summary>
        /// Format translations dictionary.
        /// </summary>
        public override Dictionary<PixelFormat, PixelFormat> FormatTranslations
        {
            get { return formatTranslations; }
        }

        /// <summary>
        /// Initializes a new instance of the <see cref="Grayscale"/> class.
        /// </summary>
        ///
        /// <param name="cr">Red coefficient.</param>
        /// <param name="cg">Green coefficient.</param>
        /// <param name="cb">Blue coefficient.</param>
        ///
        public Grayscale(double cr, double cg, double cb)
        {
            RedCoefficient = cr;
            GreenCoefficient = cg;
            BlueCoefficient = cb;

            // initialize format translation dictionary
            formatTranslations[PixelFormat.Format24bppRgb] = PixelFormat.Format8bppIndexed;
            formatTranslations[PixelFormat.Format32bppRgb] = PixelFormat.Format8bppIndexed;
            formatTranslations[PixelFormat.Format32bppArgb] = PixelFormat.Format8bppIndexed;
            formatTranslations[PixelFormat.Format48bppRgb] = PixelFormat.Format16bppGrayScale;
            formatTranslations[PixelFormat.Format64bppArgb] = PixelFormat.Format16bppGrayScale;
        }

        /// <summary>
        /// Process the filter on the specified image.
        /// </summary>
        ///
        /// <param name="sourceData">Source image data.</param>
        /// <param name="destinationData">Destination image data.</param>
        ///
        protected override unsafe void ProcessFilter(UnmanagedImage sourceData, UnmanagedImage destinationData)
        {
            // get width and height
            int width = sourceData.Width;
            int height = sourceData.Height;
            PixelFormat srcPixelFormat = sourceData.PixelFormat;

            if ((srcPixelFormat == PixelFormat.Format24bppRgb) ||
                (srcPixelFormat == PixelFormat.Format32bppRgb) ||
                (srcPixelFormat == PixelFormat.Format32bppArgb))
            {
                int pixelSize = (srcPixelFormat == PixelFormat.Format24bppRgb) ? 3 : 4;
                int srcOffset = sourceData.Stride - width * pixelSize;
                int dstOffset = destinationData.Stride - width;

                int rc = (int)(0x10000 * RedCoefficient);
                int gc = (int)(0x10000 * GreenCoefficient);
                int bc = (int)(0x10000 * BlueCoefficient);

                // do the job
                byte* src = (byte*)sourceData.ImageData.ToPointer();
                byte* dst = (byte*)destinationData.ImageData.ToPointer();

                // for each line
                for (int y = 0; y < height; y++)
                {
                    // for each pixel
                    for (int x = 0; x < width; x++, src += pixelSize, dst++)
                    {
                        *dst = (byte)((rc * src[RGB.R] + gc * src[RGB.G] + bc * src[RGB.B]) >> 16);
                    }
                    src += srcOffset;
                    dst += dstOffset;
                }
            }
            else
            {
                int pixelSize = (srcPixelFormat == PixelFormat.Format48bppRgb) ? 3 : 4;
                byte* srcBase = (byte*)sourceData.ImageData.ToPointer();
                byte* dstBase = (byte*)destinationData.ImageData.ToPointer();
                int srcStride = sourceData.Stride;
                int dstStride = destinationData.Stride;

                // for each line
                for (int y = 0; y < height; y++)
                {
                    ushort* src = (ushort*)(srcBase + y * srcStride);
                    ushort* dst = (ushort*)(dstBase + y * dstStride);

                    // for each pixel
                    for (int x = 0; x < width; x++, src += pixelSize, dst++)
                    {
                        *dst = (ushort)(RedCoefficient * src[RGB.R] + GreenCoefficient * src[RGB.G] + BlueCoefficient * src[RGB.B]);
                    }
                }
            }
        }
    }
}

A.2 Difference Class

namespace AForge.Imaging.Filters
{
    using System;
    using System.Collections.Generic;
    using System.Drawing;
    using System.Drawing.Imaging;

    /// <summary>
    /// Difference filter - get the difference between overlay and source images.
    /// </summary>
    ///
    /// <remarks><para>The difference filter takes two images (source and
    /// <see cref="BaseInPlaceFilter2.OverlayImage">overlay</see> images)
    /// of the same size and pixel format and produces an image, where each pixel equals
    /// to absolute difference between corresponding pixels from provided images.</para>
    ///
    /// <para>The filter accepts 8 and 16 bpp grayscale images and 24, 32, 48 and 64 bpp
    /// color images for processing.</para>
    ///
    /// <para><note>In the case if images with alpha channel are used (32 or 64 bpp), visualization
    /// of the result image may seem a bit unexpected - most probably nothing will be seen
    /// (in the case if image is displayed according to its alpha channel). This may be
    /// caused by the fact that after differencing the entire alpha channel will be zeroed
    /// (zero difference between alpha channels), what means that the resulting image will be
    /// 100% transparent.</note></para>
    ///
    /// <para>Sample usage:</para>
    /// <code>
    /// // create filter
    /// Difference filter = new Difference( overlayImage );
    /// // apply the filter
    /// Bitmap resultImage = filter.Apply( sourceImage );
    /// </code>
    ///
    /// <para><b>Source image:</b></para>
    /// <img src="img/imaging/sample6.png" width="320" height="240" />
    /// <para><b>Overlay image:</b></para>
    /// <img src="img/imaging/sample7.png" width="320" height="240" />
    /// <para><b>Result image:</b></para>
    /// <img src="img/imaging/difference.png" width="320" height="240" />
    /// </remarks>
    ///
    /// <seealso cref="Intersect"/>
    /// <seealso cref="Merge"/>
    /// <seealso cref="Add"/>
    /// <seealso cref="Subtract"/>
    ///
    public sealed class Difference : BaseInPlaceFilter2
    {
        // private format translation dictionary
        private Dictionary<PixelFormat, PixelFormat> formatTranslations = new Dictionary<PixelFormat, PixelFormat>();

        /// <summary>
        /// Format translations dictionary.
        /// </summary>
        public override Dictionary<PixelFormat, PixelFormat> FormatTranslations
        {
            get { return formatTranslations; }
        }

        /// <summary>
        /// Initializes a new instance of the <see cref="Difference"/> class.
        /// </summary>
        public Difference()
        {
            InitFormatTranslations();
        }

        /// <summary>
        /// Initializes a new instance of the <see cref="Difference"/> class.
        /// </summary>
        ///
        /// <param name="overlayImage">Overlay image.</param>
        ///
        public Difference(Bitmap overlayImage)
            : base(overlayImage)
        {
            InitFormatTranslations();
        }

        /// <summary>
        /// Initializes a new instance of the <see cref="Difference"/> class.
        /// </summary>
        ///
        /// <param name="unmanagedOverlayImage">Unmanaged overlay image.</param>
        ///
        public Difference(UnmanagedImage unmanagedOverlayImage)
            : base(unmanagedOverlayImage)
        {
            InitFormatTranslations();
        }

        // Initialize format translation dictionary
        private void InitFormatTranslations()
        {
            formatTranslations[PixelFormat.Format8bppIndexed] = PixelFormat.Format8bppIndexed;
            formatTranslations[PixelFormat.Format24bppRgb] = PixelFormat.Format24bppRgb;
            formatTranslations[PixelFormat.Format32bppRgb] = PixelFormat.Format32bppRgb;
            formatTranslations[PixelFormat.Format32bppArgb] = PixelFormat.Format32bppArgb;
            formatTranslations[PixelFormat.Format16bppGrayScale] = PixelFormat.Format16bppGrayScale;
            formatTranslations[PixelFormat.Format48bppRgb] = PixelFormat.Format48bppRgb;
            formatTranslations[PixelFormat.Format64bppArgb] = PixelFormat.Format64bppArgb;
        }

        /// <summary>
        /// Process the filter on the specified image.
        /// </summary>
        ///
        /// <param name="image">Source image data.</param>
        /// <param name="overlay">Overlay image data.</param>
        ///
        protected override unsafe void ProcessFilter(UnmanagedImage image, UnmanagedImage overlay)
        {
            PixelFormat pixelFormat = image.PixelFormat;
            // get image dimension
            int width = image.Width;
            int height = image.Height;
            // pixel value
            int v;

            if ((pixelFormat == PixelFormat.Format8bppIndexed) ||
                (pixelFormat == PixelFormat.Format24bppRgb) ||
                (pixelFormat == PixelFormat.Format32bppRgb) ||
                (pixelFormat == PixelFormat.Format32bppArgb))
            {
                // initialize other variables
                int pixelSize = (pixelFormat == PixelFormat.Format8bppIndexed) ? 1 :
                    (pixelFormat == PixelFormat.Format24bppRgb) ? 3 : 4;
                int lineSize = width * pixelSize;
                int srcOffset = image.Stride - lineSize;
                int ovrOffset = overlay.Stride - lineSize;

                // do the job
                byte* ptr = (byte*)image.ImageData.ToPointer();
                byte* ovr = (byte*)overlay.ImageData.ToPointer();

                // for each line
                for (int y = 0; y < height; y++)
                {
                    // for each pixel
                    for (int x = 0; x < lineSize; x++, ptr++, ovr++)
                    {
                        // abs(sub)
                        v = (int)*ptr - (int)*ovr;
                        *ptr = (v < 0) ? (byte)-v : (byte)v;
                    }
                    ptr += srcOffset;
                    ovr += ovrOffset;
                }
            }
            else
            {
                // initialize other variables
                int pixelSize = (pixelFormat == PixelFormat.Format16bppGrayScale) ? 1 :
                    (pixelFormat == PixelFormat.Format48bppRgb) ? 3 : 4;
                int lineSize = width * pixelSize;
                int srcStride = image.Stride;
                int ovrStride = overlay.Stride;

                // do the job
                // (base pointers kept as byte* so the arithmetic stays valid on 64-bit systems)
                byte* basePtr = (byte*)image.ImageData.ToPointer();
                byte* baseOvr = (byte*)overlay.ImageData.ToPointer();

                // for each line
                for (int y = 0; y < height; y++)
                {
                    ushort* ptr = (ushort*)(basePtr + y * srcStride);
                    ushort* ovr = (ushort*)(baseOvr + y * ovrStride);

                    // for each pixel
                    for (int x = 0; x < lineSize; x++, ptr++, ovr++)
                    {
                        // abs(sub)
                        v = (int)*ptr - (int)*ovr;
                        *ptr = (v < 0) ? (ushort)-v : (ushort)v;
                    }
                }
            }
        }
    }
}

A.3 Threshold Class

namespace AForge.Imaging.Filters
{
    using System;
    using System.Collections.Generic;
    using System.Drawing;
    using System.Drawing.Imaging;

    /// <summary>
    /// Threshold binarization.
    /// </summary>
    ///
    /// <remarks><para>The filter does image binarization using specified threshold value. All pixels
    /// with intensities equal or higher than threshold value are converted to white pixels. All other
    /// pixels with intensities below threshold value are converted to black pixels.</para>
    ///
    /// <para>The filter accepts 8 and 16 bpp grayscale images for processing.</para>
    ///
    /// <para><note>Since the filter can be applied as to 8 bpp and to 16 bpp images,
    /// the <see cref="ThresholdValue"/> value should be set appropriately to the pixel format.
    /// In the case of 8 bpp images the threshold value is in the [0, 255] range, but in the case
    /// of 16 bpp images the threshold value is in the [0, 65535] range.</note></para>
    ///
    /// <para>Sample usage:</para>
    /// <code>
    /// // create filter
    /// Threshold filter = new Threshold( 100 );
    /// // apply the filter
    /// filter.ApplyInPlace( image );
    /// </code>
    ///
    /// <para><b>Initial image:</b></para>
    /// <para><b>Result image:</b></para>
    /// <img src="img/imaging/threshold.jpg" width="480" height="361" />
    /// </remarks>
    ///
    public class Threshold : BaseInPlacePartialFilter
    {
        /// <summary>
        /// Threshold value.
        /// </summary>
        protected int threshold = 128;

        // private format translation dictionary
        private Dictionary<PixelFormat, PixelFormat> formatTranslations = new Dictionary<PixelFormat, PixelFormat>();

        /// <summary>
        /// Format translations dictionary.
        /// </summary>
        public override Dictionary<PixelFormat, PixelFormat> FormatTranslations
        {
            get { return formatTranslations; }
        }

        /// <summary>
        /// Threshold value.
        /// </summary>
        ///
        /// <remarks>Default value is set to <b>128</b>.</remarks>
        ///
        public int ThresholdValue
        {
            get { return threshold; }
            set { threshold = value; }
        }

        /// <summary>
        /// Initializes a new instance of the <see cref="Threshold"/> class.
        /// </summary>
        ///
        public Threshold()
        {
            // initialize format translation dictionary
            formatTranslations[PixelFormat.Format8bppIndexed] = PixelFormat.Format8bppIndexed;
            formatTranslations[PixelFormat.Format16bppGrayScale] = PixelFormat.Format16bppGrayScale;
        }

        /// <summary>
        /// Initializes a new instance of the <see cref="Threshold"/> class.
        /// </summary>
        ///
        /// <param name="threshold">Threshold value.</param>
        ///
        public Threshold(int threshold)
            : this()
        {
            this.threshold = threshold;
        }

        /// <summary>
        /// Process the filter on the specified image.
        /// </summary>
        ///
        /// <param name="image">Source image data.</param>
        /// <param name="rect">Image rectangle for processing by the filter.</param>
        ///
        protected override unsafe void ProcessFilter(UnmanagedImage image, Rectangle rect)
        {
            int startX = rect.Left;
            int startY = rect.Top;
            int stopX = startX + rect.Width;
            int stopY = startY + rect.Height;

            if (image.PixelFormat == PixelFormat.Format8bppIndexed)
            {
                int offset = image.Stride - rect.Width;

                // do the job
                byte* ptr = (byte*)image.ImageData.ToPointer();
                // align pointer to the first pixel to process
                ptr += (startY * image.Stride + startX);

                // for each line
                for (int y = startY; y < stopY; y++)
                {
                    // for each pixel
                    for (int x = startX; x < stopX; x++, ptr++)
                    {
                        *ptr = (byte)((*ptr >= threshold) ? 255 : 0);
                    }
                    ptr += offset;
                }
            }
            else
            {
                byte* basePtr = (byte*)image.ImageData.ToPointer() + startX * 2;
                int stride = image.Stride;

                // for each line
                for (int y = startY; y < stopY; y++)
                {
                    ushort* ptr = (ushort*)(basePtr + stride * y);

                    // for each pixel
                    for (int x = startX; x < stopX; x++, ptr++)
                    {
                        *ptr = (ushort)((*ptr >= threshold) ? 65535 : 0);
                    }
                }
            }
        }
    }
}

A.4 Erosion Class

namespace AForge.Imaging.Filters {

using System;

using System.Collections.Generic; using System.Drawing;

using System.Drawing.Imaging; ///<summary>

/// Erosion operator from Mathematical Morphology. ///</summary>

///

///<remarks><para>The filter assigns minimum value of surrounding pixels to each pixel of

/// the result image. Surrounding pixels, which should be processed, are specified by

/// structuring element: 1 - to process the neighbor, -1 - to skip it.</para> ///

///<para>The filter especially useful for binary image processing, where it removes pixels, which


(23)

/// are not surrounded by specified amount of neighbors. It gives ability to remove noisy pixels

/// (stand-alone pixels) or shrink objects.</para> ///

///<para>For processing image with 3x3 structuring element, there are different optimizations

/// available, like <see cref="Erosion3x3"/> and <see cref="BinaryErosion3x3"/>.</para>

///

///<para>The filter accepts 8 and 16 bpp grayscale images and 24 and 48 bpp /// color images for processing.</para>

///

///<para>Sample usage:</para> ///<code>

/// // create filter

/// Erosion filter = new Erosion( ); /// // apply the filter

/// filter.Apply( image ); ///</code>

///

///<para><b>Initial image:</b></para>

///<img src="img/imaging/sample12.png" width="320" height="240" /> ///<para><b>Result image:</b></para>

///<img src="img/imaging/erosion.png" width="320" height="240" /> ///</remarks>

///

///<seealso cref="Dilatation"/> ///<seealso cref="Closing"/> ///<seealso cref="Opening"/> ///<seealso cref="Erosion3x3"/> ///<seealso cref="BinaryErosion3x3"/> ///

publicclassErosion : BaseUsingCopyPartialFilter {

// structuring element

privateshort[,] se = newshort[3, 3] { { 1, 1, 1 }, { 1, 1, 1 }, { 1, 1, 1 } }; privateint size = 3;

// private format translation dictionary

privateDictionary<PixelFormat, PixelFormat> formatTransalations = new

Dictionary<PixelFormat, PixelFormat>(); ///<summary>


(24)

///</summary>

publicoverrideDictionary<PixelFormat, PixelFormat> FormatTransalations {

get { return formatTransalations; } }

///<summary>

/// Initializes a new instance of the <see cref="Erosion"/> class. ///</summary>

///

///<remarks><para>Initializes new instance of the <see cref="Erosion"/> class using

/// default structuring element - 3x3 structuring element with all elements equal to 1.

///</para></remarks> ///

public Erosion() {

// initialize format translation dictionary

formatTransalations[PixelFormat.Format8bppIndexed] = PixelFormat.Format8bppIndexed;

formatTransalations[PixelFormat.Format24bppRgb] = PixelFormat.Format24bppRgb;

formatTransalations[PixelFormat.Format16bppGrayScale] = PixelFormat.Format16bppGrayScale;

formatTransalations[PixelFormat.Format48bppRgb] = PixelFormat.Format48bppRgb;

}

///<summary>

/// Initializes a new instance of the <see cref="Erosion"/> class. ///</summary>

///

///<param name="se">Structuring element.</param> ///

///<remarks><para>Structuring elemement for the erosion morphological operator

/// must be square matrix with odd size in the range of [3, 99].</para></remarks>

///

///<exception cref="ArgumentException">Invalid size of structuring element.</exception>

///


(25)

: this() {

int s = se.GetLength(0);

// check structuring element size

if ((s != se.GetLength(1)) || (s < 3) || (s > 99) || (s % 2 == 0))

thrownewArgumentException("Invalid size of structuring element."); this.se = se;

this.size = s; }

///<summary>

/// Process the filter on the specified image. ///</summary>

///

///<param name="sourceData">Source image data.</param>

///<param name="destinationData">Destination image data.</param>

///<param name="rect">Image rectangle for processing by the filter.</param> ///

protectedoverrideunsafevoid ProcessFilter(UnmanagedImage sourceData, UnmanagedImage destinationData, Rectangle rect)

{

PixelFormat pixelFormat = sourceData.PixelFormat; // processing start and stop X,Y positions

int startX = rect.Left; int startY = rect.Top;

int stopX = startX + rect.Width; int stopY = startY + rect.Height; // structuring element's radius int r = size >> 1;

if ((pixelFormat == PixelFormat.Format8bppIndexed) || (pixelFormat == PixelFormat.Format24bppRgb))

{

int pixelSize = (pixelFormat == PixelFormat.Format8bppIndexed) ? 1 : 3; int dstStride = destinationData.Stride;

int srcStride = sourceData.Stride; // base pointers

byte* baseSrc = (byte*)sourceData.ImageData.ToPointer(); byte* baseDst = (byte*)destinationData.ImageData.ToPointer(); // allign pointers by X

baseSrc += (startX * pixelSize); baseDst += (startX * pixelSize);

if (pixelFormat == PixelFormat.Format8bppIndexed) {


(26)

// grayscale image // compute each line

for (int y = startY; y < stopY; y++) {

byte* src = baseSrc + y * srcStride; byte* dst = baseDst + y * dstStride; byte min, v;

// loop and array indexes int t, ir, jr, i, j;

// for each pixel

for (int x = startX; x < stopX; x++, src++, dst++) {

min = 255;

// for each structuring element's row for (i = 0; i < size; i++)

{

ir = i - r; t = y + ir; // skip row if (t < startY) continue; // break

if (t >= stopY) break;

// for each structuring element's column for (j = 0; j < size; j++)

{

jr = j - r; t = x + jr; // skip column if (t < startX) continue; if (t < stopX) {

if (se[i, j] == 1) {

// get new MIN value v = src[ir * srcStride + jr]; if (v < min)

min = v; }


(27)

} }

// result pixel *dst = min; }

} } else

{

// 24 bpp color image // compute each line

for (int y = startY; y < stopY; y++) {

byte* src = baseSrc + y * srcStride; byte* dst = baseDst + y * dstStride; byte minR, minG, minB, v;

byte* p;

// loop and array indexes int t, ir, jr, i, j;

// for each pixel

for (int x = startX; x < stopX; x++, src += 3, dst += 3) {

minR = minG = minB = 255; // for each structuring element's row for (i = 0; i < size; i++)

{

ir = i - r; t = y + ir; // skip row if (t < startY) continue; // break

if (t >= stopY) break;

// for each structuring element's column for (j = 0; j < size; j++)

{

jr = j - r; t = x + jr; // skip column if (t < startX) continue;


(28)

if (t < stopX) {

if (se[i, j] == 1) {

// get new MIN values

p = &src[ir * srcStride + jr * 3]; // red

v = p[RGB.R]; if (v < minR) minR = v; // green

v = p[RGB.G]; if (v < minG) minG = v; // blue

v = p[RGB.B]; if (v < minB) minB = v; }

} } }

// result pixel

dst[RGB.R] = minR; dst[RGB.G] = minG; dst[RGB.B] = minB; }

} } } else

{

int pixelSize = (pixelFormat == PixelFormat.Format16bppGrayScale) ? 1 : 3;

int dstStride = destinationData.Stride / 2;
int srcStride = sourceData.Stride / 2;

// base pointers
ushort* baseSrc = (ushort*)sourceData.ImageData.ToPointer();
ushort* baseDst = (ushort*)destinationData.ImageData.ToPointer();

// align pointers by X
baseSrc += (startX * pixelSize);
baseDst += (startX * pixelSize);

if (pixelFormat == PixelFormat.Format16bppGrayScale)
{
    // 16 bpp grayscale image

    // compute each line
    for (int y = startY; y < stopY; y++)
    {
        ushort* src = baseSrc + y * srcStride;
        ushort* dst = baseDst + y * dstStride;
        ushort min, v;

        // loop and array indexes
        int t, ir, jr, i, j;

        // for each pixel
        for (int x = startX; x < stopX; x++, src++, dst++)
        {
            min = 65535;

            // for each structuring element's row
            for (i = 0; i < size; i++)
            {
                ir = i - r;
                t = y + ir;

                // skip row
                if (t < startY)
                    continue;
                // break
                if (t >= stopY)
                    break;

                // for each structuring element's column
                for (j = 0; j < size; j++)
                {
                    jr = j - r;
                    t = x + jr;

                    // skip column
                    if (t < startX)
                        continue;
                    if (t < stopX)
                    {
                        if (se[i, j] == 1)
                        {
                            // get new MIN value
                            v = src[ir * srcStride + jr];
                            if (v < min)
                                min = v;
                        }
                    }
                }
            }
            // result pixel
            *dst = min;
        }
    }
}
else
{
    // 48 bpp color image

    // compute each line
    for (int y = startY; y < stopY; y++)
    {
        ushort* src = baseSrc + y * srcStride;
        ushort* dst = baseDst + y * dstStride;
        ushort minR, minG, minB, v;
        ushort* p;

        // loop and array indexes
        int t, ir, jr, i, j;

        // for each pixel
        for (int x = startX; x < stopX; x++, src += 3, dst += 3)
        {
            minR = minG = minB = 65535;

            // for each structuring element's row
            for (i = 0; i < size; i++)
            {
                ir = i - r;
                t = y + ir;

                // skip row
                if (t < startY)
                    continue;
                // break
                if (t >= stopY)
                    break;

                // for each structuring element's column
                for (j = 0; j < size; j++)
                {
                    jr = j - r;
                    t = x + jr;

                    // skip column
                    if (t < startX)
                        continue;
                    if (t < stopX)
                    {
                        if (se[i, j] == 1)
                        {
                            // get new MIN values
                            p = &src[ir * srcStride + jr * 3];
                            // red
                            v = p[RGB.R];
                            if (v < minR)
                                minR = v;
                            // green
                            v = p[RGB.G];
                            if (v < minG)
                                minG = v;
                            // blue
                            v = p[RGB.B];
                            if (v < minB)
                                minB = v;
                        }
                    }
                }
            }
            // result pixel
            dst[RGB.R] = minR;
            dst[RGB.G] = minG;
            dst[RGB.B] = minB;
        }
    }
}
}
}
}
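The pointer loops above implement a plain morphological erosion: each output pixel becomes the minimum of the input pixels covered by the active cells of the structuring element. The same logic can be sketched without unsafe pointers (a Python illustration, not part of the thesis listing; the 3×3 all-ones structuring element and the tiny test image are assumptions for the example):

```python
def erode(img, se):
    """Grayscale erosion: each output pixel is the minimum over the
    structuring element's active (se == 1) neighbours, mirroring the
    row-skip / row-break logic of the C# loops."""
    h, w = len(img), len(img[0])
    size = len(se)
    r = size // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            m = 65535
            for i in range(size):
                t = y + i - r
                if t < 0:      # skip row (the 'continue' in the C# code)
                    continue
                if t >= h:     # past the image (the 'break' in the C# code)
                    break
                for j in range(size):
                    u = x + j - r
                    if u < 0 or u >= w or se[i][j] != 1:
                        continue
                    m = min(m, img[t][u])
            out[y][x] = m
    return out

# 3x3 all-ones structuring element and a toy image (assumptions)
se = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
img = [[9, 9, 9],
       [9, 1, 9],
       [9, 9, 9]]
# the single dark pixel erodes its whole 3x3 neighbourhood down to 1
```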



APPENDIX B

PROGRAM LISTING: MOTION DETECTION

WITH WATER EFFECT



B.1 Camera Window

using System;
using System.Collections;
using System.ComponentModel;
using System.Drawing;
using System.Data;
using System.Windows.Forms;
using System.Threading;

namespace Gerakan_Efek_Air
{
    public partial class CameraWindow : Control
    {
        private Camera camera = null;
        private bool autosize = false;
        private bool needSizeUpdate = false;
        private bool firstFrame = true;
        private Color rectColor = Color.Black;

        // AutoSize property
        [DefaultValue(false)]
        public bool AutoSize
        {
            get { return autosize; }
            set
            {
                autosize = value;
                UpdatePosition();
            }
        }

        // Camera property
        [Browsable(false)]
        public Camera Camera
        {
            get { return camera; }
            set
            {
                // lock
                Monitor.Enter(this);
                // detach event
                if (camera != null)
                {
                    camera.NewFrame -= new EventHandler(camera_NewFrame);
                    timer.Stop();
                }
                camera = value;
                needSizeUpdate = true;
                firstFrame = true;
                // attach event
                if (camera != null)
                {
                    camera.NewFrame += new EventHandler(camera_NewFrame);
                    timer.Start();
                }
                // unlock
                Monitor.Exit(this);
            }
        }

        // bitmap property (backing field added so the accessors do not recurse)
        private Bitmap bitmap;
        public System.Drawing.Bitmap bmp
        {
            get { return bitmap; }
            set { bitmap = value; }
        }

        public CameraWindow()
        {
            InitializeComponent();
            SetStyle(ControlStyles.AllPaintingInWmPaint | ControlStyles.DoubleBuffer |
                ControlStyles.ResizeRedraw | ControlStyles.UserPaint, true);
        }

        public CameraWindow(IContainer container)
        {
            container.Add(this);
            InitializeComponent();
        }

        // Paint control
        protected override void OnPaint(PaintEventArgs pe)
        {
            if ((needSizeUpdate) || (firstFrame))
            {
                UpdatePosition();
                needSizeUpdate = false;
            }
            // lock
            Monitor.Enter(this);
            Graphics g = pe.Graphics;
            Rectangle rc = this.ClientRectangle;
            Pen pen = new Pen(rectColor, 1);
            // draw rectangle
            g.DrawRectangle(pen, rc.X, rc.Y, rc.Width - 1, rc.Height - 1);
            if (camera != null)
            {
                try
                {
                    camera.Lock();
                    // draw frame
                    if (camera.LastFrameLocal != null)
                    {
                        g.DrawImage(camera.LastFrameLocal, rc.X + 1, rc.Y + 1, rc.Width - 2, rc.Height - 2);
                        firstFrame = false;
                    }
                    else
                    {
                        // create font and brush
                        Font drawFont = new Font("Arial", 12);
                        SolidBrush drawBrush = new SolidBrush(Color.White);
                        g.DrawString("Connecting ...", drawFont, drawBrush, new PointF(5, 5));
                        drawBrush.Dispose();
                        drawFont.Dispose();
                    }
                }
                catch (Exception)
                {
                }
                finally
                {
                    camera.Unlock();
                }
            }
            pen.Dispose();
            // unlock
            Monitor.Exit(this);
            base.OnPaint(pe);
        }

        // Update position and size of the control
        public void UpdatePosition()
        {
            // lock
            Monitor.Enter(this);
            if ((autosize) && (this.Parent != null))
            {
                Rectangle rc = this.Parent.ClientRectangle;
                int width = 320;
                int height = 240;
                if (camera != null)
                {
                    camera.Lock();
                    // get frame dimension
                    if (camera.LastFrameLocal != null)
                    {
                        width = camera.LastFrameLocal.Width;
                        height = camera.LastFrameLocal.Height;
                    }
                    camera.Unlock();
                }
                this.SuspendLayout();
                this.Location = new Point((rc.Width - width - 2) / 2, (rc.Height - height - 2) / 2);
                this.Size = new Size(width + 2, height + 2);
                this.ResumeLayout();
            }
            // unlock
            Monitor.Exit(this);
        }

        // On new frame ready
        private void camera_NewFrame(object sender, System.EventArgs e)
        {
            Invalidate();
        }
    }
}

B.2 Camera

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Threading;
using VideoSource;
using AForge.Imaging;
using AForge.Imaging.Filters;

namespace Gerakan_Efek_Air
{
    public class Camera
    {
        private IVideoSource videoSource = null;
        private CaptureDevice localSource = null;
        private Bitmap lastFrameLocal = null;
        private Bitmap lastFrame = null;

        // image-processing filters
        private IFilter grayscaleFilter = new GrayscaleBT709();
        private Difference differenceFilter = new Difference();
        private Threshold thresholdFilter = new Threshold(15);
        private IFilter erosionFilter = new Erosion();
        private Merge mergeFilter = new Merge();
        private IFilter extrachChannel = new ExtractChannel(RGB.R);
        private ReplaceChannel replaceChannel = new ReplaceChannel(RGB.R, null);

        // image width and height
        private int width = -1, height = -1;
        private Bitmap backgroundFrame = null;
        private BitmapData bitmapData;

        public event EventHandler NewFrame;

        // LastFrame property
        public Bitmap LastFrame
        {
            get { return lastFrame; }
        }

        // Width property
        public int Width
        {
            get { return width; }
        }

        // Height property
        public int Height
        {
            get { return height; }
        }

        // Running property
        public bool Running
        {
            get { return (localSource == null) ? false : localSource.Running; }
        }

        // LastFrameLocal property
        public Bitmap LastFrameLocal
        {
            get { return lastFrameLocal; }
        }

        // Constructor
        public Camera(CaptureDevice localSource)
        {
            this.localSource = localSource;
            localSource.NewFrame += new CameraEventHandler(video_NewFrame);
        }

        // Start video source
        public void Start()
        {
            if (videoSource != null)
            {
                videoSource.Start();
            }
            if (localSource != null)
            {
                localSource.Start();
            }
        }

        // Signal video source to stop
        public void SignalToStop()
        {
            if (videoSource != null)
            {
                videoSource.SignalToStop();
            }
            if (localSource != null)
            {
                localSource.SignalToStop();
            }
        }

        // Wait for video source to stop
        public void WaitForStop()
        {
            if (videoSource != null)
            {
                videoSource.WaitForStop();
            }
            if (localSource != null)
            {
                localSource.WaitForStop();
            }
        }

        // Abort camera
        public void Stop()
        {
            if (videoSource != null)
            {
                videoSource.Stop();
            }
            if (localSource != null)
            {
                localSource.Stop();
            }
        }

        // Lock it
        public void Lock()
        {
            Monitor.Enter(this);
        }

        // Unlock it
        public void Unlock()
        {
            Monitor.Exit(this);
        }

        private void video_NewFrame(object sender, CameraEventArgs e)
        {
            try
            {
                // lock
                Monitor.Enter(this);
                // dispose old frame
                if (lastFrame != null)
                {
                    lastFrame.Dispose();
                }
                lastFrame = (Bitmap)e.Bitmap.Clone();
                if (backgroundFrame == null)
                {
                    // create initial background image
                    backgroundFrame = grayscaleFilter.Apply(lastFrame);
                    // get image dimension
                    width = lastFrame.Width;
                    height = lastFrame.Height;
                    // just return for the first time
                    return;
                }

                Bitmap tmpImage;
                // apply the grayscale filter
                tmpImage = grayscaleFilter.Apply(lastFrame);
                // set background frame as an overlay for the difference filter
                differenceFilter.OverlayImage = backgroundFrame;
                // apply difference filter
                Bitmap tmpImage2 = differenceFilter.Apply(tmpImage);
                // lock the temporary image and apply some filters on the locked data
                bitmapData = tmpImage2.LockBits(new Rectangle(0, 0, width, height),
                    ImageLockMode.ReadWrite, PixelFormat.Format8bppIndexed);
                // threshold filter
                thresholdFilter.ApplyInPlace(bitmapData);
                // erosion filter
                Bitmap tmpImage3 = erosionFilter.Apply(bitmapData);
                Bitmap tmpImage4 = tmpImage3.Clone(new Rectangle(0, 0, tmpImage3.Width, tmpImage3.Height),
                    PixelFormat.Format24bppRgb);
                // unlock temporary image
                tmpImage2.UnlockBits(bitmapData);
                tmpImage2.Dispose();
                // dispose old background and set background to current
                backgroundFrame.Dispose();
                backgroundFrame = tmpImage;
                lastFrameLocal = tmpImage4;
            }
            catch (Exception)
            {
            }
            finally
            {
                // unlock
                Monitor.Exit(this);
            }
            // notify client
            if (NewFrame != null)
                NewFrame(this, new EventArgs());
        }
    }
}
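The filter chain in video_NewFrame — grayscale, Difference against the stored background, Threshold(15), then Erosion — is what turns a frame pair into a black-and-white motion mask: pixels that changed since the previous frame come out white, everything else black. A minimal sketch of the difference-and-threshold core on plain arrays (Python, for illustration only; the helper name and the toy two-pixel frames are assumptions, and the erosion step that removes isolated noise pixels is omitted here):

```python
def detect_motion(frame, background, threshold=15):
    """Difference + threshold, as in the Camera class: a pixel is
    marked as motion (255) when it differs from the background by
    more than the threshold, otherwise it stays black (0)."""
    h, w = len(frame), len(frame[0])
    return [[255 if abs(frame[y][x] - background[y][x]) > threshold else 0
             for x in range(w)]
            for y in range(h)]

# toy grayscale frames (assumption): one pixel changed brightness
background = [[10, 10], [10, 10]]
frame      = [[10, 200], [12, 10]]
# only the changed pixel exceeds the threshold of 15
```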



B.4 Gerakan Efek Air

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Collections;
using VideoSource;

namespace Gerakan_Efek_Air
{
    public partial class FormEfek : Form
    {
        // webcam capture device
        private static CaptureDevice localSource = new CaptureDevice();
        // camera wrapping the capture device
        private static Camera camera = new Camera(localSource);
        // frame images
        private Bitmap image1;
        private Bitmap OriginalImage;
        // mouse position
        public static int pointX;
        public static int pointY;
        // the color white
        private Color cWhite = System.Drawing.Color.FromArgb(255, 255, 255, 255);
        // form state
        FormState formstate = new FormState();

        // Constructor
        public FormEfek()
        {
            InitializeComponent();
        }

        // Form load
        private void FormEfek_Load(object sender, System.EventArgs e)
        {
            // show the capture device form
            CaptureDeviceForm form = new CaptureDeviceForm();
            if (form.ShowDialog(this) == DialogResult.OK)
            {
                localSource.VideoSource = form.Device; // camera selected
                this.Cursor = Cursors.WaitCursor;
                // start the camera
                camera.Start();
                this.Cursor = Cursors.Default;
            }
            if (camera != null)
            {
                // start timerImage to grab frames
                timerImage.Start();
            }
        }

        // timerImage tick: a frame is grabbed
        private void timerImage_Tick(object sender, System.EventArgs e)
        {
            image1 = camera.LastFrameLocal;
            // start timerVertex to process the frame
            timerVertex.Start();
        }

        // timerVertex tick: the frame is processed
        private void timerVertex_Tick(object sender, System.EventArgs e)
        {
            if (image1 != null) // a frame was obtained
            {
                labelMotion.Text = " Motion Detection Ok";
                OriginalImage = (Bitmap)image1.Clone();
                ArrayList pxList = scan(OriginalImage); // scan the frame
                // count the white pixels
                int a = pxList.Count;
                int[] arrayX = new int[a];
                int[] arrayY = new int[a];
                // collect every detected point
                Vertex prv;
                for (int k = 0; k < pxList.Count; k++)
                {
                    prv = (Vertex)pxList[k];
                    arrayX[k] = prv.X;
                    arrayY[k] = prv.Y;
                }
                // number of points found
                int countX = arrayX.Length;
                int countY = arrayY.Length;
                // use the detected motion to set the mouse position
                if (countX != 0)
                {
                    // X coordinate
                    pointX = Convert.ToInt32(arrayX[countX / 2]);
                }
                else
                {
                    pointX = 0;
                }
                labelPointX.Text = "pointX = " + pointX * 2 + ""; // show the cursor's X position
                if (countY != 0)
                {
                    // Y coordinate
                    pointY = Convert.ToInt32(arrayY[countY / 2]);
                }
                else
                {
                    pointY = 0;
                }
                labelPointY.Text = "pointY = " + pointY * 2 + ""; // show the cursor's Y position
            }
        }

        // scan the original image
        private ArrayList scan(Bitmap imgScr)
        {
            ArrayList pixel = new ArrayList();
            for (int j = 0; j < imgScr.Height; j = j + 3) // Y coordinate
            {
                for (int i = 0; i < imgScr.Width; i = i + 3) // X coordinate
                {
                    if (imgScr.GetPixel(i, j) == cWhite)
                    {
                        Vertex pix = new Vertex(i, j);
                        pixel.Add(pix);
                    }
                }
            }
            return pixel;
        }

        private void FormEfek_KeyPress(object sender, System.Windows.Forms.KeyPressEventArgs e)
        {
            if (e.KeyChar == (char)Keys.F) // press Shift+F for full screen
            {
                formstate.Maximize(this);
                // while full screen, the cursor is driven by the motion
                // detected by the webcam
                timerCursor.Start();
            }
            else if (e.KeyChar == (char)Keys.Escape) // press Escape to restore the form
            {
                formstate.Restore(this);
                // the timer stops; the mouse works normally again
                timerCursor.Stop();
            }
        }

        // timerCursor tick
        private void timerCursor_Tick(object sender, System.EventArgs e)
        {
            // the mouse position follows the detected Vertex point
            Cursor.Position = new Point(pointX * 2, pointY * 2);
        }
    }
}
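timerVertex_Tick reduces the scanned white pixels to a single cursor target: it takes the middle element of the X and Y arrays, and timerCursor_Tick then doubles it to map frame coordinates onto the screen (the 2× factor matches pointX * 2 in the listing; the actual frame and screen sizes depend on the setup). A compact sketch of that selection (Python, illustration only; the helper name and the sample coordinates are assumptions):

```python
def cursor_from_white_pixels(pixels, scale=2):
    """Pick the 'middle' detected white pixel as the cursor target,
    mirroring arrayX[countX / 2] and arrayY[countY / 2] in FormEfek,
    then scale frame coordinates up to screen coordinates.
    Returns (0, 0) when no motion was detected."""
    if not pixels:
        return (0, 0)
    xs = [p[0] for p in pixels]
    ys = [p[1] for p in pixels]
    return (xs[len(xs) // 2] * scale, ys[len(ys) // 2] * scale)

# three white pixels found by the scan -> the middle one, scaled by 2
```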

B.5 Water Effect Picture Box

using System;
using System.Collections;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;
using System.Windows.Forms;

namespace Gerakan_Efek_Air
{
    // (class declaration fell on a page break in the listing; the
    // UserControl base class is assumed)
    public partial class WaterEffectPictureBox : UserControl
    {
        private Bitmap bmp;
        private short[, ,] waves;
        private int waveWidth;
        private int waveHeight;
        private int activeBuffer = 0;
        private bool weHaveWaves;
        private int bmpHeight;
        private int bmpWidth;
        private byte[] bmpBytes;
        private BitmapData bmpBitmapData;
        private int scale;

        public WaterEffectPictureBox()
        {
            InitializeComponent();
            effecttimer.Enabled = true;
            effecttimer.Interval = 50;
            SetStyle(ControlStyles.UserPaint, true);
            SetStyle(ControlStyles.AllPaintingInWmPaint, true);
            SetStyle(ControlStyles.DoubleBuffer, true);
            this.BackColor = Color.White;
            weHaveWaves = false;
            scale = 1;
        }

        public WaterEffectPictureBox(Bitmap bmp)
            : this()
        {
            this.ImageBitmap = bmp;
        }

        public Bitmap ImageBitmap
        {
            get { return bmp; }
            set
            {
                bmp = value;
                bmpHeight = bmp.Height;
                bmpWidth = bmp.Width;
                waveWidth = bmpWidth >> scale;
                waveHeight = bmpHeight >> scale;
                waves = new Int16[waveWidth, waveHeight, 2];
                bmpBytes = new Byte[bmpWidth * bmpHeight * 4];
                bmpBitmapData = bmp.LockBits(new Rectangle(0, 0, bmpWidth, bmpHeight),
                    ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
                Marshal.Copy(bmpBitmapData.Scan0, bmpBytes, 0, bmpWidth * bmpHeight * 4);
            }
        }

        public int Scale
        {
            get { return scale; }
            set { scale = value; }
        }

        private void effecttimer_Tick(object sender, System.EventArgs e)
        {
            if (weHaveWaves)
            {
                Invalidate();
                ProcessWaves();
            }
        }

        ///<summary>
        /// Paint handler: calculates the final effect image out of
        /// the wave buffers and the original image.
        ///</summary>
        ///<param name="sender"></param>
        ///<param name="e"></param>
        public void WaterEffectPictureBox_Paint(object sender, System.Windows.Forms.PaintEventArgs e)
        {
            using (Bitmap tmp = (Bitmap)bmp.Clone())
            {
                int xOffset, yOffset;
                byte alpha;
                if (weHaveWaves)
                {
                    BitmapData tmpData = tmp.LockBits(new Rectangle(0, 0, bmpWidth, bmpHeight),
                        ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
                    byte[] tmpBytes = new Byte[bmpWidth * bmpHeight * 4];
                    Marshal.Copy(tmpData.Scan0, tmpBytes, 0, bmpWidth * bmpHeight * 4);

                    for (int x = 1; x < bmpWidth - 1; x++)
                    {
                        for (int y = 1; y < bmpHeight - 1; y++)
                        {
                            int waveX = (int)x >> scale;
                            int waveY = (int)y >> scale;
                            // check bounds
                            if (waveX <= 0) waveX = 1;
                            if (waveY <= 0) waveY = 1;
                            if (waveX >= waveWidth - 1) waveX = waveWidth - 2;
                            if (waveY >= waveHeight - 1) waveY = waveHeight - 2;
                            // this gives us the effect of water breaking the light
                            xOffset = (waves[waveX - 1, waveY, activeBuffer] - waves[waveX + 1, waveY, activeBuffer]) >> 3;
                            yOffset = (waves[waveX, waveY - 1, activeBuffer] - waves[waveX, waveY + 1, activeBuffer]) >> 3;
                            if ((xOffset != 0) || (yOffset != 0))
                            {
                                // check bounds
                                if (x + xOffset >= bmpWidth - 1) xOffset = bmpWidth - x - 1;
                                if (y + yOffset >= bmpHeight - 1) yOffset = bmpHeight - y - 1;
                                if (x + xOffset < 0) xOffset = -x;
                                if (y + yOffset < 0) yOffset = -y;
                                // generate alpha (the original expression fell
                                // on a page break; this form is an assumption)
                                alpha = (byte)(200 - ((xOffset + yOffset) >> 1));
                                if (alpha < 0) alpha = 0;
                                if (alpha > 255) alpha = 254;
                                // set colors
                                tmpBytes[4 * (x + y * bmpWidth)] = bmpBytes[4 * (x + xOffset + (y + yOffset) * bmpWidth)];
                                tmpBytes[4 * (x + y * bmpWidth) + 1] = bmpBytes[4 * (x + xOffset + (y + yOffset) * bmpWidth) + 1];
                                tmpBytes[4 * (x + y * bmpWidth) + 2] = bmpBytes[4 * (x + xOffset + (y + yOffset) * bmpWidth) + 2];
                                tmpBytes[4 * (x + y * bmpWidth) + 3] = alpha;
                            }
                        }
                    }
                    // copy data back
                    Marshal.Copy(tmpBytes, 0, tmpData.Scan0, bmpWidth * bmpHeight * 4);
                    tmp.UnlockBits(tmpData);
                }
                e.Graphics.DrawImage(tmp, 0, 0, this.ClientRectangle.Width, this.ClientRectangle.Height);
            }
        }

        ///<summary>
        /// This is the method that actually moves the waves around and
        /// simulates the behaviour of water.
        ///</summary>
        private void ProcessWaves()
        {
            int newBuffer = (activeBuffer == 0) ? 1 : 0;
            bool wavesFound = false;
            for (int x = 1; x < waveWidth - 1; x++)
            {
                for (int y = 1; y < waveHeight - 1; y++)
                {
                    waves[x, y, newBuffer] = (short)(
                        ((waves[x - 1, y - 1, activeBuffer] +
                          waves[x, y - 1, activeBuffer] +
                          waves[x + 1, y - 1, activeBuffer] +
                          waves[x - 1, y, activeBuffer] +
                          waves[x + 1, y, activeBuffer] +
                          waves[x - 1, y + 1, activeBuffer] +
                          waves[x, y + 1, activeBuffer] +
                          waves[x + 1, y + 1, activeBuffer]) >> 2) -
                        waves[x, y, newBuffer]);
                    // damping
                    if (waves[x, y, newBuffer] != 0)
                    {
                        waves[x, y, newBuffer] -= (short)(waves[x, y, newBuffer] >> 4);
                        wavesFound = true;
                    }
                }
            }
            weHaveWaves = wavesFound;
            activeBuffer = newBuffer;
        }

        ///<summary>
        /// This function is used to start a wave by simulating a round drop.
        ///</summary>
        ///<param name="x">x position of the drop</param>
        ///<param name="y">y position of the drop</param>
        ///<param name="height">height of the drop</param>
        private void PutDrop(int x, int y, short height)
        {
            weHaveWaves = true;
            int radius = 20;
            double dist;
            for (int i = -radius; i <= radius; i++)
            {
                for (int j = -radius; j <= radius; j++)
                {
                    if (((x + i >= 0) && (x + i < waveWidth - 1)) && ((y + j >= 0) && (y + j < waveHeight - 1)))
                    {
                        dist = Math.Sqrt(i * i + j * j);
                        if (dist < radius)
                            waves[x + i, y + j, activeBuffer] = (short)(Math.Cos(dist * Math.PI / radius) * height);
                    }
                }
            }
        }

        ///<summary>
        /// The MouseMove handler.
        ///</summary>
        ///<param name="sender"></param>
        ///<param name="e"></param>
        private void WaterEffectPictureBox_MouseMove(object sender, System.Windows.Forms.MouseEventArgs e)
        {
            int realX = (int)((e.X / (double)this.ClientRectangle.Width) * waveWidth);
            int realY = (int)((e.Y / (double)this.ClientRectangle.Height) * waveHeight);
            PutDrop(realX, realY, 80);
        }
    }
}
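ProcessWaves and PutDrop above are the classic two-buffer ripple algorithm: each new height is the neighbour sum shifted right (the >> 2) minus the cell's previous value in the other buffer, damped by subtracting 1/16 of itself, and PutDrop seeds the pool with a cosine-profiled dent (radius 20 in the thesis). The same two-buffer update reduced to one dimension (a Python sketch, illustration only; the function names and the 21-cell pool are assumptions):

```python
import math

def put_drop(waves, x, radius, height, buf=0):
    """Seed a cosine-profiled drop, as PutDrop does in 2-D."""
    for i in range(-radius, radius + 1):
        if 0 <= x + i < len(waves[0]) and abs(i) < radius:
            waves[buf][x + i] = math.cos(i * math.pi / radius) * height

def step(waves, active):
    """One two-buffer wave step: neighbour average minus the value
    left in the other buffer, then damping by 1/16 (the >> 4 in
    ProcessWaves). Returns the index of the new active buffer."""
    new = 1 - active
    for x in range(1, len(waves[0]) - 1):
        v = (waves[active][x - 1] + waves[active][x + 1]) / 2 - waves[new][x]
        waves[new][x] = v - v / 16
    return new

# seed a drop in the middle of a 21-cell pool and advance one step:
# the crest's energy spreads into the neighbouring cells
waves = [[0.0] * 21, [0.0] * 21]
put_drop(waves, 10, 5, 80)
active = step(waves, 0)
```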

B.6 Vertex

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Collections;

namespace Gerakan_Efek_Air
{
    public class Vertex
    {
        ArrayList vertices = new ArrayList();
        // x, y coordinates
        private int x;
        private int y;

        // Vertex loops over the scan results so that the
        // data can be consumed continuously
        public Vertex(int i, int j)
        {
            this.X = i;
            this.Y = j;
        }

        public void insert(Vertex v)
        {
            vertices.Add(v);
        }

        public ArrayList GetPixels()
        {
            return vertices;
        }

        public int GetCount()
        {
            return vertices.Count;
        }

        public void AddPixels(Vertex v)
        {
            vertices.Add(v);
        }

        public int X
        {
            get { return x; }
            set { x = value; }
        }

        public int Y
        {
            get { return y; }
            set { y = value; }
        }
    }
}

B.7 Form State

using System;
using System.Drawing;
using System.Windows.Forms;
using System.Runtime.InteropServices;

namespace Gerakan_Efek_Air
{
    ///<summary>
    /// Selected Win API function calls
    ///</summary>
    public class WinApi
    {
        [DllImport("user32.dll", EntryPoint = "GetSystemMetrics")]
        public static extern int GetSystemMetrics(int which);

        [DllImport("user32.dll")]
        public static extern void SetWindowPos(IntPtr hwnd, IntPtr hwndInsertAfter,
            int X, int Y, int width, int height, uint flags);

        private const int SM_CXSCREEN = 0;
        private const int SM_CYSCREEN = 1;
        private static IntPtr HWND_TOP = IntPtr.Zero;
        private const int SWP_SHOWWINDOW = 64; // 0x0040

        public static int ScreenX
        {
            get { return GetSystemMetrics(SM_CXSCREEN); }
        }

        public static int ScreenY
        {
            get { return GetSystemMetrics(SM_CYSCREEN); }
        }

        public static void SetWinFullScreen(IntPtr hwnd)
        {
            SetWindowPos(hwnd, HWND_TOP, 0, 0, ScreenX, ScreenY, SWP_SHOWWINDOW);
        }
    }

    ///<summary>
    /// Class used to preserve / restore the state of the form
    ///</summary>
    public class FormState
    {
        private FormWindowState winState;
        private FormBorderStyle brdStyle;
        private bool topMost;
        private Rectangle bounds;
        private bool IsMaximized = false;

        public void Maximize(Form targetForm)
        {
            if (!IsMaximized)
            {
                IsMaximized = true;
                Save(targetForm);
                targetForm.WindowState = FormWindowState.Maximized;
                targetForm.FormBorderStyle = FormBorderStyle.None;
                targetForm.TopMost = true;
                WinApi.SetWinFullScreen(targetForm.Handle);
            }
        }

        public void Save(Form targetForm)
        {
            winState = targetForm.WindowState;
            brdStyle = targetForm.FormBorderStyle;
            topMost = targetForm.TopMost;
            bounds = targetForm.Bounds;
        }

        public void Restore(Form targetForm)
        {
            targetForm.WindowState = winState;
            targetForm.FormBorderStyle = brdStyle;
            targetForm.TopMost = topMost;
            targetForm.Bounds = bounds;
            IsMaximized = false;
        }
    }
}

B.8 Program

using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows.Forms;

namespace Gerakan_Efek_Air
{
    static class Program
    {
        ///<summary>
        /// The application entry point: loads FormEfek.
        ///</summary>
        [STAThread]
        static void Main()
        {
            Application.EnableVisualStyles();
            Application.SetCompatibleTextRenderingDefault(false);
            Application.Run(new FormEfek());
        }
    }
}



CHAPTER I
INTRODUCTION

1.1. Background

A motion-detection application is a promising subject to develop in the field of computer vision. Motion detection belongs to the class of NUI (Natural User Interface) interfaces, the evolution of the CLI and GUI. Motion detection is now widely used in fields such as entertainment, gaming, and advertising. It draws considerable interest because users can experience for themselves the sensation of controlling an effect in a video game with a simple hand or body movement.

A simple motion detector can be built with nothing more than software on a PC and a webcam. To make the motion detector interactive, it is important to add an effect to the displayed image. The effect is intended to let the user interact more naturally within the available interactive space.

This calls for a program that tightly integrates the webcam motion detector with the image on the PC, so that an interactive relationship is established. This is achieved through image processing that matches the desired effect. The application is written in the C# programming language in Microsoft Visual Studio 2010, with the AForge.Net Framework used to build the webcam motion detector.

Based on this background, the author chose to pursue this problem in a research project titled "Perancangan dan Realisasi Sistem Pendeteksi Gerakan Sebagai Natural User Interface (NUI) Menggunakan Bahasa C#" (Design and Realization of a Motion Detection System as a Natural User Interface (NUI) Using C#).



1.2. Problem Identification

Based on the background above, the main problem addressed in this final project is the design and realization of a motion detector using Microsoft Visual Studio 2010 with the C# language.

1.3. Problem Formulation

This final project addresses the following questions:

 What is a Natural User Interface (NUI)?

 How does motion detection with a webcam work, using the AForge.Net Framework?

 How can the design be made to work according to the desired criteria?

1.4. Aims and Benefits

The aims of this final project are:

 To understand the concept and working of motion detection using the AForge.Net Framework.

 To understand the concept and working of water-effect image processing.

 To build and develop a webcam motion-detection application.

 To build and develop an image-processing application that produces a water effect.

 To combine the motion-detection application with the water-effect image processing.

 To experiment in the interactive space and determine suitable parameters.

 To report the results of the parameter-tuning experiments and of testing the application.

1.5. Scope

The system built in this final project is limited to the following:

1. Building a motion-detection application.

2. Building an image-processing application with a water effect.

3. Integrating the motion detector and the water effect in a main program.

4. Experimenting to determine the required parameters.

1.6. Methodology

This final project uses the following writing methodology:

 Literature study
Theoretical study of books and journal papers on programming in Visual Studio 2010, on using the AForge.Net library, and on water-effect image processing.

 System design
Designing a motion-detection system using the image filters in AForge.Net, then rendering a water effect on the displayed image according to the movement that occurs in the interactive space.

 System analysis
Analyzing the designed system so that it meets the desired criteria.



Organization of the Report

This final project report is organized as follows:

Chapter I, Introduction
Covers the background of the final project, the problem formulation, the aims and objectives, the scope, the methodology, and the organization of this report.

Chapter II, Theoretical Foundations
Discusses the theory of computer vision, image processing, the Natural User Interface (NUI), and the classes of AForge.Net.

Chapter III, System Design and Realization
Explains the physical placement of the motion detector, the design of webcam motion detection, and the design of the water effect.

Chapter IV, Observations and Data Analysis
Covers the implementation and analysis of the motion detector, the search for white points in the webcam image, and the process of forming the water effect.

Chapter V, Conclusions and Suggestions
Presents the conclusions of this final project and suggestions for improvement in the future.

REFERENCES
APPENDICES


CHAPTER V

CONCLUSIONS AND SUGGESTIONS

This closing chapter presents the conclusions drawn from the research and analysis in this final project, together with suggestions for those concerned with building the "motion detection system using Microsoft Visual Studio 2010 with the C# programming language".

5.1 Conclusions

1. The implementation of motion detection with a water effect runs well, in accordance with the design.

2. The design and implementation of motion detection using the filters of the AForge.Net framework were successfully applied in the motion-detection application.

3. The design and implementation of a water effect on an image were successfully carried out. The water effect appears on the image whenever motion occurs.

To make the implementation more effective, several adjustments were made to the program, concerning the search for the position of the water effect. From the experiments, the adjustments are:

1. The put-drop radius is set to 20 pixels.

2. The height of the water effect in the put drop is set to 200 pixels.

3. The timer tick used to locate the put-drop point is set to 100 ms.

These settings let the program run more effectively. The final result of this project is a motion-detection application whose output is displayed as an image with a water effect.

5.2 Suggestions

1. The designer can add other image-processing effects to the motion detector so that it appears more interactive.

2. The designer must attend to the parameters of the motion-detection setup, such as the placement of the webcam relative to the laptop screen, the timer settings, the lighting, and the use of a graphics card (VGA) suitable for image processing.


REFERENCES

1. Bradski, G., and Kaehler, A., Learning OpenCV: Computer Vision with the OpenCV Library, O'Reilly, 2008.

2. Efford, N., Digital Image Processing: A Practical Introduction Using Java, Addison-Wesley, 2000.

3. Matthews, J., An Introduction to Machine Vision, 2000.

4. Wahana Komputer, Microsoft Visual C# 2010, Penerbit Andi, 2011.