Paper/Journal Title · Authors · Publication Type · Conference/Journal · Published Year · Published Month & Day · Field of Research · Department · Keywords · Abstract
Design of High Performance Analog-to-Digital Converter. Dr Myo Min Than. National Conference on Science and Engineering, 2015. (Conference paper; Combinatorial Optimization)
Enhancement of SAR Algorithm for Analog-to-Digital Converter. Dr Myo Min Than. Proceedings of the 2015 International Conference on Future Computational Technologies, Singapore, 2015. (Conference paper; Combinatorial Optimization)
Investigation of Parameters Strategy Tracing Results Dependence in P-CAD. Dr Myo Min Than. 12th Conference of MIED Science and Technology, MIED, Moscow, 2005. (Conference paper; Combinatorial Optimization)
Enhancement of Basic Characteristics of Analog-Digital Computing System by Realization of Arithmetic Operations. Dr Myo Min Than. Collection of Science Work, MIET, Moscow, 2007. (Conference paper; Cloud Computing)
Reducing the Number of Commands in the Organization of Arithmetic Operation "Division" of Analog-Digital Computing System. Dr Myo Min Than. 14th Conference of MIET Science and Technology, MIET, 2006. (Conference; Cloud Computing)
Computing Error Correction by Using the Method of Operand Division in Analog-Digital Computing System. Dr Myo Min Than. Collection of Science Work, MIED, Moscow, 2006. (Conference; Cloud Computing)
Hardware Requirement Reducing by Using the Principle of Operand Division. Dr Myo Min Than. Collection of Science Work, MIED, Moscow, 2006. (Conference; Cloud Computing)
Implementation of Control System for Analog-Digital Computing System. Dr Myo Min Than. 13th Conference of MIED Science and Technology, MIED, Moscow, 2006. (Conference; Cloud Computing)
Enhancement of Basic Characteristics of Analog-Digital Computing System by Realization of Arithmetic Operations. Dr Myo Min Than. Collection of Science Work, MEPhI, Moscow, 2008. (Conference; Automatic Control Systems)
Reducing the Number of Commands in the Organization of Arithmetic Operation "Division" of Analog-Digital Computing System. Dr Myo Min Than. 14th Conference of MIET Science and Technology, MIET, Moscow, 2007. (Conference; Cloud Computing)
Computing Error Correction by Using the Method of Operand Division in Analog-Digital Computing System. Dr Myo Min Than. Collection of Science Work, MIED, Moscow, 2006. (Conference; Cloud Computing)
Hardware Requirement Reducing by Using the Principle of Operand Division. Dr Myo Min Than. Collection of Science Work, MIED, Moscow, 2006. (Conference; Cloud Computing)
Implementation of Control System for Analog-Digital Computing System. Dr Myo Min Than. 13th Conference of MIED Science and Technology, MIED, Moscow, 2006. (Conference; Cloud Computing)
Investigation of Parameters Strategy Tracing Results Dependence in P-CAD. Dr Myo Min Than. 12th Conference of MIET Science and Technology, MIET, Moscow, 2005. (Conference; Automatic Control Systems)
Design of Time Reduction for Successive Approximation Register A/D Converter. Dr Myo Min Than. 7th International Conference on Information Technology and Electrical Engineering, Thailand, 2015. (Conference paper; Cloud Computing)
Program of Arithmetic Operation "Division" for Enhancement of Converting Speed in Analog-Digital Computing System. Dr Myo Min Than. Program for IBM, Registration No. 2008615231, Russia, 2008. (Cloud Computing)
Analog-to-Digital Converter. Dr Myo Min Than. Patent No. 2007139042, Russia, 2008. (Big Data / Machine Learning)
Reducing the Execution Steps of Arithmetic Operation in Analog-Digital Computing System. Dr Myo Min Than. Natural and Technological Science No. 2, Moscow, 2008. (International journal paper; Cloud Computing)
Enhancement of Execution Speed in the Operation "Division" of Analog-Digital Computing System. Dr Myo Min Than. 15th Conference of MIED Science and Technology, MIED, Moscow, 2008. (Conference paper; Cloud Computing)
Reducing the Execution Time of Arithmetic Operation in Analog-Digital Computing System. Dr Myo Min Than. Collection of Science Work, MIED, Moscow, 2008. (Conference paper)
Character Creation in Journal Kyaw Ma Ma Lay's Novel "ကမ္ဘာမြေဝယ်" ("On This Earth"). Daw Htay Htay Aung. (7.7.2023), 162.

This paper studies character creation in Journal Kyaw Ma Ma Lay's novel "ကမ္ဘာမြေဝယ်". The novel depicts farmers who wished to work farms of their own but were driven into poverty by the evils of the colonial system; it was written with the aim that such evil systems be abolished and that a just, modern new system emerge.

(Myanmar language; UCSMGY; International journal paper; 2023)
Review on Awareness of Microsoft Office 365. May Aye Chan Aung, Dr. Thuzar Khin. September.
Keywords: Office 365; Microsoft Office 365; apps; service plan

Microsoft has been the backbone of most educational institutions. Communication and scheduling are done through Outlook. Documents are created and sent in Word format. Presentations are planned and delivered through PowerPoint. Therefore, Microsoft (MS) Office is important for academics: every student, faculty member, and staff member relies on Microsoft software for day-to-day tasks. Over the past decade, Microsoft Office 365 has transformed MS Office into a new avatar of Software as a Service, and it is widely used in higher education worldwide. This paper explores the history of Microsoft Office 365, its applications, plans, and system requirements. In addition, it compares the advantages and disadvantages of Microsoft Office 365 and discusses users' awareness of, and the challenges of, Microsoft Office 365 migration.

(ITSM; Oral presentation; 2020; Internet Computing)
IEEE 802.11 Attacks and Defenses. May Aye Chan Aung, Dr. Khin Phyo Thant. February 27–28.
Keywords: IEEE 802.11; Media Access Control (MAC) DoS; Disassociation; Deauthentication

Protection of IEEE 802.11 networks means protection against attacks on confidentiality, integrity and availability. Possible threats come from vulnerabilities of security protocols. The rapid growth in the use of wireless networks attracts attackers. Wireless traffic consists of management, control and data frames, and an attacker can manipulate these frames to affect data integrity, confidentiality, authentication and availability. A real Wireless Local Area Network (WLAN) testbed setup is proposed for exercising the vulnerabilities of well-known attacks on IEEE 802.11 networks and for monitoring and analyzing packets. Based on these categories of vulnerabilities and threats, a confidentiality attack (Evil Twin), availability attacks (Deauthentication, Disassociation and Café Latte) and an authentication attack (Dictionary attack) are demonstrated in a real environment using the proposed setup.

(ITSM; Oral presentation; 2019; Network Security)
Wireless Network Cracking and Penetration Testing for Link Layer Attacks. Daw May Aye Chan Aung, Dr. February 22–23.

With the passage of time, attackers are getting smarter and smarter. Incident handlers and law enforcement have been forced to deal with the complexity of wireless technologies to manage and respond to security incidents. Wireless network cracking and penetration testing mitigate the risks and threats involved in protecting a Wireless Local Area Network (WLAN). The aim of this paper is to implement a wireless network security system which can audit the WLAN and penetrate link layer attacks. A Link Layer Attacks Detection Prevention System (LLADPS) is proposed for harvesting all Wi-Fi information and for detecting, auditing and preventing wireless intrusion. The proposed system is implemented in a Kali Linux environment with Python network programming on a real-time set-up. In wireless network cracking, the proposed LLADPS successfully detects attackers' behaviour and raises alerts, writing log files and displaying them on screen.

(ITSM; 2018; Network Security)
Detection and Mitigation of Wireless Link Layer Attacks. Daw May Aye Chan Aung, Dr. Khin Phyo Thant. June 7–9.

Nowadays, Wireless Local Area Networks (WLANs) have become popular because of mobility, lower installation and maintenance costs, and ease of placement. The rapid growth in the deployment of wireless networks attracts attackers. Wireless link layer attacks target the lower layers of the Open Systems Interconnection (OSI) protocol stack to render the network unusable, and they are known as one of the weakest points of wireless networks because of unprotected management frames. Motivated by this, an approach that can detect and mitigate wireless link layer attacks is proposed. The aim of this work is to implement a network security system for wireless link layer attacks and to understand the patterns of these attacks. Moreover, a Wireless Link Layer Attacks Detection Algorithm (WLLADA) is proposed, using active and passive fingerprinting methods to detect masquerading denial of service (DoS) attacks. The proposed algorithm is implemented on a real-time set-up in a Kali Linux environment with Python network programming.

(ITSM; Oral presentation; 2017; Network Security)
Efficient Secure Network Steganographic Communication over Stream Control Transmission Protocol. May Aye Chan Aung, Dr. Khin Phyo Thant. February 16–17.

Nowadays, the Wireless Local Area Network (WLAN) is widely used because of mobility and ease of placement. The rapid growth in the use of wireless networks attracts attackers. The link layer is the weakest link in the OSI model because of unprotected management frames, and the wireless link layer is not covered by the current standard for WLAN networks, leading to many potential attacks. A framework that is able to detect link layer attacks in wireless networks is proposed. The aim of this framework is to implement a wireless network security system which can detect link layer attacks and to understand the patterns of these attacks. The proposed Detection Link Layer Algorithm (DLLA) will detect Wi-Fi deauthentication attacks using both active and passive fingerprinting methods, implemented with the Network Simulator 2 (NS2) tool. The framework will be set up and deployed in Kali Linux with Python network programming.

(ITSM; Oral presentation; 2017; Network Security)
Efficient Secure Network Steganographic Communication over Stream Control Transmission Protocol. May Aye Chan Aung, Dr. Khin Phyo Thant. February 5–6.

As the internet evolves and computer networks become bigger and bigger, network security is one of the most challenging issues in today's world, touching many areas that use network communication. Nowadays there is a need for mobility, so new protocols were designed to support mobility and multihoming. Since SCTP was designed for telecommunication, its native design does not consider the security of data transmission. In this paper, an efficient secure network steganographic approach is proposed that is able to send, receive, detect and recover encrypted messages hidden in the Initiate Tag of the Stream Control Transmission Protocol (SCTP). The main aim of the proposed work is to hide and protect secret data inside a user's normal data transmissions without it being destroyed by third parties. Finally, the efficiency of the proposed approach is compared to other known steganographic tools.

(ITSM; Poster; 2016; Network Security)
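The abstract hides encrypted bytes in SCTP's 32-bit Initiate Tag. As a toy illustration only (XOR stands in for real encryption, and the function names `embed_tags`/`extract_tags` are invented, not the paper's implementation), a message can be packed four ciphertext bytes per tag value:

```python
# Hypothetical sketch: pack an XOR-obscured message into 32-bit values
# shaped like SCTP Initiate Tags (one tag per association in this toy model).

def embed_tags(message: bytes, key: bytes):
    # Obscure the payload (XOR stands in for real encryption here).
    ct = bytes(b ^ key[i % len(key)] for i, b in enumerate(message))
    ct += b"\x00" * (-len(ct) % 4)          # pad to a multiple of 4 bytes
    # Each 32-bit tag carries 4 ciphertext bytes.
    return [int.from_bytes(ct[i:i + 4], "big") for i in range(0, len(ct), 4)]

def extract_tags(tags, key: bytes, length: int) -> bytes:
    ct = b"".join(t.to_bytes(4, "big") for t in tags)[:length]
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(ct))
```

In a real channel each association exposes only one Initiate Tag, so a longer message would have to span multiple associations, as this sketch's tag list suggests.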
A Survey on Security Protocols in Various Wireless Networks. May Aye Chan Aung, Myat Su Wai. August 26–28.
Keywords: Wireless Network; Wireless Security; Wired Equivalent Privacy; Wi-Fi Protected Access; Wi-Fi Protected Access 2; Wi-Fi Protected Setup

With the advance of wireless network technology, reliable and secure communication becomes extremely important. Most people use wireless networks for many purposes, such as online banking and shopping. Wireless networks enable mobile computing to provide rapid and easy access to information, but hackers and intruders can also exploit the loopholes of wireless communication. Therefore, security protocols are applied on wireless networks to keep users' confidentiality and privacy. In this study, various types of wireless networks and security protocols are reviewed. Moreover, this study also gives the advantages and disadvantages of these network types and protocols, together with a comparative study.

(ITSM; Oral presentation; 2015; Network Security)
Study on Symmetric and Asymmetric Cryptographic Techniques. May Aye Chan Aung, Myat Su Wai. February 5–6.

Communication is the backbone of any enterprise; without the exchange of data, communication is unimaginable. Data security is a challenging issue of today's world that touches many areas, including computers and communication, and recent cyber security attacks have certainly played with the sentiments of users. Encryption is one of the ways to protect data from unauthorized access and is used in many fields such as medical science, military and diplomatic communication, and the courts. To achieve data security, different cryptographic algorithms (symmetric and asymmetric) are used that jumble data into a scrambled format that can only be reversed by a user who holds the required key. In this paper, we study recent existing cryptographic algorithms and give the advantages and disadvantages of these two encryption techniques.

(ITSM; Oral presentation; 2015; Network Security)
Design and Implementation of Web Page Classification using Naïve Bayesian Classifier. May Aye Chan Aung, Dr. Khin Mar Aye. December 27.

As the Web has grown, the ability to mine it for specific information has become all but essential. With the advance of technologies, organizing information into suitable classes plays a very important role, so classification is also important for web crawlers, which can be specialized to look for certain web pages. Due to the different dimensionality and different representations of heterogeneous data sources on the web, web page classification is more difficult than pure text classification. Automatic web page classification is an integral part of analyzing the World Wide Web. In this paper, a Naive Bayesian Classifier (NBC) is applied to classify web pages into appropriate categories. The system classifies web pages into six domains: "Art & Humanities", "Business & Economy", "Education", "Government", "Health" and "Sport".

(FIS; Oral presentation; 2010; Data Mining)
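The abstract's Naive Bayesian Classifier can be sketched as a bag-of-words model with Laplace smoothing; a minimal version follows (the two-category training snippets and function names are illustrative, not the paper's data or code):

```python
import math
from collections import Counter

def train_nb(docs, labels):
    """Estimate class priors and per-class word counts from labeled pages."""
    classes = sorted(set(labels))
    prior = {c: labels.count(c) / len(labels) for c in classes}
    counts = {c: Counter() for c in classes}
    for doc, c in zip(docs, labels):
        counts[c].update(doc.lower().split())
    vocab = {w for cnt in counts.values() for w in cnt}
    return prior, counts, vocab

def classify_nb(doc, prior, counts, vocab):
    """Pick the class maximizing log P(c) + sum of log P(w|c), Laplace-smoothed."""
    best, best_score = None, float("-inf")
    for c in prior:
        total = sum(counts[c].values())
        score = math.log(prior[c])
        for w in doc.lower().split():
            if w in vocab:
                score += math.log((counts[c][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = c, score
    return best
```

Trained on snippets for the paper's six domains, `classify_nb` would assign a new page to the highest-scoring domain.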
Brain Tumor Segmentation on Multi-Modal MR Image using Active Contour Method. Hla Hla Myint, Dr. Soe Lin Aung.
Keywords: active contour; HGG; LGG; Dice index; Jaccard index

Brain tumor segmentation is one of the most important tasks in medical image processing; it separates the different tumor tissues, such as active cells, necrotic core and edema, from normal brain tissues. Normal brain tissues are White Matter (WM), Gray Matter (GM) and Cerebrospinal Fluid (CSF). In medical image processing, segmentation of the internal structure of the brain is a fundamental task, and precise segmentation of brain tumors has great impact on diagnosis, monitoring and treatment planning for patients. Various segmentation techniques are widely used for brain Magnetic Resonance Imaging (MRI). In this paper, we propose an efficient active contour method for 3D volumetric MRI brain tumor segmentation, in which preprocessing resizes the images before the magnetic resonance images are accurately segmented. Active contour models, also called snakes, are widely used in many applications. The 3D volumetric MR images consist of T1, T1ce, T2 and Fluid Attenuated Inversion Recovery (FLAIR) modalities.

(UCSMGY; International journal; 2020; Image Processing)
An Efficient Tumor Segmentation of MRI Brain Image using Thresholding and Morphology Operation. Hla Hla Myint, Dr. Soe Lin Aung.
Keywords: image segmentation; thresholding; morphology operation; preprocessing

In medical image processing, segmentation of the internal structure of the brain is a fundamental task, and precise segmentation of brain tumors has great impact on diagnosis, monitoring and treatment planning for patients. Various segmentation techniques are widely used for brain Magnetic Resonance Imaging (MRI). This paper presents an efficient method of brain tumor segmentation using morphological operations, pixel-extraction threshold-based segmentation and Gaussian high pass filter techniques. Thresholding is the simplest approach to separating an object from the background and is an efficient technique in medical image segmentation, while morphology operations can be used to extract the region of the brain tumor. The system converts the RGB image to a gray scale image and removes noise with a Gaussian high pass filter, which produces a sharpened image and improves the contrast between bright and dark pixels. This method will help physicians to identify the brain tumor before performing surgery.

(UCSMGY; International conference paper; 2020; Image Processing)
Comparison of Thresholding Methods in Image Segmentation. Hla Hla Myint, Dr. Soe Lin Aung.
Keywords: image segmentation; thresholding

Image segmentation is one of the most challenging tasks of image processing, and many algorithms and techniques have been developed for medical applications. Segmentation techniques involve detection, recognition and measurement of features and can be classified as either contextual or non-contextual. Among all the segmentation methods, the fundamental approach, based on intensity levels, is called threshold-based segmentation. Thresholding is an essential method for image processing and pattern recognition: an image thresholding technique determines a threshold value T and converts a gray scale image into a binary image. It is the simplest approach to separating an object from the background and an efficient technique in medical image segmentation. In this paper, five different thresholding methods (Sauvola, Niblack, Otsu's, iterative and histogram thresholding) are analyzed and their segmented images compared. The different segmentation methods have been simulated using MATLAB.

(UCSMGY; International paper; 2019; Image Processing)
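Of the five methods compared, Otsu's algorithm picks the threshold that maximizes the between-class variance of the gray-level histogram. A plain-Python sketch of that search (the paper itself used MATLAB; this is only an illustration of the algorithm):

```python
def otsu_threshold(pixels):
    """Return the gray level (0-255) maximizing between-class variance."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(level * count for level, count in enumerate(hist))
    w_b = sum_b = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]                    # background weight
        if w_b == 0:
            continue
        w_f = total - w_b                 # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                 # background mean
        m_f = (sum_all - sum_b) / w_f     # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Binarizing with `1 if p > t else 0` then yields the segmented (binary) image.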
Web Page Category Classification Using Decision Tree Classifier and Recommendation of Related Links. Phyu Phyu Thant, Amy Aung. August 18.
Keywords: Web Page Classification; TF-IDF; C4.5; Web Content Mining

Today there has been an exponential growth in the number of electronic documents and pages on the web, which calls for accurate automated classifiers based on machine learning. The purpose of web data mining is to find useful knowledge or information in web contents, web usage and hyperlinks. In this paper, a web page category classification system is proposed, with web data mining as its background theory. The system is tested on a collection of hyperlinks in the computer science domain with ten class categories. The paper covers web preprocessing for extracting contents; for classification, the system uses TF-IDF feature extraction and a decision tree classifier. The results show that the proposed system produces the category of the classified page according to the predefined class categories, together with its related links.

(FIS; Journal; 2022; Web Mining)
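The classifier above is fed TF-IDF features. A minimal sketch of the weighting, assuming whitespace tokenization and the common tf * log(N/df) form (the function name is illustrative):

```python
import math

def tfidf_vectors(docs):
    """Map each document to {term: tf * idf}, with idf = log(N / df)."""
    tokenized = [d.lower().split() for d in docs]
    n_docs = len(docs)
    df = {}                               # document frequency per term
    for toks in tokenized:
        for t in set(toks):
            df[t] = df.get(t, 0) + 1
    vectors = []
    for toks in tokenized:
        tf = {}
        for t in toks:
            tf[t] = tf.get(t, 0) + 1
        vectors.append({t: (c / len(toks)) * math.log(n_docs / df[t])
                        for t, c in tf.items()})
    return vectors
```

Note that a term occurring in every document gets weight 0 (idf = log 1), which is what makes TF-IDF favor discriminative terms for the decision tree.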
An Efficient Email Spam Detection Using Multinomial Naïve Bayes Algorithm. New New Aye, Amy Aung. June 26.
Keywords: Multinomial Naïve Bayes; Email spam

Nowadays, from business to education, emails are used in almost every field; they are a necessary contribution to messaging over the internet. Emails fall into two categories: ham email and spam email. Spam emails, defined as junk emails, are unsolicited messages, and spam detection and filtering are important and massive problems for email; various models and techniques detect spam automatically. This paper describes an efficient email spam detection method using Multinomial Naive Bayes classification on an SMS spam collection dataset from Kaggle.com. The dataset is first cleaned (cleaning the text, removing digits, and case folding). The system then applies preprocessing steps: removing punctuation, tokenization, stop-word detection and lemmatization. After preprocessing, POS tags are calculated. In the proposed system, detection of a spam message's meaning is approached with sentiment analysis techniques and a Multinomial Naive Bayes (MNB) classifier; the MNB calculation is based on a bag of words in sentiment analysis. The experimental results report Accuracy, F1-Score, Precision and Recall: the proposed method detects spam emails with 84.02% accuracy using the Naive Bayes classifier. The system is implemented in Python with NLTK.

(FIS; Journal; 2022; Web Mining)
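The abstract reports Accuracy, F1-Score, Precision and Recall. Given true and predicted labels, these follow directly from the confusion counts; a generic sketch (not the paper's evaluation code, and the function name is invented):

```python
def spam_metrics(y_true, y_pred, positive="spam"):
    """Accuracy, precision, recall and F1 for one positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1
```

The guards against empty denominators matter when a classifier predicts no positives at all.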
Myanmar Spelling Error Detection and Correction. Yee Mon Kyaw, Phyo Phyo Wai. August.
Keywords: Spell Checker; Edit Distance; Dictionary Lookup; Confusion Sets; Naive Bayes Classifier

Natural language processing (NLP) is a branch of AI (artificial intelligence) within computer science that helps computers understand human languages. Spell checking and correction systems are important for many NLP applications, such as Machine Translation, Text Summarization, Text to Speech, and Information Retrieval. Spell checking means detecting and correcting errors. The Myanmar language is the official language of our country, Myanmar, and this system intends to check for typographic, phonetic, and context errors in the Myanmar language. A syllable dictionary lookup approach is used for typographic error detection, and dictionary and corpus lookup approaches are used for phonetic error detection. The Levenshtein distance algorithm is applied to give a suggestion list for typographic and phonetic errors. For context errors, a confusion sets approach is used in error detection and a Naïve Bayes classifier is used in suggestion generation. If there is an error in an incoming sentence, a suggestion list will be given and the correct sentence will be generated. Experimental results such as the error detection rate, the error correction rate and the accuracy of error correction are evaluated for performance.

(FIS; Journal; 2022; Natural Language Processing)
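The suggestion list ranks candidates by Levenshtein (edit) distance; a compact dynamic-programming sketch, with an illustrative English dictionary standing in for the system's Myanmar lexicon:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions and substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def suggest(word, dictionary, k=3):
    """Rank dictionary entries by edit distance to the misspelled word."""
    return sorted(dictionary, key=lambda w: levenshtein(word, w))[:k]
```

The same DP applies to Myanmar text if strings are first segmented into syllables, so that each "character" of the DP is one syllable.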
Sequential Pattern Mining in Web Log Data Using Generalized Sequential Pattern Mining (GSP) Algorithm. Khin Su Hlaing, Ei Ei Moe Tun. December 31.

The rising popularity of electronic commerce makes data mining an indispensable technology for several applications, especially online business competitiveness. The World Wide Web provides abundant raw data in the form of web access logs, but without data mining techniques it is difficult to make any sense of such massive data. This paper focuses on the mining of web access logs, commonly known as web usage mining. Frequent pattern mining is a heavily researched area in the field of data mining with a wide range of applications, one of which is applying frequent pattern discovery methods to web log data; discovering hidden information from web log data is called web usage mining. The aim of discovering frequent patterns in web log data is to obtain information about the navigational behavior of users, which can be used for advertising purposes, for creating dynamic user profiles, etc. In this paper, the GSP algorithm is used for sequential pattern mining in web log data.

(FIS; Conference; 2009; Sequential Pattern Mining)
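GSP repeatedly counts the support of candidate sequences over user sessions. The core containment-and-support step, simplified to single-item events and omitting GSP's time-window constraints, can be sketched as:

```python
def is_subsequence(pattern, session):
    """True if pattern's items occur in session in order (gaps allowed)."""
    it = iter(session)
    return all(item in it for item in pattern)  # `in` consumes the iterator

def support(pattern, sessions):
    """Fraction of sessions containing the candidate sequence."""
    return sum(is_subsequence(pattern, s) for s in sessions) / len(sessions)
```

Candidates whose support falls below a minimum-support threshold are pruned before longer candidates are generated, which is what keeps the search tractable.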
Dissimilarity Computation for Objects of Different Variable Types. Zin Nyein Nyein Han, Ei Ei Moe Tun. December 30.

Clustering is an essential step in data mining, statistical data analysis, pattern recognition and image processing, and can be used to drive data layout in massive distributed datasets, for example to improve the retrieval of data subsets from tertiary systems or to minimize the amount of data transferred and stored. Measuring the dissimilarity between data objects is one of the primary tasks for distance-based techniques in data mining and machine learning, e.g., distance-based clustering and distance-based classification. The quality of clustering can be assessed using dissimilarity measures, which can be computed for various types of data. In this paper, we propose a general framework for measuring the dissimilarity between objects with various attribute types. The key idea is to consider the dissimilarity between two values of an attribute as a combination of dissimilarities between the conditional probability distributions of the other attributes given these two values. In this system, similarity is estimated by computing the dissimilarity measure between two objects, which yields the most similar and least similar values before clustering analysis.

(FIS; Conference; 2009; Data Mining)
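For orientation, a much simpler attribute-wise (Gower-style) mixed-type measure is sketched below. This is not the paper's conditional-distribution method, only the baseline it improves on: numeric attributes use range-normalized distance and categorical attributes a 0/1 mismatch.

```python
def mixed_dissimilarity(x, y, num_range):
    """Average per-attribute dissimilarity over tuples of mixed values.
    Numeric attributes: |a - b| / range; categorical attributes: 0/1 mismatch.
    num_range maps the index of each numeric attribute to its value range."""
    total = 0.0
    for i, (a, b) in enumerate(zip(x, y)):
        if isinstance(a, (int, float)):
            total += abs(a - b) / num_range[i]
        else:
            total += 0.0 if a == b else 1.0
    return total / len(x)
```

The paper's approach replaces the crude 0/1 categorical term with a distance between conditional distributions of the other attributes, so that "red" and "crimson" can come out closer than "red" and "blue".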
Eliminating Noise Web Pages to Improve Web Information Extraction. Phyo Phyo San, Ei Ei Moe Tun. February.

Today, the main content of a web page is surrounded by noise: a page usually contains navigation panels, copyright and privacy notices, and advertisements that are not related to its topic. These noises disturb people who want the main content and make it difficult to extract. Therefore, detecting the main content of a web page could potentially improve the performance of many web applications, such as information retrieval and web page clustering and classification. This paper eliminates noise in web pages to improve web information extraction, using a noise detection and web page cleaning algorithm to remove unnecessary information. The paper cleans web pages with the purpose of preventing poor results in web mining.

(FIS; Conference; 2013; Web Mining)
Knowledge Discovering in XML Documents by using GARW Algorithm. Aye Thi Tar, Ei Ei Moe Tun. February.

Text mining is an important field because of the necessity of obtaining knowledge from the enormous number of text documents. This system describes a text mining technique for automatically extracting association rules among keywords from collections of textual documents. It integrates XML technology with an Information Retrieval scheme (TF-IDF) for keyword selection, automatically selecting the most discriminative keywords for use in association rule generation, and uses data mining techniques for association rule discovery. The system consists of a text preprocessing phase for filtration and indexing of the documents, a knowledge distillation phase for generating association rules using the GARW algorithm, and a visualisation phase for displaying results. The generated association rules contain important features and describe the informative news (knowledge) included in the XML document collection. The system is implemented in the C# programming language.

(FIS; Conference; 2013; Text Mining)
Implementing Data Warehouse for Sale Information using Data Integration and OLAP Operation. Zaw Ni, Ei Ei Moe Tun. December.

A data warehouse is a data-intensive system used for analytical tasks in business; the term for such tasks is "On-Line Analytical Processing" (OLAP). The system is implemented to build a data warehouse using data integration as data preprocessing. Data integration combines data from multiple data sources into a coherent data source, the data warehouse. In this paper, the data warehouse is a collection of information that is critical to the successful execution of enterprise initiatives. The system builds the data warehouse by working through data integration issues such as schema integration and object matching, redundancy, and detection and resolution of data value conflicts. To display results, the user can apply OLAP operations such as roll-up, drill-down, and slice and dice. The system can be used by businesses that have many branches.

(FIS; Conference; 2011; Data Warehousing)
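Roll-up aggregates a fact table along fewer dimensions, and slice fixes one dimension to a value. A toy sketch of both operations over sales facts (the field names `region`, `month` and `sales` are illustrative, not the system's schema):

```python
def roll_up(facts, dims, measure="sales"):
    """Aggregate the measure over the given dimensions (coarser = fewer dims)."""
    cube = {}
    for fact in facts:
        key = tuple(fact[d] for d in dims)
        cube[key] = cube.get(key, 0) + fact[measure]
    return cube

def slice_cube(facts, dim, value):
    """OLAP slice: keep only facts where one dimension equals a fixed value."""
    return [f for f in facts if f[dim] == value]
```

Rolling up by `("region",)` instead of `("region", "month")` is exactly the move from monthly to regional totals that the abstract's drill-down reverses.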
Implementation of Geographic Information System for Township. Su Pan Thaw, Ei Ei Moe Tun. December.

GIS is an important technology because "Everything Happens Somewhere". A geographical information system captures, stores, analyzes, manages and presents data that is linked to location. The presented system intends to meet the need for GIS in Magway Township, serving as a powerful tool for finding the latitude, longitude and real distance of places there. To find latitude and longitude, a Lat Long algorithm is used, and the Cartesian coordinate system is applied to evaluate the distance between source and destination locations. The paper describes two types of trip-planning sub-systems: text-based GIS and map-based GIS. The system supports finding information on all kinds of places, such as universities, industry, army posts, bridges, pagodas, supermarkets and so on, and will help to save time, money and energy.

(FIS; Conference; 2011; GIS)
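The system computes distances from latitude/longitude pairs via Cartesian coordinates. For illustration (and not necessarily the paper's "Lat Long algorithm"), a standard way to get ground distance from two coordinate pairs is the haversine great-circle formula:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two latitude/longitude points."""
    r_earth = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r_earth * math.asin(math.sqrt(a))
```

Over the few kilometres of a single township, a flat Cartesian approximation (as the paper uses) and the haversine result agree closely; the spherical form matters only for long distances.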
Comparison Analysis of Oil Crop Yield Prediction in Magway Region using Machine Learning Method. Ei Ei Moe Tun. February.

Agriculture plays a predominant role in the development of the nation's economy: in Myanmar, it is the fundamental support and the major financial sector of the country. Climate and other natural changes have become a significant danger to agriculture, and machine learning is a practical and powerful methodology for addressing this problem. Oil crop yield prediction means forecasting the yield of an oil crop from available historical data such as weather parameters, soil parameters and past harvest yields. This paper focuses on predicting crop yield from existing data using machine learning algorithms. Real data from the Magway region were used to build the models, and the models were tested with samples. Three widely used supervised machine learning techniques, Neural Networks, Support Vector Machines and Decision Trees, are applied to oil crop data for the Magway region, where the oil crops are sesame, groundnut and some sunflower. The research is intended to help farmers predict the yield of a crop before planting the field, and the proposed framework is designed as a step toward smart farming for Myanmar agriculture.

(FIS; Conference; 2023; Machine Learning)
A System for Learning Classification Rules using Lymphatic Disease Dataset. Ei Ei Moe Tun. September.

Data classification is an important task in the KDD (knowledge discovery in databases) process. ELEM2 (Extension of Learning from Examples Modules, Version 2) is a machine learning system that induces classification rules from a set of data based on a heuristic search over a hypothesis space. In this paper, a system for classifying lymphatic disease is implemented based on the ELEM2 algorithm. ELEM2 handles inconsistent training data by defining an unlearnable region of a concept; to further deal with imperfect data, it uses a post-pruning technique to remove unreliable portions of a generated rule, and a new rule quality measure is proposed for the purpose of post-pruning. The paper also describes the features of ELEM2, its rule-induction algorithm and its classification procedure, and reports experimental results comparing ELEM2 with the C4.5 method on a number of datasets using k-fold accuracy.

fisconference2019data-mining
AN ELEVATOR SCHEDULING PROBLEM BASED ON WEIGHTED DIGRAPHThet Su Hlaing, Mya Thida KyawSeptember-Download

This paper considers the elevator scheduling problem based on a weighted digraph. The elevator scheduling problem is chosen as a benchmark in which all elevator travel paths, traveling times, and up or down directions are represented by a weighted digraph. The scheduling problem involves finding an optimal path, in other words, finding the shortest elevator travel paths for a building with a certain number of elevators and floors. The system is implemented to find the optimal route for each elevator that satisfies all initial conditions, such as the present elevator position, the passengers’ destinations in each elevator, and the floor hall calls. The elevator scheduling problem is solved by two methods: the shortest path algorithm and summing up each elevator’s traveling time.
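Finding the shortest elevator travel path over a weighted digraph can be sketched with Dijkstra's algorithm. The floor graph and travel times below are invented for illustration, not taken from the paper:

```python
import heapq

def dijkstra(graph, source, target):
    """Return (cost, path) of the cheapest route in a weighted digraph.

    graph: dict mapping node -> list of (neighbor, travel_time) pairs.
    """
    pq = [(0, source, [source])]      # (cost so far, node, path taken)
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == target:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(pq, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical travel times (arbitrary units) between floor stops of one elevator.
floors = {
    "F1": [("F2", 4), ("F3", 9)],
    "F2": [("F3", 4), ("F4", 10)],
    "F3": [("F4", 5)],
    "F4": [],
}
cost, route = dijkstra(floors, "F1", "F4")
```

With these weights the cheapest F1-to-F4 route passes through every intermediate floor (total 13) rather than taking the direct 10-unit hop from F2.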

fisconference2009artificial-intelligence
Pancreas Cancer Diagnosis System using NB and ID3 ClassifiersLin Lin Htun, Yin Su Hlaing, Yin Nyein AyeDecemberPancreas Cancer, Classifications, ID3, NB ClassifierDownload

These days, people’s lifestyles change according to modern trends, and changing food styles lead to various diseases. The pancreas releases enzymes that aid digestion and produces hormones that help manage blood sugar. Pancreatic cancer is seldom detected at its early stages, when it is most curable, because it often does not cause symptoms until after it has spread to other organs. Identification of pancreatic disease is therefore important, and accurate automation would be highly desirable. Not every person can be as skilled as a specialist, and not all specialists can be equally skilled in every sub-specialty. In this circumstance, an automated medical diagnosis system can help and can also reduce costs. This system proposes a clinical diagnosis framework for classifying pancreatic cancer. For classification, the system uses the ID3 decision tree and Naive Bayesian (NB) classifiers, and it compares the performance of these two classifiers. The framework helps the user to quickly know the pancreatic cancer status and to reduce check-up costs.
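As a rough illustration of the NB side of such a classifier, here is a minimal categorical Naive Bayes with Laplace smoothing. The symptom features and labels are invented, not the paper's dataset:

```python
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """Count feature-value frequencies per class for a categorical Naive Bayes model."""
    class_counts = Counter(labels)
    feat_counts = defaultdict(Counter)   # (class, feature position) -> value counts
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            feat_counts[(y, i)][v] += 1
    return class_counts, feat_counts

def predict_nb(model, row):
    """Pick the class maximizing P(class) * prod P(feature|class), Laplace-smoothed."""
    class_counts, feat_counts = model
    total = sum(class_counts.values())
    best, best_score = None, -1.0
    for y, cy in class_counts.items():
        score = cy / total
        for i, v in enumerate(row):
            score *= (feat_counts[(y, i)][v] + 1) / (cy + 2)
        if score > best_score:
            best, best_score = y, score
    return best

# Invented (marker level, symptom present) records with hypothetical labels.
rows = [("high", "yes"), ("high", "no"), ("low", "no"), ("low", "no")]
labels = ["cancer", "cancer", "benign", "benign"]
model = train_nb(rows, labels)
```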

fisjournal2022data-mining
University Journal of Science, Engineering and ResearchLin Lin HtunMayEducational Data Mining (EDM), Decision Tree, Classification, WekaDownload

Currently, a huge amount of data is stored in educational databases, and these databases contain useful information for predicting students’ exam results. This paper measures students’ results according to their practical test grade, seminar performance, participation, assignments, attendance, mid-term marks, and final grade marks. The proposed system predicts students’ exam results using a classification model on a dataset of students at the University of Computer Studies, Magway. Data mining is used in higher education for analyzing, predicting, and evaluating the performance of students, teachers, and others, and Educational Data Mining (EDM) is one of the ways used to achieve these objectives. In this paper, the classification task is used to predict the final grade of students; as there are many approaches used for data classification, the decision tree (ID3) method is used here. This paper also analyses the REPTree and Naive Bayes classification algorithms using the Weka tool. Weka is a collection of tools for regression, clustering, association, data pre-processing, classification, and visualization. The dataset is collected from exam results and an information survey of about 100 students in Third Year (CS and CT) at the University of Computer Studies, Magway (Myanmar).
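ID3 chooses the attribute with the highest information gain at each split. A minimal sketch of that criterion, on invented student records rather than the paper's survey data:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in Counter(labels).values())

def information_gain(rows, labels, feature_index):
    """Entropy reduction from splitting on one categorical attribute (ID3's criterion)."""
    base = entropy(labels)
    subsets = {}
    for row, y in zip(rows, labels):
        subsets.setdefault(row[feature_index], []).append(y)
    remainder = sum(len(s) / len(labels) * entropy(s) for s in subsets.values())
    return base - remainder

# Invented records: (attendance, assignment) -> final result.
rows = [("good", "done"), ("good", "done"), ("poor", "done"), ("poor", "missed")]
labels = ["pass", "pass", "pass", "fail"]
```

On this toy data the assignment attribute separates the classes perfectly, so its gain exceeds the attendance attribute's, and ID3 would split on it first.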

fisjournal2020data-mining
Genetic Algorithm-Based Feature Selection and Classification of Breast Cancer Using Bayesian Network ClassifierYi Mon Aung,Nwet Nwet Than, Linn Linn HtunDecemberGenetic Algorithm, Classification, Bayesian Network, Feature SelectionDownload

Cancer is one of the fastest growing and most dangerous diseases in the healthcare sector. Early diagnosis of this disease is very important because the success of treatment depends on how quickly and accurately it is diagnosed. Data mining technology can help clinicians make diagnostic decisions. To improve the efficiency of these algorithms, the best features need to be chosen: to exclude non-essential attributes, genetic algorithms are used to extract the useful and important ones. This process speeds up the data mining process and reduces computational complexity. Therefore, this study uses genetic algorithms to select the best features before applying classification algorithms to three breast cancer datasets retrieved from the UCI repository. The process also uses several single and multiple classifier systems to build a precise system for breast cancer diagnosis. This approach was useful for early forecasting, and the results show that genetic-algorithm-based feature selection achieves better accuracy than the classifiers without feature selection.

fisjournal2020data-mining
Visitor Navigation Pattern Prediction Using Markov Models, Association Rules and Ambiguous RulesEi Theint Theint Thu, Khaing Min Kyu, Hlaing Htake Khaung TinJanuaryMarkov models, association rules and ambiguous rules, web usage mining techniquesDownload

There are a large number of Web sites consisting of many web pages, and it is difficult for users to quickly reach their target pages. The main aim of this paper is to implement Visitor Navigation Pattern Prediction using Markov models, association rules and ambiguous rules. The paper uses traveling paths that assist visitors in navigating the visiting places based on past visitors’ behavior. In this article, Markov models, association rules and ambiguous rules were used to resolve ambiguous web access predictions. An improved method is presented that combines Markov models, association rules and ambiguous rules, and combines web pages into a web site for prediction. This method can offer better predictions than using each method alone or other traditional models. It uses web usage mining techniques for recommending to a visitor which next paths are likely to be the most popular paths in Myanmar.
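The first-order Markov part of such a predictor can be sketched by counting page-to-page transitions; the visit sessions below are hypothetical:

```python
from collections import Counter, defaultdict

def build_markov(sessions):
    """First-order Markov model: count page-to-page transitions in visit sessions."""
    transitions = defaultdict(Counter)
    for session in sessions:
        for current, nxt in zip(session, session[1:]):
            transitions[current][nxt] += 1
    return transitions

def predict_next(transitions, page):
    """Most frequently observed successor of `page`, or None if unseen."""
    if page not in transitions or not transitions[page]:
        return None
    return transitions[page].most_common(1)[0][0]

# Hypothetical visitor paths through places, not data from the paper.
sessions = [
    ["home", "pagoda", "market"],
    ["home", "pagoda", "museum"],
    ["home", "pagoda", "market"],
]
model = build_markov(sessions)
```

Association rules and ambiguous rules would then refine cases where the transition counts alone are inconclusive.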

fisjournal2021data-mining mathematics
Retrieving Text Documents by using Vector Space ModelKhaing Min Kyu, RenuDecember-Download

The rapid growth of world-wide information systems results in new requirements for text indexing and retrieval. In traditional Information Retrieval (IR), detecting similarities within a group of text documents is widely used. For efficient query processing, an indexing mechanism is required. The system in this work is a search engine for retrieving documents using the vector space model. The vector space model incorporates local and global information using a weighting formula known as the term frequency-inverse document frequency (TF-IDF) model. In this system, the user can search for documents similar to the user query. The cosine similarity method is used to compute the similarity between the query vector and each document vector. The system not only can search texts from *.doc files but also gives a link to the author’s web site for further searching. The system is also evaluated with precision, recall and mean average precision of the retrieved results.
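A minimal sketch of the TF-IDF weighting and cosine similarity described above, on toy documents rather than real *.doc files:

```python
from collections import Counter
from math import log, sqrt

def tfidf_vectors(docs):
    """tf-idf weight per term per document: tf * log(N / df)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: tf[t] * log(n / df[t]) for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = sqrt(sum(w * w for w in u.values()))
    nv = sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = [
    "information retrieval with vector space model".split(),
    "vector space model for text search".split(),
    "cooking rice and curry".split(),
]
vecs = tfidf_vectors(docs)
```

A query would be vectorized the same way and ranked against every document by cosine score.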

fisconference2010data-mining
WEB SERVICE-BASED HELP DESK SYSTEM FOR HOTEL RESERVATIONMay Thu KhinDecemberHelpdesk system, web service, ASP.NETDownload

A web service is a server-side program that listens for messages from client applications and combines, shares, and exchanges information to form an entirely new service or a custom application created on the fly to meet the requirements of the client. The implemented system provides not only hotel information for Myanmar but also room checking and reservation. The system is constructed based on the services provided by service providers, and many famous hotel web sites of Myanmar are included. The service provider gives the services to the service consumer; the Helpdesk system (service consumer) uses the services given by the service provider and returns the result to the system user. The system is implemented with ASP.NET.

fcsconference2010digital-image-processing digital-signal-processing
Classification Of NAT And PAT With SimulationAye Aye Mar, Lai Yi Kyaw, Thiri Mon--Download

Today, almost every internet user is connected to a network that uses Network Address Translation (NAT). NAT is often used by the router that connects a network to the internet and enables the whole network to access the internet using one single real IP address. Port Address Translation (PAT) is a type of NAT used when there is a shortage of public IP addresses: one public IP address of the same subnet, or the interface address, is used for translation. In NAT overloading, a computer on an inside network uses PAT, which maps multiple internal addresses to a single external public address using different port numbers. In recent years, simulation has become more and more popular among computer network researchers around the world, and it is important to ensure that simulator results are accurate and credible. This paper uses a simulation tool (Cisco Packet Tracer) to classify the static NAT, dynamic NAT, and PAT methods and to describe the different global addresses used when an inside network reaches the outside global network. The paper describes the router configuration using different command lines and, to introduce NAT terminology, presents the resulting global network addresses using PDU (Protocol Data Unit) information and the network topology configuration.
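The PAT idea, many inside (address, port) pairs sharing one public address through distinct ports, can be sketched as a translation table. The addresses below are reserved documentation examples, not real hosts:

```python
def make_pat_table(flows, public_ip, first_port=30000):
    """Map each inside (private_ip, port) flow to (public_ip, unique_port)."""
    table = {}
    next_port = first_port
    for inside in flows:
        if inside not in table:          # reuse the mapping for repeated flows
            table[inside] = (public_ip, next_port)
            next_port += 1
    return table

# Two inside hosts using the same source port, plus a repeated flow.
flows = [("192.168.1.2", 51000), ("192.168.1.3", 51000), ("192.168.1.2", 51000)]
table = make_pat_table(flows, "203.0.113.5")
```

Both private hosts end up behind the single public address 203.0.113.5, distinguished only by the translated port.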

fcstjournal2019network
Open Shortest Path First (OSPF) Routing Protocol SimulationThiri Mon, May Thandar Oo, May Phyo Ko-OSPF, Dijkstra Algorithm, Link-State Protocol, VLSMDownload

Routing protocols are the family of network protocols that enable computer routers to communicate with each other and, in turn, to intelligently forward traffic between their respective networks. Among routing protocols, Open Shortest Path First (OSPF) is an open standard routing protocol that has been used by a wide variety of network vendors. OSPF uses Dijkstra’s least-cost path algorithm and is a link-state protocol that uses flooding of link-state information. In OSPF, a router constructs a complete topological map (that is, a graph) of the entire autonomous system. The router determines a shortest-path tree to all networks, with itself as the root node, by running Dijkstra’s shortest-path algorithm. This paper presents a simulation of OSPF built with GNS3.

fcstjournal2019network
Myanmar Currency Exchange Rate Prediction System By Using Neural NetworksThiri MonJulyNeural Networks, Technical Trading Signals, Foreign Currency exchange ratesDownload

fcstjournal2019neural-network
Implementation of Image Encryption based on Standard MapNu Nu Hlaing, Su Myat NandarDecemberImage, Encryption, DecryptionDownload

Digital image encryption/decryption transforms a meaningful image into a meaningless or disordered one in order to enhance its resistance to attack and, in turn, enhance security. This paper presents a scheme of digital image encryption based on the modified standard map. The encryption process includes two main parts: confusion and diffusion. After processing the confusion and diffusion stages repeatedly, the system produces a reliable cipher image, so it can be used as a pretreatment for digital images.
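The confusion stage can be illustrated with a discretized standard-map-style position permutation. This is only a toy sketch of the idea, with an invented parameter k, not the paper's modified map:

```python
from math import sin, floor, pi

def scramble(img, k=7):
    """Permute pixel positions of an N x N image with a discretized standard map.

    (x, y) -> (x', y') with x' = (x + y) mod N and
    y' = (y + floor(k * sin(2*pi*x'/N))) mod N, which is invertible for any integer k.
    """
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            x2 = (x + y) % n
            y2 = (y + floor(k * sin(2 * pi * x2 / n))) % n
            out[y2][x2] = img[y][x]
    return out

def unscramble(img, k=7):
    """Invert scramble(): recover y from x', then x."""
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for y2 in range(n):
        for x2 in range(n):
            y = (y2 - floor(k * sin(2 * pi * x2 / n))) % n
            x = (x2 - y) % n
            out[y][x] = img[y2][x2]
    return out

# Toy 16 x 16 "image" with distinct pixel values.
img = [[16 * r + c for c in range(16)] for r in range(16)]
```

A diffusion stage (mixing pixel values, not just positions) would follow in a full scheme; decryption runs the inverse stages in reverse order.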

fcstjournal2020cryptography
Two-Stage Correlation Approach For Detection of Video SteganographyNu Nu Hlaing, Su Myat NandarDecembersteganalysis, steganography, MSU stego, correlation, feature extractionDownload


fcstjournal2020cryptography
Traffic Signal Control using Neural NetworkNu Nu HlaingSeptember-Download

Controlling the traffic lights at an intersection is a tedious and difficult control challenge. Modern traffic intersections are controlled using algorithms based on decision making and serial execution, similar to any software program. This system is intended to control traffic flow depending on the number of cars. The traffic light controller is based on combining fuzzy logic and a neural network: the neural network learns from the fuzzy controller’s output. The input vector has two dimensions, arrival and queue. The software, trained on the labeled datasets, can be used at three-way, four-way, or five-way junctions. The system is implemented using the C# programming language.

fcstconference2009network-technology
Semantic Analysis of Web Page with DBSCAN AlgorithmEi Ei Moe, War War Cho, Hnin Yu Hlaing--Download

Web mining involves activities such as web page clustering, community mining, etc., performed on the web. Web page clustering is a useful and powerful technique for topic discovery from web pages: it is the process of finding groups of information from web pages and clustering these web pages into the most relevant groups. Semantic analysis identifies the semantic content of web pages and solves the ambiguity problem. The aim of the system is to resolve the ambiguous words in each web page. In this paper, we utilize the glosses of terms for Word Sense Disambiguation (WSD). The system performs the WSD task for all terms in all web pages to get the best senses to be used as features in the clustering process. WSD approaches and the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) method are presented for web page clustering. The system is developed for information retrieval, cluster-based browsing, and bio-informatics applications in the weather and instrument domains, using the semantic content of web pages rather than keywords. We use a corpus as the lexical resource to support semantic search, and the DBSCAN algorithm for clustering. As a result, we present senses of ambiguous words that can improve the performance of the web page clustering system; the paper shows better web page clustering performance when ambiguous words are disambiguated and semantic features are used.
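A minimal pure-Python DBSCAN over 2-D points (stand-ins for web page feature vectors) illustrates the clustering step:

```python
def region_query(points, i, eps):
    """Indices of all points within eps of points[i] (Euclidean distance)."""
    px, py = points[i]
    return [j for j, (qx, qy) in enumerate(points)
            if (px - qx) ** 2 + (py - qy) ** 2 <= eps ** 2]

def dbscan(points, eps, min_pts):
    """Return cluster labels; -1 marks noise (density-based clustering)."""
    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neighbors = region_query(points, i, eps)
        if len(neighbors) < min_pts:
            labels[i] = -1               # provisionally noise
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in neighbors if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster      # border point, was marked noise
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_neighbors = region_query(points, j, eps)
            if len(j_neighbors) >= min_pts:   # j is a core point: expand
                queue.extend(j_neighbors)
    return labels

# Two dense toy groups plus one isolated noise point.
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (50, 50)]
labels = dbscan(points, eps=2.0, min_pts=3)
```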

fcsjournal2020data-mining
Finding Sequential Pattern by Using SPADE AlgorithmHnin Yu Hlaing--Download

Sequential pattern mining is an important data mining problem with broad applications. However, it is also a difficult problem, since the mining may have to generate or examine a combinatorially explosive number of intermediate subsequences. This paper is intended to develop a system for finding sequential patterns based on the SPADE algorithm. SPADE adopts a candidate generation-and-test approach using a vertical data format. The vertical-format database can be obtained by transforming the horizontally formatted sequence database in just one scan. The aim of discovering frequent patterns in a transactional database is to obtain information about the customer purchase-sequence behavior of the transactions. The implemented system can be used for advertising purposes, for making business decisions, and so on.
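The one-scan transformation to SPADE's vertical format (per-item id-lists) can be sketched as follows, on a toy purchase-sequence database:

```python
from collections import defaultdict

def to_vertical(sequence_db):
    """One scan: item -> list of (sequence_id, event_id) occurrences (SPADE id-lists)."""
    idlists = defaultdict(list)
    for sid, sequence in sequence_db.items():
        for eid, itemset in enumerate(sequence):
            for item in itemset:
                idlists[item].append((sid, eid))
    return idlists

def frequent_1_sequences(idlists, min_support):
    """Items occurring in at least min_support distinct sequences."""
    return {item for item, occ in idlists.items()
            if len({sid for sid, _ in occ}) >= min_support}

# Toy customer purchase sequences (lists of itemsets), not real transaction data.
db = {
    1: [{"bread"}, {"milk"}, {"bread", "jam"}],
    2: [{"bread"}, {"jam"}],
    3: [{"milk"}],
}
idlists = to_vertical(db)
```

Longer frequent sequences are then grown by joining the id-lists of shorter ones, which is where SPADE's candidate generation-and-test happens.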

fcsconference2010data-mining
VISITOR NAVIGATION PATTERN PREDICTION USING MARKOV MODELS, ASSOCIATION RULES AND AMBIGUOUS RULESEi Theint Theint ThuJuneMarkov models, association rules and ambiguous rulesDownload

There are a large number of web sites consisting of many web pages, and it is difficult for users to quickly reach their target pages. The main aim of this paper is to implement Visitor Navigation Pattern Prediction using Markov models, association rules and ambiguous rules. The paper uses traveling paths that assist visitors in navigating the visiting places based on past visitors’ behavior. In this article, Markov models, association rules and ambiguous rules were used to resolve ambiguous web access predictions. An improved method is presented that combines Markov models, association rules and ambiguous rules, and combines web pages into a web site for prediction. This method can offer better predictions than using each method alone or other traditional models. It uses web usage mining techniques for recommending to a visitor which next paths are likely to be the most popular paths in Myanmar.

fisjournal
Visitor Navigation Pattern Prediction Using Transition Matrix CompressionEi Theint Theint ThuJuneTransition probability matrix compression, web usage mining techniquesDownload

As an increasing number of cities contain an increasing number of visiting places, it is more difficult for visitors to choose among them. Meanwhile, the system tries to introduce recommendation features to its visitors. The main aim of this paper is to implement a visitor navigation pattern prediction system for Myanmar. The paper uses traveling paths that assist visitors in navigating the visiting places based on past visitors’ behavior. To cluster the paths with similar transition behavior and compress the transition matrix to an optimal size for efficient probability calculation over paths, transition probability matrix compression is used. In this paper, Visitor Navigation Pattern Prediction Using Transition Matrix Compression is developed. It uses web usage mining techniques for recommending to a visitor which next paths are likely to be the most popular paths in Myanmar. By looking at the traveling paths in the organization, the system can identify the popular paths (places).

fisjournal2020mathematics
TRANSITION MATRIX COMPRESSION IN BUILDING WEB RECOMMENDER SYSTEMEi Theint Theint ThuSeptemberTransition probability matrix compression, web usage mining techniquesDownload

As an increasing number of web sites consist of an increasing number of pages, it is more difficult for users to rapidly reach their target pages. Meanwhile, many e-commerce sites try to introduce Web Personalization and Recommendation features on their sites. The main aim of this paper is to implement a web recommendation system for a Free Book website. The paper uses links that assist users in navigating the website based on past users’ behavior. Transition probability matrix compression is used to cluster web pages with similar transition behavior and compress the transition matrix to an optimal size for efficient probability calculation over links. In this paper, Transition Matrix Compression in Building a Web Recommender System is developed. The paper uses web mining techniques for recommending to a user which next links are the most popular links on the Free Book website.
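One simple reading of transition matrix compression is to merge pages whose transition-probability rows are nearly identical. The grouping rule, L1 distance, and threshold below are illustrative assumptions, not the paper's exact procedure:

```python
def compress(matrix, threshold):
    """Group pages whose transition-probability rows differ by less than
    `threshold` in L1 distance, then average each group's rows."""
    groups = []                          # each group: list of original row indices
    for i, row in enumerate(matrix):
        for group in groups:
            rep = matrix[group[0]]       # compare against the group's first member
            if sum(abs(a - b) for a, b in zip(row, rep)) < threshold:
                group.append(i)
                break
        else:
            groups.append([i])
    compressed = [
        [sum(matrix[i][j] for i in group) / len(group)
         for j in range(len(matrix[0]))]
        for group in groups
    ]
    return groups, compressed

# Toy 4-page transition matrix; pages 0 and 1 behave almost identically.
matrix = [
    [0.0, 0.5, 0.5, 0.0],
    [0.0, 0.5, 0.4, 0.1],
    [0.9, 0.0, 0.0, 0.1],
    [0.1, 0.1, 0.1, 0.7],
]
groups, compressed = compress(matrix, threshold=0.3)
```

The 4x4 matrix shrinks to three representative rows, so next-link probabilities are computed over fewer states.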

fisjournal2009mathematics
Web page Category Classification Using Decision Tree Classifier and Recommendation of Related LinksPhyu Phyu Thant, Amy AungAugust 18Web Page Classification, TF-IDF, C4.5, Web Content MiningDownload
Abstract: Today, there has been an exponential growth in the number of electronic documents and pages on the web, which calls for accurate automated classifiers based on machine learning methods. The purpose of web data mining is to find useful knowledge or information from web contents, web usage, and hyperlinks. In this paper, a web page category classification system is proposed, with web data mining as its background theory. The system is tested on a collection of hyperlinks in the computer science domain with ten class categories. Web preprocessing is applied for extracting contents. For classification, the system uses TF-IDF feature extraction and a decision tree classifier. The results show that the proposed system produces the category of each classified page according to the predefined class categories, together with its related links.
fisjournalweb-mining
Myanmar Word Sense Disambiguation based on HMMHtwe Htwe Lin, Kyi Kyi LwinWSD, text corpus, HMMDownload
In natural language processing, word sense disambiguation (WSD) is the problem of determining which “sense” (meaning) of a word is activated by the use of the word in a particular context, a process which appears to be largely unconscious in people. Nowadays, Word Sense Disambiguation (WSD) is an important technique for many natural language processing applications such as information retrieval and machine translation. Among them, the WSD technique is used for machine translation to find the correct sense of a word in a specific context. In machine translation, the input sentences in the source language are disambiguated in order to translate correctly in the target language which is Myanmar language that has many ambiguous words. Therefore, Hidden Markov Model based WSD method is used to resolve the ambiguity of words in Myanmar language. Myanmar-English bilingual corpus is used as the training data. This system can solve the semantic ambiguous problems that usually happen in Myanmar-to-English translation.
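HMM-based disambiguation typically recovers the most probable hidden sense sequence with the Viterbi algorithm. The sense inventory and probabilities below are invented for illustration, not taken from the bilingual corpus:

```python
def viterbi(observations, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path (Viterbi) for a sequence of observations."""
    # Probability and best path ending in each state after the first observation.
    best = {s: (start_p[s] * emit_p[s].get(observations[0], 0.0), [s]) for s in states}
    for obs in observations[1:]:
        layer = {}
        for s in states:
            prob, prev = max(
                (best[r][0] * trans_p[r][s] * emit_p[s].get(obs, 0.0), r)
                for r in states
            )
            layer[s] = (prob, best[prev][1] + [s])
        best = layer
    return max(best.values())[1]

# Hypothetical two-sense model for an ambiguous word ("river" vs "money" reading).
states = ["river", "money"]
start_p = {"river": 0.5, "money": 0.5}
trans_p = {"river": {"river": 0.8, "money": 0.2},
           "money": {"river": 0.2, "money": 0.8}}
emit_p = {"river": {"water": 0.6, "bank": 0.3, "loan": 0.1},
          "money": {"water": 0.1, "bank": 0.4, "loan": 0.5}}

path = viterbi(["water", "bank"], states, start_p, trans_p, emit_p)
```

Here the "water" context pulls the ambiguous "bank" toward the river sense; a "loan" context would pull it the other way.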
fisjournal2020natural-language-processing
An Effective Approach of Semantic Analysis to Natural Language Translation System (English-Myanmar Language)Tin Htar Nwe, Phyo Phyo Wai14th MarchWSDDownload
English is one of the most widely spoken languages in the world these days, and most commercial websites are also designed in English. So, we need a language translation system to understand websites and information from the internet. This paper is one part of an English-Myanmar Machine Translation system. In our system, tagged English text is accepted as input. Firstly, these tags are tokenized; each phrase is tokenized, and the structure is basically a list of tokens. The linguistic clues are normalized, then the dictionary lookup is performed and the guessing process is applied. Since the files are sorted alphabetically, the required information can be found quickly with a binary search. The dictionary-based translation technique produces one or more translation terms in the target language for each term in the source language. We propose a method called word sense disambiguation to solve the ambiguity of words. By using this technique and a bilingual lexicon, the system may retrieve correctly translated words.
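The binary search over alphabetically sorted dictionary files mentioned above can be sketched with Python's bisect; the entries are stand-ins, not the system's lexicon:

```python
from bisect import bisect_left

def lookup(entries, headword):
    """Binary-search a sorted list of (headword, translation) pairs."""
    keys = [h for h, _ in entries]
    i = bisect_left(keys, headword)      # leftmost position where headword fits
    if i < len(entries) and entries[i][0] == headword:
        return entries[i][1]
    return None                          # not found: fall through to guessing

# Tiny alphabetically sorted dictionary file, stand-in entries only.
entries = [("cat", "felis"), ("dog", "canis"), ("fish", "piscis")]
```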
fisjournal2019natural-language-processing
Image Segmentation by using Global and Local Thresholding AlgorithmsHla Hla Myint, Phyo Phyo Wai, Moe Moe ZawJulyImage segmentation, ThresholdingDownload
Image segmentation is one of the most difficult and challenging tasks in image processing. Several general-purpose algorithms and techniques have been developed for medical applications. Image processing covers the analysis of images and obtaining the desired segmentation results, and many researchers have used various techniques for reviewing images. The goal of image segmentation is to partition an image into more meaningful regions that are easier to analyze for their various features. Segmentation techniques involve detection, recognition, and measurement of features, and segmentation algorithms are based on color or gray-value images. Among all segmentation methods, the fundamental approach, based on intensity levels, is thresholding, one of the most widely used techniques. Thresholding is the simplest approach for separating an object from the background and is widely used for medical image processing; it creates a binary image from a gray-scale image. Thresholding and edge detection are important techniques in image processing. Thresholding techniques can be classified into two kinds: global thresholding and local thresholding. The local thresholding method divides the original image into small subimages and determines a threshold value for each subimage, while the global thresholding method determines a single threshold value for the whole image; the threshold values depend on the spatial coordinates. In this paper, we analyze efficient segmentation for three different thresholding methods: Otsu's, Feng's, and Sauvola's. The three thresholding algorithms have been simulated in MATLAB.
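Of the three methods, Otsu's global threshold is easy to sketch: it picks the gray level that maximizes between-class variance of the histogram. The pixel values below are a toy bimodal sample, not MATLAB output or real image data:

```python
def otsu_threshold(pixels, levels=256):
    """Threshold maximizing between-class variance of a grayscale histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * hist[i] for i in range(levels))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]                  # background: pixels <= t
        if w_bg == 0:
            continue
        w_fg = total - w_bg              # foreground: pixels > t
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy image: dark blob around 20, bright background around 200.
pixels = [20, 22, 25, 21, 19, 200, 198, 205, 199, 202]
t = otsu_threshold(pixels)
```

Local methods such as Sauvola's apply the same idea per subimage, with the threshold additionally depending on local mean and standard deviation.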
fisjournal2019image-processing
Supervised Word Sense Disambiguation for Myanmar using Joint EntropyPhyo Phyo Wai, Tin Htar Nwe-WSD, text corpusDownload
Word ambiguity is a problem which has been a great challenge for Natural Language Processing (NLP). Many words have several meanings or senses, and selecting the most appropriate sense for an ambiguous word in a sentence plays an important role in a machine translation system. Many approaches can be employed to resolve word ambiguity with a reasonable degree of accuracy. This paper presents a Myanmar Word Sense Disambiguation algorithm based on joint entropy that attempts to disambiguate Nouns, Verbs, Adjectives and Adverbs in Myanmar to English translation. The system uses a text corpus as training data. The presented joint entropy method identifies the correct sense of a word in context and is very useful for disambiguation. The Myanmar WSD (MWSD) system is applied in a Myanmar to English translation system.
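Joint entropy itself is straightforward to estimate from sense/context co-occurrence counts; the tiny sample below is invented, not corpus data:

```python
from collections import Counter
from math import log2

def joint_entropy(xs, ys):
    """H(X, Y) = -sum over observed pairs of p(x, y) * log2 p(x, y)."""
    pairs = Counter(zip(xs, ys))
    n = len(xs)
    return -sum((c / n) * log2(c / n) for c in pairs.values())

# Toy data: an ambiguous word's sense paired with a context word.
senses = ["bank_river", "bank_river", "bank_money", "bank_money"]
context = ["water", "water", "loan", "loan"]
```

When sense and context are perfectly correlated, as here, the joint entropy equals the entropy of either variable alone (1 bit); for independent variables it would be their sum (2 bits).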
fisconference-paper2013natural-language-processing
Evaluating The Verb Sense Disambiguation Using Naïve Bayesian Classifiers in Machine TranslationPhyo Phyo Wai-Word Sense Disambiguation (WSD), Naïve Bayesian ClassifierDownload
Natural Language Processing (NLP) is a field of computer science and linguistics concerned with the interactions between computers and human (natural) languages. Ambiguity is one of the problems that has been a great challenge for computational linguistics. This paper concentrates on the problem of target word selection in machine translation, for which the approach is directly applicable; however, this system can only solve the ambiguities of verbs in Myanmar-English translations. This paper presents a corpus-based approach to word sense disambiguation that builds an ensemble of Naive Bayesian classifiers, each of which is based on lexical features that represent co-occurring words in varying-sized windows of context. In this paper, we propose a framework to solve ambiguous verb problems. Our system will help to improve the accuracy of the Myanmar to English translation.
fisconference-paper2010natural-language-processing
Choosing the Proper Target Word by using Statistical MethodPhyo Phyo Wai-Decision Tree, Word Sense Disambiguation(WSD)Download
Natural Language Processing (NLP) based technologies are now becoming important, and future intelligent systems will use more of these techniques as technology improves explosively. Natural language translation is one of the most important research areas in the world of Artificial Intelligence. In any application where a computer has to process natural language, ambiguity is a problem: many words have several meanings or senses, so there is ambiguity about how such words are to be interpreted. Meanings of words play an important role in a translation system. The goal of this paper is to help natural language text translation and semantic analysis for a natural language translation system based on the Myanmar-English language pair. We propose an algorithm called Myanmar WSD to solve the ambiguity of words. The WSD algorithm produces the nearest English meaning of the Myanmar input words. This MWSD system can solve all of the lexical semantic ambiguity problems (Noun, Verb, Adjective and Adverb) faced in Myanmar to English translation. By using the Myanmar WSD algorithm, the Myanmar-English translation system may choose the best translation term for most words.
fisconference-paper2011natural-language-processing
Proposed Myanmar Word Tokenizer based on LIPIDIPIKAR TreatiseThein Than Thwin, Phyo Phyo WaiApril(16-18)token, Unicode, NLP, Condensed form, Elaborated formDownload
Natural Language Processing (NLP) based technologies are now becoming important, and future intelligent systems will use more of these techniques as the technology improves explosively. But Asia is a dense area in the NLP field because of its linguistic diversity, and many Asian languages are inadequately supported on computers. Myanmar language is an analytic language, but it includes special characters like the killer, medials, etc. In English or European languages, all syllables are formed by combining alphabets that represent only consonants and vowels, but Myanmar uses compound syllables that are more difficult to analyze, so we face difficulties in word sorting. In our proposed system, the condensed form of Myanmar ordinary script is transformed into analyzable elaborated script based on the LIPIDIPIKAR treatise written by Yaw Min Gyi U Pho Hlaing. These elaborated words can be easily sorted by using this treatise. In our proposed system, the complexity of sorting Myanmar condensed words is compared with the complexity of sorting elaborated words.
fisconference-paper2010natural-language-processing
Myanmar Word Sense Disambiguation using Mutual InformationPhyo Phyo Wai, Tin Htar Nwe-Word Sense Disambiguation (WSD), NLP and Mutual InformationDownload
Natural language processing (NLP) is one of the most important research areas in the world of artificial intelligence. In any application where a computer has to process natural language, ambiguity is a problem: many words have several meanings or senses, and the meanings of words play an important role in a translation system. The ambiguity problem has been a great challenge for NLP. In this paper, we propose a Myanmar Word Sense Disambiguation (WSD) algorithm based on mutual information. This MWSD system can solve the lexical semantic ambiguity problems (Noun, Verb, Adjective and Adverb) faced in Myanmar to English translation, and it can also provide the nearest English meaning of the Myanmar input words.
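Mutual information between a word's sense and a context feature can be estimated from co-occurrence counts; the toy data below is invented, not from the translation system:

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """I(X;Y) = sum over pairs of p(x,y) * log2( p(x,y) / (p(x) * p(y)) )."""
    n = len(xs)
    px = Counter(xs)
    py = Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Toy sense/context observations: each context word fully determines the sense.
senses = ["bank_river", "bank_river", "bank_money", "bank_money"]
context = ["water", "water", "loan", "loan"]
mi = mutual_information(senses, context)
```

A high score means the context word is strongly informative about the sense (here the full 1 bit), while independent variables score 0, which is the basis for picking disambiguating context features.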
fisconference-paper2012natural-language-processing
"Word Sense Disambiguation for Myanmar to English Translation"Phyo Phyo Wai, Khin Thidar Linn-Decision Tree, Word Sense Disambiguation(WSD)Download
Machine translation is one of the important research areas in the world of Natural Language Processing. In this paper, we first present some challenges in machine translation. A translation system needs to resolve the source and target languages’ ambiguities and also needs to properly map the concepts and grammar rules of both languages. This paper focuses on semantic disambiguation; therefore, it presents word sense disambiguation with decision tree techniques (ID3, J48, Naive Bayesian tree and Random Forest). The system solves the ambiguities based on the concept of a word and, moreover, classifies the concept of each word.
fisconference-paper2011natural-language-processing
Myanmar to English Verb Translation Disambiguation approach based on Naïve Bayesian ClassifierPhyo Phyo Wai, Khin Thandar LinnMarch(11-13)Word Sense Disambiguation (WSD), NLP and Naïve Bayesian ClassifierDownload
Natural Language Processing (NLP) is a field of computer science and linguistics concerned with the interactions between computers and human (natural) languages. Ambiguity is one of the problems that has been a great challenge for computational linguists. This paper concentrates on the problem of target word selection in Myanmar to English machine translation, for which the approach is directly applicable; however, this system can only solve the ambiguities of verbs in Myanmar-English translations. This paper presents a corpus-based approach to word sense disambiguation that builds an ensemble of Naïve Bayesian classifiers, each of which is based on lexical features. Moreover, nouns are classified in detail in our system. In this paper, we propose a framework to solve ambiguous verb problems. Our system will help to improve the accuracy of Myanmar to English translation.
fisconference-paper2011natural-language-processing
APPLICATION OF LINEAR PROGRAMMING FOR PROFIT MAXIMIZATION OF REGIONAL PLANNINGZin Mar Oo, Hla Hla Myint, Junelinear programming model, Profit optimization, simplex method, Graphical method,Download

This paper determines the optimal solution using a linear programming model and analyses it for profit optimization of a farm located at Yinmarbin Township in Sagaing Division during the rainy paddy season. Linear programming is a method to achieve the best outcome in a mathematical model whose requirements are represented by linear relationships. The profits varied considerably depending on subjective factors. As the theoretical perspective for the present study, various applications of the linear programming model are reviewed. The collected data are analysed to support decision making, and the problem to be solved is defined. Using decision variables for production, sales and profit over a period, the objective function is constructed, and a linear programming model is developed for profit optimization of the farm while keeping production cost to a minimum. We calculate the optimal profit solution using both the graphical method and the simplex method.
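The graphical method evaluates the objective at the feasible vertices of a two-variable LP. A minimal sketch of that idea, enumerating constraint-boundary intersections; the farm-planning instance below is hypothetical, not the paper's actual data:

```python
from itertools import combinations

def solve_lp_2d(c, constraints):
    """Maximize c.x for a 2-variable LP by enumerating feasible vertices
    (the idea behind the graphical method). Constraints are (a1, a2, b)
    meaning a1*x + a2*y <= b; x, y >= 0 is added automatically."""
    cons = list(constraints) + [(-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]
    best_point, best_val = None, float("-inf")
    for (a1, a2, b1), (a3, a4, b2) in combinations(cons, 2):
        det = a1 * a4 - a2 * a3
        if abs(det) < 1e-12:
            continue  # parallel boundary lines: no vertex
        x = (b1 * a4 - a2 * b2) / det
        y = (a1 * b2 - b1 * a3) / det
        # Keep only intersections satisfying every constraint (feasible vertices).
        if all(p * x + q * y <= r + 1e-9 for p, q, r in cons):
            val = c[0] * x + c[1] * y
            if val > best_val:
                best_point, best_val = (x, y), val
    return best_point, best_val

# Hypothetical instance: maximize profit 3x + 5y subject to
# x <= 4, 2y <= 12, 3x + 2y <= 18 (invented numbers for illustration).
point, profit = solve_lp_2d((3.0, 5.0), [(1, 0, 4), (0, 2, 12), (3, 2, 18)])
```

For larger problems the simplex method walks between such vertices instead of enumerating them all.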

itsmbook national-paper2020computing
Brain Tumor Segmentation on Multi-Modal MR image using Active Contour MethodHla Hla Myint, Soe Lin Aung, Novemberactive contour, HGG, LGG, Dice index, Jaccard indexDownload

Brain tumor segmentation is one of the most important tasks in medical image processing; it separates the different tumor tissues, such as active cells, necrotic core and edema, from the normal brain tissues: White Matter (WM), Gray Matter (GM) and Cerebrospinal Fluid (CSF). Segmentation of the internal structure of the brain is a fundamental task, and precise segmentation of brain tumors has great impact on diagnosis, monitoring and treatment planning for patients. Various segmentation techniques are widely used for brain Magnetic Resonance Imaging (MRI). In this paper, we propose an efficient active contour method for 3D volumetric MRI brain tumor segmentation, in which preprocessing resizes the images before the magnetic resonance images are accurately segmented. Active contour models, also called snakes, are widely used in many applications. The 3D volumetric MR images consist of the T1, T1ce, T2 and Fluid Attenuated Inversion Recovery (FLAIR) modalities.

itsmbook national-paper2020image-processing
An Efficient Tumor Segmentation of MRI Brain Image using Thresholding and Morphology OperationHla Hla Myin, Dr. Soe Lin AungFebruaryImage segmentation, Thresholding, Morphology operation, PreprocessingDownload

In medical image processing, segmentation of the internal structure of the brain is a fundamental task. The precise segmentation of brain tumors has great impact on diagnosis, monitoring and treatment planning for patients. Various segmentation techniques are widely used for brain Magnetic Resonance Imaging (MRI). This paper presents an efficient method of brain tumor segmentation using morphological operations, pixel-extraction threshold-based segmentation and a Gaussian high-pass filter. Thresholding is the simplest approach to separating an object from the background and is an efficient technique in medical image segmentation, while morphology operations can be used to extract the brain tumor region. The system converts the RGB image to a grayscale image and removes noise with a Gaussian high-pass filter, which produces a sharpened image and improves the contrast between bright and dark pixels. This method will help physicians identify a brain tumor before performing surgery.

itsminternaional-conference-paper2020image-processing
Comparison of Thresholding Method in Image SegmentationZin Mar Oo, Hla Hla Myint-image segmentation, ThresholdingDownload

Image segmentation is one of the most challenging tasks in image processing, and many algorithms and techniques have been developed for medical applications. Segmentation techniques involve detection, recognition and measurement of features and can be classified as either contextual or non-contextual. Among all the segmentation methods, the fundamental approach, based on intensity levels, is called threshold-based segmentation. Thresholding is an essential method for image processing and pattern recognition: a thresholding technique determines a threshold value T and converts a grayscale image into a binary image. It is the simplest approach to separating an object from the background and an efficient technique in medical image segmentation. In this paper, five different thresholding methods are analyzed and their segmented images compared; the methods have been simulated using MATLAB. The five thresholding algorithms analyzed and compared are Sauvola, Niblack, Otsu's, iterative and histogram thresholding.
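Of the five methods compared, Otsu's is the classic global approach: it picks the threshold T that maximizes the between-class variance of the resulting background and foreground. A minimal sketch on an invented bimodal pixel list:

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: choose the threshold maximizing between-class variance."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total_sum = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]          # background pixel count
        if w_bg == 0:
            continue
        w_fg = n - w_bg          # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy "image": a dark cluster around 10 and a bright cluster around 200.
pixels = [8, 10, 12, 9, 11] * 4 + [198, 200, 202, 199, 201] * 4
t = otsu_threshold(pixels)
```

Sauvola and Niblack differ in computing a local threshold per neighborhood rather than one global T.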

itsminternaional-conference-paper2019image-processing
Image segmentation by using global and local thresholding algorithmHla Hla Myint, Phyo Phyo Wai, Dr. Moe Moe Zaw.JulyImage segmentation, ThresholdingDownload
Image segmentation is one of the most difficult and challenging tasks in image processing. Several general-purpose algorithms and techniques have been developed for medical applications, and many researchers have used various techniques to analyze images and obtain the desired segmentation results. The goal of image segmentation is to partition an image into more meaningful regions that are easier to analyze for the various features of that image. Segmentation techniques involve detection, recognition and measurement of features, and segmentation algorithms operate on color and gray-value images. Among all the segmentation methods, the fundamental approach, based on intensity levels, is called threshold-based segmentation.
itsmonline-journal2019image-processing
Effective performance of hidden markov model for epidemiologic surveillanceHla Hla MyintMarchHidden Markov model, Influenza, EpidemicsDownload

The public health surveillance system is one of the most important tools for detecting seasonal influenza epidemics. We introduce different kinds of surveillance data for early detection of a disease outbreak. The hidden Markov model has been recognized as an appropriate method to model disease surveillance data. In this work, we propose a hidden Markov model (HMM) to characterize epidemic and non-epidemic dynamics in a time series of influenza-like illness (ILI) incidence rates, and present a method of influenza detection in an epidemic. ILI is defined as an illness marked by the presence of a fever (100.5 °F or greater) and either a cough or sore throat within 72 hours of ILI symptom onset, or physician-diagnosed ILI. The ILI incidence rate is based on surveillance data and activity state. HMMs have been used in many areas, including automatic speech recognition, electrocardiographic signal analysis, the modelling of neuron firing, and meteorology. A two-state HMM is applied to the incidence time series, assuming that the observations are generated from a mixture of Gaussian distributions. Bayesian inference is used to obtain the probability of an epidemic state and a non-epidemic state for every week. The methodology is applied to various influenza datasets.
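The weekly state probabilities of such a two-state HMM can be computed with the forward algorithm. The transition matrix, Gaussian emission parameters and weekly rates below are illustrative assumptions, not values fitted to real ILI data:

```python
import math

def gauss(mu, sigma):
    """Gaussian density with mean mu and standard deviation sigma."""
    return lambda x: (math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))
                      / (sigma * math.sqrt(2 * math.pi)))

def forward_posterior(obs, trans, emit, init):
    """Forward algorithm for a two-state HMM (state 0 = non-epidemic,
    state 1 = epidemic): returns the filtered epidemic probability per week."""
    states = range(len(init))
    posteriors, alpha = [], None
    for t, o in enumerate(obs):
        if t == 0:
            alpha = [init[s] * emit[s](o) for s in states]
        else:
            alpha = [sum(alpha[r] * trans[r][s] for r in states) * emit[s](o)
                     for s in states]
        z = sum(alpha)              # normalize each step to avoid underflow
        alpha = [a / z for a in alpha]
        posteriors.append(alpha[1])
    return posteriors

# Illustrative parameters: states tend to persist, and the epidemic state
# emits higher incidence rates on average.
trans = [[0.9, 0.1], [0.2, 0.8]]
emit = [gauss(2.0, 1.0), gauss(8.0, 2.0)]
weekly_rates = [1.5, 2.0, 2.5, 7.0, 9.0, 8.5, 3.0, 2.0]
post = forward_posterior(weekly_rates, trans, emit, [0.95, 0.05])
```

The posterior rises sharply when the incidence rate jumps and falls back as rates return to baseline.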

itsmonline-journal2019image-processing
Making Medical Decision for IFR Rate using Fuzzy LogicTin Yadanar Zaw, Hla Hla MyintFebruaryfuzzy logic,IFR RateDownload

This paper aims to support precise medical decision making for the IFR (Intravenous Fluid Resuscitation) rate in the ICU (Intensive Care Unit) using fuzzy logic. Intensive care medicine frequently involves making rapid decisions. To make medical decisions, physicians often rely on conventional wisdom and personal experience to arrive at subjective assessments and judgments. The methods of fuzzy logic are suited to the non-explicit nature of clinical decision making, so this system implements fuzzy logic to administer the rate of Intravenous Fluid Resuscitation (IFR). The paper introduces a simple and effective methodology for making a medical decision based on fuzzy logic; finally, the system produces the adjusted IFR rate.
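A fuzzy controller of this kind fuzzifies the input, fires rules, and defuzzifies to a crisp rate. A single-input sketch with triangular membership functions and weighted-average defuzzification; the fuzzy sets, rule consequents and ranges are invented for illustration and are not the paper's clinical parameters:

```python
def tri(a, b, c):
    """Triangular membership function peaking at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Hypothetical input: a dehydration score (0-10) mapped to an IFR rate (mL/h).
inputs = {"mild": tri(-1, 0, 5), "moderate": tri(2, 5, 8), "severe": tri(5, 10, 11)}
rate_for = {"mild": 50.0, "moderate": 150.0, "severe": 300.0}  # rule consequents

def ifr_rate(score):
    """Fire all rules, then defuzzify by weighted average of consequents."""
    weights = {label: mu(score) for label, mu in inputs.items()}
    total = sum(weights.values())
    if total == 0:
        return 0.0
    return sum(weights[l] * rate_for[l] for l in weights) / total
```

Overlapping membership functions make the output change smoothly between rules instead of jumping at hard cutoffs.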

itsmconference2012artificial-intelligence
Secure Data Transmission Using Digital Envelope Steganography SystemSaw Min TunDecemberCryptography, RC5 Algorithm, RSA Algorithm, Steganography, LSBDownload

Nowadays, digital communication has become an essential part of infrastructure, and many applications are Internet-based. Consequently, the security of information has become a fundamental issue. In this system, steganography and cryptography methods are combined to make message transmission on the public network more secure. The message transmission has two parts: the sender site and the receiver site. At the sender site, the message is first encrypted using the RC5 algorithm with an RC5 secret key; the RC5 key is then encrypted using the RSA algorithm with the receiver's public key. Finally, the encrypted message and the encrypted RC5 secret key are embedded into a cover image using the LSB steganographic algorithm. At the receiver site, the encrypted message and encrypted RC5 secret key are first extracted from the stego image. The RC5 secret key is then recovered using RSA decryption with the receiver's private key, and finally the original message is recovered using RC5 decryption with the RC5 secret key.
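The final embedding step can be sketched as classic LSB substitution: each payload bit overwrites the lowest bit of one cover value. This is a minimal sketch of the LSB stage only; in the paper the payload would already be RC5/RSA ciphertext, and the cover values here are invented:

```python
def embed_lsb(cover, message):
    """Hide message bytes in the least significant bits of cover byte values."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for message")
    stego = list(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit    # overwrite only the lowest bit
    return stego

def extract_lsb(stego, n_bytes):
    """Recover n_bytes of hidden data from the stego values."""
    out = []
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (stego[b * 8 + i] & 1)
        out.append(byte)
    return bytes(out)

# Demo on invented cover values: each stego value differs by at most 1,
# which is why LSB embedding is visually imperceptible.
stego = embed_lsb(list(range(100, 200)), b"hi")
recovered = extract_lsb(stego, 2)
```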

fcstbook2015cryptography-and-steganography
"Microcontroller Based Humidity and Temperature Monitoring System""Dr. Thuzar Aung, Nyein Nyein Hlaing, Tint Tint Oo"-Arduino, microcontroller, monitoring, IDE, codesDownload

Arduino is an open-source hardware and software platform. The Arduino board is used to create interactive objects with its Integrated Development Environment (IDE). The aim of the paper is to build an Arduino-based embedded device for monitoring environmental variables such as humidity and temperature. Performance at different temperatures and humidities is also studied in the paper. The device was built using the Arduino microcontroller board (Uno) and sensors that can sense the temperature and amount of moisture inside and outside a building. In the implementation, information is displayed both on a serial monitor and on a liquid crystal display.

fcstjournal2019micro-electronic-and-control-system
Implementing Defense System of DNS Hijacking and Cache Poisoning Attacks in the Domain Name System"Tint Tint Oo, Dr. Ingyin Oo"December-Download

The Domain Name System (DNS) is one of the key components of the Internet and, for that matter, of most IP networks. Despite its importance, not many people have even heard of DNS; few know what it is and how to keep it secure. DNS translates server names, which humans are more likely to remember, into IP addresses, which computers use to navigate the Internet. In most DNS transactions, protection of information is needed. Because of its vital role, DNS is involved in manifold Internet attacks, both against the system itself and against other Internet hosts. DNS and its vulnerabilities, various attacks on the DNS system, and prevention methods are described in this paper.

fcstconference2010networking-and-security
"Analysis of Open-Loop Stepper Motor Control System on Matlab and VHDL"Aye Aye MonMayMotor drive system, MATLAB, Verilog HDLDownload

In this paper, a stepper motor control system is designed and implemented in MATLAB/Simulink. First, the stepper motor is mathematically modelled in MATLAB/Simulink using its subsystems; feedback control is also considered in the system. The system is then partially optimized and simulated in ModelSim. In comparison with Verilog HDL, using MATLAB/Simulink to construct most of the functions of the motor drive system is more convenient and efficient, and the resulting code can be optimized directly by editing the Verilog HDL code. Simulations and experiments demonstrate the performance.

nsjournal2020automatic-control-system
"Fuzzy Logic PID Control of Automatic Voltage Regulator System"Aye Aye MonFebruaryFuzzy logic system, PID Controller, control system, controlled A V RDownload

The application of a simple microcontroller to implement a three-input, single-output fuzzy logic controller with built-in Proportional-Integral-Derivative (PID) response control has been tested for an automatic voltage regulator. The fuzzifiers are based on a fixed range of the output voltage variables. The control output drives the wiper motor of the autotransformer to adjust the voltage, using fuzzy logic principles, so that the voltage is stabilized. In this report, the author demonstrates how fuzzy logic can provide elegant and efficient solutions in the design of multivariable control based on experimental results rather than on mathematical models.

nsconference-paper2009automatic-control-system
"Ritz Method and Ɵ- Family Method for Time Dependent Problems"Myint Theingi--Download

In this paper we deal with a combination of two methods to determine approximate solutions of time-dependent problems. In the finite element solution of time-dependent problems we assume that the nodal variables are functions of time. In this way, we can solve time-dependent problems whose solutions can be represented as a product of a time function and a spatial function.

fcjournal2014d-e
A Study on TopologyWin win Than, Myint Theingi, --Download

In this research paper, we study continuity, open mappings, closed mappings and homeomorphisms on topological spaces. We also show that homeomorphism is an equivalence relation on the set of all topological spaces.

fcjournal2012 2013algebra
Study On Soil Classification Using Linear Regression AnalysisChaw Kalyar Than, Yawai TintDecemberSoil Dataset, Soil Classification, SPSS Software, Linear Regression AnalysisDownload

Statistical regression analysis is a powerful and reliable method to determine the impact of one or several independent variables on a dependent variable. It is the most widely used of all statistical methods and has broad applicability to numerous practical problems. Classification of soil type is very important in agricultural and engineering fields around the world, and soil type is predicted using classification techniques such as regression analysis, K-means clustering and so on. In this paper, we attempt to derive a predicting equation for soil type using SPSS software based on linear regression analysis. The data are collected from soil profiles at each site. In this analysis, the independent variables are PI (Plasticity Index), Fines Content and Liquid Limit. The resulting equation makes it easy to determine the soil type at each layer of a soil profile. Moreover, this paper aims to demonstrate the proper use of applied mathematics for predicting the classification of soil type.
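For a single predictor, the regression line SPSS reports is ordinary least squares, which can be computed by hand. The plasticity-index values and soil-class scores below are invented for illustration, not the paper's measurements:

```python
def linear_fit(xs, ys):
    """Ordinary least squares for y = a + b*x (single-predictor regression)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b = sxy / sxx                 # slope
    a = mean_y - b * mean_x       # intercept
    return a, b

# Hypothetical soil data: plasticity index (PI) vs. a numeric soil-class score.
pi = [5.0, 10.0, 15.0, 20.0, 25.0]
score = [1.1, 1.9, 3.2, 3.9, 5.0]
a, b = linear_fit(pi, score)
predict = lambda x: a + b * x     # the fitted predicting equation
```

The same normal-equation idea extends to the three predictors (PI, Fines Content, Liquid Limit) via multiple regression.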

fcstbook2019applied-mathematic
"Image Inpainting System for Cultural Museum Using Fast Marching Method"Hsint Hsint Htay, Yawai TintDecemberInpainting, Fast Marching MethodDownload

Digital image inpainting provides a means of reconstructing small damaged portions of an image. Inpainting, the technique of modifying an image in an undetectable form, is as ancient as art itself. Image restoration, or image reconstruction, is a technique to improve the quality of an image. Hence, we develop an image inpainting system using the Fast Marching Method (FMM) to restore damaged photos toward their original appearance. FMM is very simple to implement, fast, and produces nearly identical results. Owing to the advantages of FMM, this system can remove and replace selected objects. In addition, it can help museum restoration artists restore photos and retouch images in a way that is non-detectable by an observer.

fcstbook2019digital-image-processing
Study On Mathematical Models With Discharge Flow Rate Based On Rainfall Fluctuation In Kalewa StateHsu Yi Mon Win, Yawai TintDecemberRainfall, Discharge Flow, Exponential Regression Equation, SPSS SoftwareDownload

Discharge flow and rainfall fluctuation have very important effects on climate conditions all over the world, and climate changes damage people's lives, their properties and other infrastructure. Therefore, we study the prediction of discharge flow from rainfall data using exponential regression analysis in SPSS statistics software. The discharge flow and rainfall data are collected from the Meteorological Department. The dependent variable is defined as the annual mean log discharge flow, and the independent variable is the annual mean rainfall. The resulting model is intended to be a beneficial tool for local government authorities and other sectors to reduce the death toll.

fcstbook2019applied-mathematic
Analysis of Traffic Injury Severity by Using Two- Stage Data Mining AlgorithmsYawai Tint, Yoshiki MikamiOctoberInjury Severity Analysis, Neural Network, Decision TreeDownload

Transportation continues to be an integral part of modern life, and the importance of road traffic safety cannot be overstated. In most existing studies of risk factors, the data are categorized according to accident severity level. While some risk factors, such as drug use and drinking, are widely known to affect severity, accurate modeling of their influence is still an open research topic. This paper proposes a two-stage mining framework to explore the key risk conditions that may have contributed to crashes of powered four-wheelers and two-wheelers in European countries. In the first stage, a neural network identifies the potential risk conditions that best explain crash severity. In the second stage, the top-ranking risk factors from the neural network are used as feature variables of a decision tree classifier. The analysis identifies the most important factors affecting traffic injury severity.

fcstonline-journal2015data-science
Active Steganalysis of Mp3StegoYawai Tint, Khin Than MyaFebruarySteganalysis, MP3, Principal Component Analysis (PCA), Independent Component Analysis (ICA), Part2_3_length, Global_gainDownload

The goal of steganalysis is to detect and/or estimate potentially hidden information from observed data. Steganalysis not only plays a significant role in information countermeasures but can also prevent the illicit use of steganography. This paper develops an active steganalysis system for detecting hidden messages by estimating the frames containing a hidden message, and the message length, in compressed audio files produced by MP3Stego. Principal Component Analysis (PCA) is applied not only to estimate uncorrelated components but also to help detect whether the received MP3 streams are steganographic or original; in this steganalysis system, hidden messages can be detected in the PCA (whitening) stage. Independent Component Analysis (ICA) then attempts to separate the steganographic signals from the original signals. By analyzing the nature of the MP3 signal (frame header, side information), frames with a secret message can be detected. Empirical tests reach 97% accuracy, leading to the conclusion that the detection accuracy of the proposed steganalysis system is suitable for MP3Stego-embedded content. Experiments show that the proposed method can quite effectively detect the locations of embedded messages using the nature of the MP3 signal.

fcstbook2014digital-signal-processing information-security
Breaking the Steganographic Utility Mp3StegoYawai Tint, Khin Than MyaFebruaryMP3Stego, Perceptual Evaluation of Speech Quality (PESQ), Bit Error Rate (BER)Download

The majority of steganographic utilities for concealing confidential communication suffer from fundamental weaknesses. To develop more secure steganographic algorithms, attacks are essential for assessing security. In this paper, techniques are presented which aim at breaking steganography in digital audio data. In the proposed system, white noise is added to the audio signal and the lowest bit of part2_3_length is replaced by another bit. The proposed scheme analyzes this algorithm using MP3Stego.

fcstbook2013digital-signal-processing information-security
Steganalysis for MP3 Stego using Independent component analysisYawai Tint, Khin Than MyaDecembersteganalysis, MP3, principal component analysis (PCA), independent component analysis (ICA)Download

Steganalysis means detecting messages hidden by steganography. Steganalysis not only supports information security but can also avert the illegal use of steganography. MP3 audio is commonly used for secret message embedding, as it arouses less suspicion than other audio formats. This system proposes a scheme for detecting secret messages embedded by MP3Stego. Principal component analysis (PCA) and independent component analysis (ICA) are used in this steganalysis system: PCA estimates uncorrelated components and detects whether the input MP3 files are steganographic or original, while ICA is used to separate the set of steganographic signals from the original signals. Experimental outcomes validate that the proposed system can detect MP3Stego effectively.

fcstbook2013digital-signal-processing information-security
Audio Steganalysis Based on Independent Component AnalysisYawai Tint, Khin Than MyaFebruaryMP3Stego, Principal Component Analysis (PCA), Independent Component Analysis (ICA)Download

Steganalysis is the art and science of detecting messages hidden using steganography. This paper proposes steganalysis of audio signals using Independent Component Analysis. A detection method is used to detect hidden messages in compressed audio files produced by MP3Stego. Steganography can be successfully detected during the Principal Component Analysis (PCA) whitening stage. A nonlinear robust batch ICA algorithm, which is able to efficiently extract various temporally correlated sources from their observed linear mixture, is used for blind steganography extraction.

fcstbook2012digital-signal-processing information-security
Source Separation of Steganography Mixed Audio SignalYawai Tint, Khin Than MyaFebruarysteganography, Independent Component AnalysisDownload

Research on blind source separation is a focus of the signal processing community and has developed rapidly in recent years. This paper proposes an enhanced audio steganalysis technique which adopts Independent Component Analysis (ICA) for the steganography detection and extraction process. Steganography can be successfully detected during the Principal Component Analysis (PCA) whitening stage. A nonlinear ICA algorithm, which is able to efficiently extract various temporally correlated sources from their observed linear mixtures, is used for blind steganography extraction.

fcstbook2011digital-signal-processing information-security
A Minimum Redundancy Maximum Relevance- based Approach for Multivariate Causality AnalysisYawai Tint, Yoshiki MikamiOctoberCausality, Dummy Variable, MRMR, Multivariate AnalysisDownload

Causal analysis, a form of root cause analysis, has been applied to explore causes rather than symptoms, so the methodology is applicable to identifying the direct influences of variables. This study focuses on observational-data-based causal analysis for factor selection, in place of a correlation approach, since correlation does not imply causation. The study analyzes the causal relationship between a set of categorical response variables (binary and multi-category) and a set of explanatory dummy variables by using multivariate joint factor analysis. The paper uses the minimum redundancy maximum relevance (MRMR) algorithm to identify causation, utilizing data obtained from the National Automotive Sampling System's Crashworthiness Data System (NASS-CDS) database.

fcstonline-journal2017data-science
A Minimum Redundancy Maximum Relevance- based Causal Assessment of Injury SeverityYawai Tint, Yoshiki MikamiJulyCausality, Injury Severity, Multivariate Analysis, NASS-CDSDownload

Minimum Redundancy Maximum Relevance (mRMR) is an algorithm which has been applied in various domains for feature selection. This study adopts mRMR in multivariate causal analysis to explain the factors contributing to injury severity: mRMR uses mutual information to maximize the joint dependency with injury severity and identify the most relevant and least redundant set of causal factors. The study examines the relationship between injury severity and 17 causal factors, which represent the integrated aspects of human, vehicle and environment. The data were acquired from the National Automotive Sampling System/Crashworthiness Data System (NASS-CDS) as categorical variables. The results compare the number of factors required to explain injury severity using mRMR and MI in multivariate causal analysis. Accordingly, mRMR can explain about 80% of the injury severity of the male, female and older age groups using a group of seven factors, whereas MI can only explain about 60% of injury severity with a group of seven factors for the same. Hence, mRMR can be recommended as an appropriate multivariate analysis method for complex data in the domain of traffic injury prevention.
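The greedy mRMR selection can be sketched as: at each step, pick the feature whose mutual information with the target (relevance) minus its mean mutual information with already-chosen features (redundancy) is largest. The binary crash codes below are invented for illustration, not NASS-CDS data:

```python
import math
from collections import Counter

def mutual_info(xs, ys):
    """Discrete mutual information I(X;Y) in bits."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def mrmr(features, target, k):
    """Greedy mRMR: maximize relevance minus mean redundancy at each step."""
    chosen, remaining = [], dict(features)
    while remaining and len(chosen) < k:
        def score(name):
            rel = mutual_info(remaining[name], target)
            red = (sum(mutual_info(remaining[name], features[c]) for c in chosen)
                   / len(chosen)) if chosen else 0.0
            return rel - red
        best = max(remaining, key=score)
        chosen.append(best)
        del remaining[best]
    return chosen

# Toy data: f1 is informative, f2 duplicates f1 (redundant), f3 is unrelated.
target = [0, 0, 1, 1, 0, 1, 0, 1]
features = {"f1": [0, 0, 1, 1, 0, 1, 1, 1],
            "f2": [0, 0, 1, 1, 0, 1, 1, 1],
            "f3": [0, 1, 1, 0, 0, 1, 1, 0]}
ranked = mrmr(features, target, 2)
```

A plain MI ranking would pick the redundant duplicate `f2` second; mRMR penalizes it and picks `f3` instead, which is the study's point about needing fewer factors.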

fcstonline-journal2017data-science
Audio Steganalysis Using Features Extraction and ClassificationYawai Tint, Khin Than Mya-steganalysis, steganography, correlation coefficientDownload

Audio steganalysis has attracted increasing attention recently. In this article, we propose a steganalysis method for detecting information-hiding behavior in WAV audio. We extract mel-frequency cepstral coefficients, zero crossing rate, spectral flux and short-time energy features from audio files, and combine these features with those extracted from a modified version generated by randomly modifying the least significant bits. A correlation coefficient method is then used for classification. Experimental results show that the proposed method performs well in steganalysis of audio stegograms produced by Hide4PGP, Stegowav and S-Tools4.
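Two of the features named above, zero crossing rate and short-time energy, are simple per-frame statistics. A minimal sketch on invented frames (an alternating noise-like frame versus one sine period):

```python
import math

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0))
    return crossings / (len(frame) - 1)

def short_time_energy(frame):
    """Mean squared amplitude of the frame."""
    return sum(s * s for s in frame) / len(frame)

# An alternating (noise-like) frame crosses zero at every step,
# while a single low-frequency sine period crosses only once.
noisy = [(-1) ** i * 0.5 for i in range(64)]
tone = [math.sin(2 * math.pi * i / 64) for i in range(64)]
```

In the paper these statistics, computed on the audio and on its re-randomized LSB version, feed the correlation-coefficient classifier.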

fcstbook2012digital-image-processing information-security
Electrocardiogram (ECG) Beat Classification Based on Correlation Coefficient MethodYawai Tint, Tin Mar Kyi-Correlation Coefficient, ECG, ClassificationDownload

fcstbook2008digital-signal-processing
Histogram of Accumulated Changing Gradient Orientation (HACGO) for Saliency Navigated Action Recognition"Hnin Mya Aye Sai Maung Maung Zaw"JuneAction Recognition, HACGO, HOF, HOG, SVMDownload
Action recognition has been an active research area in the computer vision community in recent years. However, it is still a challenging task due to difficulties mainly resulting from background clutter, illumination changes, large intra-class variation and noise. In this paper, we develop an action recognition approach by navigating the focus of attention (the action region) with saliency detection and introducing a feature descriptor, the Histogram of Accumulated Changing Gradient Orientation (HACGO). We first detect saliency in each video frame by computing pattern and color distinctness to localize the action region. Then, we extract appearance and motion features using the proposed HACGO together with the existing HOG and HOF feature descriptors. Finally, a multi-class SVM classifier is applied to recognize different actions. The experiments were conducted on the standard UCF Sports action dataset, and our action recognition approach achieved high recognition accuracy with a new combination of feature descriptors.
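While the full HACGO descriptor accumulates orientation changes over time, its building block, a magnitude-weighted histogram of gradient orientations as in HOG, can be sketched on a toy grayscale patch (pixel values invented; this is not the paper's exact descriptor):

```python
import math

def orientation_histogram(img, bins=8):
    """HOG-style descriptor: histogram of gradient orientations, weighted by
    gradient magnitude, over a small grayscale image (list of rows)."""
    h, w = len(img), len(img[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            angle = math.atan2(gy, gx) % (2 * math.pi)
            hist[int(angle / (2 * math.pi) * bins) % bins] += mag
    total = sum(hist)
    return [v / total for v in hist] if total else hist

# A vertical step edge: all gradients point horizontally, so one bin dominates.
img = [[0, 0, 0, 9, 9, 9] for _ in range(6)]
desc = orientation_histogram(img)
```

Descriptors like this, computed per frame inside the detected action region, are what the SVM classifier consumes.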
fcsconference-paper2017image-processing
Salient Object based Action Recognition using Histogram of Changing Edge Orientation (HCEO)"Hnin Mya Aye Sai Maung Maung Zaw"JuneAction Recognition, HCEO, HOF,SVMDownload
Action recognition has been a growing research topic in computer vision due to its great potential for real-world applications. In this paper, we develop an effective action recognition approach based on salient object detection and propose a new feature descriptor to represent changes of edge orientation. First, we detect salient objects in each frame of a video sequence and generate edge maps for those detected objects. Then, we extract features on the edge maps using a combination of the proposed Histogram of Changing Edge Orientation (HCEO) feature descriptor and the existing Histogram of Optical Flow (HOF) feature descriptor. Finally, a supervised multi-class support vector machine (SVM) classifier is used to recognize various actions. The experiments were carried out on the standard UCF-Sports action dataset, and our proposed approach achieves a significant improvement in recognition accuracy.
fcsconference-paper2017image-processing
Efficient Action Recognition based on Salient Object Detection"Hnin Mya Aye Sai Maung Maung Zaw"February-Download
Action recognition has become an important research topic in the computer vision area. This paper presents an efficient action recognition approach based on salient object detection. Many features have been extracted directly from whole video frames, producing unsatisfying results due to the intrinsic textural difference between foreground and background. Processing only salient objects, instead of whole frames, suppresses the interference of background pixels and also makes the algorithm more efficient, so the main contribution of this paper is to focus on salient object detection to reflect textural difference. First, salient foreground objects are detected in the video frames and only interest features for those objects are detected. Second, we extract features using the SURF feature detector and the HOG feature descriptor. Finally, we use a KNN classifier to achieve better action recognition accuracy. Experiments performed on the UCF-Sports action dataset show that our proposed approach outperforms state-of-the-art action recognition methods.
fcsconference-paper2017image-processing
Salient Object based Action Recognition using Histogram of Changing Edge Orientation (HCEO)"Hnin Mya Aye Sai Maung Maung Zaw"AugustAction Recognition, HCEO, HOF,SVMDownload
Real-time communication applications, including mobile learning applications, can be integrated with other software into one platform and deployed in private clouds to reduce capital expenditure and lower the overall costs of daily maintenance and the real estate required for computer hardware. As a critical component of private clouds, virtualization may adversely affect a real-time communication application running in virtual machines, since the virtualization layer on the physical server adds system overhead and contributes to capacity loss. Virtualization on mobile devices can enable hardware to run with less memory and fewer chips, reducing costs and increasing energy efficiency; it also helps address safety and security challenges, and reduces software development and porting costs. This study investigates how to build an effective learning environment for both the University and learners by integrating virtualization, private cloud technology and mobile learning applications.
fcsconference-paper2015cloud-computing
A Survey on Object Detection, Classification and Tracking Methods"Hnin Mya Aye Si Si Mar Win"AugustObject Detection, Object Classification, Object TrackingDownload
Object tracking is one of the ongoing research trends in the computer vision field. Object tracking approximates the route of an object in a video as it moves around a scene, keeping track of its motion and position. Object detection and object classification are preliminary steps for tracking an object in video. Object detection deals with checking the existence of objects in the video and precisely locating those objects. The detected objects can be classified into various categories such as humans, birds, trees, or other moving objects. Object tracking is used in various real-world applications such as video surveillance, traffic monitoring, and video animation. The main aim of this survey is to present the various steps involved in tracking objects in a video sequence: object detection, object classification and object tracking. We survey different methods available for detection, classification and tracking of objects in video.
fcsconference-paper2015image-processing
A Model for Mobile Learning Applications on Virtual Private Cloud"Si Si Mar Win, Hnin Mya Aye "February-Download
Mobile Cloud Computing (MCC) provides a platform where mobile users make use of cloud services on mobile devices. The use of mobile clouds in educational settings can provide great opportunities for students as well as researchers to improve their learning outcomes and to minimize the performance, compatibility, and resource-scarcity issues of the mobile computing environment. This paper proposes an MCC-based learning model to create an effective learning environment for both the University and learners by integrating virtualized private cloud technology with two components, social networking and mobile learning applications. The model will be constructed using the High Performance Computing Cloud Toolkit Ezilla and ubiquitous mobile learning elements, and will be evaluated using a UTAUT-based model and Quality of Experience (QoE).
fcsconference-paper2015cloud-computing
Study on Cryptographic Algorithms for Cloud Data Security"Hnin Mya Aye, Si Si Mar Win, Thin Thin Soe"Februarycloud computing, security issues, cryptography, encryption, decryptionDownload
Cloud computing is a distributed computing paradigm in which computing resources, software, applications, information, and infrastructures are dynamically offered as services over the internet. Since distributed services are shared via the open network, the security of cloud data must be considered a major issue in the cloud computing environment. Therefore, there is a need to protect cloud data against unauthorized access, denial of service, modification, etc. To secure cloud data, cryptography can serve as a technical control to address the security issues encountered in cloud computing. In this paper, we have studied some existing, well-known cryptographic algorithms that can be adopted to provide better security of data in cloud computing.
fcsconference-paper2015cloud-computing
"Nation Building Through Lends a Hand of ICT Innovation: Preliminary Approach to the Multilingual Dictionary for the Prosperity of the Shan State"Amy Aung, Yuzana, Darli Myint Aung, Hsu Mon Kyi, Swe Swe AungMay, "natural language processing, machine readable dictionary, education. ICT "Download
Educational prospects for the regions outside the main capital of Myanmar struggle to reach their goals. The educational and socioeconomic development of the local people in the Shan State has encountered many challenges in everyday life, and vast areas remain beyond such opportunities. According to the official 2014 census statistics, the illiteracy rate in the urban areas of the Shan State is only 15% of the population, whereas 42.1% are illiterate in the rural areas. Only about 19.75% of primary school students in the Shan State reach high school, and only 3.43% of the total population of the Shan State have completed the highest levels of education, such as college and university. The main contribution of this research is for the ethnic students who are weak in the Myanmar and English languages and consequently have rare chances to connect to science and technology. With a helping hand from ICT, this paper describes a preliminary approach to the creation of a machine-readable dictionary that can provide the meanings of Myanmar, English and Pa-O words and vice versa. We propose a dictionary-building design process model and a framework for the development of the multilingual dictionary that uses a traditional building method, namely a theory-driven process. As Pa-O is an under-resourced language, data resources are very hard to find; hitherto, there is no officially printed dictionary from Pa-O to Myanmar, nor to English. As a preliminary approach, we performed the data collection and data identification stages of the proposed design process model. We collected 26,174 Pa-O words with the related meanings of Myanmar words, analyzed these words, and discovered the four inferential observations presented in this paper.
We noticed that the Pa-O language has the same 33 consonants as Myanmar, whereas the phonetics and some vowels and medials are dissimilar. Words beginning with the consonant [A, အ] are observed to be the most widely used, at about 4,782 words, and the second most begin with the consonant [Ta, တ], at about 2,612 words. As the University of Computer Studies (Taunggyi) in the Shan State, our purpose is to support this region with information technology innovation. This will not only widen the ICT sectors but also foster homogeneous development and the prosperity of the economic, educational and social sectors in all the mountain and lowland areas of Myanmar.
fisproceeding-book2019machine-readable-dictionary natural-language-processing
Effective Atomic Cluster Finding Method for Author Name Disambiguation from Publication DatasetAmy Aung, May Phyo ThwalNovemberName ambiguity, Clustering based methodsDownload
Name ambiguity is a real-world problem in which one name may refer to more than one actual person. It is a critical problem in many applications, such as expert finding, people-connection finding, and information integration. Although several clustering-based methods have been proposed previously, the problem remains a big challenge for both the research and industry communities. This paper presents a complementary study of the problem from another point of view: an approach of finding atomic clusters to improve the performance of existing clustering-based methods. Experimental results show that significant improvements can be obtained by using the proposed atomic-cluster finding approach.
fisonline-journal2014name-disambiguation
Ontology Based Hotel Information Extraction from Unstructured TextAmy Aung, May Phyo Thwal, March(29-30)"attribute value extraction, concept identification, ontology based information extraction, pattern matching techniques,triplet extraction algorithm"Download
Ontologies play a central role in the Semantic Web and in many other technological developments. Multiple ontology-based approaches, loosely grouped under the heading ‘semantic interoperability’, have come to the fore as potential solutions to critical interoperability problems. Further, technologies that incorporate and rely on ontologies are used to increase transparency both within and across organizations, and also to enhance communication not only between computers but also between human beings. We describe a proposed framework to populate an existing ontology with instance information present in natural language text provided as input. The approach starts with a list of relevant domain ontologies created by human experts, and techniques for identifying the most appropriate ontology to be extended with information from a given text. The proposal then applies heuristics to extract information from the unstructured text and to add it as structured information to the selected ontology. This identification of the relevant ontology is critical, as it is used to identify relevant information in the text. The first phase extracts information in the form of semantic triples from the text; the second phase is guided by the concepts in the ontology. In the third phase, the proposed system converts the extracted information about the semantic class instances into Resource Description Framework (RDF) and appends it to the existing domain ontology. This enables us to perform more precise semantic queries over the semantic triple store thus created.
fisonline-journal proceeding-book2014ontology-based-information-extraction
Integration Approach for Relational Database to Ontology GenerationAmy Aung, May Phyo ThwalDecember,9-10"Ontology; Database; Generation, Mapping Analysis of Ontology and Database, Construction Rules "Download
Today, databases provide the best techniques for storing and retrieving data, but they suffer from the absence of a semantic perspective. Ontology resources are increasingly important for organizing knowledge, and improving the efficiency of ontology construction for semantic applications has become an important task. Generating ontologies automatically from database resources is a complicated and emerging task in ontology construction. To address this problem, this paper proposes a method for automatic ontology building from relational database resources. Firstly, a mapping analysis of ontologies and databases is performed. Secondly, construction rules for ontology elements based on the relational database, which are used to generate ontology concepts, properties, axioms and instances, are put forward. Thirdly, an automatic ontology generation system based on the relational database is designed and implemented. Finally, practical experiments demonstrate the feasibility of the method and system.
fisproceeding-book2013relational-database-to-ontology-generation
Classification of Road Traffic Accidents Using (CART) Adaptive Regression TreesEi Shwe Zin Naing, Yu Ya WinJulyCART, GiniDownload

Road traffic accidents are among the top leading causes of deaths and injuries of all ages. The cost of these fatalities and injuries has a great impact on the socio-economic development of a society. Using accident data, the system generates rules based on Classification and Regression Trees (CART). A CART tree is binary, splitting into two branches at each node. The Gini splitting rule is used for tree growing, and splitting stops when the number of observations in a node is less than a predefined minimum value (Nmin). Accident type, accident cause, vehicle type, road condition, and light condition at the time of the accident are the attributes used for prediction. The outputs of the system are fatal, serious, slight and property damage.
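The Gini splitting rule described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the accident attributes and severity labels below are invented stand-ins.

```python
def gini(labels):
    """Gini impurity of a set of class labels: 1 - sum(p_i^2)."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_binary_split(rows, labels, attr_index, n_min=2):
    """Try each value of one attribute as a binary (equal / not-equal) split
    and return the (score, value) pair minimising weighted Gini impurity.
    Splitting stops when the node holds fewer than n_min observations."""
    n = len(rows)
    if n < n_min:
        return None  # node too small: stop growing the tree here
    best = None
    for value in {r[attr_index] for r in rows}:
        left = [y for r, y in zip(rows, labels) if r[attr_index] == value]
        right = [y for r, y in zip(rows, labels) if r[attr_index] != value]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / n
        if best is None or score < best[0]:
            best = (score, value)
    return best

# Hypothetical accident records: (accident cause, road condition) -> severity
rows = [("speeding", "wet"), ("speeding", "dry"), ("drunk", "wet"), ("drunk", "wet")]
labels = ["fatal", "slight", "fatal", "serious"]
```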

fcslocal-journal2019data-mining
Paper Retrieval System using Dice CoefficientYin Wai LwinAugustQ-gram, Maximum consecutive longest common subsequence, dice, string metricsDownload

Redundant methods or algorithms from preceding papers may reappear in a new paper submitted for publication. This system presents several string metrics, such as Q-gram, maximum consecutive longest common subsequence (MCLCS) and the Dice coefficient. If two texts have some words in common, the system can measure how similar the order of the common words is in the two texts. The user has to enter at least one query word to retrieve related paper titles, and the system returns the most relevant paper titles in ranked order even if only one query word is entered. Experimental results of the string distance metrics (MCLCS and Dice) on real data are provided and analyzed. The system displays the similar text of methods, algorithms and applications, comparing the current paper with preceding papers. The preceding papers are currently being collected from AICT, ICCA and PSC.
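As a sketch of how the Dice coefficient can rank paper titles against a query, the snippet below compares character bigram sets. The sample titles and query are invented for illustration; the paper's own system also uses Q-gram and MCLCS metrics, which are not shown here.

```python
def qgrams(text, q=2):
    """Set of character q-grams of a string (q=2 gives bigrams)."""
    return {text[i:i + q] for i in range(len(text) - q + 1)}

def dice_coefficient(a, b, q=2):
    """Dice similarity over q-gram sets: 2*|A ∩ B| / (|A| + |B|),
    ranging from 0 (no grams shared) to 1 (identical gram sets)."""
    ga, gb = qgrams(a.lower(), q), qgrams(b.lower(), q)
    if not ga and not gb:
        return 1.0
    return 2 * len(ga & gb) / (len(ga) + len(gb))

# Rank stored paper titles against a query, most similar first
titles = ["paper retrieval system", "string metrics survey", "image segmentation"]
query = "retrieval of papers"
ranked = sorted(titles, key=lambda t: dice_coefficient(t, query), reverse=True)
```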

fislocal-journal2019data-mining
Feedback for Teachers’ Performance Using Decision Tree and Neural NetworkSoe Moe Aye, Soe Soe NweDecemberQ-gram, Maximum consecutive longest common subsequence, dice, string metricsDownload

As much as it is important for institutions to evaluate the performance of teachers, it is equally important to measure the output of teachers. Teachers are one of the weightiest factors contributing to effective learning. Accordingly, teachers need to perceive the nature and character of students, and to consider student loyalty, student satisfaction, service quality, relationship commitment and expectations. The feedback system is based on collecting feedback from students and provides automatic generation of the feedback given by students. Teacher evaluation seeks to enhance teachers' own practice by identifying strengths and weaknesses for the development of future education. It aims at guaranteeing that teachers perform at their best to improve student learning. Data mining applications play a significant role in solving difficult problems in higher education. Educational data mining is becoming a prominent discipline concerned with different approaches, including predicting teachers' performance, to improve the quality of the education process and raise students' achievement. The system aims to utilize the classification methods of data mining for predicting teachers' performance. The Artificial Neural Network (ANN) technique based on backpropagation and the decision tree method, specifically the C4.5 algorithm, were employed, and their accuracies were compared with each other.

fislocal-journal2019data-mining
Implementation of Voice Recognition System for Real Time Using MFCCs and GLAMoe Thuzar HtweFeb26th, 27thMFCCs, GLA, MATLABDownload
Voice recognition is very useful in several areas of computer science and mathematics. It is the process of automatically recognizing who is speaking on the basis of individual information included in speech waves. In this paper, a text-dependent speaker identification method is used. The system contains a training phase, a testing phase and a recognition phase. In the training phase, the features of the spoken word are extracted using MFCCs (Mel-frequency cepstral coefficients). During the testing phase, feature matching is carried out using vector quantization with the GLA (Generalized Lloyd Algorithm), which matches a given (known/unknown) speaker to the set of known speakers in the database. The database is constructed from speech samples of each known speaker. During the recognition phase, features are extracted with the same techniques and compared with the templates in the database. MATLAB is used to implement the system.
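A minimal sketch of the Generalized Lloyd Algorithm used for vector-quantization codebook training, written in Python rather than the paper's MATLAB. The toy 2-D vectors below stand in for MFCC feature frames; the feature extraction itself is not shown.

```python
import random

def nearest(codebook, v):
    """Index of the codeword closest to vector v (squared Euclidean distance)."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], v)))

def train_codebook(vectors, k, iters=20, seed=0):
    """Generalized Lloyd Algorithm: alternately assign each training vector to
    its nearest codeword, then move each codeword to the centroid of its cell."""
    rng = random.Random(seed)
    codebook = rng.sample(vectors, k)          # initial codewords from the data
    for _ in range(iters):
        cells = [[] for _ in range(k)]
        for v in vectors:
            cells[nearest(codebook, v)].append(v)
        for i, cell in enumerate(cells):
            if cell:                           # keep the old codeword if its cell is empty
                dim = len(cell[0])
                codebook[i] = tuple(sum(v[d] for v in cell) / len(cell)
                                    for d in range(dim))
    return codebook

def distortion(codebook, vectors):
    """Average quantization error; the enrolled speaker whose codebook yields
    the lowest distortion on the test features would be declared the match."""
    return sum(sum((c - b) ** 2
                   for c, b in zip(codebook[nearest(codebook, v)], v))
               for v in vectors) / len(vectors)

# Toy training set standing in for 2-D MFCC frames: two separated clusters
vectors = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
codebook = train_codebook(vectors, k=2)
```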
fcstinternaional-conference-paper2009signal-processing
Gas Leakage Detector by Using Arduino UNO & MQ-2 SensorMoe Thuzar HtweDecemberArduino UNO, CNG, LPG, MQ-2Download
Gas is a useful component of the environment, but gases like CNG are highly combustible, and any leakage of these gases can cause great loss of life. LPG is highly inflammable and can burn even at some distance from the source of a leak. LPG is mostly filled in cylinders which are strong and cannot be damaged easily; however, leaks can occur from gas cylinders, regulators and gas pipe connections when these are not in good condition, and may cause an accident. This paper presents a gas detection system that uses an Arduino UNO, an MQ-2 gas sensor and a few other components. The main aims of this paper are to detect gas in the surroundings, send the concentration levels to the Arduino UNO, and then drive the outputs: the buzzer, the relay and the water pump.
fcstjournal2019embedded-system
A Comparison of Online and Traditional Tests: Reading Comprehension of the Students at the University of Computer Studies, PyayMya Sandar , Kyu Kyi Win, Chaw Su HlaingMayonline, paper-based reading, pros, cons, comprehension, exactness, perspectiveDownload
This research intends to determine the preference of intermediate students for performing personal computer and paper-based reading assignments, and to what extent PC and paper-based reading influence their reading rate, exactness and comprehension. The test was carried out at the University of Computer Science and Computer Technology. Two kinds of information were gathered in this research. First, the Questionnaires for Online Reading Comprehension were used to collect information about the participants' perspectives on their PC and paper-based reading exercises. Second, one test was conducted with 12 participating subjects to measure their reading rate, exactness and comprehension in both PC and paper-based reading tests. The results of the research suggest that almost all students preferred paper-based reading to PC reading. Moreover, the study shows that reading speed on the PC was almost 12% quicker than paper-based reading for these students.
languageenglishjournal2020literature-learning-and-teaching
The Importance of Learning Collocations in English Language Learning through Practical ActivitiesThaung Thaung New , Yin Yin Aye , Chaw Su HlaingJuneLearning collocations; Vocabulary development; Strategies; Language learningDownload
This research paper is about the importance of learning collocations in the English language learning of first-year students at the Computer University, Kalay. The aim is to develop students' vocabulary skills and to share knowledge about the use of collocations, which are complex and difficult to learn. Effective use of collocations makes learners interested in fluency and in the meaningful depiction of literary texts. Knowledge of collocations is important for students to take responsibility for their own learning and to use the acquired collocation-learning strategies to tackle various kinds of texts. Several effective techniques are mentioned to improve students' abilities in vocabulary development. Through a needs-analysis survey, the needs, lacks, wants, and attitudes of various students and teachers can be found out. Hopefully, this paper can give some insights into the difficulties in learning collocations faced by students and teachers.
languageenglishjournal2020language-education
A Study on Students’ Perceptions of their Experiences in Making Oral PresentationPale Mon , Chaw Su Hlaing, Phyo Ei ThuMayperception, experiences, difficulties, oral presentationDownload
The study is concerned with students' perceptions of their experiences in making oral presentation tests at the University of Computer Studies, Thaton (UCST). The research aims to examine students' attitudes towards making oral presentations by identifying the benefits obtained and the challenges faced regarding three perspectives: personal traits, deficiency in presentation skills, and apprehension of peers and teacher. The participants of the study were 85 fifth-year students from three sections and six teachers. A four-point Likert scale questionnaire composed of 35 items and semi-structured interviews were carried out to collect the data for analysis. The results obtained were described in percentages. The findings showed that students experienced a medium level of difficulty in conducting oral presentations. Analysis of the qualitative data indicated students' challenges such as stress and lack of practice in oral presentation skills. This study recommends that students need more exposure to and experience of presentation skills at university.
languageenglishjournal2020literature-learning-and-teaching
Learning Strategies for Vocabulary: A Vital Element of Language LearningChaw Su Hlaing, Mya Sandar, Pale MonJunevocabulary, definition of vocabulary, strategies, vocabulary developmentDownload
Developing vocabulary learning skills is the most essential skill for foreign language learners. Language is sequential: speech is a sequence of sounds, whereas writing is a sequence of symbols. To produce a good piece of writing, learners need to enrich their vocabularies. Therefore it is necessary, especially for language learners, to focus on vocabulary and effective strategies. Through using a wide range of vocabulary and strategies, learners are able to develop their vocabulary and to produce a good piece of writing. Vocabulary knowledge is also one component of language skills, such as reading and speaking. To know and cope with a wide range of vocabulary, it is necessary for learners to know effective strategies for learning vocabulary.

languageenglishjournal2020language-education
Research and development of the stabilization of a closed-loop stepper motor control systemSoe Lin Aung--Download

ucsmgyinternational-journal2006information-security
Analyzing speed characteristics of a closed-loop stepper motor control systemSoe Lin Aung--Download

ucsmgyinternational-journal2006information-security
Design and implementation of the stepper motor devices in MatlabSoe Lin Aung--Download

ucsmgyinternational-paper2006information-security
Off-line Signature Verification using thresholding TechniqueHtight Htight Wai, Soe Lin Aung--Download

ucsmgyinternational-journal2014information-security
Perceptual Grouping with Region Merging for Automatic Image SegmentationTin Tin Htar, Soe Lin Aung--Download

ucsmgyinternational-journal2014information-security
Enhancement of region merging algorithm for image segmentationTin Tin Htar, Soe Lin Aung--Download

ucsmgyinternational-paper2014information-security
Feature Extraction for off-line Signature VerificationHtight Htight Wai, Soe Lin Aung--Download

ucsmgyinternational-journal2013information-security
Offline signature verification system using neural networkHtight Htight Wai, Soe Lin Aung--Download

ucsmgyinternational-paper2014information-security
Automatic image segmentation using marker-controlled watershed transform and region mergingTin Tin Htar, Soe Lin Aung--Download

ucsmgyinternational-paper2013information-security
Image Segmentation based on Region MergingTin Tin Htar ,Soe Lin Aung--Download

ucsmgyinternational-journal2013information-security
Meaningful region merging approach in image segmentationTin Tin Htar, Soe Lin Aung--Download

ucsmgyinternational-paper2013information-security
Offline signature verification systemHtight Htight Wai, Soe Lin Aung--Download

ucsmgyinternational-paper2013information-security
Automatic Closed-loop stepper motor control system with Intra Step-by-step discrete correction of speedSoe Lin Aung, Dubovoi N.D., Demkin V.I--Download

ucsmgyinternational-journal2008information-security
Controlling the speed ratio of a closed-loop control system with two stepper motorsSoe Lin Aung, Dubovoi N.D., Demkin V.I--Download

ucsmgyinternational-journal2007information-security
Designing Graphical User Interface for automatic control systems with stepper motorsSoe Lin Aung--Download

ucsmgyinternational-paper2008information-security
Research and Development of the stabilization of the speed of closed-loop stepper motor control system with intra-step-by-step correction-of-speed algorithmSoe Lin Aung--Download

ucsmgyinternational-paper2007information-security
Research and Development of the ratio of the speeds of two closed- loop stepper motorsSoe Lin Aung--Download

ucsmgyinternational-paper2007information-security
Using factorial experiment for analyzing the speed stabilization closed-loop stepper motorSoe Lin Aung--Download

ucsmgyinternational-paper2007information-security
Research and development of closed-loop stepper motor control system in different types of switchingSoe Lin Aung--Download

ucsmgyinternational-paper2006information-security
Controlling the speed of stepper motor in non-contact direct current modeSoe Lin Aung--Download

ucsmgyinternational-paper2006information-security
Research and Development of Automatic Control Systems using Electronics Workbench and MatlabSoe Lin Aung--Download

ucsmgyinternational-paper2005information-security
‌ဗေဒါလမ်းကဗျာမှ ဘဝအားမာန်Thein Zaw Win, War War Soe, Htay Htay AungSeptemberအားမာန်၊ ခေတ်စမ်း၊ ဗေဒါ၊ ဒီ၊ ကျူ။Download

This paper takes Saya Zawgyi's collected poems as its field of study and examines the life-force expressed in the three 'Beda Lan' (The Way of the Water Hyacinth) poems published in the 1960 University Owei Magazine. The hardships and problems the little water hyacinth meets on its journey along the stream are made to mirror the hardships and problems a human being meets on life's journey, so that, like the water hyacinth, readers may learn to relieve and overcome the weariness of life. The paper studies the life-force in the Beda Lan poems from the standpoint of meaning, and should be of some help to those who wish to study Saya Zawgyi's Beda Lan poems.

languagemyanmarlocal-journal2020literature-poem
ခင်ခင်ထူး၏ 'ကျေးတစ်ရာ ရသစာစု များ'မှ ထိန်းသိမ်းအပ်သော မြန်မာ့ယဉ်ကျေးမှုများWar War Soe, Htay Htay Aung, Thein Zaw WinJulyရိုးရာ၊ ဓလေ့၊ ရသ၊ ယဉ်ကျေးမှု၊ ပွဲတော်။Download

This paper argues that Myanmar's traditional cultural heritage should be preserved and protected. If those who grew on Myanmar soil and drank Myanmar water cannot preserve Myanmar traditional culture, literature and culture will vanish, the people will vanish, and the country will be lost; not wishing to reach that state, the paper is presented so that readers come to cherish their own culture. Although today's youth are advancing in technology, their love of aesthetic (rasa) literature has weakened. The paper therefore aims to make them treasure and value their own country, people, literature and culture.

languagemyanmarlocal-journal2020literature-culture
ခင်ခင်ထူး၏ ဖက်စိမ်းကွမ်းတောင် ရွှေဝတ္ထုတိုများအနက် ခရီးဖော်ဝတ္ထုတို လေ့လာချက်Thein Zaw Win, War War Soe, Htay Htay AungJuneရည်ရွယ်ချက်၊ ဇာတ်လမ်း၊ ဇာတ်ဆောင်၊ နောက်ခံဝန်းကျင်။Download

This paper takes the short story 'Khayi Baw' (Travel Companion), from Khin Khin Htoo's collection of 'Phet Sein Kun Taung Shwe' short stories, as its field of study, and presents a critical review through the elements of the short story: purpose, plot, characters, and setting.

languagemyanmarlocal-journal2020literature-novel
Performance Analysis of a QAPF Scheduling Scheme in LTE NetworkZin Mar Myo, Zar Lwin Phyo, Aye Thida Win, DecemberLTE, Multimedia Services, OFDMA, QAPF, TDMADownload

Long Term Evolution (LTE) is a cellular network that operates completely in the packet domain and can also support multimedia services. Due to this nature, provisioning Quality of Service (QoS) requirements has become a challenge in the design of the scheduler. In particular, the scheduler must guarantee the transmission of real-time traffic such as VoIP (voice over IP). In this paper, a scheduler that satisfies different QoS requirements is presented. The scheduler adds QoS considerations to Proportional Fair (PF), a well-known scheduling algorithm. As the core work of this paper, its performance is evaluated by comparison with existing well-known schedulers: for non-real-time traffic such as file downloads, against the PF scheduler, and for real-time traffic such as live video streaming, against the Frame Level Scheduler (FLS).
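The Proportional Fair rule that the paper extends can be sketched as follows. This illustrates only the baseline PF metric, not the QAPF scheme itself; the per-TTI rates and the averaging window length are illustrative values.

```python
def pf_schedule(inst_rates, avg_rates):
    """Proportional Fair rule: serve the user with the largest ratio of
    instantaneous achievable rate to average served throughput."""
    return max(range(len(inst_rates)), key=lambda u: inst_rates[u] / avg_rates[u])

def update_avg(avg_rates, served_user, inst_rates, tc=100.0):
    """EWMA of each user's throughput over a window of tc TTIs; only the
    scheduled user is credited with its instantaneous rate this TTI."""
    return [(1 - 1 / tc) * r + (1 / tc) * (inst_rates[u] if u == served_user else 0.0)
            for u, r in enumerate(avg_rates)]

# Two users: user 0 has the better channel, yet PF still serves user 1
# once user 0's average throughput has grown large enough.
avg = [1e-6, 1e-6]              # tiny non-zero start to avoid division by zero
served = []
for _ in range(10):
    inst = [10.0, 4.0]          # hypothetical per-TTI achievable rates
    u = pf_schedule(inst, avg)
    served.append(u)
    avg = update_avg(avg, u, inst)
```

This is what distinguishes PF from a pure max-rate scheduler: the user with the weaker channel is still served whenever its average throughput has fallen far enough behind.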

fcsonline-publication proceeding-book2019mobile-network-computing
မြန်မာကဗျာနှင့် သင်္ကေတအဖွဲ့War War Soe, Htay Htay Aung, Thein Zaw WinJuneကဗျာ၊ သင်္ကေတ၊ ရသ၊ ဖန်တီးသူ၊ ခံစားမှု။Download

This paper studies Myanmar poems in which the use of symbolism is reflected. Poetry, a form of aesthetic (rasa) literature, is born of compositions arising from the writer's experience. The paper seeks to show how poets have created and employed symbolism to make their poems stronger. The study is presented in three parts: poetry, symbols, and Myanmar poetry and symbolism.

languagemyanmarlocal-journal2020literature-poem
Replacing Same Meaning in Sentences Using Natural Language UnderstandingTin Htar Nwe, Zar Lwin Phyo, Zin Mar Myo, Aye Thida WinAugustNLP, A chunk based syntax analyzer, Matching technique, Intelligence methodsDownload

NLP (Natural Language Processing) can be used to communicate with computers by means of intelligent methods in a natural language; in other words, it is very useful for classification and analysis. Although many natural language syntax analyzers already exist, they do not yet fulfill the requirements for analyzing English text for English-to-English machine translation. In this paper, we propose a chunk-based analyzer for an English-to-English machine translation system. The keyword-search technique is one of the techniques of natural language understanding, and a simple keyword-based matching technique is used for classification. Domain-specific dictionaries of keywords are used to reduce the dimensionality of the feature space. As a result, a word matched to a known word is replaced in the given sentence without changing the original meaning.

fcsonline-publication2019natural-language-processing
Document Clustering by using EM clustering algorithm based on K-MeansTheingi AungFebruary, -Download

This paper implements a clustering approach that explores both the content and the structure of XHTML and XML documents for determining similarity among them. A conventional web page clustering technique is utilized to reveal the functional similarity of web pages. In this system, the content and the structure information of the documents are handled by using two different similarity measuring methods. The similarity values produced from these two methods are then combined with weightings to measure the overall document similarity. Text files, HTML, XHTML and XML documents are used as the inputs in this system. Clustering of the text files and HTML documents is based on the content–only information. Both the content and structural information are retrieved according to the XML and XHTML documents clustering processes. Expectation Maximization (EM) clustering algorithm based on k-Means is employed to obtain the document clusters.

fcsproceeding-book2013data-mining web-mining
Text Classification using Vector Space Model and K-Nearest Neighbor AlgorithmHnin Wut Yee, Khin Sein HlaingDecember, frequency, euclidean distance, svmDownload

Text classification is one of the well-studied problems in data mining and information retrieval. Text classification, which aims to assign a document to one or more categories based on its content, is a fundamental task for web and/or document data mining applications. It makes the classification process fast and more efficient since it automatically categorizes documents. In text classification, the term weighting of different words is calculated with the vector space model. All documents are processed by the vector space model to form tf-idf vectors, and each document is assigned to the class corresponding to the smallest distance value, as measured by the k-nearest neighbor algorithm.
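The tf-idf plus k-nearest-neighbor pipeline described above can be sketched compactly. The tokenized documents, class labels and query below are invented toy data, not the paper's corpus.

```python
import math
from collections import Counter

def build_tfidf(docs):
    """Return tf-idf vectors for the training docs plus a vectorizer for queries."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))       # document frequency
    idf = {t: math.log(n / c) for t, c in df.items()}   # inverse document frequency

    def vec(doc):
        tf = Counter(doc)
        return {t: (c / len(doc)) * idf.get(t, 0.0) for t, c in tf.items()}

    return [vec(d) for d in docs], vec

def euclidean(u, v):
    """Euclidean distance between two sparse term-weight vectors."""
    keys = set(u) | set(v)
    return math.sqrt(sum((u.get(k, 0.0) - v.get(k, 0.0)) ** 2 for k in keys))

def knn_classify(train_vecs, train_labels, query_vec, k=1):
    """Majority vote over the k nearest training documents."""
    ranked = sorted(range(len(train_vecs)),
                    key=lambda i: euclidean(train_vecs[i], query_vec))
    return Counter(train_labels[i] for i in ranked[:k]).most_common(1)[0][0]

# Toy tokenized corpus with two classes
docs = [["goal", "match", "team"], ["match", "player", "score"],
        ["stock", "market", "price"], ["price", "trade", "market"]]
labels = ["sport", "sport", "finance", "finance"]
vecs, vectorize = build_tfidf(docs)
prediction = knn_classify(vecs, labels, vectorize(["market", "price", "trade"]), k=1)
```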

fisuniversity-journal-paper2019data-mining machine-learning
Encrypted Session Key Exchange and Authentication (ESKEA) Based on DIFFIE-Hellman ProtocolThidar Htun, Than Naing SoeDecemberpseudorandom number generator(PRNG), encrypted session key exchange and authentication protocol(ESKEA), public key encryption(RSA), pseudorandom number generator(RC4)Download

Key management plays a fundamental role in cryptography as the basis for securing cryptographic techniques providing confidentiality, entity authentication, data origin authentication, data integrity, and digital signatures. The key exchange protocol is also an essential mechanism for any symmetric cipher, and the strength of any cryptographic system relies on an efficient key distribution technique. While the main benefit of conventional (symmetric) encryption is its processing speed, its major drawback is secret-key distribution. Combining the Diffie-Hellman protocol with public key encryption, and using a pseudorandom number generator (PRNG) to generate secret random numbers and random test strings, therefore enhances the security level of session key distribution. In the proposed system, the Encrypted Session Key Exchange and Authentication protocol (ESKEA) is developed, based on the Diffie-Hellman key exchange protocol, to distribute the generated session key in a secure manner by using RSA public key encryption (for secure key exchange) and the RC4 pseudorandom number generator (for generating secret random numbers and random test strings).
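The Diffie-Hellman exchange at the core of ESKEA can be sketched as below. The prime is textbook-sized purely for illustration (real deployments use standardized groups of 2048+ bits), and none of the RSA or RC4 wrapping described in the abstract is shown.

```python
import secrets

def dh_keypair(p, g):
    """Pick a private exponent and compute the public value g^a mod p."""
    a = secrets.randbelow(p - 2) + 1   # private key in [1, p-2]
    return a, pow(g, a, p)

def dh_shared(their_public, my_private, p):
    """Both sides arrive at the same secret: (g^b)^a = (g^a)^b mod p."""
    return pow(their_public, my_private, p)

# Demo-only parameters: a small prime and generator, far too small for real use
p, g = 2087, 5

a_priv, a_pub = dh_keypair(p, g)   # Alice
b_priv, b_pub = dh_keypair(p, g)   # Bob
k_alice = dh_shared(b_pub, a_priv, p)
k_bob = dh_shared(a_pub, b_priv, p)   # equals k_alice
```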

fislocal-conference-paper2010cryptography
Mining Quantitative Characteristics Rule By Using Attribute-Oriented InductionEi Mon Than, Ingyin Oo10 DecemberData Mining, attribute oriented induction, data generalizationDownload

Data mining, or knowledge discovery in databases, is the search for relationships and global patterns that exist in large databases. Domain knowledge in the form of concept hierarchies helps to organize the concepts of the attributes in the database relations. Data generalization summarizes data by replacing low-level values with higher-level concepts. In this paper, the student data set is generalized by using attribute-oriented induction. This method helps to organize the concepts of the attributes in the student database. This paper is intended to develop an attribute-oriented induction method for knowledge discovery in databases and to discover quantitative characteristic rules using the attribute-oriented approach. The output is presented in the form of generalized relations or rule forms.
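A minimal sketch of the generalization step: low-level values are replaced by concepts from a hierarchy, and identical generalized tuples are merged with a count. The hierarchy and student data are illustrative assumptions, not the paper's:

```python
from collections import Counter

# Hypothetical concept hierarchies: low-level values -> higher-level concepts.
AGE_HIERARCHY = {17: "young", 19: "young", 22: "young",
                 34: "middle-aged", 45: "middle-aged"}
GPA_HIERARCHY = {3.9: "excellent", 3.8: "excellent", 3.6: "good", 2.1: "average"}

def generalize(rows):
    """Replace raw values with concepts and merge identical generalized tuples."""
    generalized = Counter()
    for age, gpa in rows:
        generalized[(AGE_HIERARCHY[age], GPA_HIERARCHY[gpa])] += 1
    return generalized  # counts feed the quantitative characteristic rules

students = [(17, 3.9), (19, 3.6), (22, 3.9), (34, 2.1), (45, 3.8)]
for tup, count in generalize(students).items():
    print(tup, count)   # e.g. ('young', 'excellent') 2
```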

fcsproceeding-book2010data-mining
Development of Recommendation System for Access to Web SitesEi Ei Han, Than Nwet AungDecember-Download

Web usage mining is one of the categories of web mining tasks used to provide recommendations. Building profiles of registered users is important for collecting information. User profiling aims to provide users with what they want without asking for it explicitly. From previous browsing history, the user can know which sites he or she accessed and the log-in times for those web sites. From recommendation systems, web users can know which sites are suitable for them and can save searching time. Collaborative filtering is one type of recommendation system. It groups users with similar interests and recommends items to the active user based on the interests of neighbors in the same group. In this paper, we propose a recommendation system for web sites to deliver relevant information and to effectively access data from worldwide sources. The user can reduce searching time and information overload.
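A minimal user-based collaborative filtering sketch in the spirit of the approach above. The visit-frequency profiles, site names and similarity measure (cosine) are illustrative assumptions:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)

def recommend(ratings, active, top_n=1):
    """Score sites the active user has not visited, weighted by neighbor similarity."""
    scores = {}
    for user, visits in ratings.items():
        if user == active:
            continue
        sim = cosine(ratings[active], visits)
        for site, rating in visits.items():
            if site not in ratings[active]:
                scores[site] = scores.get(site, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical visit-frequency profiles built from browsing history.
ratings = {
    "alice": {"news.example": 5, "sports.example": 1},
    "bob":   {"news.example": 4, "weather.example": 5},
    "carol": {"sports.example": 5, "games.example": 4},
}
print(recommend(ratings, "alice"))   # ['weather.example']
```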

fcscd-local-conference2008web-mining
Analyzing and Classifying Web Application AttacksEi Ei Han, February-Download

Web application attack detection is one of the popular research areas in recent years. SQL injection, XSS and path traversal attacks are the most commonly occurring types of web application attacks. The proposed system effectively classifies the three attacks with a random forest algorithm to ensure reasonable accuracy. A request length module analyzes each record as normal or attack based on the length of the URL. Regular pattern analysis focuses on the content of the URL and other features to analyze certain attack patterns. The ECML/PKDD standard web attack dataset is used in this system. The combination of the random forest algorithm with request length and regex pattern analysis is proposed to improve accuracy.
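The regex pattern analysis step might be sketched as follows. The signatures and length threshold are illustrative assumptions, far simpler than production rules or the ECML/PKDD feature set:

```python
import re

# Illustrative signatures only -- real rule sets are far more extensive.
PATTERNS = {
    "sqli": re.compile(r"('|%27)\s*(or|union|select)|--|union\s+select", re.I),
    "xss": re.compile(r"<\s*script|javascript:|onerror\s*=", re.I),
    "path_traversal": re.compile(r"\.\./|\.\.\\|%2e%2e%2f", re.I),
}
MAX_NORMAL_LENGTH = 100   # request-length module: overly long URLs are suspicious

def analyze(url):
    """Return the first matching attack class, or flag by length, else normal."""
    for attack, pattern in PATTERNS.items():
        if pattern.search(url):
            return attack
    return "suspicious-length" if len(url) > MAX_NORMAL_LENGTH else "normal"

print(analyze("/item.php?id=1' OR 1=1 --"))           # sqli
print(analyze("/search?q=<script>alert(1)</script>"))  # xss
print(analyze("/view?file=../../etc/passwd"))          # path_traversal
print(analyze("/index.html"))                          # normal
```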

fcsconference proceeding-book2015mining-and-security
Classification of SQL Injection, XSS and Path Traversal for Web Application Attack DetectionEi Ei Han, Thae Nu Phyu, February-Download

This paper compares two approaches, random forests and K-Means+ID3 algorithms, for classifying web application attacks. In the first method, the Random Forest algorithm is used for the classification of web attacks. In the second method, the k-Means clustering method first partitions the training instances into k clusters using Euclidean distance similarity. On each cluster, an ID3 decision tree is built. To obtain a final decision on classification, the decisions of the k-Means and ID3 methods are combined using two rules: (1) the Nearest-neighbor rule and (2) the Nearest-consensus rule. This paper also describes comparison results of these two approaches.

fcsconference proceeding-book2016mining-and-security
Detection of Web Application Attacks with Request Length Module and Regex Pattern AnalysisEi Ei Han, August-Download

Web application attack detection is one of the popular research areas in recent years. Security for web applications is necessary, and it is effective to study and analyze how malicious patterns occur in web server logs. This system analyzes a web server log file, which includes normal and malicious users’ access patterns with their relevant links, and uses this web server log dataset for the detection of web application attacks. The system intends to analyze normal and attack behaviors from the web server log and then classify the attack types included in the dataset. In this system, three types of attacks are detected, namely SQL injection, XSS and directory traversal attacks. The attack analysis stage is done by a request length module and regular expressions for various attack patterns.

fcsconference proceeding-book2015
Performance Analysis of a QAPF Scheduling Scheme in LTE NetworkZin Mar Myo, Zar Lwin Phyo, Aye Thida Win, DecemberLTE, Multimedia Services, OFDMA, QAPF, TDMADownload

Long Term Evolution (LTE) is a cellular network that operates completely in the packet domain. It can also support multimedia services. Due to this nature, provisioning the Quality of Service (QoS) requirements has become a challenge to the design of the scheduler. Especially, the scheduler must guarantee the transmission of real-time traffic such as VoIP (voice over IP). In this paper, a scheduler that satisfies the different QoS requirements is presented. This scheduler adds QoS considerations to Proportional Fair (PF), which is a well-known scheduling algorithm. As the core work of this paper, its performance is evaluated by comparing with existing well-known schedulers. For non-real-time traffic such as file download, the performance of the scheduling scheme is evaluated by comparing with the PF scheduler. For real-time traffic such as live video streaming, its performance is compared with the Frame Level Scheduler (FLS).

fcsonline-publication proceeding-book2019mobile-network-computing
Replacing Same Meaning in Sentences Using Natural Language UnderstandingTin Htar Nwel, Zar Lwin Phyo, Zin Mar Myo, Aye Thida WinAugustNLP, A chunk based syntax analyzer, Matching technique, Intelligence methodsDownload

NLP (Natural Language Processing) can be used to communicate with computers by means of intelligent methods in a natural language. In other words, it is very useful for classification and analysis. Although many natural language syntax analyzers existed before, they still do not fulfill the requirements for analyzing English text for English-to-English Machine Translation. In this paper, we have proposed a chunk-based analyzer for an English-to-English Machine Translation System. The keyword search technique is one of the techniques of Natural Language Understanding. A simple keyword-based matching technique is used for classification. Domain-specific dictionaries of keywords are used to reduce the dimensionality of the feature space. The output result is that a word matched to a known word will be replaced in the given sentence without changing the original meaning.

fcsonline-publication2019natural-language-processing
Performance Analysis of EXP-BET Algorithm for Triple Play Service in LTE SystemKu Siti Syahidah Ku Mohd Noh, Darmawaty Mohd Ali, Zin Mar Myo25 MayLTE, Packet Scheduling Algorithm, Exponential Blind Equal Throughput, LTE-SimDownload

Quality of Service (QoS) is defined as users’ satisfaction with the service performance that has been offered to them. Due to the different traffic characteristics and QoS requirements of real-time and non-real-time services, provisioning the QoS requirements has become a challenge. In this study, we have compared our proposed algorithm, namely the Exponential Blind Equal Throughput (EXP-BET), against the Exponential Proportional Fairness (EXP-PF) and Frame Level Scheduler (FLS). The comparisons have been made in terms of fairness index, throughput, packet loss rate (PLR) and delay. From the simulation results, it is observed that EXP-BET delivers higher fairness and throughput and lower PLR and delay for real-time applications. Moreover, EXP-BET shows a 17.72% improvement over FLS and a 7.52% improvement over EXP-PF in terms of fairness index for the non-real-time application.

fcsonline-publication2016mobile-network-computing
Genetic Algorithm-Based Feature Selection and Classification of Breast Cancer Using Bayesian Network ClassifierYi Mon Aung, SeptemberGenetic Algorithm; Classification; Bayesian Network; Feature SelectionDownload

Recently, large amounts of data have become widely accessible in information systems, and turning such data into useful knowledge has attracted great attention from academics. Researchers also need to extract relevant data from vast patient records using feature selection methods. Feature selection is the process of identifying the most important attributes and removing redundant and irrelevant attributes. The system obtains information from the original data set, without taking class labels into account, to create the best feature subset using information gain and Pearson correlation. This system was tested on a chronic kidney disease dataset obtained from the UCI machine learning repository. The chronic kidney disease data, with all features, are evaluated using data mining algorithms such as Naive Bayes, Multilayer Perceptron (MLP), J48 and K-Nearest Neighbor (KNN) to test the effectiveness of the system.
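The Pearson-correlation part of the feature selection might look like the following sketch, which keeps a feature only when it is not nearly a duplicate of one already kept. The feature names, values and 0.95 threshold are assumptions for illustration:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def drop_redundant(features, threshold=0.95):
    """Greedily keep features that are not highly correlated with any kept one."""
    kept = {}
    for name, values in features.items():
        if all(abs(pearson(values, v)) < threshold for v in kept.values()):
            kept[name] = values
    return list(kept)

# Hypothetical patient measurements; 'bp_kpa' duplicates 'bp_mmhg' up to scale.
features = {
    "bp_mmhg": [120, 130, 110, 140],
    "bp_kpa":  [16.0, 17.3, 14.7, 18.7],
    "glucose": [90, 150, 100, 80],
}
print(drop_redundant(features))   # ['bp_mmhg', 'glucose']
```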

fisuniversity-journal-paper2011 2020data-mining
Analysis of Signal Variation Based on Path Loss in LTE NetworkZin Mar Myo, Myat Thida Mon, 29 AugustLTE, Path loss, nonGBRDownload

Long Term Evolution (LTE) network has a drastic development in the field of telecommunication broadband wireless network. It uses several
propagation models which are available for various networks such as ad hoc networks, cellular networks, etc. Path loss is the significant concept to design and
investigate the efficient signal propagation mechanism. In this paper, we give the analysis results about signal variation concerning with path loss over the LTE network. The analysis is based on an analytical model which is the use of processor sharing queuing theory. The analysis mainly considers at non-Guaranteed Bit Rate (non-GBR) such as FTP or HTTP. This paper also presents the work on the investigation of some parameter settings of the analytical model. Finally, the analysis of signal variation concerned with analytical model will be proved by comparing with the simulation result.
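The path loss dependence on distance can be illustrated with a standard log-distance model. The path loss exponent, carrier frequency and transmit power below are assumed values for illustration, not the paper's parameters:

```python
import math

def path_loss_db(distance_m, freq_mhz=2100.0, exponent=3.5, d0=1.0):
    """Log-distance path loss: free-space loss at reference d0 plus distance term.

    The 3.5 exponent is an assumed urban-cell value, not taken from the paper.
    """
    # Free-space path loss at the reference distance (Friis formula, in dB,
    # for distance in metres and frequency in MHz).
    fspl_d0 = 20 * math.log10(d0) + 20 * math.log10(freq_mhz) - 27.55
    return fspl_d0 + 10 * exponent * math.log10(distance_m / d0)

# Received power falls off as users move away from the base station.
tx_power_dbm = 43.0   # typical macro-cell transmit power (assumption)
for d in (50, 200, 800):
    print(f"{d:4d} m: {tx_power_dbm - path_loss_db(d):6.1f} dBm")
```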

fcsonline-publication proceeding-book2015mobile-network-computing
Performance Analysis of a Scheduler for Multiple Traffics of LTE NetworkZin Mar Myo, Myat Thida Mon6 February-Download

Long Term Evolution (LTE) can support multimedia services. To guarantee the different requirements of the different services, the packet scheduler plays an important role in LTE. In particular, additional challenges arise in the design of the scheduler when real-time traffic such as VoIP (voice over IP) is transmitted over the LTE network. In this paper, a new scheduler is proposed for satisfying the different QoS requirements. This scheme distinguishes whether a connection is real-time or not and gives high priority to real-time traffic that is close to timing out. After scheduling all real-time traffic, delay-tolerant traffic is scheduled. The performance of the proposed scheduling scheme is evaluated in terms of throughput, average delay and fairness. The result is validated by comparing with other LTE downlink schedulers.

fcsproceeding-book2015mobile-network-computing
QoS Aware Proportional Fair (QAPF) Downlink Scheduling Algorithm for LTE NetworkZin Mar Myo, Myat Thida Mon, 5 JanuaryLTE, QoS, GBR, nonGBR, QAPFDownload

Long Term Evolution (LTE) supports several QoS (quality of service) classes and tries to guarantee their requirements. There are two main different QoS classes: Guaranteed Bit Rate (GBR) such as VoIP, and non-Guaranteed Bit Rate (non-GBR) such as FTP or HTTP. Having these different QoS requirements in the packet domain introduces an additional challenge on the LTE MAC scheduler design. Therefore, the scheduler has to be aware of the different service requirements and satisfy them. In this paper, a QoS aware proportional fair downlink scheduler (QAPF) is proposed. It can optimize the use of available resources while maintaining the QoS requirements of different service classes. The results show the performance evaluation of the proposed scheduler in comparison with others.

fcsonline-publication proceeding-book2015mobile-network-computing
Analytical Model of the LTE Radio Scheduler for non-GBR BearerZin Mar Myo, Myat Thida MonFebruary-Download

Long Term Evolution (LTE) is a mobile network that operates completely in the packet domain. Due to the variation of radio conditions in LTE, the obtainable bit rate for active users will vary. The two factors behind radio variation are fading and path loss. Building on previous research concerning packet scheduling in LTE networks, this paper proposes an analytical model of the LTE radio scheduler. This model is based on a stochastic process to observe the system behavior mathematically, taking the details of the scheduler’s behavior into account given the state of the system, i.e. the number of active users and their distance from the base station. Instead of a simulation result, a numerical result is shown. This model mainly considers path loss variation for non-Guaranteed Bit Rate (non-GBR) traffic such as FTP or HTTP.

fcsproceeding-book2014simulation-modeling
Consultation about the Related Frequent Patterns for Market Basket AnalysisZin Mar Myo, May Phyo OoOctober-Download

In this paper, frequent patterns are generated by using an association rule mining algorithm without candidate generation. First, a frequent pattern tree structure is developed to compress a large database into a compact structure. This structure stores complete information about frequent patterns, so there is no need to repeat database scans. Then, for mining the complete set of frequent patterns, an FP-tree-based mining method is applied to avoid the costly generation of a large number of candidate sets. Moreover, a divide-and-conquer method is also used to decompose the mining task into a set of smaller tasks for mining confined patterns in conditional databases. By decomposing the mining task, the search cost can be reduced.
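A simplified divide-and-conquer frequent-pattern sketch in the spirit of the method above. For brevity, the compact FP-tree is replaced here by plain projected transaction lists; the conditional-database recursion is the same idea:

```python
from collections import Counter

def mine(transactions, min_support, suffix=()):
    """Recursively mine frequent itemsets via conditional databases.

    A simplified sketch of the FP-growth idea: instead of an FP-tree,
    each recursion projects the raw transactions onto one frequent item.
    """
    counts = Counter(item for t in transactions for item in set(t))
    frequent = {}
    for item, support in counts.items():
        if support < min_support or item in suffix:
            continue
        pattern = tuple(sorted(suffix + (item,)))
        frequent[pattern] = support
        # Conditional database: only transactions containing `item`.
        conditional = [t for t in transactions if item in t]
        frequent.update(mine(conditional, min_support, suffix + (item,)))
    return frequent

baskets = [["bread", "milk"], ["bread", "butter", "milk"],
           ["bread", "butter"], ["milk", "jam"]]
for pattern, support in sorted(mine(baskets, min_support=2).items()):
    print(pattern, support)
```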

fcsproceeding-book-in-soft-version2008data-mining
Performance Analysis of a QAPF Scheduling Scheme in LTE NetworkZin Mar Myo, Zar Lwin Phyo, Aye Thida WinDecemberLTE, Multimedia Services, OFDMA, QoSDownload

Long Term Evolution (LTE) is a cellular network that operates completely in the packet domain. It can also support multimedia services. Due to this nature, provisioning the Quality of Service (QoS) requirements has become a challenge to the design of the scheduler. Especially, the scheduler must guarantee the transmission of real-time traffic such as VoIP (voice over IP). In this paper, a scheduler that satisfies the different QoS requirements is presented. This scheduler adds QoS considerations to Proportional Fair (PF), which is a well-known scheduling algorithm. As the core work of this paper, its performance is evaluated by comparing with existing well-known schedulers. For non-real-time traffic such as file download, the performance of the scheduling scheme is evaluated by comparing with the PF scheduler. For real-time traffic such as live video streaming, its performance is compared with the Frame Level Scheduler (FLS).

fcsonline-journal2019network-and-information-security
Replacing Same Meaning in Sentence Using Natural Language UnderstandingTin Htar New, Zar Lwin Phyo, Zin Mar MyoAugustNLP, A chunk based syntax analyzer, Intelligence methods, Machine TechniqueDownload

NLP (Natural Language Processing) can be used to communicate with computers by means of intelligent methods in a natural language. In other words, it is very useful for classification and analysis. Although many natural language syntax analyzers existed before, they still do not fulfill the requirements for analyzing English text for English-to-English Machine Translation. In this paper, we have proposed a chunk-based analyzer for an English-to-English Machine Translation System. The keyword search technique is one of the techniques of Natural Language Understanding. A simple keyword-based matching technique is used for classification. Domain-specific dictionaries of keywords are used to reduce the dimensionality of the feature space. The output result is that a word matched to a known word will be replaced in the given sentence without changing the original meaning.

fcsonline-journal2018nlp
Sentence Level Reordering System for Myanmar-English Machine Translation SystemAye Thida WinMayMachine Translation, Sentence level reordering, HMMDownload

Machine translation is an application of Natural Language Processing (NLP) technologies. The main issue of machine translation lies in how to map characteristics of the source language (SL) to the target language (TL). Languages have their own components, expressions and word order to construct a correctly ordered sentence. Reordering is needed in machine translation when we translate from one language to another to get the correct order. In this paper, we describe a rule-based reordering system for Myanmar-English sentence level reordering (MESLR). Reordering is an important step that rearranges words from the translation process in Myanmar-English machine translation to get a proper English sentence. In this paper, we generated POS movement rules, a function tag definition algorithm and function tag movement rules. The main processes of this paper are the splitting and reordering processes. In the splitting process, we propose an algorithm to define function tags in simple sentences and a phrase identification algorithm to define function tags for ambiguous sentences in the Myanmar-to-English translation system. The reordering process includes two steps: word level reordering (WLR) and phrase level reordering (PLR).
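A toy illustration of function-tag-based word level reordering from verb-final order to English SVO order. The tags, the ordering rule and the glossed example are illustrative, not the paper's actual rule set:

```python
# Myanmar is verb-final (SOV), English is SVO: once translated words carry
# function tags, a target-order rule rearranges them.
SVO_ORDER = ["SUBJ", "VERB", "OBJ"]

def reorder(tagged_words):
    """Sort translated words into English SVO order by their function tag."""
    return [word for tag in SVO_ORDER
                 for word, t in tagged_words if t == tag]

# Gloss of a verb-final source sentence: "I rice eat" -> "I eat rice"
source = [("I", "SUBJ"), ("rice", "OBJ"), ("eat", "VERB")]
print(" ".join(reorder(source)))   # I eat rice
```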

fcsonline-journal2013nlp
Syntactic Reordering Approach for Myanmar-English Machine Translation SystemAye Thida Win13-15 SeptemberSyntactic MT, Semantic MT, ReorderingDownload

In this paper, we present a reordering approach that rearranges the raw sentence produced by the translation process. Translated words and POS tags, with noun phrases and verb phrases, are entered as input. In Machine Translation (MT), one of the main problems to handle is word reordering. In Myanmar-to-English translation, word level (WLR), phrase level (PLR) and clause level (CLR) reordering are crucially important to get a correctly ordered sentence. This paper describes a Myanmar-to-English reordering system. The system includes two phases: sentence splitting and sentence reordering. The splitting phase implements the proposed algorithm and a phrase identification algorithm to split sentences and define function tags based on Myanmar-English grammar patterns. The reordering phase includes three processes: WLR, PLR and CLR. When translating from Myanmar to English, translated words are reversed within each phrase, and such phrases are also reversed in the sentence. Some Myanmar sentences have a compound sentence structure; to get the correct English order when reordering a translated compound Myanmar sentence, the clauses must be reordered according to the target sentence side, so a CLR algorithm is needed for translated compound Myanmar sentences. Experiments are carried out on four related NLP tasks — WLR, PLR, CLR and function tagging — illustrating the effectiveness of the POS movement rules and algorithms.

fcsonline-publication2013nlp
Proposed Myanmar Word Tokenizer based on LIPIDIPIKAR TreatiseThein Than Thwin, Aye Thida Win, Pyho Phyo Wai16-18 AprilTokenization, phonetic of myanmar words, LIPIDIPIKAR TreatiseDownload

Natural Language Processing (NLP) based technologies are now becoming important, and future intelligent systems will use more of these techniques as the technology improves explosively. But Asia is a dense area in the NLP field because of its linguistic diversity, and many Asian languages are inadequately supported on computers. The Myanmar language is an analytic language, but it includes special characters like the killer, medials, etc. In English or European languages, all syllables are formed by combining alphabets that represent only consonants and vowels, but the Myanmar language uses compound syllables that are more difficult to analyze. So we can face difficulties in word sorting. In our proposed system, the condensed form of Myanmar ordinary scripts is transformed into analyzable elaborated scripts based on the LIPIDIPIKAR treatise written by Yaw Min Gyi U Pho Hlaing. These elaborated words can be easily sorted by using this treatise. In our proposed system, the complexity of sorting Myanmar condensed words is compared with the complexity of sorting elaborated words.

fcsonline-publication2010nlp
Word to Phrase Reordering Machine Translation System in Myanmar-EnglishAye Thida Win11-15 MarchWord Reordering, Phrase Reordering, Machine TranslationDownload

In machine translation (MT), one of the main problems to handle is word reordering. This paper focuses on the design and implementation of an effective machine translation system from Myanmar to English. The framework of this paper is a reordering approach for English sentences. We propose an approach to generate the target sentence by using a reordering model that can be incorporated into Statistical Machine Translation (SMT). Myanmar sentences and English sentences are not structurally parallel. In this paper, we present a Myanmar-to-English translation system that is our ongoing research. Input processing, tokenization, segmentation, translation and English sentence generation are included in this system. In this paper, we describe the English sentence generation. The aim of this paper is to reassemble the English words into a proper sentence. The raw sentence resulting from the translation process is reassembled to form the English sentence. Subject/verb agreement, article checking and tense adjustment processes will also be performed according to English grammar rules. The English sentence generation is suitable for the Myanmar-to-English translation system.

fcsonline-publication2011nlp
Clause Level Reordering for Myanmar-English Translation SystemAye Thida Win28-29 FebruaryMyanmar-English Machine Translation, Clause Level ReorderingDownload

In Machine Translation (MT), one of the main problems to handle is word reordering. Myanmar and English are linguistically different language pairs: Myanmar is a verb-final language and English is a verb-initial language. After the Myanmar-to-English translation process, the translated English words need to be reordered to produce a properly ordered English sentence. The sentence level reordering system consists of word level, phrase level and clause level reordering processes. In order to get the correct order of the English sentence, we propose a Clause Level Reordering (CLR) algorithm and an article checking algorithm for the Myanmar-to-English translation system by mapping English grammar patterns. The CLR algorithm can reorder the disordered words, and it can also add relative words which are not contained in the translated sentence. Missing articles are added by using the Article Checking Algorithm. We evaluate the effectiveness of the proposed CLR algorithm on compound and complex English sentences.

fcsproceeding-book2012nlp
Phrase Reordering Translation in Myanmar-EnglishAye Thida Win5-6 MayMachine Translation, Phrase ReorderingDownload

Machine Translation is the attempt to automate all or part of the process of translation from one human language to another. This definition involves accounting for the grammatical structure of each language and using rules and grammars to transfer the grammatical structure of the source language (SL) into the target language (TL). Myanmar-to-English machine translation can be used to facilitate learning for beginner Myanmar-English language learners, or vice versa, and to help them study grammar. Myanmar and English are linguistically different language pairs. The aim of this paper is to reassemble an unordered set of English words into a proper English sentence. This paper proposes rules for the positions of English POS tags. The raw sentence resulting from the translation process is reassembled to form the English sentence. In this system, subject/verb agreement, article checking and tense adjustment processes will also be performed according to English grammar rules. The proposed system of this paper is a reordering approach for English sentences. In our proposed system, we consider the positions of English POS tags, which are then swapped to obtain proper English sentences by using the reordering rules and mapping English grammar patterns.

fcsproceeding-book2011nlp
Syntactic Reordering Machine Translation System for Myanmar-EnglishAye Thida WinDecemberMachine Translation, Syntactic reorderingDownload

Machine translation is the attempt to automate all or part of the process of translation from one human language to another. A syntax-based statistical translation model captures linguistic differences such as word order and case marking. In Machine Translation (MT), one of the main problems to handle is word reordering. Informally, a word is reordered when it and its translation occupy different positions within the corresponding sentences. Syntactic reordering approaches are an effective method for handling systematic differences in word order between source and target languages within the context of a Statistical Machine Translation (SMT) system. This paper applies syntactic reordering to Myanmar-English translation. In this paper, we present a reordering approach that rearranges the raw sentence produced by the translation process. Translated words and POS tags, with noun phrases and verb phrases, are entered as input. This system reorders the raw sentence to get a proper English sentence by mapping English grammar patterns. First, POS position movement is based on a correct-position text file. Second, the function tag definition process is based on the reordering algorithm. This system presents reordering patterns and rules of POS positions which can be used at word level and phrase level.

fcsproceeding-book2011nlp
Framework of Phrase Reordering Machine Translation System in Myanmar-EnglishAye Thida WinDecemberReordering, Natural Language Processing, Machine TranslationDownload

In Machine translation (MT), one of the main problems to handle is word reordering. This paper focuses on the design and implementation of an effective machine translation system from Myanmar to English. The framework of this paper is a reordering approach for English sentences. We propose an approach to generate the target sentence by using a reordering model that can be incorporated into Statistical Machine Translation (SMT). In this paper, we present a Myanmar-to-English translation system that is our ongoing research. Input processing, tokenization, segmentation, translation and English sentence generation are included in this system. In this paper, we describe the English sentence generation. The aim of this paper is to reassemble the English words into a proper sentence. The raw sentence resulting from the translation process is reassembled to form the English sentence. Subject/verb agreement, article checking and tense adjustment processes will also be performed according to English grammar rules. The English sentence generation is suitable for the Myanmar-to-English translation system.

fcsproceeding-book2010nlp
Online Asean Directory System Based On Mobile AgentEi Khaing Zaw, Zaw Win MyintDecemberMobile agent; WebpageDownload

Mobile agent technology has the ability to travel from one host to another in a network and to choose the places it wants at the same time. The proposed systems are computational software processes capable of roaming the World Wide Web, interacting with foreign hosts, gathering information on behalf of their owner, and coming ‘back home’ after performing the duties set by the owner. Nowadays, as the amount of information on the Web has increased rapidly, it has become more difficult for common web users to get the information they require. In order to solve such kinds of problems, an information retrieval system based on mobile agents is proposed to give users relevant information according to their location.

fcsproceeding-of-international-conference2012multi-agent-system
Review On Continuous Speech Recognition SystemZaw Win Myint, Yin Win Chit, Phyoe Theingi KhaingMarchDeep Belief Network, Deep Neural Network, Support Vector Machines, Convolutional Neural Network, Artificial IntelligenceDownload

This Automatic Speech Recognition (ASR) system translates the human speech signal into text words, and the Myanmar continuous speech signal still poses many challenges. Deaf people can’t communicate with each other based on human speech and can’t hear the broadcast news at every station that they need when travelling. But deaf people also need to communicate with everybody and need to know about the broadcast news everywhere. This system is designed to recognize the greeting words in daily communication for deaf people. First of all, the daily communication words and broadcast news are gathered in a sound database. The audio (.wav) files of daily communication word speech are used as the input of the system. After passing the pre-processing step, these speech audio files are segmented using a dynamic thresholding method based on time and frequency domain features. Then important features of the segmented speech signal are extracted by Mel Frequency Cepstral Coefficients (MFCC). Especially, DCNN-AlexNet has been applied in image processing because it can perform as a highly accurate, effective and powerful classifier. In the training and classification steps of this system, the advantages of DCNN in image processing are applied using the MFCC feature images. In this system, two types of input features are used: the traditional MFCC features and MFCC feature images converted from the speech signal (.png format). Then these features are trained to build the acoustic model and are classified using a Deep Convolutional Neural Network (DCNN) classifier. Finally, the continuous speech sentence is recognized as Myanmar text using a codebook. The experiments show that the DCNN speech recognition system achieves an average Word Error Rate (WER) of 12.5% on the MFCC image training dataset and a WER of 16.75% on the original MFCC feature value training dataset.
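The reported Word Error Rate is conventionally computed as the word-level edit distance divided by the reference length; a minimal sketch with an illustrative sentence pair:

```python
def wer(reference, hypothesis):
    """Word Error Rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("good morning to everyone", "good morning everyone"))  # 0.25
```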

fcsonline-publication2020digital-signal-processing
Agent-Based E-Commerce Workflow SystemWai Mi Mi Thaw, Zaw Win MyintDecemberMobile agent; Web pageDownload

In this paper, an agent-based e-commerce workflow system is presented to help eliminate problems in the e-commerce domain by using an agent-based approach. In e-commerce, there is always a problem of searching for the right item in less time without involving the contractors. Both intelligent agent and workflow technology have been applied to business process management. Using these two techniques, the market framework offers time saving and flexibility through interaction among the agents. The system in this paper involves the order capture, order processing and order fulfillment processes in purchasing supermarket items. These processes are provided by an order capture agent, an order processing agent, an order fulfillment agent and a storage agent. In this system, agents perform different functions when it comes to online ordering. Agents can provide refined search and purchasing of various items in the supermarket in order to take care of customer needs. The system in this paper utilizes the dataset of KaungMon supercenter in Magway Township.

fcsproceeding-of-international-conference2011intelligent-agent
Word Stemming Using Porter Stemming AlgorithmSaw Myat Khaing,Zaw Win MyintDecemberStemming technique; Data analyzingDownload

Word stemming is document processing that groups documents with a similar concept. Stemming is a very useful and important technique for analyzing data. The Porter Stemmer is a very widely used and available stemmer, and is used in many applications. Stemming algorithms can be used in search engines such as Lycos and Google, and also in thesauruses and other products using Natural Language Processing (NLP) for the purpose of Information Retrieval (IR). IR is essentially a matter of deciding which documents in a collection should be retrieved to satisfy a user's need for information. A stemming algorithm, or stemmer, which attempts to reduce a word to its stem or root form, has been developed. The key terms of a query or document are represented by stems rather than by the original words. Stemming has many algorithms; among them, this system uses the Porter stemming algorithm.
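The core idea of suffix stripping can be illustrated with a much-simplified sketch. Note this is a toy, not the real Porter algorithm, which applies five ordered rule steps with conditions on the stem's "measure"; the suffix list here is a hypothetical shortcut:

```python
# Toy suffix list, longest suffixes tried first; NOT the real Porter rule set.
SUFFIXES = ["ization", "ational", "fulness", "iveness",
            "ments", "ions", "ness", "ing", "ion", "ed", "es", "s"]

def simple_stem(word):
    """Strip the first matching suffix, keeping at least a 3-letter stem."""
    word = word.lower()
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

for w in ["connection", "connected", "connecting", "connections"]:
    print(w, "->", simple_stem(w))   # all map to the stem "connect"
```

Mapping query and document terms through such a stemmer is what lets an IR system match "connected" against "connection".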

fcsproceeding-of-international-conference2012database-management-system
Agent-Based Auction System For Online MarketLae Lae Shwe ,Zaw Win MyintDecemberContract Net (CNET) Protocol ; Multi agentDownload

In this paper, Contract Net (CNET) Protocol theory in a multi-agent system is applied to improve auction techniques. Using intelligent agent technology, the market framework offers time-saving automation of auctions and flexibility through negotiations among the agents. Agents may perform different functions when it comes to the online auction. They can participate in an auction selected by the user, being responsible for the bidding process of the participating agents. The agent-based auction can be used as a proven mechanism for fast and efficient price allocation. The design of an auctioneer agent that makes decisions on behalf of seller and buyer agents is also presented. The proposed agent-based auction system is expected to improve the quality of service in electronic commerce.
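At its core, a Contract Net round is an announce-bid-award cycle: a manager announces a task, contractor agents reply with bids or decline, and the manager awards the contract to the best bid. A minimal sketch (agent names and bid functions are hypothetical, not the paper's design):

```python
def contract_net(task, contractors):
    """One Contract Net round: announce a task, collect bids, award to the
    cheapest bidder. A contractor returns its cost, or None to decline."""
    bids = {}
    for name, bid_fn in contractors.items():
        offer = bid_fn(task)           # announcement: ask each agent for a bid
        if offer is not None:          # None means the agent declines
            bids[name] = offer
    if not bids:
        return None                    # no agent can perform the task
    return min(bids, key=bids.get)     # award: lowest cost wins the contract

# hypothetical contractor agents
contractors = {
    "agent_a": lambda t: 12 if t == "deliver" else None,
    "agent_b": lambda t: 9 if t == "deliver" else None,
    "agent_c": lambda t: None,
}
print(contract_net("deliver", contractors))   # agent_b wins with the lowest cost
```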

fcsproceeding-of-international-conference2011intelligent-agent
Estimating The Project Duration By Using Critical Path MethodNyein Zaw Win ,Zaw Win MyintDecemberCritical Path ; Tree structure ; Project estimation; Gantt chartDownload

Project scheduling involves the creation of various graphical representations of part of the project plan. These include activity charts showing the interrelationships of project activities. The Critical Path is the longest path through the project network, and its duration is the shortest amount of time in which the project can be successfully completed. A delay in any one activity on the Critical Path delays the entire project. With Critical Path Method (CPM) computer software packages, only a few minutes are required to input the information and generate the critical path. If the project manager wants to meet the deadline of the project completion date, the system provides the supporting information, giving the best critical path that estimates the completion time of the project. In this paper, the estimation of project duration with the Critical Path is presented. The Critical Path of the given project is shown with a tree structure, and then the project estimate is shown with a Gantt chart.
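The forward and backward passes that CPM packages perform can be sketched in a few lines. This is a minimal illustration on a hypothetical four-task project, not the paper's implementation: the forward pass computes earliest finish times, the backward pass computes latest finish times, and tasks with zero slack form the critical path.

```python
def critical_path(tasks):
    """tasks: {name: (duration, [predecessor names])}.
    Returns (project duration, list of critical tasks)."""
    earliest = {}                          # earliest finish time per task

    def ef(name):                          # forward pass (memoized recursion)
        if name not in earliest:
            dur, preds = tasks[name]
            earliest[name] = dur + max((ef(p) for p in preds), default=0)
        return earliest[name]

    project = max(ef(t) for t in tasks)

    # backward pass: visit tasks in decreasing earliest-finish order, which is
    # a valid reverse topological order when durations are positive
    latest = {t: project for t in tasks}
    for t in sorted(tasks, key=earliest.get, reverse=True):
        for p in tasks[t][1]:
            latest[p] = min(latest[p], latest[t] - tasks[t][0])

    critical = [t for t in tasks if earliest[t] == latest[t]]  # zero slack
    return project, critical

tasks = {"A": (3, []), "B": (2, ["A"]), "C": (4, ["A"]), "D": (1, ["B", "C"])}
print(critical_path(tasks))   # B has slack; A -> C -> D is critical
```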

fcsproceeding-of-international-conference2010software-engineering
Finding Minimum Spanning Tree By Using Prim And Kruskal AlgorithmsKhin Thet Mon ,Zaw Win MyintDecemberMinimum Spanning Tree; Kruskal; PrimDownload

Finding a minimum spanning tree of a given weighted graph G (V, E) is an important graph problem. A minimum spanning tree is a tree formed from a subset of the edges in a given undirected graph, with two properties: (1) it spans the graph, i.e., it includes every vertex in the graph, and (2) it is a minimum, i.e., the total weight of all the edges is as low as possible. In this paper, the minimum spanning tree is found in a connected graph with edge weights. For this purpose, Kruskal's and Prim's algorithms are applied. Then these two algorithms are compared with a weight graph and a run-time graph. The implemented system intends to show graph theory applied to find the shortest path by using a minimum spanning tree.
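Of the two algorithms compared, Kruskal's is the shorter to sketch: sort the edges by weight and greedily accept each edge that joins two different components, tracked with a union-find structure. A minimal illustration on a hypothetical 4-vertex graph:

```python
def kruskal(n, edges):
    """edges: list of (weight, u, v) with vertices 0..n-1.
    Returns (total weight, list of accepted tree edges)."""
    parent = list(range(n))                 # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    total, tree = 0, []
    for w, u, v in sorted(edges):           # greedily try lightest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                        # accept only if it joins two components
            parent[ru] = rv
            total += w
            tree.append((u, v))
    return total, tree

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))   # total weight 6; the two heaviest edges are rejected
```

Prim's algorithm reaches the same tree by growing a single component from one start vertex instead.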

fcsproceeding-of-international-conference2010artificial-intelligence
Prevention for cancer diseases associated with vegetablesAye Aye BoAugust, Volume 4, Issue 8Harvard-Led Physicians health, Cancer, Associations, VegetablesDownload

Vegetables are important for human health because of their vitamins, minerals, phytochemical compounds, and dietary fiber content. Especially the antioxidant vitamins (vitamin A, vitamin C, and vitamin E) and dietary fiber content have important roles in human health. Adequate vegetable consumption can be protective against some chronic diseases such as diabetes, cancer, obesity, metabolic syndrome and cardiovascular diseases, as well as improve risk factors related to these diseases. Fruits and vegetables are universally promoted as healthy. Fruits and vegetables include a diverse group of plant foods that vary greatly in content of energy and nutrients. Fruits and vegetables also supply vitamins and minerals to the diet and are sources of phytochemicals that function as antioxidants, phytoestrogens, and anti-inflammatory agents and through other protective mechanisms. Multivitamins may slightly reduce the risk of cancer but do not prevent heart disease. There is a wealth of data available within the health care system. Advanced data mining techniques can help in medical situations. The Harvard-led Physicians' Health Study II (PHS II) recently found that taking a multivitamin slightly lowers the risk of being diagnosed with cancer. Fruits and vegetables contain many biologically active ingredients that may help to prevent cancer in ways that vitamins and minerals alone do not. The PHS II was the first study to test a standard multivitamin for the prevention of chronic disease. The cardiovascular disease portion of the study focused on whether taking a multivitamin reduced the risk of heart attacks, strokes, and death from cardiovascular disease. It is important not to overplay the benefit that PHS II found for preventing cancer. What you eat and don't eat can have a powerful effect on your health, including your risk for cancer. We conducted a systematic review and meta-analysis to clarify these associations.

itsmjournal2019data-mining
Finding Frequent Itemsets by using Hash-based TechniqueYin Win Chit ,Zaw Win MyintDecemberDatabase Management System; Hash based techniqueDownload

In data mining, finding frequent itemsets plays an important role, which is used to identify the correlations among the fields of a database. This paper is intended to introduce the general concept of frequent itemset mining. Finding such frequent patterns plays an essential role in mining associations, correlations, clustering and other data mining tasks as well. In this paper we present a hash-based algorithm which uses hashing technology to store the database in horizontal data format. This paper is implemented for finding frequent itemsets on a crops data set. By using this system, the user can reduce the size of candidate generation; the system improves the efficiency of the Apriori method by using a hash table directly. The main purpose of this paper is to find the optimal itemsets by using the hash-based technique.
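The abstract does not spell out the exact algorithm, but the hash-table refinement of Apriori is usually done in the spirit of PCY: on the first pass, hash every candidate pair into a small bucket table; on the second pass, count only pairs whose bucket total reaches the support threshold. A sketch under that assumption, with hypothetical crop transactions:

```python
from collections import Counter
from itertools import combinations

def frequent_pairs_hashed(transactions, min_support, n_buckets=7):
    # Pass 1: count single items and hash every pair into a small bucket table.
    item_counts = Counter()
    buckets = [0] * n_buckets
    for t in transactions:
        item_counts.update(t)
        for pair in combinations(sorted(t), 2):
            buckets[hash(pair) % n_buckets] += 1
    frequent_items = {i for i, c in item_counts.items() if c >= min_support}
    # Pass 2: count only pairs of frequent items whose bucket is frequent;
    # collisions can only overcount, so no truly frequent pair is lost.
    pair_counts = Counter()
    for t in transactions:
        for pair in combinations(sorted(set(t) & frequent_items), 2):
            if buckets[hash(pair) % n_buckets] >= min_support:
                pair_counts[pair] += 1
    return {p: c for p, c in pair_counts.items() if c >= min_support}

tx = [["rice", "bean"], ["rice", "bean", "corn"], ["rice", "corn"]]
print(frequent_pairs_hashed(tx, min_support=2))
```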

fcsproceeding-of-international-conference2010data-mining
Contract Net Protocol based-On Multiagent System for Desirable Jeep CarSaw Win,Zaw Win Myint30 DecemberContract Net Protocol ; Multiagent SystemDownload

This paper uses the Contract Net Protocol based on a multiagent system for a car purchasing system. Normally, buyers look for their desired car with a search engine, or must go to every Industrial Zone or company (producing jeep cars) for the desired jeep car. Perhaps they are lucky and find a map showing how to get to the Industrial Zone or company. This system, using the Contract Net Protocol, can solve these problems. The Contract Net allows tasks to be distributed among a group of agents. One of the most promising uses for the Contract Net is to create an electronic marketplace for buying and selling goods. In this agency system, the buyer/customer goes to this system and can view car designs and car data. Then, he/she can choose the desired jeep car by selecting the car's items. This system automatically chooses a suitable jeep car for the user. There is no need to navigate to every Industrial Zone or company, because this system is designed specially for customers who intend to buy a jeep car.

fcsproceeding-of-international-conference2009artificial-intelligence
SLA guaranteed migration decision model using MCDM approachZar Lwin Phyo, 29 AugustAHP, PROMETHEE, QoS, SLA, Virtualized Data CenterDownload

In a Virtualized Data Center (VDC), the workload in each node changes dynamically over time, so the workload on some machines may exceed the threshold, which not only fails to guarantee the quality of service (QoS) requirements, but also wastes resources. One of the key benefits of virtualization is that reassignment of a virtual machine to another physical host can be done when resource shortage or poor utilization conditions occur in a host. However, in this case, an efficient migration decision is required to reduce migration cost and the ping-pong effect. In this paper, we propose an SLA-guaranteed migration decision model for resource management in a VDC based on the integrated approach of the Analytical Hierarchy Process (AHP) and the Preference Ranking Organization METHod for Enrichment Evaluations (PROMETHEE). In this case, AHP is used to assign weights to the criteria to be used in the selection phase, while PROMETHEE is employed to determine the priorities of the alternatives.
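The AHP weighting step can be illustrated briefly: criteria are compared pairwise, and priority weights are derived from the comparison matrix. The sketch below uses the common normalized-column-average approximation of the principal eigenvector; the criteria names and judgments are hypothetical, not taken from the paper.

```python
def ahp_weights(matrix):
    """Approximate AHP priority weights by averaging the normalized columns
    of the pairwise comparison matrix (matrix[i][j] = importance of i over j)."""
    n = len(matrix)
    col_sums = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    # normalize each column to sum to 1, then average across each row
    return [sum(matrix[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

# e.g. CPU load judged 3x as important as memory, 5x as important as bandwidth
m = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
w = ahp_weights(m)
print([round(x, 3) for x in w])   # weights sum to 1, ordered CPU > memory > bandwidth
```

PROMETHEE then ranks the migration alternatives using these weights in its preference functions.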

fcsonline-publication proceeding-book2015virtualization
Comparison of Binning Methods By Using Naïve Bayesian ClassificationThein Kyawe ,Moe Moe HlaingDecemberData Analytics, Naïve Bayes, Binning methods, Hold outDownload

Data Mining is a multidisciplinary field which supports knowledge workers who try to extract information in our "data rich, information poor" environment. Its name stems from the idea of mining knowledge from large amounts of data. The tools of data mining, such as Decision Tree Induction, Naïve Bayesian Classification and Genetic Algorithms, assist us in the discovery of relevant information through a wide range of data analysis techniques. Any method used to extract patterns from a given data source is considered to be a data mining technique. This system classifies the class label of unknown records by using the Naïve Bayesian Classification tool. Classification is a form of data analysis that can be used to extract models describing important data classes. This paper emphasizes different types of binning methods for numerical data, each of which was tested against the Bayesian classification methodology. The holdout method is used to assess the accuracy of the classifier. Testing is made using various data sets. This system is implemented by using Microsoft Visual Studio 2008 and Microsoft Access tools.
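One of the binning methods typically compared is equal-width binning, which splits the value range into k intervals of equal size so that a continuous attribute can feed a discrete Naïve Bayes model. A minimal sketch with hypothetical values (the paper's exact binning variants are not reproduced here):

```python
def equal_width_bins(values, k):
    """Assign each value a bin index 0..k-1 over k equal-width intervals."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k
    # integer division by the width; clamp the maximum into the last bin
    return [min(int((v - lo) / width), k - 1) for v in values]

print(equal_width_bins([2, 4, 7, 9, 15, 21], 3))   # three bins of width (21-2)/3
```

Equal-frequency binning, by contrast, would place roughly the same number of values in each bin, which behaves differently on skewed attributes.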

itsmconference2010data-mining
Correlation Based VMs Placement Resource ProvisionZar Lwin Phyo, Thandar Thein, Februaryconsolidation, resource provisioning, resource utilization, SLA, Virtualized Data CenterDownload

In a Virtualized Data Center (VDC), a single Physical Machine (PM) is logically divided into one or more Virtual Machines (VMs) that share physical resources. Therefore, dynamic resource provisioning plays an important role in a VDC. Moreover, the resource provider would like to maximize resource utilization, which forms a large portion of their operational costs. To achieve this goal, several consolidations can be used to minimize the number of PMs required for hosting a set of VMs. However, consolidation is often undesirable for users, who are seeking maximum performance and reliability from their applications. These applications may indirectly increase costs due to under-provisioning of physical resources. Moreover, frequent Service Level Agreement (SLA) violations result in lost business. To meet SLA requirements, over-provisioning of resources is used. However, if the services do not use all the CPU they have been allocated, the provider will suffer from low resource utilization, since unused resources cannot be allocated to other services running in the provider. Therefore, VM provisioning is the most vital step in dynamic resource provisioning. In this paper, a correlation-based VM provisioning approach is proposed. Compared to individual-VM based provisioning, correlation-based joint-VM provisioning can lead to much higher resource utilization. According to the experimental results, the proposed approach can save nearly 50% CPU resource in terms of overall CPU utilization.
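The intuition behind correlation-based joint provisioning is that two VMs whose CPU demands are negatively correlated rarely peak together, so their combined capacity need is far below the sum of their individual peaks. A sketch of the correlation check with hypothetical utilization traces (the paper's full provisioning model is not reproduced):

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# CPU utilisation traces (%): the two VMs peak at different times, so placing
# them on one PM needs less capacity than provisioning each for its own peak
vm1 = [80, 60, 30, 20, 30, 60]
vm2 = [20, 35, 70, 85, 70, 35]
print(round(pearson(vm1, vm2), 2))   # strongly negative: good co-location candidates
```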

fcsonline-publication proceeding-book2013virtualization
Interactive Tutorial Framework for Online LearningMin Min Naing , Moe Moe HlaingApril, 2020 Volume 5, Issue 4Interactive, Tutorial, Online LearningDownload

Nowadays, online learning, teaching and online tutorials have become popular in higher education. Online learning is the delivery of learning, training or an educational program by electronic means. So, this system is intended to develop an interactive tutorial website for online learners. A web-based tutorial is a complete program whose purpose is to assist users in learning. It is a method of transferring knowledge, an interactive set of instructions that teaches by example. The online tutorial framework is a technique for presenting a small exam through a computer program. So, this system will examine the student's study level through answers to random comprehensive questions. This system enhances the quality and improves the accessibility within the education domain.

itsmjournal2020web-engineering
CPU Usage Prediction Model for Virtualized Data CenterZar Lwin Phyo, Thandar TheinFebruaryCPU Usage Prediction, Machine Learning, SLA, Virtualized Data CenterDownload

In a Virtualized Data Center (VDC), a single Physical Machine (PM) is logically divided into one or more Virtual Machines (VMs) that share physical resources. Therefore, dynamic resource provisioning plays an important role in a VDC. Moreover, the resource provider would like to maximize resource utilization, which forms a large portion of their operational costs. To achieve this goal, several consolidations can be used to minimize the number of PMs required for hosting a set of VMs. However, consolidation is often undesirable for users, who are seeking maximum performance and reliability from their applications. These applications may indirectly increase costs due to under-provisioning of physical resources. Moreover, frequent Service Level Agreement (SLA) violations result in lost business. To meet SLA requirements, over-provisioning of resources is used. However, if the services do not use all the CPU they have been allocated, the provider will suffer from low resource utilization, since unused resources cannot be allocated to other services running in the provider. Therefore, VM provisioning is the most vital step in dynamic resource provisioning. In this paper, a correlation-based VM provisioning approach is proposed. Compared to individual-VM based provisioning, correlation-based joint-VM provisioning can lead to much higher resource utilization. According to the experimental results, the proposed approach can save nearly 50% CPU resource in terms of overall CPU utilization.

fcsproceeding-book2013virtualization
Multi Criteria Decision Making for Resource Management in Virtualized EnvironmentZar Lwin Phyo, Thandar TheinFebruaryMCDM approach, QoS, SLA, Virtualized Data CenterDownload

The workload in each node changes dynamically over time in large-scale data centers, so the workload on some machines may exceed the threshold, which not only cannot guarantee the quality of service (QoS) requirements, but may also waste resources. One of the key benefits of virtualization is that reassignment of a virtual machine to another physical host can be done when resource shortage or poor utilization conditions occur in a host. In this paper, we propose a decision support system for resource management in a Virtualized Data Center (VDC) based on the integrated approach of the Analytical Hierarchy Process (AHP) and the Preference Ranking Organization METHod for Enrichment Evaluations (PROMETHEE). In this case, AHP is used to assign weights to the criteria to be used in the selection phase, while PROMETHEE is employed to determine the priorities of the alternatives.

fcsproceeding-book2012virtualization
Best resource node selection in Grid EnvironmentZar Lwin PhyoMayRough Sets, Grid Computing, resource selectionDownload

Grid computing can provide abundant resources to assist many studies and can be employed in more and more projects that have so far yielded numerous findings. However, in a grid environment, many distributed resources are integrated in different sites; resources are often shared by multiple user communities and can vary greatly in performance and load characteristics. Therefore, a central problem in a grid environment is the selection of computation nodes for execution. Automatic selection of a node is complex, as the best choice depends on the user request structure as well as the expected availability of computation and communication resources. This paper considers the ongoing research on Rough Sets based selection of the best resource node in a computational grid. Our node selection method is based on the CPU capacity, communication bandwidth, and memory and disk availability of the resource nodes. The goal of this paper is to enable automatic selection of the best available resource node on the network for execution within a reasonable time. In this paper, random data is used to test the validity of the proposed method. The result showed that this method can provide a suitable probability of the best resource selection.
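The rough-set machinery behind such selection rests on indiscernibility classes: nodes with identical condition-attribute values are indistinguishable, and the lower approximation of the "suitable" set contains only nodes whose whole class is suitable, i.e. nodes that can be recommended with certainty. A minimal sketch with a hypothetical node table (attribute names and values are illustrative, not the paper's data):

```python
def lower_approximation(objects, attrs, target):
    """Rough-set lower approximation: objects whose indiscernibility class
    (same values on `attrs`) lies entirely inside the target set."""
    classes = {}
    for name, row in objects.items():
        classes.setdefault(tuple(row[a] for a in attrs), []).append(name)
    return {n for cls in classes.values() if set(cls) <= target for n in cls}

# hypothetical grid nodes described by discretized capacity attributes
nodes = {
    "n1": {"cpu": "high", "bw": "high"},
    "n2": {"cpu": "high", "bw": "low"},
    "n3": {"cpu": "high", "bw": "high"},
    "n4": {"cpu": "low",  "bw": "high"},
}
good = {"n1", "n3"}   # nodes judged suitable in past runs
print(sorted(lower_approximation(nodes, ["cpu", "bw"], good)))
```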

fcsproceeding-book2011grid-computing
Decision Support System for Dynamic Resource Management in Virtualized Data CenterZar Lwin PhyoDecemberAHP, PROMETHEE, QoS, SLA, Virtualized Data CenterDownload

Virtualization has many advantages, such as lower cooling cost, lower hardware and software cost and more manageability. One of the key benefits is better load balancing by using Virtual Machine (VM) migration between hosts; however, VM migration depends on efficient decision support, and a variety of criteria is required. In this paper, we present an evaluation model for the decision support of VM migration in a Virtualized Data Center (VDC) based on the Analytical Hierarchy Process (AHP) and the Preference Ranking Organization METHod for Enrichment Evaluations (PROMETHEE).

fcsproceeding-book2011virtualization
Best resource node selection using rough sets theoryZar Lwin Phyo, Aye ThidaMarchgrid computing, rough set, resource selectionDownload

In Grid computing, the problem of selecting resources for specific jobs can be very complex, and may require co-allocation of different resources such as a specific amount of CPU hours, system memory and network bandwidth for data transfer, etc. Because of the lack of centralized control and the dynamic nature of resource availability, any successful selection mechanism should be highly distributed and robust to changes in the Grid environment. Moreover, it is desirable to have a selection mechanism that does not rely on the availability of coherent global information. This paper considers a selection method in a grid environment using rough set theory, which can select the best node in the grid environment. The proposed method is designed to achieve the following goals: handling a large number of incoming requests simultaneously; assigning each service efficiently to all the incoming requests; selecting the appropriate services for the incoming requests within a reasonable time. Random data is used to test the validity of the proposed method in this paper. The result showed that this method can provide a suitable probability of the best node selection.

fcsonline-publication2011grid-computing
Implementation of Shortest Path Finding System by Using Kruskal AlgorithmHtike Htet Aung , Dr Win Win ZawAugust , 3Graphs,shortest path, minimum-cost spanning tree, Kruskal AlgorithmDownload

Graphs have been used in a wide variety of applications. Some of these applications are analysis of electrical circuits, finding shortest routes, project planning, identification of chemical compounds, statistical mechanics, genetics, social sciences and so on. Indeed, it might well be said that of all mathematical structures, graphs are the most widely used. This paper is intended to study how graph theory is applied to find the shortest path by using a minimum spanning tree. In this study, popular locations of Mandalay City are implemented as the vertices of an undirected graph. In this system, the associated distances between each pair of locations are represented as the weights of the edges of the graph. There are three different algorithms to obtain a minimum-cost spanning tree of a connected, undirected graph. Our shortest path finding system is focused on the Kruskal Algorithm.

itsmconference2009artificial-intelligence
Quality Analysis of Pseudorandom Number Generator Using Rough SetsAye Myat Nyo, Zar Lwin Phyo, May Mar Oo, Chaw Yupar Soe, Thein Than Thwin, Aye Thida, Than Naing SoeJuneseed, pseudorandom sequence, search approach, data mining, decision ruleDownload

In this paper, a rough sets based analyzing system for Pseudorandom Number Generators (PRNGs) is proposed to analyze the quality of pseudorandom number generators. The strength of a cryptosystem relies on the quality of its PRNGs. In particular, their outputs must be unpredictable in the absence of knowledge of the inputs, and the input cannot be guessed. On the other hand, advances in computer science and technology make it possible to produce a sufficient amount of sequence numbers for all possible inputs (seeds) and store them in a database. By means of the rough set approach, the input (seed) can be guessed from those databases, using the known output sequence. So, the quality analysis of the PRNGs and a simple rule-based prediction system are presented in this paper; the design of generators is outside the scope of this paper.

fcsonline-publication2010cryptography
Predictive maintenance for vehicle services by using data mining techniquesAye Aye BoJuly, Volume 4, Issue 7Fleet management systems, Data mining, Maintenance, Vehicle serviceDownload

A well-planned data classification system makes essential data easy to use, find and retrieve. At the point when an individual considers buying a vehicle, there are many aspects that could impact his/her choice of which kind of vehicle he/she is interested in. Even a brand new motor vehicle could fail in any component if it is not maintained and serviced within a short period. Vehicle uptime is getting increasingly important as transport solutions become more complex and the transport industry seeks new ways of being competitive. Traditional Fleet Management Systems are gradually extended with new features to improve reliability, such as better maintenance planning. Typical diagnostic and predictive maintenance methods require extensive experimentation and modeling during development. This paper investigates unsupervised and supervised methods for predicting vehicle maintenance. The methods rely on a telematics gateway that enables vehicles to communicate with a back-office system. These data are later associated with the repair history and form a knowledge base that can be used to predict upcoming failures on other vehicles that show the same deviations. Data mining presents an opportunity to increase significantly the rate at which the volume of data can be turned into useful information. The purpose of the prediction algorithms is to forecast future values based on present records, in order to estimate the possibility of a machine breakdown and therefore to support maintenance teams in planning appropriate maintenance interventions. It is observed that the proposed data mining approach provides promising results in predicting the exact value.

itsmjournal2019data-mining
Rule-based Expert System for Paddy InformationTun Thura Htet, Aye Aye BoFebruaryExpert System, Artificial Intelligence, Knowledge RepresentationDownload

The rapid growth of technology has largely eased access to information. At the same time, it has become increasingly difficult for people to collect, filter, evaluate and really use the vast amount of information. An information system has access to at least one and potentially many information sources. The organization cannot be operated or managed effectively without information systems that use a range of information technologies. In Pwintbyu Township, the paddy production information is stored in documents. In this paper, a computerized paddy information system is implemented by using a knowledge representation technique. Knowledge representation, common in most expert systems currently developed, is best suited for applications in which the possible concerns are limited in number. With the help of the implemented system, it is expected to retrieve the information of Pwintbyu Township and predict the results concerned with the agriculture of Pwintbyu Township.

itsmconference2013artificial-intelligence
Implementation of Inference in Rule-based system for studying the Nature of BirdsHnin Yu Mon, Aye Aye BoDecemberRule-based System, BirdsDownload

A rule-based system expresses all of its knowledge in terms of explicit rules. Rule-based systems are typically used for applications which require a lot of expert knowledge, a high degree of explainability and a low level of creativity. Most rules are expressed in the form IF X THEN Y. Knowledge is the key for rule-based systems. A rule-based system emphasizes inferencing with rules. Rule-based systems have the flexibility to be used in a top-down or bottom-up method. Top-down is also known as forward chaining, and bottom-up is synonymous with backward chaining. A rule-based system typically consists of a rule base, working memory, and a rule interpreter. This paper presents details of birds by using forward chaining and backward chaining in a rule-based system. It is intended to display the inferencing of rules and also to describe how it can be used to demonstrate the information of birds.
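The forward-chaining cycle described above (match rules against working memory, fire any rule whose conditions hold, repeat until nothing new is derived) can be sketched in a few lines. The bird rules below are illustrative placeholders, not the paper's actual rule base:

```python
def forward_chain(rules, facts):
    """rules: list of (antecedent set, consequent). Repeatedly fires any rule
    whose antecedents are all in working memory, until no new fact appears."""
    facts = set(facts)                 # working memory
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)  # fire the rule, add its conclusion
                changed = True
    return facts

rules = [
    ({"has feathers", "lays eggs"}, "is a bird"),
    ({"is a bird", "cannot fly", "swims"}, "is a penguin"),
]
derived = forward_chain(rules, {"has feathers", "lays eggs", "cannot fly", "swims"})
print("is a penguin" in derived)   # True: the first rule enables the second
```

Backward chaining would run the other way, starting from the goal "is a penguin" and recursively seeking rules and facts that support it.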

itsmconference2010artificial-intelligence
Knowledge-based System for AntibioticsThet Htar San, Aye Aye Bo16 DecemberKnowledge-based System, AntibioticsDownload

Knowledge-based systems are systems based on the methods and techniques of artificial intelligence. Knowledge-based systems were designed primarily for the purpose of being able to apply knowledge automatically. Knowledge is the base of personal information, which is integrated in a fashion which allows it to be used in further interpretation and analysis of data. The aim of this paper is to enable the user to diagnose bacterial infections and to know the knowledge of antibiotics. This system makes a decision about the disease from which the user suffers and represents knowledge about antibiotics for the user's disease. This paper presents a knowledge-based system that makes inferences about bacterial infections and gives consistent antibiotics by using Knowledge Acquisition, Knowledge Representation and Knowledge Manipulation methods. Antibiotics are medicines or chemicals that can destroy harmful bacteria in the body or limit their growth. Knowledge Acquisition involves the acquisition of the user's age and the symptoms from which the user suffers. The acquired age and symptoms are organized as a rule in the knowledge-based system for the representation of knowledge. In Knowledge Manipulation, this system matches the rule with the rules in the knowledge-based system. If matching, the system makes an inference of the bacterial infection disease name and provides knowledge of antibiotics to the user for treatment.

itsmconference2010artificial-intelligence
Computer Problem-Solving By Applying Depth-First SearchThet Htar San, Aye Aye BoDecemberExpert System, Artificial Intelligence, Problem Solving StrategiesDownload

Artificial Intelligence (AI) is the science and engineering of making intelligent machines, especially intelligent computer programs, and AI is related to the similar task of using computers to understand human intelligence. One of the applications of artificial intelligence is the expert system (ES), or knowledge-based system. An expert system provides advice derived from its knowledge base. This system intends to solve computer problems without any computer expert. The knowledge base is made of many inference rules, which are entered as separate rules and used together to draw conclusions. This system is intended to solve problems for computers running the Windows XP operating system by applying the depth-first search algorithm. Users can get the solution for their specific problems and can carry out troubleshooting steps by themselves, because the system gives advice and a step-by-step procedure for the solution.
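A depth-first search over a troubleshooting tree explores one diagnostic branch to its end before backtracking, returning the path of steps that leads to a solution. A minimal sketch (the diagnosis tree below is a hypothetical example, not the system's knowledge base):

```python
def dfs_solution(tree, start, is_solution):
    """Depth-first search over a troubleshooting tree; returns the
    sequence of steps from the symptom to the first solution found."""
    stack = [(start, [start])]
    while stack:
        node, path = stack.pop()
        if is_solution(node):
            return path
        # push children reversed so the leftmost branch is explored first
        for child in reversed(tree.get(node, [])):
            stack.append((child, path + [child]))
    return None   # no solution in the tree

# hypothetical diagnosis tree for a PC with no display
tree = {
    "no display": ["check power", "check monitor"],
    "check power": ["replace PSU"],
    "check monitor": ["reseat cable"],
}
print(dfs_solution(tree, "no display", lambda n: n == "reseat cable"))
```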

itsmconference2009artificial-intelligence
Comparison for the accuracy of defect fix effort estimationTin Zar Thaw, Myintzu Phyo Aung, Naw Lay Wah, Swe Swe Nyein, Zar Lwin Phyo, Khaing Zarchi HtunAprilDefects Estimation, SOM, Neural Network, Matlab7.0Download

Software defects have become the dominant cause of customer outage, and the study of software errors has been necessitated by the emphasis on software reliability. Yet software defects are not well enough understood to provide a clear methodology for avoiding or recovering from them. So software defect fix effort plays a critical role in software quality assurance. In this paper, a comparison of defect fix effort estimation by using the Self-Organizing Map (SOM) in neural networks is presented. To estimate the defect fix effort, the KC3 static defect attribute data, one of the publicly available data sets from NASA products, is used in this paper. To implement the SOM, find the different clusters from the SOM, and calculate the fix effort time of defects, the SOM Toolbox in MATLAB 7.0.1 is used.
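The paper relies on the MATLAB SOM Toolbox; as a rough from-scratch miniature of the underlying idea only, the sketch below trains a 1-D map on scalar, entirely hypothetical effort values. Each sample pulls its best-matching unit (and, early in training, that unit's neighbours) toward itself, so the units drift toward cluster centres:

```python
import random

def train_som(data, n_units, epochs=100, lr=0.5, seed=1):
    """Tiny 1-D Self-Organizing Map on scalar values."""
    random.seed(seed)
    weights = [random.uniform(min(data), max(data)) for _ in range(n_units)]
    for epoch in range(epochs):
        decay = 1 - epoch / epochs            # learning rate and radius shrink
        for x in data:
            # best-matching unit: the unit whose weight is closest to x
            bmu = min(range(n_units), key=lambda i: abs(weights[i] - x))
            for i in range(n_units):
                if abs(i - bmu) <= decay:     # neighbourhood radius
                    weights[i] += lr * decay * (x - weights[i])
    return weights

# hypothetical defect-fix-effort values (hours) with low/medium/high levels
efforts = [1, 2, 2, 3, 10, 11, 12, 40, 45]
print(sorted(round(w, 1) for w in train_som(efforts, 3)))
```

In the paper's setting, each defect is a multi-attribute vector rather than a scalar and the map is two-dimensional, but the update rule is the same in spirit.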

fcsonline-publication2010software-engineering