ICETEAS 2018: Papers with Abstracts

Papers
Abstract. In a database of numeric values, outliers are points that differ from the other values or are inconsistent with the rest of the data. They can represent novel, abnormal, unusual or noisy information. Outliers attract more attention than the bulk of the data. The challenges of outlier detection grow with the increasing complexity, size and variety of datasets. The problem is how to manage outliers in a dataset and how to evaluate them. This paper describes an improved approach that uses outlier detection as a pre-processing step and then applies a rectangle fit algorithm, in order to analyze the effects of the outliers on the analysis of the dataset.
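The paper's rectangle fit algorithm is not reproduced here, but the outlier-detection pre-processing step it builds on can be illustrated with a standard z-score filter; this is a minimal sketch in which the threshold of 3 and the synthetic data are illustrative assumptions, not values from the paper.

```python
import numpy as np

def remove_outliers_zscore(values, threshold=3.0):
    """Flag points whose z-score exceeds the threshold and return
    the cleaned data plus the detected outliers."""
    values = np.asarray(values, dtype=float)
    z = np.abs((values - values.mean()) / values.std())
    mask = z < threshold
    return values[mask], values[~mask]

# Illustrative data: mostly consistent values plus a few inconsistent points.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(50, 5, 200), [120.0, -40.0, 150.0]])
clean, outliers = remove_outliers_zscore(data)
print(f"kept {clean.size} points, removed outliers: {outliers}")
```

The cleaned array would then be passed to the subsequent fitting stage, so the later analysis is not skewed by the inconsistent points.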
Abstract. In present-day circumstances, security forms one of the most essential parts of our lives. The security of the home and of near and dear ones is important to everyone. Home automation is an exciting area for security applications, and the field has improved with new technologies such as the Internet of Things (IoT). In IoT, each device behaves as a small web node, and each node connects and communicates. Lately, surveillance cameras have been used in order to secure places, homes and cities. However, this technology needs a person who recognizes any problem in the frame taken from the camera. In this paper, the Internet of Things is combined with computer vision in order to identify individuals. To implement this system, a credit-card-sized computer with its own camera board, the Raspberry Pi 3, is used as the security platform. A Passive Infrared (PIR) sensor mounted on the Raspberry Pi is used to detect any movement. The system monitors and sends notifications when motion is detected: it captures the image, detects the faces, and then sends the pictures to a smartphone via the Telegram application. The IoT layer built on the Telegram application is used to view the activity and receive notices when movement is detected.
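A minimal sketch of the motion-triggered pipeline the abstract describes, assuming a PIR sensor on GPIO pin 4 read through the gpiozero library, a camera exposed to OpenCV as device 0, and a hypothetical send_to_phone() stub standing in for the messaging step; the pin number, file path and stub are assumptions, not details from the paper, and the code only runs on a suitably wired Raspberry Pi.

```python
# Sketch: PIR-triggered capture and face detection on a Raspberry Pi.
import cv2
from gpiozero import MotionSensor

pir = MotionSensor(4)                       # PIR sensor on GPIO 4 (assumed)
camera = cv2.VideoCapture(0)                # Pi camera exposed as /dev/video0
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def send_to_phone(path):
    """Stub for the messaging step (the paper forwards images to a
    smartphone via a chat application); replace with a real bot API call."""
    print(f"would send {path} to the user's phone")

while True:
    pir.wait_for_motion()                   # block until movement is detected
    ok, frame = camera.read()
    if not ok:
        continue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        cv2.imwrite("intruder.jpg", frame)  # illustrative file name
        send_to_phone("intruder.jpg")
    pir.wait_for_no_motion()
```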
Abstract. Automated detection of abnormalities in brain image analysis is very important and is a prerequisite for planning and treatment of disease. Computed tomography (CT) is an imaging technique used for studying brain images. Classification of brain images is important in order to distinguish between normal brain images and those with abnormalities such as hematomas, tumors, edema and concussion. The proposed automated method identifies the abnormalities in brain CT images and classifies them using a support vector machine (SVM). The method consists of three phases: preprocessing, feature extraction and classification. In the first phase, preprocessing is performed on brain CT images to remove artifacts and noise. In the second phase, features are extracted from the images using the gray level co-occurrence matrix (GLCM). In the final phase, the extracted features are fed as input to an SVM classifier with different kernel functions, which classifies the images into normal and abnormal with different accuracy levels.
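The feature-extraction and classification phases can be sketched with scikit-image's grey-level co-occurrence matrix and scikit-learn's SVC; the Haralick properties chosen, the distances and angles, and the RBF kernel are illustrative assumptions, and real preprocessed CT slices with clinical labels would replace the synthetic arrays below.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(image):
    """Phase 2: extract GLCM texture features from an 8-bit grayscale slice."""
    glcm = graycomatrix(image, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Synthetic stand-ins for preprocessed (denoised) CT slices and labels.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)
labels = np.array([0, 1] * 20)              # 0 = normal, 1 = abnormal (toy)

X = np.array([glcm_features(im) for im in images])
clf = SVC(kernel="rbf").fit(X, labels)      # Phase 3: SVM classification
print(clf.predict(X[:5]))
```

Swapping kernel="rbf" for "linear" or "poly" reproduces the kind of kernel comparison the abstract mentions.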
Abstract. A wireless sensor network (WSN) is a network with a large number of sensor nodes connected to each other. The wireless nodes sense events and forward packets to the destination node. The transport layer handles congestion and packet-loss recovery for reliable data transfer in a WSN. Several transport-layer protocols exist for reliable data transfer in WSNs, such as ESRT, ATP, Tiny TCP/IP, PORT, CTCP, RTMC, DCDD and RETP, each with its own merits and demerits. Traditional networks use the TCP and UDP protocols at the transport layer, but these are not well suited to WSNs. In this work, TCP, SCTP and MPTCP are compared in a wireless sensor network environment with packet loss. The comparative analysis shows that MPTCP gives better performance than TCP and SCTP in the wireless sensor network.
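The paper's comparison is done in a network simulation; the intuition for why multipath transport helps on lossy wireless links can be shown with a toy Monte Carlo model in which an MPTCP-like sender splits traffic over two independently lossy paths while a single-path sender retransmits on one. The loss rates, retry limit and packet counts are illustrative assumptions, and the model deliberately ignores congestion control and reordering.

```python
import random

def delivered(packets, loss_rate, max_retries=3):
    """Toy model: each packet is retransmitted up to max_retries times
    on a path that drops it with probability loss_rate."""
    ok = 0
    for _ in range(packets):
        if any(random.random() > loss_rate for _ in range(max_retries + 1)):
            ok += 1
    return ok

random.seed(1)
N = 10_000
single = delivered(N, loss_rate=0.3)                      # one lossy path
multi = delivered(N // 2, 0.3) + delivered(N // 2, 0.2)   # two subflows
print(f"single path delivered {single / N:.1%}, "
      f"two subflows delivered {multi / N:.1%}")
```

Path diversity lets the multipath sender shift load toward the less lossy subflow, which is the effect the simulated MPTCP results reflect.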
Abstract. Many embedded systems in a smart grid operate under tight timing, cost and power-consumption constraints that make security difficult to ensure. This paper addresses the smart grid security problem with a focus on improving the security of crucial components and reducing the risks from cyber attacks. A hardware architecture to enhance the security of important embedded devices in the smart grid has been proposed and implemented. This hardware-based malware detection system runs on dedicated hardware implemented in FPGA logic and allows detection in near real time. The system architecture and results are presented in the paper.
Abstract. Li-Fi is a new wireless technology that provides connectivity within a limited network environment. Its basic principle is that data can be transmitted through light illumination using light-emitting diodes, where Wi-Fi uses radio frequency as the medium; the intensity of an LED bulb can be switched faster than the human eye can follow. Prof. Harald Haas, an expert in optical wireless communications at the University of Edinburgh, demonstrated how an LED bulb equipped with signal-processing technology could stream a high-definition video to a computer. Using this technology, a one-watt LED light would be sufficient to provide internet connectivity to four computers. He coined the term "light fidelity", or Li-Fi, and set up a private company, PureVLC, to exploit the technology. He envisions a future where data for laptops, smartphones and tablets is transmitted through the light in a room. Moreover, security would be simple: if you cannot see the light, you cannot access the data.
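The principle, encoding bits as rapid changes in LED intensity, can be illustrated with a toy on-off keying encoder/decoder; the samples-per-bit value and threshold are illustrative assumptions, and a real Li-Fi link adds synchronization, error coding and far higher modulation rates.

```python
def ook_encode(bits, samples_per_bit=4):
    """On-off keying: bit 1 -> LED on (high intensity), bit 0 -> LED off."""
    return [1.0 if b else 0.0 for b in bits for _ in range(samples_per_bit)]

def ook_decode(signal, samples_per_bit=4, threshold=0.5):
    """Recover bits by averaging the photodetector samples per bit slot."""
    bits = []
    for i in range(0, len(signal), samples_per_bit):
        chunk = signal[i:i + samples_per_bit]
        bits.append(1 if sum(chunk) / len(chunk) > threshold else 0)
    return bits

message = [1, 0, 1, 1, 0, 0, 1, 0]
light = ook_encode(message)          # what the LED would emit
assert ook_decode(light) == message  # what the receiver recovers
print("recovered:", ook_decode(light))
```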
Abstract. The electrocardiogram (ECG) signal consists of parameters that reflect the electrical activity of the heart. Its main components capture important attributes of the human heart as well as some hidden information about it, and the information extracted from the ECG is meaningful for deriving various vital cardiac parameters. However, the ECG signal is easily affected by noise: a signal that distorts or interferes with the actual ECG, caused by motion artifacts or by power sources present where the ECG is taken. A typical computer-based ECG system has several units: the first pre-processes the ECG signal, the second detects heartbeats, the third extracts features and the last performs classification. The signal-processing unit for the ECG is important for both research and clinical experiments. Motion artifacts that appear in heartbeat processing can be effectively removed, and the ECG signal cleaned, using LMS, NLMS and notch processing. This paper presents the results and analysis of ECG signal processing using the NLMS algorithm and shows its important role in biomedical applications.
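A minimal NLMS adaptive noise canceller in NumPy, of the kind the paper applies to ECG: the filter output tracks the noise reference, and the error signal is the cleaned ECG. The step size, filter length, sampling rate and the synthetic 50 Hz interference are all illustrative assumptions.

```python
import numpy as np

def nlms_filter(d, x, taps=16, mu=0.5, eps=1e-6):
    """Normalized LMS: d = noisy ECG (desired), x = noise reference.
    Returns the error signal e, which is the noise-cancelled ECG."""
    w = np.zeros(taps)
    e = np.zeros(len(d))
    for n in range(taps, len(d)):
        xn = x[n - taps:n][::-1]                 # most recent samples first
        y = w @ xn                                # filter's noise estimate
        e[n] = d[n] - y                           # cleaned output sample
        w += (mu / (eps + xn @ xn)) * e[n] * xn   # normalized weight update
    return e

fs = 500                                          # sampling rate (assumed), Hz
t = np.arange(0, 4, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 15           # crude synthetic heartbeat
mains = 0.4 * np.sin(2 * np.pi * 50 * t)          # 50 Hz power-line noise
clean = nlms_filter(ecg + mains, x=np.sin(2 * np.pi * 50 * t))
print("residual noise power:", np.mean((clean[1000:] - ecg[1000:]) ** 2))
```

The normalization by the input energy is what distinguishes NLMS from plain LMS and keeps the update stable when the reference amplitude varies.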
Abstract. The electrocardiogram (ECG) signal reflects the bio-electrical activity of the heart. It is a common, routine and important cardiac diagnostic tool in which electrical signals are measured and recorded to assess the functional status of the heart, but the ECG signal can be distorted by noise, as various artifacts corrupt the original signal and reduce its quality. There is therefore a need to remove such artifacts from the original signal and improve its quality for better interpretation. Digital filters are used to remove noise from the low-frequency ECG signal and improve its accuracy. The noise can be any interference due to motion artifacts or to power equipment present where the ECG is taken. Thus, ECG signal processing has become a prevalent and effective tool for research and clinical practice. This paper presents a comparative analysis of FIR and IIR filters and their performance on the ECG signal for proper understanding and display of the signal.
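The FIR/IIR comparison can be sketched with SciPy: a linear-phase FIR filter from firwin against a 4th-order IIR Butterworth filter from butter, both low-pass filtering the same noisy ECG-like signal; the orders, cutoff frequency and test signal are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy import signal

fs = 500                                   # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 15    # crude ECG-like waveform
noisy = ecg + 0.3 * np.random.default_rng(0).standard_normal(t.size)

# FIR: 101-tap low-pass, inherently linear phase; many coefficients.
fir = signal.firwin(numtaps=101, cutoff=40, fs=fs)
ecg_fir = signal.filtfilt(fir, [1.0], noisy)

# IIR: 4th-order Butterworth low-pass; far fewer coefficients.
b, a = signal.butter(N=4, Wn=40, btype="low", fs=fs)
ecg_iir = signal.filtfilt(b, a, noisy)

for name, y in [("FIR", ecg_fir), ("IIR", ecg_iir)]:
    mse = np.mean((y - ecg) ** 2)
    print(f"{name} MSE vs clean signal: {mse:.5f}")
```

The trade-off the comparison exposes is typical: the FIR design preserves waveform shape through linear phase at the cost of many taps, while the IIR design achieves a similar magnitude response with a much lower order.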
Abstract. Cloud computing (CC) is growing rapidly; a large number of clients are attracted to cloud services for greater satisfaction. Cloud computing is the latest emerging technology for large-scale distributed and parallel computing. CC provides a large pool of shared resources, software packages, information and storage, and a wide variety of applications on client demand at any instant of time. Load balancing has become an increasingly interesting research area in this field. A better load-balancing algorithm in a cloud system increases performance and resource utilization by dynamically distributing the workload among the various nodes in the system. A virtual machine (VM) is an execution unit that acts as a foundation for cloud computing technology. Honey-bee-behaviour-inspired load balancing improves the overall processing throughput, while priority-based balancing focuses on reducing the time a task has to wait in a VM's queue.
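A toy sketch of the priority-aware, honey-bee-inspired idea: tasks removed from an overloaded VM behave like scout bees and are placed on the least-loaded VM, with high-priority tasks put at the front of the target queue so they wait less. The threshold, task lengths and priorities are illustrative assumptions, not the paper's algorithm.

```python
from collections import deque

class VM:
    def __init__(self, name):
        self.name, self.queue = name, deque()
    def load(self):
        return sum(t["length"] for t in self.queue)

def honey_bee_balance(vms, threshold):
    """Move tasks off overloaded VMs onto the least-loaded other VM;
    migrated high-priority tasks are scheduled ahead of low-priority ones."""
    for vm in vms:
        while vm.load() > threshold and vm.queue:
            task = vm.queue.pop()
            target = min((v for v in vms if v is not vm), key=VM.load)
            if task["priority"] == "high":
                target.queue.appendleft(task)   # shorter wait in the queue
            else:
                target.queue.append(task)

vms = [VM("vm1"), VM("vm2"), VM("vm3")]
vms[0].queue.extend(
    {"length": 10, "priority": p} for p in ["low", "high", "low", "high"])
honey_bee_balance(vms, threshold=20)
print({vm.name: vm.load() for vm in vms})
```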
Abstract. The concept of an e-Governance system came into existence only a few years ago. Its main objective is to provide every individual with access to government services 24 hours a day, 7 days a week. It involves a large amount of confidential data and information that citizens can access through electronic media. In a nutshell, the online working of a government, providing its services online to its citizens at their doorstep, is known as e-Governance. In today's scenario, the growth of any country can be measured by the scope of e-Governance in that country. In this paper, we highlight the various challenges and issues faced by the e-Governance system in rural India, including security issues, language issues and geographical issues.
Abstract. This research paper proposes an approach to solve the issue of duplication in clouds. There is a great deal of unused and duplicate data on the internet that merely consumes the limited data storage that service providers can offer. If the cloud is not efficient enough, it can lead to a much higher cost than expected for the service providers. Another issue is that data may be stored on a server several miles away from the user rather than close by, which further adds to the cost and reduces efficiency. These issues can be addressed with the help of a Smart Cloud. We take the proportions of two vectors and create a ratio that rates each document in the classification.
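The abstract's vector-ratio idea resembles cosine similarity between term-count vectors, a standard measure for near-duplicate detection; a hedged sketch follows, with the 0.9 similarity threshold and sample documents as illustrative assumptions rather than values from the paper.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a, b):
    """Ratio of the dot product of two term-count vectors to the
    product of their magnitudes; 1.0 means identical direction."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(c * c for c in va.values())) * \
           sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

docs = ["the cloud stores user data", "the cloud stores the user data",
        "fog computing pushes data to edge devices"]
kept = []
for d in docs:
    if all(cosine_similarity(d, k) < 0.9 for k in kept):  # assumed threshold
        kept.append(d)                                    # not a duplicate
print(kept)
```

Only documents that fall below the similarity threshold against everything already stored are kept, which is how the duplicate data the abstract describes would be pruned.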
Abstract. In the world of digital innovation, "cloud computing" is not just a word or a technology but is paramount to organizations nowadays, because it is not easy to store and compute data on the internet and on a central remote server while managing a huge bulk of data and information. It is well known that cloud computing provides data, data storage and computation to end users through various applications and services. Fog computing is a concept that extends cloud computing technology: it performs the same functions as cloud computing, but it is not a replacement so much as an enhanced version of the cloud that provides security in the cloud environment by isolating users' data, which is saved on edge devices. Fog computing enables users to save their data on nearby devices. This paper discusses the security issues and the technology used for security in this enhanced concept of the cloud.
Abstract. Data mining is known as the extraction of hidden predictive information from large databases; its main focus is to enable organizations to concentrate on the most important information in their data warehouses. Data mining can likewise be described as the analysis of data and the use of various software techniques for finding patterns and regularities in given sets of data. The expression "electronic commerce" (or e-commerce) usually refers to the use of an electronic medium to carry out business transactions. It often refers to the sale of products via the Internet, but it also includes purchasing over the Internet. The main focus of this paper is to give a basic introduction to the various data mining techniques available and to analyze these techniques on the basis of their performance. The paper also describes the various goals of data mining in e-commerce. It further concentrates on clustering methods and attempts to compare the various clustering techniques.
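The kind of clustering comparison the paper describes can be sketched with scikit-learn: k-means and agglomerative clustering run on the same synthetic customer data and scored with the silhouette coefficient; the feature set, cluster count and scoring metric are illustrative assumptions.

```python
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic stand-in for e-commerce customer features
# (e.g., spend, visit frequency); 3 latent segments assumed.
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

models = {
    "k-means": KMeans(n_clusters=3, n_init=10, random_state=42),
    "agglomerative": AgglomerativeClustering(n_clusters=3),
}
for name, model in models.items():
    labels = model.fit_predict(X)
    print(f"{name}: silhouette = {silhouette_score(X, labels):.3f}")
```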
Abstract. With the rapid advance of digital networks, information technology, digital libraries and particularly World Wide Web services, many kinds of information can be retrieved at any time. In this digital scenario, invisible communication between two parties is a prime concern. Steganography is the technique of hidden communication: it not only hides the message contents, it hides the very existence of the message. In this paper, a new spatial-domain image steganography method is proposed in which secret data is hidden in image segments using least significant bit (LSB) steganography. The color image is divided into four equal parts, the secret data is likewise divided into four equal parts, and each part of the data is hidden in an image segment using LSB steganography. We also critically analyze various steganographic techniques and give an overview of steganography, its major types, classification and applications.
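A minimal sketch of the described scheme: split the cover image into four quadrants, split the secret bits into four equal parts, and embed each part in the least significant bits of one quadrant. A random grayscale NumPy array stands in for a real color image here, purely for brevity.

```python
import numpy as np

def embed_lsb(block, bits):
    """Hide bits in the least significant bit of a flattened pixel block."""
    flat = block.ravel().copy()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(block.shape)

def extract_lsb(block, n):
    return block.ravel()[:n] & 1

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
secret = rng.integers(0, 2, size=128, dtype=np.uint8)        # secret bits

h, w = cover.shape
quads = [cover[:h//2, :w//2], cover[:h//2, w//2:],
         cover[h//2:, :w//2], cover[h//2:, w//2:]]
parts = np.array_split(secret, 4)            # four equal parts of the secret

stego_quads = [embed_lsb(q, p) for q, p in zip(quads, parts)]
recovered = np.concatenate(
    [extract_lsb(q, len(p)) for q, p in zip(stego_quads, parts)])
assert np.array_equal(recovered, secret)
print("secret recovered intact from the four quadrants")
```

Because only the lowest bit of each pixel changes, the stego image is visually indistinguishable from the cover, which is the property LSB methods rely on.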
Abstract. In today's world, the Internet is becoming ever more popular. There are various issues and challenges in internet data security; for example, the authenticity of content is a crucial factor in solving the problems of illegal copying, modification and distribution of intellectual property. Watermarking can resolve this theft of intellectual property. This work includes the design and implementation of watermarking techniques such as basic DCT, DCT with an existing Butterworth filter and DCT with a modified Butterworth filter. It concludes that the PSNR value is highest and the MSE value lowest for the DCT watermarking technique using the modified Butterworth filter.
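A hedged sketch of basic DCT watermark embedding together with the MSE/PSNR evaluation the paper uses; the embedding strength alpha and the mid-band coefficient placement are illustrative assumptions, and the Butterworth filtering stage of the paper's variants is not reproduced here.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_watermark(image, mark, alpha=5.0):
    """Basic DCT watermarking: add the watermark bits (as +/-1) to a
    mid-frequency block of the image's 2-D DCT coefficients."""
    coeffs = dctn(image.astype(float), norm="ortho")
    r, c = mark.shape
    coeffs[16:16 + r, 16:16 + c] += alpha * (2 * mark - 1)  # mid-band (assumed)
    return idctn(coeffs, norm="ortho")

def mse(a, b):
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(a, b, peak=255.0):
    return 10 * np.log10(peak ** 2 / mse(a, b))

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64)).astype(float)  # stand-in image
mark = rng.integers(0, 2, size=(8, 8))                     # binary watermark

marked = embed_watermark(image, mark)
print(f"MSE = {mse(image, marked):.3f}, PSNR = {psnr(image, marked):.2f} dB")
```

A higher PSNR and lower MSE between the cover and watermarked image indicate less visible distortion, which is the criterion on which the paper ranks its three DCT variants.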