CATA 2019: Papers with Abstracts

Abstract. Recent advances in the biological sciences have led to an increase in the development of large biological software research projects. These projects are complex and interconnected and have proven hard to manage. In many cases, they are not completed within their deadlines and fail to provide users with a reliable piece of software that can be managed and maintained in the future. This observation has also been confirmed by scientists working in other research and scientific fields. It is well known that the non-deterministic nature of science requires it to be always evolving and changing. Meeting such requirements using conventional software engineering practices is a very hard goal to achieve. For this reason, I suggest the use of agile approaches, such as Scrum, Kanban, and extreme programming, as an attractive choice for many developers working in academic and scientific fields, especially in areas such as computational biology. In this paper, I discuss some biological software systems that used agile approaches in developing or enhancing their projects, look into the benefits gained from adopting those approaches, and show how they can benefit the outcome of many scientific projects.
Abstract. Academia has always sought to ride the line between established thought and new developments, nowhere more so than with technology. Universities seek to teach using known and proven methods and resources, but also to stay relevant with new technologies in order to provide students the knowledge they will need to be competitive in the workplace or in graduate study. In this work we present how the University of Nevada approaches this problem in its Real-Time Operating Systems course: namely, how by pairing the established Micro C/OS II real-time operating system with the new maker phenomenon, the Raspberry Pi, we can overcome the challenge of updating a tried and true lesson plan so that it uses technology relevant and interesting to the students of today.
Abstract. Virtual Reality has become a popular entertainment medium; however, it could also potentially be useful in creating interactive experiences that act as educational tools for students. Through the use of this technology, virtual experiences facilitate the learning of various concepts through interactive simulation. This project focuses on the development of a virtual reality application that can be used to help teach different physics concepts to young students in engaging virtual environments, and to further promote STEM education principles. Each level developed for the game instructs on concepts such as force, acceleration, velocity, and position. This paper discusses both the specifications and design of the project, as well as the initial demo results and future development.
Abstract. Computer Information Systems (CIS) is manifesting itself as an important discipline and career path in most universities, with excellent career potential. There is some misperception and mixing of concepts between CIS and other closely related subjects. This paper is a deliberate layout and manifestation of CIS in today’s world of industry and commerce. Specifically, we explain CIS and compare it with other subjects, Computer Science in particular, because of the big overlap between these two areas. We lay out the main points and concepts in three dimensions: (1) what CIS is, based on how we as faculty and educators understand it; (2) the confusion in CIS, and how people understand and think of it; (3) how official sources (AIS, ABET, ACM) explain it. We discuss and reason that CIS programs, which are basically IS programs hosted in computing departments, are not meeting their expectations very well, which has led to new disciplines developed in the past few years such as Data Science, Data Analytics, and Business Intelligence. Finally, we present an investigation dimension using two methods of investigation to support our findings.
Abstract. The Advanced Encryption Standard (AES) represents a fundamental building module of many network security protocols for ensuring data confidentiality in applications ranging from data servers to low-power embedded hardware systems. In order to optimize such hardware implementations, High-Level Synthesis (HLS) provides flexibility in designing and rapidly optimizing dedicated hardware to meet design constraints. In this paper, we present the implementation of an AES encryption processor on an FPGA using Xilinx Vivado HLS. The AES architecture was analyzed and designed with loop unrolling and inner-round and outer-round pipelining techniques to achieve a maximum AES throughput of up to 1290 Mbps (Megabits per second) with significantly low resource usage of 3.24% of the FPGA's slices, achieving 3 Mbps per slice.
Abstract. One of the advantages of Euclidean distance is that it measures the regular distance between two points in space. For this reason, it is widely used in applications where the distance between data points needs to be calculated to measure similarity. However, this method is costly as it involves expensive squaring and square root operations. One useful observation is that in many data mining applications absolute distance measures are not necessary, as long as the distances are used to compare the closeness between various data points. For example, in classification and clustering, we often measure the distances of multiple data points to compare their distances from known classes or from centroids in order to assign those points to a class or a cluster. In this regard, an alternative approach known as Squared Euclidean Distance (SED) can be used to avoid the computation of the square root and obtain the squared distance between the data points. SED has been used in classification, clustering, image processing, and other areas to save computational expense. In this paper, we show how SED can be calculated for vertical data represented in pTrees. We also analyze its performance and compare it with the traditional horizontal data representation.
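The key property behind SED is that squaring is monotonic on non-negative distances, so rank comparisons such as nearest-centroid assignment are unchanged when the square root is dropped. A minimal sketch of that idea (the vectors and centroids below are illustrative, not data from the paper):

```python
import math

def euclidean(a, b):
    """Regular Euclidean distance: requires a square root."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def squared_euclidean(a, b):
    """Squared Euclidean Distance (SED): no square root needed."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest(point, centroids, dist):
    """Index of the closest centroid under the given distance measure."""
    return min(range(len(centroids)), key=lambda i: dist(point, centroids[i]))

# SED preserves the ordering of distances, so the chosen centroid is the
# same with either measure -- only the cheaper one skips the sqrt.
centroids = [(0.0, 0.0), (5.0, 5.0)]
point = (1.0, 2.0)
assert nearest(point, centroids, euclidean) == nearest(point, centroids, squared_euclidean)
```

The same substitution applies wherever distances are only compared, never reported as absolute magnitudes.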
Abstract. Deadlock between processes and resources is a serious problem in the development of operating systems. Multiple methods have been invented to deal with the deadlock issue. Deadlock detection is one method that allows a deadlock to take place and then detects which processes and resources have caused it. Starting from the traditional process-resource graph, we propose an approach to detect a deadlock by applying a model checking technique and a Computation Tree Logic (CTL) specification. In this paper, we modified the traditional process-resource graph so that the resulting graph satisfied a valid model of a Kripke structure, which overcame the limitations of the traditional representation of the process-resource graph while still preserving every proposition, correctness, and property of the system. With the modified graph, we designed a CTL specification that verified whether or not there existed a deadlock caused by one or more pairs of processes and resources. A Java application was developed to implement the proposed approach, capable of dynamically generating a valid model for any process-resource graph input, dynamically generating the CTL formula for the specification, and verifying the model against the corresponding CTL formula.
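At its core, a deadlock in a process-resource graph shows up as a cycle: a process waits for a resource held by a process that, transitively, waits on the first. The paper encodes this condition as a CTL formula checked against a Kripke structure; the sketch below shows only the underlying cycle test, using plain depth-first search rather than model checking (the node names are illustrative):

```python
def has_deadlock(edges):
    """Detect a cycle in a process-resource graph.

    edges maps each node to the nodes it points to: a process points to the
    resource it waits for, and a resource points to the process holding it.
    This is classical DFS cycle detection, not the paper's CTL approach.
    """
    nodes = set(edges) | {m for targets in edges.values() for m in targets}
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / finished
    color = {n: WHITE for n in nodes}

    def dfs(n):
        color[n] = GRAY
        for m in edges.get(n, ()):
            # A GRAY neighbor is on the current path: a cycle, hence deadlock.
            if color[m] == GRAY or (color[m] == WHITE and dfs(m)):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in nodes)

# P1 holds R1 and waits for R2; P2 holds R2 and waits for R1: deadlock.
cyclic = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
assert has_deadlock(cyclic)
```

The model-checking formulation in the paper verifies the same reachability property, but does so declaratively over all execution paths of the Kripke model.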
Abstract. Extensible Access Control Markup Language (XACML) is an OASIS standard for security policy specification. It consists of a policy language to define security authorizations and an access control decision language for requests and responses. The high-level policy specification is independent of the underlying implementation. Different from existing approaches, this research uses a graph database for XACML implementation. Once a policy is specified, it is parsed and the parsing results are processed by eliminating duplicates and resolving conflicts. The final results are saved as graphs in persistent storage. When an XACML request is submitted, the request is processed as a query to the graph database. Based on this query result, an XACML response is produced to permit or deny the user’s request. This paper describes the architecture, implementation details, and conflict resolution strategies of our system to implement XACML.
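Conceptually, once policies are materialized in the store, request evaluation reduces to a lookup or traversal whose result maps to an XACML decision value. A toy sketch of that idea (the flat dictionary and the subject/action/resource names stand in for the real graph database and are illustrative assumptions, not the paper's schema):

```python
# Toy policy store: each rule maps a (subject, action, resource) triple to a
# decision. A real deployment would traverse a graph database instead.
policies = {
    ("alice", "read", "report.pdf"): "Permit",
    ("alice", "write", "report.pdf"): "Deny",
}

def evaluate(subject, action, resource):
    """Answer a request with Permit or Deny, or NotApplicable when no
    stored rule matches -- mirroring XACML's standard decision values."""
    return policies.get((subject, action, resource), "NotApplicable")

assert evaluate("alice", "read", "report.pdf") == "Permit"
```

Duplicate elimination and conflict resolution, described in the paper, would happen before the store is populated, so that each triple maps to a single decision.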
Abstract. In previous works we have illustrated a procedure to obtain spherical tilings with GeoGebra. We have found new classes of monohedral spherical tilings by four spherical pentagons, and new classes of dihedral spherical tilings by twelve spherical pentagons. Once again, we make use of GeoGebra to show how we can generate new classes of monohedral non-convex hexagonal spherical tilings, H(C,τ), by changing the side gluing rules of the regular spherical octahedral tiling through the local action of particular subgroups of spherical isometries.
In relation to one of the new classes of hexagonal tiles, we describe some of its properties. We also show the existence of a new family of monohedral pentagonal tilings which arises as a degenerate case associated to the family H(C,0). All these classes of spherical tilings have emerged as the result of an interactive construction process, only possible through the use of newly produced GeoGebra tools and the dynamic interaction capabilities of this software.
Abstract. Channel rendezvous between secondary users remains a key challenge to the development of cognitive ad-hoc networks. The decentralized and heterogeneous nature of ad-hoc CRNs makes guaranteeing rendezvous across multiple users within a short time difficult. Current research focuses on single-hop networks or on multi-radio platforms to reduce the Time To Rendezvous (TTR). This work presents a novel multi-radio rendezvous algorithm that leverages the increasing availability of multi-radio secondary users to reduce the TTR in heterogeneous and anonymous CRNs with multiple users.
Abstract. We apply the recent important results of serial sorting of real numbers in O(n√(log n)) time to the design of a parallel algorithm for sorting real numbers in O(log^{1+ε} n) time and O(n log n/√(log log n)) operations. This is the first NC algorithm known to take o(n log n) operations for sorting real numbers.
Abstract. Underwater acoustic sensor networks have been developed as a new technology for real-time underwater applications, including seismic monitoring, disaster prevention, and oil well inspection. Unfortunately, this new technology is constrained in data sensing, large-volume transmission, and forwarding. As a result, the transmission of large volumes of data is costly in terms of both time and power. We thus focused our research activities on the development of embedded underwater computing systems. In this advanced technology, information extraction is performed underwater using data mining techniques or compression algorithms. We previously presented a new set of real-time underwater embedded system architectures that can manage multiple network configurations. In this study, we extend our research to develop information extraction for an underwater seismic monitoring application that meets real-time constraints. System performance is measured in terms of minimum end-to-end delay and power consumption. Simulation results are presented to measure the performance of our architecture based on the information extraction algorithm.
Abstract. Software application development must include implementation of core functionality along with secure coding to contain security vulnerabilities of applications. Considering the life cycle that a software application undergoes, application developers have many opportunities to include security starting from the very first stage of planning or requirement gathering. However, before even starting requirement gathering, the software application development team must select a framework to use for the application’s lifecycle. Based on the application and organizational characteristics, software application developers must select the best-fit framework for the lifecycle. A software application’s functionality and security start with picking the right lifecycle framework.
When it comes to application development frameworks, one size does not fit all. Based on the characteristics of the application development organization such as the number of application developers involved, project budget and criticality, and the number of teams, one of the five frameworks will work better than others.
Keywords: Software development lifecycle, software functionality, software security, application development, framework security
Abstract. The Internet enables world-wide communication for all areas of human activity. To deal with the massive data involved, companies deploy database products such as Oracle® Database, MySQL, Microsoft® SQL Server, and IBM® DB2. Databases are continuously under attack by intruders who probe for valuable customer and corporate information. Commercial databases have auditing support that facilitates after-the-fact review and analysis of data access. However, the audit data collected has vendor-specific structure and content. Tools are needed to optimize response to security incidents and to proactively mine audit logs for vulnerabilities. This paper demonstrates some database-independent techniques aimed toward automating the management of a site’s audit information.
Abstract. Ransomware is an ever-increasing threat in the world of cyber security, targeting vulnerable users and companies; what is lacking is an easy way to group these attacks and devise practical solutions which everyday users can utilise.
In this paper we look at the different characteristics of ransomware and present preventative techniques to tackle these ransomware attacks. More specifically, our techniques are based on ransomware behaviour, as opposed to the signature-based detection used by most anti-malware software. We further discuss the implementation of these techniques and their effectiveness. We have tested the techniques on four prominent ransomware strains: WannaCry, TeslaCrypt, Cerber, and Petya. In this paper we discuss how our techniques dealt with these ransomware strains and the performance impact of these techniques.
Abstract. Botnet communications are obfuscated within legitimate network protocols to avoid detection and remediation. Domain Name Service (DNS) is a protocol of choice to hide communication with Command & Control (C&C) servers, where botmasters tunnel these communications within DNS requests and responses. Since botnet communications are characterized by different features, botmasters may evade detection methods by modifying some of these features. This paper proposes a multi-staged detection approach for Domain Generation Algorithm (DGA) domain-fluxing, Fast Flux Service Network (FFSN), and encrypted DNS-tunneled botnets using the BRO Network Security Monitor. This approach is able to detect DNS-tunneled botnet communications by analyzing the different techniques used to find C&C servers, and also uses signature matching to detect the DNS-tunneled SSH handshake between bots and C&C servers.
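One common first-stage signal for DGA domain fluxing (a widely used heuristic, not necessarily the staged pipeline of this paper) is that algorithmically generated domain labels tend to have higher character entropy than human-chosen names. A hedged sketch of that heuristic, with an illustrative threshold:

```python
import math
from collections import Counter

def shannon_entropy(text):
    """Bits of entropy per character of the string."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_generated(domain, threshold=3.5):
    """Flag a domain whose first label has unusually high entropy.

    The 3.5-bit threshold is an illustrative assumption, not a tuned value;
    a real detector would combine this with n-gram and lexical features.
    """
    label = domain.split(".")[0]
    return shannon_entropy(label) > threshold

# Human-chosen labels score low; random-looking DGA labels score high.
assert not looks_generated("google.com")
```

Entropy alone produces false positives (e.g. CDN hostnames), which is one reason multi-staged approaches layer several techniques.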
Abstract. Cooperative systems are a trend for future communications because their spatial diversity improves system performance. However, security is a critical issue in wireless applications with high privacy requirements. Although encryption schemes have been proposed to achieve security, those schemes need a lot of computing resources; this is not practical for applications with limited computing ability, such as IoT. According to Shannon's theory of perfect secrecy, security can be implemented at the physical layer. Given a positive secrecy rate, the measure of security, secure communication is practical. This work concentrates on the theoretical solution for the secrecy rate in the AF-mode cooperative communication system. Numerical results with the proposed methodology are also given. They show that eavesdroppers cannot compromise the secure communication if the number of eavesdroppers is less than the number of relays in the system. Appropriate relay assignment benefits the secure communication.
Abstract. Static analyzers for JavaScript use constant propagation and interval domains to discover numerical properties of program variables. These domains are non-relational and incapable of tracking relationships between variables, leading to imprecise analysis. This paper presents a static analyzer for the full language of JavaScript that employs the octagon domain to capture numerical properties of the program. Our work is built on top of TAJS (type analyzer for JavaScript) which employs a constant propagation domain for numerical properties. We reengineered TAJS’s abstract domain for abstractions of primitive values and its abstract domain for object abstractions and related transfer functions, resulting in an analyzer that is much more precise. Our experiments show an improvement in analysis precision of JavaScript programs with an acceptable increase in cost.
Abstract. General purpose GPU (GPGPU) is an effective many-core architecture that can yield high throughput for many scientific applications with thread-level parallelism. However, several challenges still limit further performance improvements and make GPU programming challenging for programmers who lack the knowledge of GPU hardware architecture. In this paper, we design a compiler-assisted locality aware CTA (cooperative thread array) mapping scheme for GPUs to take advantage of the inter CTA data reuses in the GPU kernels. Using the data reuse analysis based on the polyhedron model, we can detect inter CTA data reuse patterns in the GPU kernels and control the CTA mapping pattern to improve the data locality on each SM. The compiler-assisted locality aware CTA mapping scheme can also be combined with the programmable warp scheduler to further improve the performance. The experimental results show that our CTA mapping algorithm can improve the overall performance of the input GPU programs by 23.3% on average and by 56.7% when combined with the programmable warp scheduler.
Abstract. Software Engineering principles have connections with design science, including cybersecurity concerns pertaining to vulnerabilities, trust and reputation. The work of this paper surveys, identifies, establishes and explores these connections. Identification and addressing of security issues and concerns during the early phases of software development life cycle, especially during the requirements analysis and design phases; and importance of inclusion of security requirements have also been illustrated. In addition to that, effective and efficient strategies and techniques to prevent, mitigate and remediate security vulnerabilities by the application of the principles of trust modelling and design science research methodology have also been presented.
Abstract. With Kotlin becoming a viable language replacement for Java, there is a need for translators and data flow analysis libraries to create maintainable and readable source code. Instagram, Uber, and Gradle are only a few of the large corporations that have either switched from Java to Kotlin completely or started to use it in internal tools in order to reduce code base size. Developers have claimed that Kotlin is fun to use in comparison to Java, and much of the boilerplate code is reduced. With Java being the main language for the open source organization PhenoApps, there is a need to support both Java and Kotlin to increase the maintainability of the code. Fortunately, JetBrains has an open-source IDE plugin for translating Java to Kotlin; however, the translation has some fundamental issues which are discussed further in this paper. We introduce j2k, a CLI translation tool which includes various anti-pattern detections for syntactical formatting, performance, and other Android requirements. The new tool introduced within this paper, j2kCLI, allows users to directly translate strings of Java code to Kotlin, or entire directories. This facilitates the maintainability of a large open source code base.
Abstract. If one could predict future web requests, it would be possible to make the web much faster. One could fetch web resources before they are needed. When the human user clicks on a link, the needed data would already have been downloaded.
We have created several algorithms that attempt to predict future web requests based on past histories. Our research evaluates and compares these prediction algorithms against real histories of web usage. Prediction algorithm results are compared based on correct predictions, erroneous predictions, and prediction rate.
Some algorithms make predictions rarely but accurately, while others may predict more often but with less accuracy. To take full advantage of this, we combine multiple algorithms and use different voting strategies to determine the best prediction.
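As a sketch of the combination step, two simple history-based predictors (one predicting the most frequent successor of the current page, one the most recent) can be merged with a vote; these predictor designs are illustrative stand-ins, not the paper's actual algorithms:

```python
from collections import Counter, defaultdict

class FrequencyPredictor:
    """Predicts the page most often requested after the current one."""
    def __init__(self):
        self.follows = defaultdict(Counter)
        self.prev = None
    def observe(self, page):
        if self.prev is not None:
            self.follows[self.prev][page] += 1
        self.prev = page
    def predict(self):
        successors = self.follows.get(self.prev)
        return successors.most_common(1)[0][0] if successors else None

class RecencyPredictor:
    """Predicts the page most recently requested after the current one."""
    def __init__(self):
        self.last = {}
        self.prev = None
    def observe(self, page):
        if self.prev is not None:
            self.last[self.prev] = page
        self.prev = page
    def predict(self):
        return self.last.get(self.prev)

def combined_prediction(predictors):
    """Plurality vote over non-abstaining predictors; ties favor the first vote."""
    votes = [v for v in (p.predict() for p in predictors) if v is not None]
    return Counter(votes).most_common(1)[0][0] if votes else None

predictors = [FrequencyPredictor(), RecencyPredictor()]
for page in ["home", "news", "home", "news", "home"]:
    for p in predictors:
        p.observe(page)
# Both predictors have seen "home" followed by "news", so the vote agrees.
```

Allowing predictors to abstain (return None) is what lets a rare-but-accurate algorithm coexist with a frequent-but-noisy one in the same ensemble.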
Abstract. A recent medical survey indicates that healthcare social networks are very helpful in promoting awareness of health issues, discussing related health problems with other patients and healthcare providers, and finding quick solutions for some of the health problems[12]. Social media and advances in mobile technology make healthcare information accessible to many patients. Healthcare providers are also able to gain more benefits from healthcare networks by exchanging information with other providers and disseminating valuable health related information. The two major problems in using healthcare networks as reported in the literature are security of information stored and passed through the network, and privacy of patients’ health information. This paper describes the design and implementation of a healthcare network focusing on the two aspects - security and privacy. The authors chose Nephrology, the study of kidney diseases, for illustration. However, the design and implementation of the network has been made sufficiently generic so that it can be used for other health domains such as gynecology and psychiatry.
Abstract. Pregnancy is a period of changes. With all the information available and all the questions raised, it may also be an overwhelming period. Mobile phones might be a solution for pregnant women to follow their pregnancy through Electronic Maternity Records (EMR). Therefore, this paper aims to propose an EMR to help women during their pregnancy. Firstly, the importance of Personal Health Records (PHRs) as well as mHealth is overviewed. Secondly, the types of mobile apps are presented with their pros and cons, and the concept of Progressive Web App (PWA) is introduced. In order to understand the features that pregnancy mobile apps are now offering and the ones they are missing, eight apps are analysed. Lastly, the features and architecture of the proposed EMR are described and discussed. Since PWAs are a recent technology and a promising alternative to the three classic types of mobile development, it is also the technology used to develop the proposed EMR.
Abstract. Careless driving is the most common cause of traffic accidents. Being in a drowsy state is one cause of careless driving, which can lead to a serious accident. Therefore, in this study, we focus on predicting drowsy driving. Studies on the prediction of drowsy driving focus on the prediction aspect only. However, users have various demands, like not wanting to wear a device while driving, and it is necessary to consider such demands when we introduce the prediction system. Hence, our purpose is to predict drowsy driving in a way that can respond to a user’s demand(s) by combining two approaches: electroencephalogram (EEG) and facial expressions. Our method is divided into three parts by type of data (facial expressions, EEG, and both), and users can select the one suitable for their demands. We acquire data with a depth camera and an electroencephalograph and build a machine-learning model to predict drowsy driving. As a result, it is possible to correctly predict drowsy driving in increasing order of accuracy: facial expressions, then EEG, then both combined. Our framework may be applicable to data other than EEG and facial expressions.
Abstract. Breast cancer prognostication is a vital element for providing effective treatment for breast cancer patients. Different types of breast cancer can be identified based on the existence or lack of certain receptors (i.e., estrogen, progesterone, HER2 receptors). Triple-negative breast cancer (TNBC) is characterized by a lack of estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2) expression. Existing studies suggest that TNBC patients tend to have worse prognosis compared to non-TNBC counterparts. The incidence of breast cancer and prognosis in women differ according to ethnicity. Given the poor prognosis of TNBC, cancer-related outcomes must be estimated accurately. Several factors responsible for the poor clinical outcomes observed in TNBC, including age, race/ethnicity, grade, tumor size, and lymph node status, among others, have been studied extensively. Available research data are not conclusive enough to make a convincing argument for or against a biological or clinical difference in TNBC patients based on these factors. This study was designed to investigate the effects of ethnicity on breast cancer survivability among TNBC patients, utilizing population-based Surveillance, Epidemiology, and End Results (SEER) data to confirm whether the ethnicity factor has prognostic significance.
Abstract. In the fields of Human-Computer Interaction and Psychology, it is accepted that spatial visualization (VZ) is one ability that can indicate an individual’s performance on computer applications. Since users with different levels of VZ seem to prefer different types of user interfaces (UI), knowing a user’s level of VZ provides a great opportunity for application developers to design software with higher satisfaction and usability. In this paper, we propose three models to predict a participant’s level of VZ based on the participant’s actions (taps) on a tablet screen while performing an address verification task in the neighborhood using the tablet. After applying the proposed prediction models to data from thirty participants, they yielded an optimal accuracy of 93.33%.
Abstract. NetLogo is a popular agent-based modeling system for good reason. It is relatively easy to learn; it allows an intuitive user interface to be built with predefined objects, such as buttons, sliders, and monitors; and available documentation is extensive, both on the NetLogo Website and in public forums. The Geographic Information Systems (GIS) extension for NetLogo allows real-world geographic or demographic data to be incorporated into NetLogo projects. When GIS is combined with NetLogo, simulations can be transformed from a basic representation to one that accurately replicates the characteristics of a map or population. This paper describes the necessary steps for incorporating GIS within a NetLogo project and the primitive commands used for associating shape properties to NetLogo patches. A practical example is included that demonstrates how to import a map of Texas into a NetLogo project and use the vector data in conjunction with NetLogo patches to randomly color each county.
Abstract. In this paper we approach the problem of searching for available parking in busy lots. While research has been done to allow users to view where empty parking spaces are located, it often involves expensive methods that are difficult to maintain. We attempt to address these restrictions. We also develop a platform to disperse the information to the target users.
Abstract. The development of affordable virtual reality (VR) hardware represents a keystone of progress in modern software development and human-computer interaction. Despite the ready availability of robust hardware tools, there is presently a lack of video games or software in VR that demonstrates the gamut of unique and novel interfaces a virtual environment can provide. In this paper, we present a virtual reality video game which introduces unique user interface elements that can only be achieved in a 3D virtual environment. The video game, titled Wolf Hunt, provides users with a menu system that innovates on traditional interfaces with a virtual representation of a common item people interact with daily: a mobile phone. Wolf Hunt throws users into a procedurally generated world where they take the role of an individual escaping a wolf assailant. Deviating from traditional locomotion options in VR interfaces, such as teleportation, Wolf Hunt measures the displacement of hand-held VR controllers with the VR headset to simulate the natural action of running. Wolf Hunt provides an alternate interfacing solution for VR systems without having to conform to common 2D interface design schemes.
Abstract. An approach for modeling linear time-dependent auto-regressive moving-average (TDARMA) systems using the time-frequency (TF) distribution is presented. The proposed method leads to an extension of several well-known techniques for linear time-invariant (LTI) systems to the linear time-varying (LTV) case. It can also be applied to the modeling of non-stationary signals. In this paper, the well-known modified least squares (MLS) and Durbin's approximation methods are adapted to this non-stationary context. A simple relationship between the generalized transfer function and the time-dependent parameters of the LTV system is derived, and computer simulations illustrating the effectiveness of our method are presented, considering that the output of the LTV system is corrupted by additive noise.
Abstract. We have proposed in a previous work an unrestricted character encoding for Japanese (UCEJ). This encoding features an advanced structure relying on three dimensions in order to enhance the code's usability, easier character lookup being one application, in comparison to, for instance, Unicode. In this paper, we propose several important refinements to the UCEJ encoding: first, the addition of the Latin and kana character sets, which are ubiquitous in Japanese, and second, the inclusion of character stroke order and stroke types into the code and the corresponding binary representation. We estimate the average and worst-case memory complexity of the proposed encoding, and conduct an experiment to measure the required memory size in practice, each time comparing the proposal to conventional encodings.
Abstract. Path planning is a key factor that determines how well a robotic vehicle performs in executing automated formations and maneuvers, as in multi-vehicle platooning and self-organizing leader following with safe and graceful movements. Many types of path-planning schemes have been employed in autonomous robotics and driving systems. In this paper, we focus on the application of a smooth path-planning (SPP) algorithm that produces simple-to-implement robotic maneuvers. The algorithm is derived using a well-established Lyapunov stability criterion and a clever dynamical control synthesis. We show that the SPP can be adapted to many autonomous guidance scenarios. Simulations show that the SPP results in autonomous behaviors that parallel those of human or animal actions. The paper presents results using Matlab simulations as well as Gazebo animation. The results will provide a foundation for an implementation of SPP on actual robotic vehicles.
Abstract. One of the major problems farmers face is that of parturition accidents. A parturition accident results in the death of the calf when the cow gives birth; in addition, it reduces the milk yield. The farmer must therefore keep the cow under close observation for the last few days of pregnancy.
We propose a novel method to predict a cow’s delivery time automatically from time-series acceleration data and global position data using machine learning. The required data were collected by a small sensor device attached to the cow’s collar. An inductive logic programming (ILP) method was employed as the machine learning model, as it can generate readable results in the form of first-order logic (FOL) formulas. To apply the machine learning technique, the collected data were converted to a logical form that includes predefined FOL predicates. Using the obtained results, one can classify whether a cow is ready for delivery.
Data was collected from 31 cows at the NAMIKI Dairy Farm Co. Ltd. Using the method described above, 130 readings were obtained. The five-fold cross-validation process verified the accuracy of the model at 56.79%.
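The learned rules are not given in the abstract; as a hedged sketch of what makes ILP output "readable", an invented FOL-style rule (predicate names and thresholds are hypothetical, not from the paper) can be applied directly as code:

```python
# Hypothetical rule of the kind an ILP learner might output:
#   ready_for_delivery(Cow) :- high_activity(Cow), restless_at_night(Cow).
# Predicates are derived from windowed accelerometer features.

def high_activity(features):
    # mean acceleration magnitude above an assumed threshold
    return features["mean_acc"] > 1.2

def restless_at_night(features):
    # night-time posture changes above an assumed threshold
    return features["night_moves"] > 15

def ready_for_delivery(features):
    # conjunction of the rule body's predicates
    return high_activity(features) and restless_at_night(features)

cow_a = {"mean_acc": 1.5, "night_moves": 22}   # satisfies the rule body
cow_b = {"mean_acc": 0.9, "night_moves": 22}   # fails high_activity
print(ready_for_delivery(cow_a), ready_for_delivery(cow_b))
```

The point of ILP here is exactly this transparency: the classifier is a logical formula a farmer or veterinarian can read, unlike an opaque numeric model.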
Abstract. In this paper, we reviewed the tiered architecture and the MVC pattern for web development. We also discussed common vulnerabilities and threats in web applications. To better understand how to develop a secure web application, we further examined best practices from the Angular and ASP.NET Core frameworks, as well as sample code for secure web apps.
Abstract. The Emergency 9-1-1 Resiliency Platform (E9RP) is a web tool designed as a ticketing system and an efficiency database tool. The ticketing system will allow users to initiate, update, track, close, and view notes of an emergency 9-1-1 system. Furthermore, the ticketing system will archive tickets, and their pertinent details, for users to retrieve at a later date. Primarily, these users will be less experienced technicians who find themselves stumped by a particular problem that a customer has reported. These users will be able to access and search the historical record for comparable past issues to get ideas on how to resolve their current issue. Secondly, the Emergency 9-1-1 Resiliency Platform will host a comprehensive database of PSAP information that is accessible through a web interface for technicians to quickly and efficiently access pertinent information about the network and its resources. This will allow technicians to correct network failures with greater efficiency, as much of this information is currently stored on hard copies located in file cabinets and can be accessed only by "sneaker net". The target audience for the Emergency 9-1-1 Resiliency Platform is the system administrators and technicians who maintain these systems. This unrelenting 24/7/365 task can be made simpler by centralizing the data into an easy-to-access web interface. Auxiliary users and vertical personnel will also find great use; however, the primary target for this project is the heavy users, i.e., the system administrators.
Abstract. A very important issue with e-commerce delivery service in most emerging economies, including India, is last-mile connectivity. Delivering products booked online to remote tier-2 and tier-3 cities remains costly. It has been observed from firsthand experience with some well-known e-commerce brands in India that their delivery service partners tend to cancel orders that are far away from their tier-2 logistics hubs, citing "address out of delivery range" as the reason. Due to the low order density on the far fringes of tier-2 and tier-3 cities, arranging vehicles and delivery personnel becomes costly. In this paper, we propose an innovative delivery model to serve the remote areas by opening edge-hubs at selected places and employing local daily commuters for last-mile delivery. Identifying the edge-hubs for opening distribution centers is costly if done using traditional field surveys. Here we propose the use of telecom call detail record (CDR) location data as an alternative way of identifying the hubs in real time at much lower cost and time.
Abstract. To recommend an item to a target user, Collaborative Filtering (CF) considers the preferences of other similar users or neighbors. The accuracy of the recommendation depends on the effectiveness of assessing the neighbors. But over time, the mutual likings of two individuals change; hence, the neighbors of the target user should also change. However, this shifting of preferences is not considered by traditional methods of calculating neighborhood in CF. As a result, the calculated set of neighbors does not always reflect the optimal neighborhood at a given point in time. In this paper, we argue for considering the continuous change in the likings of previously similar users and calculating the neighborhood of a target user based on different time periods. We propose a method that assesses the similarity between users in different time periods using K-means clustering. This approach significantly improves the accuracy of personalized recommendation. The performance of the proposed algorithm is tested on the MovieLens datasets (ml-100k and ml-1m) using different performance metrics, viz. MAE, RMSE, Precision, Recall, F-score, and accuracy.
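As a hedged sketch of the time-period clustering idea (a minimal hand-rolled 2-means over per-period rating vectors; the paper's actual feature construction and k are not specified here):

```python
def kmeans2(points, iters=20):
    """2-means with deterministic farthest-point initialization."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    centers = [points[0], max(points, key=lambda p: d2(p, points[0]))]
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest center wins
        labels = [0 if d2(p, centers[0]) <= d2(p, centers[1]) else 1
                  for p in points]
        # update step: center = mean of its members
        for c in (0, 1):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = tuple(sum(col) / len(col) for col in zip(*members))
    return labels

# Mean ratings of six users on two genres *within one time period*;
# the neighborhood of user 0 is the set of users sharing its cluster.
period_ratings = [(4.5, 1.0), (4.0, 1.5), (4.8, 0.5),   # action-leaning
                  (1.0, 4.5), (0.5, 4.8), (1.5, 4.0)]   # drama-leaning
labels = kmeans2(period_ratings)
neighbors_of_user0 = [u for u in range(1, 6) if labels[u] == labels[0]]
print(neighbors_of_user0)   # users 1 and 2
```

Re-running the clustering per time period is what lets the neighborhood drift as tastes change, which is the abstract's central argument.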
Abstract. Cloud computing is a transformative technology that organizations cannot ignore. Before adopting cloud computing, an organization must determine its needs and risks and encapsulate them into cloud service provider selection criteria. Due to the shared and openly accessible nature of cloud computing, the criteria must emphasize data security and protection against malware. Only after developing the criteria should the organization select a cloud service provider. Choosing the right provider is essential.
Abstract. The spread of various sensors and the development of cloud computing technologies enable the accumulation and use of large numbers of live logs in ordinary homes. When operating a service that utilizes sensor data, it is difficult to install servers and storage in ordinary homes to analyze the collected data; those data are typically transmitted from the sensors to a cloud and analyzed there. However, services that involve moving-image analysis must transfer large amounts of data continuously and require high computing power for analysis. Hence, it is highly difficult to process them in real time in the cloud using a conventional stream data processing framework. In this research, we propose a construction scheme for a highly efficient distributed stream processing infrastructure that enables scalable processing of moving-image recognition tasks according to the amount of data transmitted from the sensors. We implement a prototype system of the proposed distributed stream processing infrastructure using Ray and Apache Kafka, a distributed messaging system, and we evaluate its performance. The experimental results demonstrate that the proposed distributed stream processing infrastructure is highly scalable.
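The Ray/Kafka prototype is not reproduced here; as a hedged sketch of the scaling mechanism underneath it, key-hash partitioning (the same general scheme Kafka uses to spread a topic across partitions) routes each camera's frames to a fixed worker, so adding partitions and workers scales throughput while preserving per-stream order:

```python
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Stable key -> partition mapping. md5 keeps the mapping
    deterministic across processes, unlike Python's salted hash()."""
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Frames from the same camera always land on the same worker,
# so each stream is processed in order while streams spread out.
NUM_WORKERS = 4
workers = {i: [] for i in range(NUM_WORKERS)}
for frame_id in range(6):
    for camera in ("cam-a", "cam-b", "cam-c"):
        workers[partition_for(camera, NUM_WORKERS)].append((camera, frame_id))

# Every cam-a frame went to a single worker:
cam_a_workers = {partition_for("cam-a", NUM_WORKERS) for _ in range(100)}
print(len(cam_a_workers))
```

In the actual system, Kafka performs this routing and a pool of Ray workers consumes the partitions; the sketch only illustrates why the design scales with the number of sensors.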
Abstract. Cloud computing helps organizations dynamically increase their resources as and when needed, without having to purchase them. Security is a basic concern in cloud computing, and threats can occur both internally and externally. Users can access the cloud infrastructure for software, operating systems, and network infrastructure provided by the Cloud Service Providers (CSPs). Evaluating user behavior in the cloud computing infrastructure is becoming more and more important for both Cloud Users (CUs) and Cloud Service Providers. The CSPs must ensure the safety of users accessing the cloud. Since user authentication alone is not enough to ensure the safety of users, user behavior trust plays a critical role in ensuring both the authenticity of the user and safety. In this paper, we present the importance of user behavior in modeling trust, the associated evaluation principles, and a comparison between different trust models.
Abstract. Cloud computing is a relatively mature and robust technology that has promised its users several proven advantages, such as cost reduction, immediate scalability, and resource sharing. The Cloud is built on providing resources as services, such as Infrastructure, Platform, and Software as a Service. Such an approach enables Cloud users to access these services on demand. In the government sector of Saudi Arabia, adoption and utilization of the Cloud are minimal. Despite being adopted officially, the Cloud has not yet been implemented properly. In our work, we introduce how the government sector in Saudi Arabia can adopt and implement a Cloud solution by utilizing its services while considering the related security issues.
Abstract. Nowadays, cloud computing has become the most popular technology in the IT industry. It provides computing power, storage, networking, and software as a service. Building a data warehouse typically necessitates a significant initial investment; with the cloud's pay-as-you-go model, BI systems can benefit from this new technology. But, like every new technology, cloud computing brings its own risks in terms of security. Because some security issues are inherited from classical architectures, some traditional security solutions are used to protect outsourced data. Unfortunately, those solutions are not enough and cannot guarantee the privacy of sensitive data hosted in the Cloud. In particular, in the case of a data warehouse, traditional encryption solutions are impractical because they induce heavy overhead in terms of data storage and query performance. So, a suitable scheme must be proposed to balance the security and performance of a data warehouse hosted in the cloud. In this paper, we propose TrustedDW, a homomorphic encryption scheme for securing and querying a data warehouse hosted in the cloud.
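TrustedDW's actual scheme is not detailed in the abstract; as a hedged illustration of additive homomorphism (the property that lets a cloud host sum encrypted warehouse measures without ever decrypting them), here is a textbook Paillier toy with insecurely small primes:

```python
import math, random

# Toy Paillier keypair (p and q are far too small for real use).
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)        # Carmichael's function for n = p*q
mu = pow(lam, -1, n)                # valid because we take g = n + 1

def encrypt(m, rng=random.Random(42)):
    # c = (n+1)^m * r^n mod n^2, with r random and coprime to n
    while True:
        r = rng.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # m = L(c^lam mod n^2) * mu mod n, where L(u) = (u - 1) // n
    return ((pow(c, lam, n2) - 1) // n * mu) % n

def add_encrypted(c1, c2):
    # Homomorphic property: E(m1) * E(m2) mod n^2 == E(m1 + m2)
    return (c1 * c2) % n2

c = add_encrypted(encrypt(12), encrypt(30))
print(decrypt(c))   # 42: the sum was computed under encryption
```

A real warehouse scheme must also balance storage blow-up and query performance, which is exactly the trade-off the paper targets; this sketch shows only the core algebraic trick.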
Abstract. 3D printing has allowed complex designs to be produced that were impossible to create using conventional manufacturing processes. Aircraft wings are optimized as much as possible given manufacturability considerations, but more complex geometry could provide the same strength for less weight, increasing aircraft performance. Although carbon fiber composites are among the best-known materials for conventional optimized aircraft wings, current 3D printing technology cannot produce this material; it is currently limited to metals and polymers. To determine whether the more complex geometry that can be produced by 3D printing can offset the material limitations, a carbon fiber composite wing and a redesigned, 3D-printed 7075-T6 aluminum wing were compared using Finite Element Analysis. The unoptimized 3D-printed aluminum wing had a superior safety factor against fracture/yielding (1,109% higher) and buckling resistance (127.3% higher), but at the cost of a 23.99% mass increase compared to the optimized carbon fiber composite wing. If the 3D-printed aluminum wing had been optimized to provide the same safety factor against fracture/yielding and buckling resistance as the carbon fiber composite wing, it is anticipated that the resulting design would be significantly lighter, thus increasing aircraft performance.
Abstract. This paper presents the extension and application of three predictive models to time series within the financial sector, specifically data from 75 companies on the Mexican stock exchange market. A tool that generates awareness of the potential benefits of using formal financial services would encourage more participation in a formal system. The three statistical models used for prediction of financial time series are a regression model, a multi-layer perceptron with a linear activation function at the output, and a Hidden Markov Model. Experiments were conducted by finding the optimal set of parameters for each predictive model while applying each model to the 75 companies. Theory, issues, challenges, and results related to the application of artificial predicting systems to financial time series, as well as the performance of the methods, are presented.
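The paper's three models are not reproduced here; as a hedged sketch of the simplest of them, a one-lag autoregression fitted by ordinary least squares predicts each value from the previous one (the closed-form slope/intercept below, applied to a synthetic series, stands in for the paper's regression model):

```python
def fit_ar1(series):
    """Fit y_t = a*y_{t-1} + b by ordinary least squares (closed form)."""
    x, y = series[:-1], series[1:]          # lagged pairs
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx                           # slope
    b = my - a * mx                         # intercept
    return a, b

# Synthetic "price" series with a constant drift of +2 per step.
prices = [2.0, 4.0, 6.0, 8.0, 10.0]
a, b = fit_ar1(prices)
next_price = a * prices[-1] + b
print(next_price)   # 12.0
```

The MLP and Hidden Markov Model in the paper generalize this idea to nonlinear and regime-switching structure, respectively, at the cost of the parameter search the experiments describe.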
Abstract. Foreign metal removal is a key process in the quality control of the food and pharmaceutical industries. Previously, foreign metal removal relied on metal detectors. In recent years, however, magnet separators have been installed alongside metal detectors to capture small metal particles and improve manufacturing yield. Currently, most foreign metal material is austenitic stainless steel, because production equipment is manufactured from the same material to make it corrosion-resistant; SUS304 and SUS316L are commonly used. Small metal particles are generated from the equipment by sliding and other processes, thus contaminating the product. Austenitic stainless steels are not normally magnetized; however, weak magnetization is observed through martensite transformation during sliding and collisions. Even so, it is not easy to remove small stainless steel particles in production processes that involve powder flow. In this study, we investigated the removal rate of small stainless steel particles by three magnets of different shapes under the same conditions.
Abstract. Additive manufacturing technology has become a viable solution for making molds for plastic injection molding applications. The molds are usually made of high-temperature plastic resins suitable for plastic injection molding. Molding resins have the superior mechanical properties necessary to withstand the high temperatures and pressures of the injection molding process. It is known that the high-temperature mechanical properties of resins influence mold performance, but it is not established which properties are most important or to what extent they influence mold performance. Identifying the most important properties influencing mold performance would help resin manufacturers develop better mold-making materials. In order to study the performance of mold materials, we built a device for measuring the mechanical properties of 3D-printed resins, including their strength, surface hardness, and wear resistance, at molding temperatures of up to 260 °C. We then quantified the mechanical properties of three high-temperature resins, along with ABS, at injection molding temperatures. This paper describes the test device and the results of characterizing the mechanical properties of the selected plastics.
Abstract. The Electronic Brake System (EBS) is considered one of the most complicated systems, whose performance depends on subsystem parameters that are usually difficult to predict. With the aim of improving EBS performance, this article presents a mathematical modeling approach based on a neuro-fuzzy network to model a subsystem of the EBS. For model parameter identification, a neuro-fuzzy network has been implemented using Least Square Error (LSE) and the Levenberg-Marquardt Algorithm (LMA) as the optimization algorithms. Finally, the performance of the identified model has been evaluated.
Abstract. There is a lack of research into the impact of road roughness on ride quality and route choice. The scarcity of ride roughness data for local and urban roads is likely one reason for the lack of such studies. Existing methods of obtaining ride roughness data are expensive and require expert practitioners and laborious data processing by trained personnel. Sensors in most current vehicles provide an alternative source of road roughness data. This study emulated the data needed from vehicle sensors by using the accelerometer and gyroscope of a smartphone. The authors used data collected from two different bus routes to classify segments of roads into objectively distinct roughness clusters. The output enables map service applications to suggest better routing options based on expected ride quality, and it also quantifies road roughness consistently, enabling optimized maintenance planning and decision-making for roadway assets.
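The study's clustering pipeline is not given in the abstract; as a hedged sketch (the window length and class cutoffs below are assumptions, not the authors' learned clusters), per-segment RMS of vertical acceleration is a common roughness feature that can then be bucketed into classes:

```python
import math

def segment_rms(accel, window):
    """RMS of vertical acceleration over fixed-length road segments."""
    out = []
    for i in range(0, len(accel) - window + 1, window):
        chunk = accel[i:i + window]
        out.append(math.sqrt(sum(a * a for a in chunk) / window))
    return out

def roughness_class(rms, smooth_max=0.3, rough_min=0.8):
    # Assumed cutoffs; the study derives its classes from clustering.
    if rms <= smooth_max:
        return "smooth"
    return "medium" if rms < rough_min else "rough"

# Simulated readings: a calm stretch followed by a bumpy one.
accel = [0.1, -0.1, 0.2, -0.2] + [1.0, -1.2, 0.9, -1.1]
classes = [roughness_class(r) for r in segment_rms(accel, window=4)]
print(classes)   # ['smooth', 'rough']
```

A routing application would then prefer routes whose segments skew toward the smoother classes, which is the use case the abstract highlights.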
Abstract. Although organizations face continuously evolving Information Security (IS) risks, the scholarly literature is unclear as to whether transformational, transactional, and passive-avoidant leadership styles influence IS risk management. The study was conducted using a quantitative, non-experimental, and descriptive research design. The sample consisted of senior IT leaders with a range of titles, including Chief Information Officer (CIO), Chief Information Security Officer (CISO), Director of IT, and IT Manager. This population is characterized by extensive knowledge of IT and IS issues, and these individuals are generally responsible for directing an organization's approach to IS risk management. Data from 250 participant surveys were analyzed using the Pearson product-moment correlation coefficient and multiple regression analysis. The results of the analysis demonstrated that IT leadership is significantly related to IS risk management.
Abstract. Here we propose an open-source algorithm, L,M&A (Lyrics, Mine and Analyse), to create a dataset of lyrics from the works of various artists. The aim of this approach is to facilitate the generation of a large dataset that can be used to improve the accuracy of song recommendation algorithms. The limited availability of such datasets has excluded the sentiment analysis of lyrics from music recommendation systems. By using the L,M&A algorithm, it is possible to generate a large dataset that can serve as a training dataset for future classifier systems. We used iterative API requests to the musixmatch and Genius servers to text-mine the lyrics of songs by multiple artists. The data were processed and then analysed for sentiment using the lexicons provided in the Tidytext package (BING, AFINN, NRC), and the overall sentiment of each artist was determined through modal counts. The occurrence of each sentiment was evaluated and visualized using ggplot2. This representation exhibits the merit of our approach and the applicability of our data. The key features of our approach are the open-source platforms utilized and the simplicity of the input required from the user.
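The paper's pipeline uses R (Tidytext, ggplot2); as a hedged Python sketch of the same lexicon idea (the mini-lexicon below is invented for illustration and is not the actual BING word list), each lyric word is scored against the lexicon and the artist's overall sentiment is taken as the modal label:

```python
from collections import Counter

# Tiny stand-in lexicon (illustrative only; not the real BING list).
LEXICON = {"love": "positive", "joy": "positive", "shine": "positive",
           "pain": "negative", "lonely": "negative", "cry": "negative"}

def song_sentiment(lyrics: str) -> Counter:
    """Count positive/negative lexicon hits in one song's lyrics."""
    words = lyrics.lower().split()
    return Counter(LEXICON[w] for w in words if w in LEXICON)

def artist_sentiment(songs):
    """Modal sentiment label over all of an artist's songs."""
    total = Counter()
    for s in songs:
        total += song_sentiment(s)
    return total.most_common(1)[0][0]

songs = ["love and joy shine tonight",
         "lonely hearts cry in pain",
         "let your love shine"]
print(artist_sentiment(songs))   # positive
```

The real pipeline differs mainly in scale: lyrics are pulled via the musixmatch and Genius APIs, and the BING/AFINN/NRC lexicons cover thousands of words rather than six.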