ANALYTICAL SCHEMES TO OPTIMIZE THE MINING RESULTS USING INCREMENTAL MAP REDUCE

D. Hemalatha

Student, M.E. (CSE)

Sathyabama University, Chennai

[email protected]

B. Bharathi

Professor

Department of CSE

Sathyabama University, Chennai.

[email protected]

Abstract- With the growth of the internet and of data storage, big data adoption has become widespread among organizations. In big data processing, the data comes from numerous, heterogeneous, autonomous sources with complex and evolving relationships, and it keeps growing. Big data is difficult to work with using most relational database management systems and desktop statistics and visualization packages. The authors build on MapReduce, the most widely used framework for mining big data, and on an incremental processing extension to it. The term big data refers to the act of collecting and storing enormous amounts of information for eventual analysis. MapReduce is a programming model for processing and generating large amounts of data in parallel. To reduce processing time, the authors apply the Naïve Bayes algorithm, which requires fewer map tasks and yields shorter processing time.

Keywords--Big Data; MapReduce; Naïve Bayes

I. INTRODUCTION

Defining Big Data

With the recent introduction of Oracle Big Data Appliance and Oracle Big Data Connectors, Oracle was the first vendor to offer a complete and integrated solution to address the full spectrum of enterprise big data requirements. Oracle's big data strategy is centred on the idea that you can evolve your current enterprise data architecture to incorporate big data and deliver business value. By evolving your current enterprise architecture, you can leverage the proven reliability, flexibility and performance of your Oracle systems to address your big data requirements.

Big data typically refers to the following types of data:

• Traditional enterprise data – includes customer information from CRM systems, transactional ERP data, web store transactions, and general ledger data.

• Machine-generated/sensor data – includes Call Detail Records ("CDRs"), weblogs, smart meters, industrial sensors, equipment logs (often referred to as digital exhaust), and trading systems data.

• Social data – includes customer feedback streams, micro-blogging sites like Twitter, and social media platforms such as Facebook.

The McKinsey Global Institute estimates that data volume is growing 40% per year, and will grow 44x between 2009 and 2020. But while it is often the most visible parameter, volume of data is not the only characteristic that matters. In fact, there are four key characteristics that define big data:

• Volume. Machine-generated data is produced in much larger quantities than non-traditional data. For instance, a single jet engine can generate 10 TB of data in 30 minutes. With more than 25,000 airline flights per day, the daily volume of just this single data source runs into the petabytes. Smart meters and heavy industrial equipment such as oil refineries and drilling rigs generate similar data volumes, compounding the problem.

• Velocity. Social media data streams – while not as massive as machine-generated data – produce a large influx of opinions and relationships that are valuable to customer relationship management. Even at 140 characters per tweet, the high velocity of Twitter data ensures large volumes (over 8 TB per day).

• Variety. Traditional data formats tend to be relatively well defined by a data schema and to change slowly. In contrast, non-traditional data formats exhibit a dizzying rate of change. As new services are added, new sensors deployed, or new marketing campaigns executed, new data types are needed to capture the resulting information.

• Value. The economic value of different data varies significantly. Typically there is good information hidden within a larger body of non-traditional data; the challenge is identifying what is valuable and then transforming and extracting that data for analysis.

To make the most of big data, enterprises must evolve their IT infrastructures to handle these new high-volume, high-velocity, high-variety sources of data and integrate them with the pre-existing enterprise data to be analysed.

The Importance of Big Data

When big data is distilled and analysed in combination with traditional enterprise data, enterprises can develop a more thorough and insightful understanding of their business, which can lead to enhanced productivity, a stronger competitive position and greater innovation – all of which can have a significant impact on the bottom line.

For example, in the delivery of healthcare services, management of chronic or long-term conditions is expensive. Use of in-home monitoring devices to measure vital signs and monitor progress is just one way that sensor data can be used to improve patient health and reduce both office visits and hospital admissions.

Finally, social networking sites like Facebook and LinkedIn simply would not exist without big data. Their business model requires a personalized experience on the web, which can only be delivered by capturing and using all the available data about a user or member.

Building a Big Data Platform

As with data warehousing, web stores or any IT platform, an infrastructure for big data has unique requirements. In considering all the components of a big data platform, the end goal is to easily integrate your big data with your enterprise data so that you can perform deep analytics on the combined data set.

Infrastructure Requirements

The requirements of a big data infrastructure are data acquisition, data organization and data analysis.

Acquire Big Data

The acquisition phase is one of the major changes in infrastructure from the days before big data. Because big data refers to data streams of higher velocity and higher variety, the infrastructure required to support the acquisition of big data must deliver low, predictable latency both in capturing data and in executing short, simple queries; be able to handle very high transaction volumes, often in a distributed environment; and support flexible, dynamic data structures.

Organize Big Data

In classical data warehousing terms, organizing data is called data integration. Because there is such a high volume of big data, there is a tendency to organize data at its original storage location, thus saving both time and money by not moving large volumes of data around. The infrastructure required for organizing big data must be able to process and manipulate data in the original storage location; support very high throughput (often in batch) to deal with large data processing steps; and handle a large variety of data formats, from unstructured to structured.

Hadoop is a technology that allows large data volumes to be organized and processed while keeping the data on the original data storage cluster. The Hadoop Distributed File System (HDFS) is the long-term storage system for web logs, for example. These web logs are turned into browsing behaviour (sessions) by running MapReduce programs on the cluster and generating aggregated results on the same cluster. These aggregated results are then loaded into a relational DBMS system.
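
As an illustration of the web-log workflow described above, the following is a minimal sketch of a Hadoop MapReduce job, written against the standard org.apache.hadoop.mapreduce Java API, that counts hits per user from log files stored in HDFS. The log layout, class names and paths are hypothetical and are not taken from the paper.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WebLogHitsPerUser {

        // Map: parse one log line and emit (userId, 1).
        public static class HitMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text userId = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                // Hypothetical log layout: userId <tab> timestamp <tab> url
                String[] fields = value.toString().split("\t");
                if (!fields[0].isEmpty()) {
                    userId.set(fields[0]);
                    context.write(userId, ONE);
                }
            }
        }

        // Reduce: all counts for the same user arrive together; sum them.
        public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int total = 0;
                for (IntWritable v : values) {
                    total += v.get();
                }
                context.write(key, new IntWritable(total));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "web log hits per user");
            job.setJarByClass(WebLogHitsPerUser.class);
            job.setMapperClass(HitMapper.class);
            job.setCombinerClass(SumReducer.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS directory of web logs
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

The aggregated (user, count) pairs produced on the cluster could then be loaded into a relational DBMS, as described above.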

Analyze Big Data

Since data is not always moved during the organization phase, the analysis may also be done in a distributed environment, where some data will stay where it was originally stored and be transparently accessed from a data warehouse. The infrastructure required for analysing big data must be able to support deeper analytics, such as statistical analysis and data mining, on a wider variety of data types stored in diverse systems; scale to extreme data volumes; deliver faster response times driven by changes in behaviour; and automate decisions based on analytical models. Most importantly, the infrastructure must be able to integrate analysis of the combination of big data and traditional enterprise data. New insight comes not just from analysing new data, but from analysing it within the context of the old to provide new perspectives on old problems.

II. RELATED WORK

Murari Devakannan Kamalesh et al. [1] examined techniques developed for data anonymization and data publishing: generalization, bucketization and slicing. Generalization loses a considerable amount of information, especially for high-dimensional data. Bucketization does not provide a clear separation between quasi-identifying attributes and sensitive attributes. As a result, slicing preserves better data utility than bucketization and generalization; it partitions the data both vertically and horizontally, and it can handle high-dimensional data.

Mary Posonia et al. [2] proposed an efficient keyword search model for XML. The work employs an H-reduction factor and an interactive algorithm to address the reduction-factor problem, the deviation problem and the approach to keyword search, respectively. Experimental results and evaluation demonstrate the efficiency of this technique.

Russell Power et al. [7] proposed a new data-centric programming model for writing parallel in-memory applications in data centres. Unlike existing data-flow models, Piccolo allows computation running on different machines to share distributed, mutable state via a key-value table interface. Piccolo enables efficient application implementations. Using Piccolo, the authors implemented applications for several problem domains, including the PageRank algorithm, k-means clustering and a distributed crawler. Experiments using 100 Amazon EC2 instances and a 12-machine cluster show Piccolo to be faster than existing data-flow models for many problems, while providing similar fault-tolerance guarantees and a convenient programming interface.

Grzegorz Malewicz, Matthew H. Austern et al. [8] address practical computing problems that concern large graphs. Standard examples include the Web graph and various social networks. The scale of these graphs (in some cases billions of vertices and trillions of edges) poses challenges to their efficient processing. In this paper the authors present a computational model suitable for this task. Programs are expressed as a sequence of iterations, in each of which a vertex can receive messages sent in the previous iteration, send messages to other vertices, and modify its own state and that of its outgoing edges or mutate the graph topology. Distribution-related details are hidden behind an abstract API. The result is a framework for processing large graphs that is expressive and easy to program.

III. OVERALL ARCHITECTURE

Figure 1 shows a typical architecture for big data analytics, in which pre-processing of the data is done at the first level, analysis of the data is then done to separate the data, and finally the optimized results are produced. The authors used the Java programming language (in NetBeans) along with the WEKA tool and ARFF datasets to test the system; ARFF (Attribute-Relation File Format) is the file format used by the WEKA machine-learning workbench. The four levels of analysing the data are collection, creation, extraction and classification. The authors used the support vector machine and Naïve Bayes classifier algorithms to compare the analysis of the data, and the Naïve Bayes algorithm is used together with MapReduce to obtain optimized output. MapReduce is a model built around two functions, map and reduce, with an intermediate shuffle step. The advantages of using MapReduce are automatic parallelization and distribution, fault tolerance, I/O scheduling, and monitoring and status updates. The authors use banking credit card data to analyse and optimize the results, and will use other datasets in future work to obtain better results with good processing time. A sketch of the WEKA classification step is given after Figure 1.

 Figure 1. Architecture Design
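
As a concrete illustration of the classification step in the pipeline of Figure 1, the following is a minimal sketch using the WEKA Java API: it loads an ARFF dataset, trains a Naïve Bayes classifier and evaluates it with 10-fold cross-validation. The file name credit-card.arff is hypothetical; the paper does not give its actual code.

    import java.util.Random;

    import weka.classifiers.Evaluation;
    import weka.classifiers.bayes.NaiveBayes;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class CreditCardNaiveBayes {
        public static void main(String[] args) throws Exception {
            // Load the ARFF dataset (file name is hypothetical).
            Instances data = new DataSource("credit-card.arff").getDataSet();
            // Assume the class label is the last attribute.
            data.setClassIndex(data.numAttributes() - 1);

            // Train the Naive Bayes classifier on the full dataset.
            NaiveBayes nb = new NaiveBayes();
            nb.buildClassifier(data);

            // Estimate accuracy with 10-fold cross-validation.
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(nb, data, 10, new Random(1));
            System.out.println(eval.toSummaryString("Naive Bayes results\n", false));
        }
    }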

IV. SYSTEM ARCHITECTURE

Figure 2 shows the structure of the MapReduce framework, which comprises three stages: the map function, the shuffle step and the reduce function.

Mapper: map (k, v) → <k', v'>*

Reducer: reduce (k', <v'>*) → <k', v'>*

All values with the same intermediate key are sent to the same reducer for further processing.

Figure 2. MapReduce Architecture

V. ALGORITHMS

A. SUPPORT VECTOR MACHINE

A support vector machine (SVM) is a concept in statistics and computer science for a set of related supervised learning methods that analyse data and recognize patterns, used for classification and regression analysis. The standard SVM takes a set of input data and predicts, for each given input, which of two possible classes contains the input, making the SVM a non-probabilistic binary linear classifier. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one class or the other. An SVM model is a representation of the examples as points in space, mapped so that the examples of the different classes are separated by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on.
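
For a linear SVM, "which side of the gap a new example falls on" reduces to the sign of w · x + b, where the weight vector w and bias b are learned from the training examples. The following small Java sketch illustrates only this decision step, with hypothetical hand-picked parameters rather than learned ones; it is not an SVM trainer.

    public class LinearSvmDecision {
        // Hypothetical learned parameters: weight vector w and bias b.
        private final double[] w;
        private final double b;

        public LinearSvmDecision(double[] w, double b) {
            this.w = w;
            this.b = b;
        }

        // Predict +1 or -1 according to which side of the hyperplane w . x + b = 0 the point lies on.
        public int classify(double[] x) {
            double score = b;
            for (int i = 0; i < w.length; i++) {
                score += w[i] * x[i];
            }
            return score >= 0 ? +1 : -1;
        }

        public static void main(String[] args) {
            LinearSvmDecision svm = new LinearSvmDecision(new double[] {0.8, -0.5}, 0.1);
            System.out.println(svm.classify(new double[] {1.0, 0.2}));  // prints 1
            System.out.println(svm.classify(new double[] {-1.0, 1.5})); // prints -1
        }
    }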

B. NAÏVE BAYES CLASSIFIER

A Naive Bayes classifier is a probabilistic classifier based on applying Bayes' theorem with strong independence assumptions. A more descriptive term for the underlying probability model would be "independent feature model". Naive Bayes belongs to a group of statistical techniques called supervised classification, as opposed to unsupervised classification. In supervised classification, the algorithm is informed about two or more classes to which texts have previously been assigned by some human(s) on some basis.
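
The probability model behind this description is Bayes' theorem with the independence assumption: P(c | x) is proportional to P(c) multiplied by the product over attributes of P(x_i | c). The following from-scratch Java sketch implements this scoring rule for categorical attributes with add-one (Laplace) smoothing; it illustrates the general technique and is not the authors' implementation.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class TinyNaiveBayes {
        private final Map<String, Integer> classCounts = new HashMap<>();
        // Counts of (class, attribute index, attribute value) triples, keyed as "class|index|value".
        private final Map<String, Integer> featureCounts = new HashMap<>();
        private int total = 0;

        public void train(List<String[]> rows, List<String> labels) {
            for (int r = 0; r < rows.size(); r++) {
                String[] x = rows.get(r);
                String c = labels.get(r);
                classCounts.merge(c, 1, Integer::sum);
                total++;
                for (int i = 0; i < x.length; i++) {
                    featureCounts.merge(c + "|" + i + "|" + x[i], 1, Integer::sum);
                }
            }
        }

        // Return the class with the highest log P(c) + sum_i log P(x_i | c).
        public String predict(String[] x) {
            String best = null;
            double bestLog = Double.NEGATIVE_INFINITY;
            for (Map.Entry<String, Integer> e : classCounts.entrySet()) {
                String c = e.getKey();
                int nc = e.getValue();
                double logProb = Math.log((double) nc / total); // log prior P(c)
                for (int i = 0; i < x.length; i++) {
                    int count = featureCounts.getOrDefault(c + "|" + i + "|" + x[i], 0);
                    // Laplace smoothing, assuming binary (0/1) attribute values as in the ARFF files.
                    logProb += Math.log((count + 1.0) / (nc + 2.0));
                }
                if (logProb > bestLog) {
                    bestLog = logProb;
                    best = c;
                }
            }
            return best;
        }

        public static void main(String[] args) {
            TinyNaiveBayes nb = new TinyNaiveBayes();
            nb.train(List.of(new String[] {"1", "0"}, new String[] {"0", "1"}),
                     List.of("good", "bad"));
            System.out.println(nb.predict(new String[] {"1", "0"})); // prints good
        }
    }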

VI. WEKA APPROACH

The WEKA workbench is a collection of machine learning algorithms, a data mining tool written in Java and verified under the Linux, Windows and Macintosh operating systems. It offers implementations of different data mining algorithms that can be used directly or called from Java code. WEKA comprises various data pre-processing tools, classification algorithms, clustering, association rules, regression, etc.

VII. PROPOSED SYSTEM

In the proposed work, the authors introduce a MapReduce concept to support iterative computation efficiently on the MapReduce platform. The map tasks emit key/value pairs that are collected into lists, and the reduce task sums the values for each key to produce a single output; classification itself is performed with the Naïve Bayes algorithm. By using the Naïve Bayes classifier, the time consumed by the computations of the system is lower than in previous work, which leads to greater performance and efficiency and reduces the I/O overhead of accessing preserved fine-grained computation states. The different modules are discussed below, after a brief sketch of this counting step.
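
The paper does not give code for this step, but the counts that Naïve Bayes needs (class frequencies and per-class attribute-value frequencies) map naturally onto the key/value model just described. The following is a hypothetical sketch of the map and reduce functions in the Hadoop Java API, assuming a simple CSV record layout with the class label first; the job setup would mirror the earlier web-log example.

    import java.io.IOException;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    public class NaiveBayesCounts {

        // Map: each record "label,attr1,attr2,..." (hypothetical layout) becomes key/value pairs
        // counting the class itself and every (class, attribute index, attribute value) triple.
        public static class CountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                String[] fields = value.toString().split(",");
                String label = fields[0];
                context.write(new Text("CLASS|" + label), ONE);
                for (int i = 1; i < fields.length; i++) {
                    context.write(new Text("FEAT|" + label + "|" + i + "|" + fields[i]), ONE);
                }
            }
        }

        // Reduce: identical keys arrive together; summing them yields the counts from which
        // the prior P(class) and the conditional P(attribute = value | class) are estimated.
        public static class CountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();
                }
                context.write(key, new IntWritable(sum));
            }
        }
    }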

1. Data Collection - Figure 3 shows the collection of data from different locations. In this stage, the data set comprises a large number of records: 1,000 records from geo-distributed data, of which 100 are from a particular area. It holds data about various kinds of people and their location information. In this project the authors used banking data to classify with Naïve Bayes using the MapReduce concept.

Figure 3. Data collection

2. Dataset Creation - ARFF datasets based on frequency and common features were produced. All input attributes in the data set are represented by either 1 or 0. Figure 4 shows the collection of data from different locations; from this collected data a dataset of similar records is created for further processing. The attribute-relation file (ARFF) is the record format the authors prepare in this project. The data collections are refreshed day by day, so the file size grows day by day with the refreshed data. An example of such a file is shown after Figure 4.

Figure 4. Creation of Data
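
An ARFF file of the kind described above, with binary (0/1) input attributes and a nominal class label, might look like the following; the relation name, attribute names and class labels are purely illustrative.

    @relation credit-card-features

    @attribute feature_1 {0,1}
    @attribute feature_2 {0,1}
    @attribute feature_3 {0,1}
    @attribute class {approved,rejected}

    @data
    1,0,1,approved
    0,1,0,rejected
    1,1,1,approved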

3. Feature Extraction - Figure 5 shows how the output from parsing is further subjected to feature extraction. The authors extract the features using the following approaches: Common Feature-Based Extraction (CFBE) and Frequency-Based Feature Extraction, which consider the occurrence of a feature and the frequency of a feature, respectively. Both techniques are used to obtain Reduced Feature Sets (RFSs), which are then used to generate the ARFF files. A generic sketch of the frequency-based idea follows Figure 5.

Figure 5. Feature Extraction
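
The paper does not detail CFBE or the frequency-based extraction, so the following is only a generic Java sketch of the frequency-based idea: keep the features that occur at least a minimum number of times across the records (a reduced feature set), then encode each record as a 1/0 vector over the retained features. The class name and threshold are hypothetical.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.LinkedHashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    public class FrequencyFeatureExtractor {

        // Keep only the features that occur at least minFrequency times across all records.
        public static Set<String> reducedFeatureSet(List<List<String>> records, int minFrequency) {
            Map<String, Integer> freq = new HashMap<>();
            for (List<String> record : records) {
                for (String token : record) {
                    freq.merge(token, 1, Integer::sum);
                }
            }
            Set<String> kept = new LinkedHashSet<>();
            for (Map.Entry<String, Integer> e : freq.entrySet()) {
                if (e.getValue() >= minFrequency) {
                    kept.add(e.getKey());
                }
            }
            return kept;
        }

        // Encode each record as a 1/0 vector over the retained features,
        // matching the binary attributes written to the ARFF files.
        public static List<int[]> encode(List<List<String>> records, Set<String> features) {
            List<int[]> vectors = new ArrayList<>();
            for (List<String> record : records) {
                int[] row = new int[features.size()];
                int i = 0;
                for (String feature : features) {
                    row[i++] = record.contains(feature) ? 1 : 0;
                }
                vectors.add(row);
            }
            return vectors;
        }
    }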

4. Classification - The authors use the Naïve Bayes classifier. Naïve Bayes (NB) is a supervised learning method in statistics and computer science that examines data and recognizes patterns, and it is used here for classification. Given a set of training examples, each marked as belonging to one of two classes, an NB training algorithm builds a probabilistic model that assigns new examples to one class or the other. Figure 6 shows the NB classifier being used to classify the credit card data. Naïve Bayes classifiers can be trained efficiently in a supervised learning setting.

Figure 6. Classification

VIII. EVALUATION AND RESULTS

In this analysis, the authors evaluate processing time and accuracy. The initial experiments completed successfully and led to storing the learning history of every classifier used in this work. The results show a difference in processing time and accuracy between the support vector machine and Naïve Bayes algorithms: the processing time and accuracy obtained with the Naïve Bayes algorithm are better than those obtained with the support vector machine. Table 1 shows the difference in processing time between the support vector machine and Naïve Bayes algorithms.

Table 1. RESULTS OF SVM AND NAÏVE BAYES (CREDIT CARD DATA)

Technique         SVM     Naïve Bayes
Processing time   1.99    0.34
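
The paper does not describe how the timings in Table 1 were measured. One plausible way to produce such a comparison with WEKA is sketched below: it trains WEKA's SMO (its SVM implementation) and NaiveBayes on the same ARFF dataset and times each training step. The file name is hypothetical, and the resulting numbers depend on the machine and dataset, so they need not match Table 1.

    import weka.classifiers.Classifier;
    import weka.classifiers.bayes.NaiveBayes;
    import weka.classifiers.functions.SMO;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class ProcessingTimeComparison {
        public static void main(String[] args) throws Exception {
            Instances data = new DataSource("credit-card.arff").getDataSet(); // hypothetical file
            data.setClassIndex(data.numAttributes() - 1);

            // Build each classifier on the same data and report the elapsed training time.
            for (Classifier c : new Classifier[] {new SMO(), new NaiveBayes()}) {
                long start = System.nanoTime();
                c.buildClassifier(data);
                double seconds = (System.nanoTime() - start) / 1e9;
                System.out.printf("%s training time: %.2f s%n",
                        c.getClass().getSimpleName(), seconds);
            }
        }
    }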

IX. CONCLUSIONS

In summary, achieving both speed and quality of information analysis is an important issue in today's world. In this paper, the authors presented a common model built with Eclipse using WEKA on a big data platform. This model can be applied in various fields such as medicine, agriculture, the Internet, e-marketplaces and so on. The authors conclude that MapReduce with Naïve Bayes succeeds in delivering quality of data with minimum response time and greater efficiency.

References

[1] M. D. Kamalesh and B. Bharathi, "Slicing an efficient transaction data publication and for data publishing", Indian Journal of Science and Technology, Vol. 8, Iss. 8, pp. 306-309, 2014.

[2] A. Mary Posonia and V. L. Jyothi, "Efficient XML keyword search using H-reduction factor and interactive algorithm", International Review on Computers and Software (IRECOS), Vol. 9, No. 12, pp. 2022-2030, ISSN 1828-6003.

[3] Engy F. Ramadan, Mohamed Shalaby and Essam ElFakhrany, "Cooperation among Independent Multi-Agents in a Reliable Data Mining System", ISBN: 978-1-4673-7504-7, IEEE, 2016.

[4] Siddhant Patil, Sayali Karnik and Vinaya Sawant, "A Review on Multi-Agent Data Mining Systems", International Journal of Computer Science and Information Technologies, Vol. 6 (6), pp. 4888-4893, 2015.

[5] Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael J. Franklin, Scott Shenker and Ion Stoica, "Resilient distributed datasets: A fault-tolerant abstraction for in-memory cluster computing", Proceedings of the 9th USENIX Conference on Networked Systems Design and Implementation (NSDI '12), San Jose, CA, April 25-27, 2012.

[6] Reecha B. Prajapati and Sumitra Menaria, "Multi agent-based distributed data mining", International Journal of Advanced Research in Computer Engineering & Technology (IJARCET), Volume 1, Issue 10, December 2012.

[7] Russell Power and Jinyang Li, "Piccolo: Building Fast, Distributed Programs with Partitioned Tables", New York University, http://news.cs.nyu.edu/piccolo, OSDI, 2010.

[8] Grzegorz Malewicz, Matthew H. Austern, Aart J. C. Bik, James C. Dehnert, Ilan Horn and Naty Leiser, Google, Inc., "Pregel: A System for Large-Scale Graph Processing", Proceedings of the 2010 ACM SIGMOD International Conference on Management of Data, Indianapolis, Indiana, USA, June 6-10, 2010.

[9] Jeffrey Dean and Sanjay Ghemawat, "MapReduce: Simplified Data Processing on Large Clusters", Communications of the ACM, Vol. 51, Issue 1, January 2008.
