Intrusion detection systems (IDS)


CHAPTER I

INTRODUCTION

Computer networks play a principal role in many areas. The increasing size and complexity of networks result in growing complexity of their security analysis. The possible financial, political, and other advantages that can be gained through cyber attacks lead to a tremendous increase in the number of potential malefactors. Despite these facts, security analysis is still a process that depends mostly on the expertise of security administrators. All these problems underline the importance of research and development in automated security analysis of computer networks. This work proposes a framework for a cyber attack modeling and impact component that implements attack classification. In contrast to existing work, it describes attack modeling and impact evaluation techniques aimed at optimizing the attack classification and analysis process, with the goal of enabling their use in systems operating in near real time. The main contributions of this work are: classification of the attack categories Probe, DoS, U2R, and R2L based on a back-propagation algorithm; the fundamental principles of real-time event analysis; a procedure to identify suspicious traffic by inspecting the compliance between security events and attacks; and the application of the real-time approach to attack classification.
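
As a concrete illustration of this classification step, the following sketch trains a small feed-forward network by back propagation to label connection records as Probe, DoS, U2R, or R2L. It is a minimal sketch only: the random stand-in data, the ten-feature input, and the network size are illustrative placeholders, not the configuration used in this work.

    # Minimal sketch: back-propagation classifier for the four attack
    # categories. Random stand-in data; real use would load labelled
    # flow records (e.g. from the KDD Cup 99 dataset).
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    CLASSES = ["Probe", "DoS", "U2R", "R2L"]

    X = rng.normal(size=(400, 10))    # 400 flows, 10 numeric features
    y = rng.integers(0, 4, size=400)  # stand-in labels, one per class

    scaler = StandardScaler().fit(X)
    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    net.fit(scaler.transform(X), y)   # weights are learned by back propagation

    new_flow = rng.normal(size=(1, 10))
    print(CLASSES[net.predict(scaler.transform(new_flow))[0]])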

An intrusion detection system (IDS) monitors network traffic for suspicious activity and alerts the system or network administrator. In some cases the IDS may also respond to anomalous or malicious traffic by taking action such as blocking the user or source IP address from accessing the network.

IDSs come in a variety of “flavors” and approach the goal of detecting suspicious traffic in different ways. There are network-based (NIDS) and host-based (HIDS) intrusion detection systems. Some IDSs detect threats by looking for specific signatures of known threats, much like the way antivirus software typically detects and protects against malware; others detect threats by comparing traffic patterns against a baseline and looking for anomalies. An IDS may simply monitor and alert, or it may perform an action or actions in response to a detected threat. Network intrusion detection systems are placed at a strategic point or points within the network to monitor traffic to and from all devices on the network. Ideally one would scan all inbound and outbound traffic; however, doing so could create a bottleneck that would impair the overall speed of the network. Host intrusion detection systems run on individual hosts or devices on the network. A HIDS monitors the inbound and outbound packets from that device only and alerts the user or administrator when suspicious activity is detected.

A signature-based IDS monitors packets on the network and compares them against a database of signatures or attributes of known malicious threats. This is much like the way most antivirus software detects malware. The limitation is that there may be a lag between a new threat being discovered in the wild and the signature for detecting that threat being applied to your IDS. During that lag time your IDS will be unable to detect the new threat. An anomaly-based IDS monitors network traffic and compares it against an established baseline.
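
A minimal sketch of signature matching, assuming signatures are plain byte patterns scanned against each packet payload; the patterns and names below are illustrative, not real threat signatures:

    # Minimal sketch of signature-based detection: each entry is a byte
    # pattern taken from a known threat, and every packet payload is
    # checked against the whole database.
    SIGNATURES = {
        b"/etc/passwd":      "path traversal attempt",
        b"' OR '1'='1":      "SQL injection probe",
        b"\x90\x90\x90\x90": "possible NOP sled",
    }

    def match_signatures(payload: bytes):
        """Return the names of all signatures found in a packet payload."""
        return [name for pattern, name in SIGNATURES.items() if pattern in payload]

    print(match_signatures(b"GET /../../etc/passwd HTTP/1.1"))
    # -> ['path traversal attempt']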

The baseline identifies what is “normal” for that network: what kind of bandwidth is typically used, what protocols are used, and what ports and devices typically connect to one another. The IDS then alerts the administrator or user when traffic is detected that is anomalous, or significantly different, from that baseline.
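
A minimal sketch of this baseline idea, assuming a single traffic metric (bytes per minute) and a simple mean-plus-three-standard-deviations threshold; the numbers and the threshold are illustrative choices, not how any particular IDS computes its baseline:

    # Minimal sketch of baseline anomaly detection: learn the mean and
    # standard deviation of a traffic metric from a training window,
    # then flag observations several deviations away.
    import statistics

    baseline_bpm = [48_000, 52_000, 50_500, 49_200, 51_100, 47_800]  # normal minutes
    mean = statistics.mean(baseline_bpm)
    std = statistics.stdev(baseline_bpm)

    def is_anomalous(bytes_per_minute: float, k: float = 3.0) -> bool:
        """Flag traffic volumes more than k standard deviations from baseline."""
        return abs(bytes_per_minute - mean) > k * std

    print(is_anomalous(51_000))   # False: within the normal band
    print(is_anomalous(260_000))  # True: far above anything seen before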

Information systems and networks are subject to digital attacks. Attempts to breach information security are rising daily, aided by vulnerability assessment tools that are widely available on the Internet, free of charge as well as for commercial use. The real-life example above is an exact analogy of what might happen to a network. What is worth noting is that the thief may be on your network for a long time without you even knowing it. Firewalls do a good job of guarding your front doors, but they cannot warn you if there is a backdoor or a hole in the infrastructure.

Script kiddies are constantly scanning the Internet for known bugs in systems, including steady scans of subnets. More skilled crackers may also be hired by your competitors to target your network specifically, in order to gain a competitive advantage.

CHAPTER II

LITERATURE SURVEY

2.1. Cyber-Attack Detection: Modelling the Effects of Similarity and Scenario

Authors: Jajodia, S., Liu, P., Swarup, V., & Wang, C.

This work investigates the role of similarity (an analyst’s way of comparing network events with experiences in memory) and the role of attack strategy (the timing of cyber attacks by an attacker) in influencing timely and accurate cyber attack detection. The authors manipulate the attack strategy and similarity assumptions in the model and evaluate the effects of this manipulation on the model’s accurate and timely detection of cyber attacks. An instance-based learning (IBL) model was defined with different similarity mechanisms for comparing experiences in memory with network events: geometric (the model uses geometric distance to evaluate similarity) and feature-based (the model uses common and uncommon features to evaluate similarity). Attack strategy was also manipulated as patient (all threats occur at the end of a scenario) or impatient (all threats occur at the beginning of a scenario). Results reveal that although attack strategy plays a significant role in cyber attack detection, the role of similarity is non-existent.
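
To make the two mechanisms concrete, the sketch below contrasts a geometric (distance-based) similarity with a feature-overlap similarity on hypothetical event representations; both formulas are simplified stand-ins for the model’s actual similarity functions.

    # Minimal sketch of the two similarity mechanisms on hypothetical
    # network-event representations: numeric feature vectors for the
    # geometric case, discrete attribute sets for the feature-based case.
    import math

    def geometric_similarity(a, b):
        """Similarity decays with Euclidean distance between numeric features."""
        return 1.0 / (1.0 + math.dist(a, b))

    def feature_similarity(a: set, b: set):
        """Ratio of common features to all features (Jaccard, as a simple proxy)."""
        return len(a & b) / len(a | b)

    event, memory = (0.8, 0.1, 0.3), (0.7, 0.2, 0.3)
    print(geometric_similarity(event, memory))
    print(feature_similarity({"port_scan", "night"}, {"port_scan", "day"}))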

METHOD

Instance-based learning theory, security analyst and cognitive modeling.

PROBLEM

• The similarity mechanisms do not seem to influence the accuracy and timeliness in the model.

• This model did not focus on features of attributes.

2.2. How to Use Experience in Cyber Analysis: An Analytical Reasoning Support System

Authors: Chen, P. C., Liu, P., Yen, J., & Mullen, T.

Cyber analysis is a difficult task for analysts due to huge amounts of noise, abundant monitoring data, and the increasing complexity of reasoning tasks. Experience from experts can therefore guide analysts’ analytical reasoning and contribute to training. Despite its great potential benefits, experience has not been effectively leveraged in existing reasoning support systems because it is difficult to elicit and reuse. To fill this gap, the authors propose an experience-aided reasoning support system which can automatically capture experts’ experience and subsequently guide novices’ reasoning in a step-by-step manner. Drawing on cognitive theory, the model represents experience as a reasoning process involving “actions”, “observations”, and “hypotheses”. Computability and adaptability are the comparative advantages of this model: the “hypotheses” capture analysts’ internal mental reasoning as a black box, while the “actions” and “observations” formally represent the external context and analysts’ evidence-exploration activities. The work demonstrates how a system built on this experience model can capture and utilize experience effectively.
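
A minimal sketch of this experience model as a data structure; the class and field names are illustrative and not taken from the authors’ system:

    # Minimal sketch: an experience trace is an ordered mix of actions,
    # observations, and hypotheses, as described above.
    from dataclasses import dataclass, field

    @dataclass
    class Action:
        description: str          # e.g. an evidence-exploration step

    @dataclass
    class Observation:
        evidence: str             # what the action revealed

    @dataclass
    class Hypothesis:
        claim: str                # analyst's internal judgement (black box)

    @dataclass
    class ExperienceTrace:
        steps: list = field(default_factory=list)

    trace = ExperienceTrace()
    trace.steps.append(Action("filter IDS alerts from subnet 10.0.2.0/24"))
    trace.steps.append(Observation("repeated SSH failures from one source"))
    trace.steps.append(Hypothesis("brute-force attempt in progress"))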

The authors propose an experience-aided reasoning support system for cyber analysis. The main motivations for such a system are:

(1) To capture and represent experience from experts.

(2) To provide novice analysts with step-by-step guidance using the captured experience.

(3) To enable analysts to communicate effectively with others and benefit from other analysts’ experience.

The contribution of this work is mainly two-fold:

• Models experience as a reasoning process involving actions, observations, and hypotheses. The model makes experience capture and reuse computational and well adapted to analysts’ reasoning, which is highly uncertain due to the dynamic cyber environment.

• An experience-aided analytical reasoning support system is developed based on this model to capture experience and provide sequential guidance to analysts.

2.3. Game strategies in network security

Authors: Kong-wei Lye and Jeannette M. Wing

PROBLEM STATEMENT

How the network security problem can be modeled as a general-sum stochastic game between the attacker and the administrator.

WORK DONE

The authors view the interactions between an attacker and the administrator as a two-player stochastic game and construct a model for the game. Using a nonlinear program (NLP-1), they compute Nash equilibria, or best-response strategies, for the players (attacker and administrator); multiple Nash equilibria are computed, each denoting best strategies (best responses) for both players. In their scenario, an attacker on the Internet attempts to deface the home page on the network’s public web server, launch an internal denial-of-service (DoS) attack, and capture important data from a workstation on the network. With proper modeling, the game-theoretic analysis presented here can also be applied to other general heterogeneous networks.
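
The paper solves a general-sum stochastic game with a nonlinear program (NLP-1); the sketch below is a much-simplified stand-in that computes the defender’s optimal mixed strategy for a one-shot zero-sum game by linear programming, with an illustrative payoff matrix.

    # Much-simplified sketch: optimal mixed strategy for a one-shot
    # zero-sum defender/attacker game via linear programming.
    import numpy as np
    from scipy.optimize import linprog

    # Rows: defender actions, columns: attacker actions.
    # Entries: defender's payoff (attacker receives the negative).
    A = np.array([[ 3.0, -1.0],
                  [-2.0,  2.0]])
    m, n = A.shape

    # Variables: defender mixed strategy x (m entries) and game value v.
    # Maximize v  <=>  minimize -v, subject to (A^T x)_j >= v, sum(x) = 1.
    c = np.r_[np.zeros(m), -1.0]
    A_ub = np.hstack([-A.T, np.ones((n, 1))])    # v - (A^T x)_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)
    b_eq = [1.0]
    bounds = [(0, 1)] * m + [(None, None)]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print("defender strategy:", res.x[:m], "game value:", res.x[m])

Equalizing the defender’s expected payoff across the attacker’s pure strategies is exactly what the LP encodes; the full stochastic game additionally couples such payoff matrices across network states, which is why the paper needs a nonlinear program.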

METHOD

Stochastic games and nonlinear programming.

PROBLEM

• This model does not reduce computation time.

• It cannot derive a response for each player from the strategies of the individual components.

2.4. Intrusion and intrusion detection

Authors: John McHugh

WORK DONE

Describes the two primary intrusion detection techniques, anomaly detection and signature-based misuse detection, in some detail, and surveys a number of contemporary research and commercial intrusion detection systems. It ends with a brief discussion of the problems associated with evaluating intrusion detection systems and of the difficulties associated with making further progress in the field. With respect to the latter, it notes that, like many fields, intrusion detection has been based on a combination of intuition and brute-force techniques. The author suspects that these have carried the field as far as they can, and that further significant progress will depend on the development of an underlying theoretical basis for the field.

METHOD

Intrusion detection, Intrusive anomalies and Intrusion detection systems (IDS).

PROBLEM

The paper looks at the problem of malicious users from both a historical and a practical standpoint, tracing the history of intrusion and intrusion detection from the early 1970s to the present day.

2.5. A Taxonomy of cyber awareness questions for the user-centered design of cyber situation awareness

Authors: Paul, C. L., & Whiteley, K.

Throughout the developed world, governments, defense organizations, and companies in finance, power, and telecommunications are increasingly targeted by overlapping surges of cyber attacks from criminals and nation-states seeking economic or military advantage. The number of attacks is now so large and their sophistication so great, that many organizations are having trouble determining which new threats and vulnerabilities pose the greatest risk and how resources should be allocated to ensure that the most probable and damaging attacks are dealt with first. This report summarizes vulnerability and attack trends, focusing on those threats that have the greatest potential to negatively impact your network and your business. It identifies key elements that enable these threats and associates these key elements with security controls that can mitigate your risk.

DISADVANTAGES

• Computing discrete logarithms is believed to be difficult.

• No efficient general method for computing discrete logarithms on conventional computers is known, and several important public-key cryptography algorithms base their security on the assumption that the discrete logarithm problem has no efficient solution.

2.6. Symantec Security Response online report, Symantec, Tech. Rep., February 2011.

Authors: N. Falliere, L. O. Murchu, and E. Chien

Desktop computers are often compromised by the interaction of untrusted data and buggy software. To address this problem, the authors present Apiary, a system that transparently contains application faults while retaining the usage metaphors of a traditional desktop environment. Apiary accomplishes this with three key mechanisms. It isolates applications in containers that integrate in a controlled manner at the display and file system. It introduces ephemeral containers that are quickly instantiated for single-application execution, to prevent any exploit that occurs from persisting and to protect user privacy. It introduces the Virtual Layered File System to make instantiating containers fast and space efficient, and to make managing many containers no more complex than managing a single traditional desktop.

DISADVANTAGES

Traffic classification techniques may rely on the port numbers specified by different applications or on signature strings in the payload of IP packets; such techniques are challenged by dynamic port numbers and user privacy protection.

ADVANTAGES

Modern techniques normally utilize host/network behavior analysis or flow-level statistical features, taking emerging and encrypted applications into account.

2.7. Lessons Learned from the Maroochy Water Breach, ser. IFIP International Federation for Information Processing. Springer US, 2007, vol. 253, pp. 73-82.

Authors: J. Slay and M. Miller

Web applications are the most common way to make services and data available on the Internet. Unfortunately, with the increase in the number and complexity of these applications, there has also been an increase in the number and complexity of vulnerabilities. Current techniques to identify security problems in web applications have mostly focused on input validation flaws, such as cross-site scripting and SQL injection, with much less attention devoted to application logic vulnerabilities. Application logic vulnerabilities are an important class of defects that are the result of faulty application logic. These vulnerabilities are specific to the functionality of particular web applications, and, thus, they are extremely difficult to characterize and identify.

DISADVANTAGES

In state-of-the-art traffic classification methods, Internet traffic is characterized by a set of flow statistical properties, and machine learning techniques are applied to automatically search for structural patterns.

ADVANTAGES

The main reason for the underperformance of a number of traditional classifiers, including naive Bayes (NB), is the lack of a feature discretization process.

2.8. “Research challenges for the security of control systems,” in Proceedings of the 3rd Conference on Hot Topics in Security, ser. HOTSEC’08. Berkeley, CA, USA: USENIX Association, 2008, pp. 1-6.

Authors: A. A. Cardenas, S. Amin, and S. Sastry

Cross-site scripting flaws have now surpassed buffer overflows as the world’s most publicly reported security vulnerability. In recent years, browser vendors and researchers have tried to develop client-side filters to mitigate these attacks. The authors analyze the best existing filters and find them to be either unacceptably slow or easily circumvented. Worse, some of these filters could introduce vulnerabilities into sites that were previously bug-free. They propose a new filter design that achieves both high performance and high precision by blocking scripts after HTML parsing but before execution. Compared to previous approaches, this approach is faster, protects against more vulnerabilities, and is harder for attackers to abuse.

DISADVANTAGES

The performance evaluation shows that traffic classification using very few training samples can be significantly improved by this approach.

ADVANTAGES

A novel nonparametric approach, TCC, was proposed to investigate correlation information in real traffic data and incorporate it into traffic classification.

2.9. “Attack models and scenarios for networked control systems,” in Proceedings of the 1st International Conference on High Confidence Networked Systems, ser. HiCoNS ’12. New York, NY, USA: ACM, 2012, pp. 55-64.

Authors: A. Teixeira, D. Perez, H. Sandberg, and K. H. Johansson

Learning-based anomaly detection has proven to be an effective black-box technique for detecting unknown attacks. However, the effectiveness of this technique crucially depends upon both the quality and the completeness of the training data. Unfortunately, in most cases, the traffic to the system (e.g., a web application or daemon process) protected by an anomaly detector is not uniformly distributed. Therefore, some components (e.g., authentication, payments, or content publishing) might not be exercised enough to train an anomaly detection system in a reasonable time frame. This is of particular importance in real-world settings, where anomaly detection systems are deployed with little or no manual configuration and are expected to automatically learn the normal behavior of a system in order to detect or block attacks. This work first demonstrates that the features utilized to train a learning-based detector can be semantically grouped, and that features of the same group tend to induce similar models. The experiments are run on a real-world data set containing over 58 million HTTP requests to more than 36,000 distinct web application components. The results show that by using the proposed solution, it is possible to achieve effective attack detection even with scarce training data.

DISADVANTAGES

• It assumes independent features.

• The NB classifier requires only a small amount of training data to estimate the parameters of a classification model.

ADVANTAGES

• NB with feature discretization demonstrates not only significantly higher accuracy but also much faster classification speed.

• NB effectively improves the accuracy of the support vector machine (SVM) at the price of lower classification speed.

2.10. “Distributed fault detection for interconnected second-order systems,” Automatica, vol. 47, no. 12, 2011.

Authors: I. Shames, A. M. Teixeira, H. Sandberg, and K. H. Johansson

Learning-based anomaly detection has proven to be an effective black-box technique for detecting unknown attacks. However, the effectiveness of this technique crucially depends upon both the quality and the completeness of the training data. Unfortunately, in most cases, the traffic to the system protected by an anomaly detector is not uniformly distributed. Therefore, some components (e.g., authentication, payments, or content publishing) might not be exercised enough to train an anomaly detection system in a reasonable time frame. This is of particular importance in real-world settings, where anomaly detection systems are deployed with little or no manual configuration and are expected to automatically learn the normal behavior of a system in order to detect or block attacks. This work first demonstrates that the features utilized to train a learning-based detector can be semantically grouped, and that features of the same group tend to induce similar models.

DISADVANTAGES

• In supervised traffic classification, sufficient supervised training data is a general assumption.

• It aims to address the problems suffered by payload-based traffic classification.

ADVANTAGES

• A novel non-parametric approach incorporates the correlation of traffic flows to improve classification performance.

CHAPTER III

EXISTING SYSTEM

Traffic classification techniques may rely on the port numbers specified by different applications or on signature strings in the payload of IP packets; such techniques are challenged by dynamic port numbers and user privacy protection. Modern techniques normally utilize host/network behavior analysis or flow-level statistical features, taking emerging and encrypted applications into account. In state-of-the-art traffic classification methods, Internet traffic is characterized by a set of flow statistical properties, and machine learning techniques are applied to automatically search for structural patterns.

The main reason for the underperformance of a number of traditional classifiers, including NB, is the lack of a feature discretization process. A big challenge for current network management is handling a large number of emerging applications, for which it is almost impossible to collect sufficient training samples in a limited time. Only very few samples can be manually labeled as supervised training data, since traffic labeling is time-consuming. Supervised traffic classification methods analyse the supervised training data and produce an inferred function which can predict the output class for any testing flow.
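
A minimal sketch of this discretization step, assuming scikit-learn; the stand-in flow statistics, labels, and bin count are illustrative:

    # Minimal sketch: continuous flow statistics are binned before being
    # fed to a naive Bayes classifier, i.e. the feature discretization
    # step whose absence is blamed for NB underperformance above.
    import numpy as np
    from sklearn.naive_bayes import CategoricalNB
    from sklearn.preprocessing import KBinsDiscretizer

    rng = np.random.default_rng(1)
    X = rng.exponential(scale=1.0, size=(300, 5))  # stand-in flow statistics
    y = rng.integers(0, 2, size=300)               # stand-in application labels

    disc = KBinsDiscretizer(n_bins=8, encode="ordinal", strategy="quantile")
    X_binned = disc.fit_transform(X).astype(int)   # continuous -> discrete bins

    clf = CategoricalNB().fit(X_binned, y)
    sample = rng.exponential(size=(1, 5))
    print(clf.predict(disc.transform(sample).astype(int)))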

3.1 DISADVANTAGES

• Many existing Intrusion Detection Systems (IDSs) examine network packets individually within both the web server and the database system.

• However, there is very little work being performed on multitier Anomaly Detection (AD) systems that generate models of network behavior for both web and database network interactions.

• In such multitier architectures, the back-end database server is often protected behind a firewall while the web servers are remotely accessible over the Internet.

• Web delivered services and applications have increased in both popularity and complexity over the past few years.

• Daily tasks, such as banking, travel, and social networking, are all done via the web. Such services typically employ a web server front end that runs the application user interface logic, as well as a back-end server that consists of a database or file server.
