Information Technology Security Management.
TASK 1
Introduction
IT security management centres on guaranteeing the availability, integrity and confidentiality of an organisation’s data, information and IT services. Research on risks, threats and exposures continues to demonstrate the need for an assertive approach to information risk management.
For instance, from 1989 to early 2003 the number of reported security incidents increased from 130 to over 42,000; from 2000 to early 2003 more than 900 security vulnerabilities were reported, over twice the total reported in the five previous years; and since 1995 the annual risk from Internet hacking has grown by about 60% per year.
This demonstrates that risks are increasing and that organizations need to be well armed for the challenges they will face; they therefore require expertise in the IT security field so that they can overcome the risks and threats to their organizations.
The business process owner is the manager responsible for a business process such as supply-chain management or payroll. This manager would be the focal point for one or more IT applications and data supporting the processes. The process owner understands the business needs and the value of information assets to support them.
The information custodian is an organization, usually the internal IT function or an outsourced provider, responsible for operating and managing the IT systems and processes for a business owner on an ongoing basis. The business process owner is responsible for specifying the requirements for that operation, usually in the form of a service level agreement (SLA). While information security policy vests ultimate responsibility in business owners for risk management and compliance, the day-to-day operation of the compliance and risk mitigation measures is the responsibility of information custodians and end users. End users interact with IT systems while executing business functional responsibilities. End users may be internal to the organization, or business partners, or end customers of an online business. End users are responsible for complying with information security policy, whether general, issue-specific, or specific to the applications they use. Educating end users on application usage, security policies, and best practices is essential to achieving compliance and quality.
Managing computer and network security programs has become an increasingly difficult and challenging job. Dramatic advances in computing and communications technology during the past five years have redirected the focus of data processing from the computing center to the terminals in individual offices and homes. The result is that managers must now monitor security on a more widely dispersed level. These changes are continuing to accelerate, making the security manager’s job increasingly difficult. The information security manager must establish and maintain a security program that ensures three requirements: the confidentiality, integrity, and availability of the company’s information resources. Some security experts argue that two other requirements may be added to these three: utility and authenticity (i.e., accuracy). In this discussion, however, the usefulness and authenticity of information are addressed within the context of the three basic requirements of security management.
CONFIDENTIALITY
Confidentiality is the protection of information in the system so that unauthorized persons cannot access it. Many believe this type of protection is of most importance to military and government organizations that need to keep plans and capabilities secret from potential enemies. However, it can also be significant to businesses that need to protect proprietary trade secrets from competitors or prevent unauthorized persons from accessing the company’s sensitive information (e.g., legal, personnel, or medical information). Privacy issues, which have received an increasing amount of attention in the past few years, place the importance of confidentiality on protecting personal information maintained in automated systems by both government agencies and private-sector organizations.
Confidentiality must be well defined, and procedures for maintaining confidentiality must be carefully implemented, especially for standalone computers. A crucial aspect of confidentiality is user identification and authentication. Positive identification of each system user is essential to ensuring the effectiveness of policies that specify who is allowed access to which data items.
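In practice, positive identification and authentication depend on never storing or comparing passwords in the clear. A minimal sketch using only Python’s standard library (the iteration count and names are illustrative assumptions, not any particular product’s scheme):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted hash so the plaintext password is never stored."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored):
    """Constant-time comparison avoids leaking information via timing."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("s3cret")
print(verify_password("s3cret", salt, stored))  # True
print(verify_password("wrong", salt, stored))   # False
```

Only the salt and derived hash are kept on file, so even an attacker who copies the credential store cannot read users’ passwords directly.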
INTEGRITY
Integrity is the protection of system data from intentional or accidental unauthorized changes. The challenge of the security program is to ensure that data is maintained in the state that users expect. Although the security program cannot improve the accuracy of data that is put into the system by users, it can help ensure that any changes are intended and correctly applied. An additional element of integrity is the need to protect the process or program used to manipulate the data from unauthorized modification. A critical requirement of both commercial and government data processing is to ensure the integrity of data to prevent fraud and errors. It is imperative, therefore, that no user be able to modify data in a way that might corrupt or lose assets or financial records or render decision-making information unreliable. Examples of government systems in which integrity is crucial include air traffic control systems, military fire control systems (which control the firing of automated weapons), and Social Security and welfare systems. Examples of commercial systems that require a high level of integrity include medical prescription systems, credit reporting systems, production control systems, and payroll systems. As with the confidentiality policy, identification and authentication of users are key elements of the information integrity policy. Integrity depends on access controls; therefore, it is necessary to positively and uniquely identify all persons who attempt access.
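One common way to detect unauthorized changes to stored data is an integrity tag computed with a keyed hash: any modification to the record invalidates the tag. A minimal sketch, assuming a shared secret that would in reality be managed by a proper key store:

```python
import hashlib
import hmac

SECRET_KEY = b"example-shared-key"  # illustrative assumption; use a real key store

def seal(record):
    """Compute an integrity tag over the record."""
    return hmac.new(SECRET_KEY, record, hashlib.sha256).digest()

def verify(record, tag):
    """True only if the record is byte-for-byte unchanged."""
    return hmac.compare_digest(seal(record), tag)

payroll = b"employee=42;net_pay=1500.00"
tag = seal(payroll)
print(verify(payroll, tag))                          # True: untouched
print(verify(b"employee=42;net_pay=9500.00", tag))   # False: tampered
```

A payroll system, for example, could reject any record whose tag no longer verifies, flagging it for investigation rather than processing a possibly fraudulent change.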
AVAILABILITY
Availability is the assurance that a computer system is accessible by authorized users whenever needed. Information that is not available when and as required is not information at all but irrelevant data. Availability is one area where developments in technology have increased the difficulties for the Information Assurance professional very significantly. In the past, in an ideal world, all important information could be locked up in a very secure safe of some form and never allowed to be accessed – just about perfect assurance but naturally totally impractical. There will, therefore, always have to be a compromise between security in its purest sense and the availability of the information
The area under study is the management of Oshwal institutions in Kenya, which decided to create a SACCO with a major focus on mobilization of funds and provision of affordable credit to its members (employees who are both the owners and the users). The primary purpose of the SACCO is to encourage savings among members, from which they can borrow at affordable terms decided by themselves collectively or through the elected directors. Other financial services the SACCO will offer include ATM services, mobile money transfer and custody of valuable documents. The SACCO generates income by providing these services, which it uses to meet the related costs. Any income that remains after these costs is paid out to members as dividends and interest based on their shares or deposits. The management suggested specialized software designed to monitor the dynamics of a SACCO, as compared to a regular Excel spreadsheet. The software consists of modular applications that integrate to provide a fully-fledged enterprise-level software suite to manage the needs of savings management and loan management. An integrated General Ledger and Accounts Management Module ensures that accounts reporting can be handled without having to use a separate system. It handles member information and tracks savings and loan activity with ease, rendered over a secure intranet accessible via any standard web browser.
a)
i. Identify types of security risks to organizations, you should include examples from a range of different categories of threat
Information security is the practice of preventing unauthorized access, use, disclosure, disruption, modification, inspection, recording or destruction of information. This practice is also referred to as InfoSec.
There are various forms of IT security threat, the most commonly encountered being software attacks, theft of equipment, identity theft and information extortion.
Confidentiality can be compromised in several ways. The following are some of the most commonly encountered threats to information confidentiality:
Hackers.
Masquerades.
Unauthorized user activity.
Unprotected downloaded files.
Local area networks (LANs).
Trojan horses.
Denial of service.
Loss of data processing capabilities as a result of natural disasters (e.g., fires, floods, storms, or earthquakes) or human actions (e.g., bombs or strikes). (availability)
Replay attack
Data interception
Manipulation
Identity interception
Repudiation
Macro viruses
Malicious mobile code
Hackers.
A hacker is someone who bypasses the system’s access controls by taking advantage of security weaknesses that the systems developers have left in the system. In addition, many hackers are adept at discovering the passwords of authorized users who fail to choose passwords that are difficult to guess or not included in the dictionary. The activities of hackers represent serious threats to the confidentiality of information in computer systems. Many hackers have created copies of inadequately protected files and placed them in areas of the system where they can be accessed by unauthorized persons.
Masquerades.
A masquerader is an authorized user of the system who has obtained the password of another user and thus gains access to files available to that user. Masqueraders are often able to read and copy confidential files. Masquerading is a common occurrence in companies that allow users to share passwords.
Unauthorized User Activity.
This type of activity occurs when authorized system users gain access to files that they are not authorized to access. Weak access controls often enable unauthorized access, which can compromise confidential files.
Unprotected Downloaded Files.
Downloading can compromise confidential information if, in the process, files are moved from the secure environment of a host computer to an unprotected microcomputer for local processing. While on the microcomputer, unattended confidential information could be accessed by unauthorized users.
Local Area Networks.
LANs present a special confidentiality threat because data flowing through a LAN can be viewed at any node of the network, whether or not the data is addressed to that node. This is particularly significant because the unencrypted user IDs and secret passwords of users logging on to the host are subject to compromise as this data travels from the user’s node through the LAN to the host. Any confidential information not intended for viewing at every node should be protected by encryption.
Trojan Horses.
Trojan horses can be programmed to copy confidential files to unprotected areas of the system when they are unknowingly executed by users who have authorized access to those files. Once executed, the Trojan horse becomes resident on the user’s system and can routinely copy confidential files to unprotected resources.
Denial of service usually refers to actions that tie up computing services in a way that renders the system unusable by authorized users. For example, the Internet worm overloaded about 10% of the computer systems on the network, causing them to be nonresponsive to the needs of users.
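One common (and deliberately simple) mitigation against a single client tying up a service is rate limiting; a token-bucket limiter refuses requests once a client exhausts its burst allowance. A minimal sketch, where the capacity and refill rate are purely illustrative:

```python
import time

class TokenBucket:
    """Per-client rate limiter: requests spend tokens, tokens refill over time."""

    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        # Top up tokens for the time elapsed since the last request.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, refill_per_sec=0.5)
# A rapid burst: the first three requests pass, the rest are refused.
print([bucket.allow() for _ in range(5)])
```

Real denial-of-service defences operate at many layers (network filtering, upstream scrubbing, connection limits); this only illustrates the basic throttling idea.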
The loss of data processing capabilities as a result of natural disasters or human actions is perhaps more common. Such losses are countered by contingency planning, which helps minimize the time that a data processing capability remains unavailable. Contingency planning — which may involve business resumption planning, alternative-site processing, or simply disaster recovery planning — provides an alternative means of processing, thereby ensuring availability
Identity interception – The intruder discovers the user name and password of a valid user. This can occur by a variety of methods, both social and technical.
Replay attack – The intruder records a network exchange between a user and a server and plays it back at a later time to impersonate the user.
Data interception – If data is moved across the network as plaintext, unauthorized persons can monitor and capture the data.
Manipulation – The intruder causes network data to be modified or corrupted. Unencrypted network financial transactions are vulnerable to manipulation. Viruses can corrupt network data.
Repudiation – Network-based business and financial transactions are compromised if the recipient of the transaction cannot be certain who sent the message.
Macro viruses – Application-specific viruses could exploit the macro language of sophisticated documents and spreadsheets.
Malicious mobile code – This term refers to malicious code running as an auto-executed ActiveX® control or Java Applet uploaded from the Internet on a Web server.
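The replay attack described above is typically defeated by binding each message to a fresh nonce that the server refuses to accept twice. A minimal sketch (the shared key, message format and in-memory nonce set are all illustrative assumptions):

```python
import hashlib
import hmac
import os

KEY = b"session-key"   # illustrative shared key
used_nonces = set()    # server-side record of nonces already accepted

def make_request(payload):
    """Client side: attach a fresh nonce and a keyed tag over nonce + payload."""
    nonce = os.urandom(16)
    tag = hmac.new(KEY, nonce + payload, hashlib.sha256).digest()
    return nonce, payload, tag

def accept(nonce, payload, tag):
    """Server side: reject replays (reused nonces) and forged messages."""
    if nonce in used_nonces:
        return False  # a replayed recording reuses an old nonce
    expected = hmac.new(KEY, nonce + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return False
    used_nonces.add(nonce)
    return True

msg = make_request(b"transfer 100 to account 7")
print(accept(*msg))  # True: first delivery
print(accept(*msg))  # False: replay rejected
```

Production protocols usually combine nonces with timestamps or sequence numbers so the server does not have to remember nonces forever.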
Disclosure – usually falls into three categories:
Full disclosure is the practice of publishing analysis of software vulnerabilities as early as possible, making the data accessible to everyone without restriction. The primary purpose of widely disseminating information about vulnerabilities is so that potential victims are as knowledgeable as those who attack them.
Coordinated disclosure – the difference here is that the dissemination of vulnerability information is controlled: no one is informed until the vendor of the software grants permission.
Non-disclosure – no vulnerability information should be shared, or it should be shared only under a non-disclosure agreement (whether contractual or informal).
Deception – is the abandonment of reliance on known attack patterns and monitoring in favour of advanced luring techniques and engagement servers that entice an attacker away from valuable company servers.
There are various types of deception, including:
• Identity deception
• Denial of service attacks
• Trojan horses
• Miscellaneous deception
• Denial of receipt, a false denial that an entity received some information or message, is a form of deception.
E.g. customer orders an expensive product, but the vendor demands payment before shipment. The customer pays, and the vendor ships the product. The customer then asks the vendor when he will receive the product.
If the customer has already received the product, the question constitutes a denial of receipt attack. The vendor can defend against this attack only by proving that the customer did, despite his denials, receive the product.
Disruption – this is usually driven by motives of profit (criminal means), extortion, theft or deliberate attacks to destroy, damage or interfere with infrastructure systems.
This tactic is moderately practiced based on the reviews and trends related to this methodology nationally. For instance, hackers have attacked company computers, distracting employees and interfering with Internet Service Providers (ISPs) to divert resources, take proprietary information, and steal PII.
Small devices can wreak havoc and disrupt systems. Some USBs have been manufactured with viruses or may become infected and spread viruses to multiple computers. Firewalls, access via signatures, and anti-virus are becoming antiquated security methods.
Usurpation – this is where information is seized or held by force, or where rights and power over the information are assumed without legal authority, so that it can be manipulated and used for various purposes.
For instance, denial of service – an activity in which the service is inhibited for a long term – falls under the usurpation category.
E.g. Attacker prevents a server from providing a service. The denial may occur at the source by preventing the server from obtaining the resources needed to perform its function,
at the destination (by blocking the communications from the server), or along the intermediate path (by discarding messages from either the client or the server, or both). Denial of service poses the same threat as an infinite delay.
Physical, technical, and administrative issues are important aspects of security initiatives that address availability. The physical issues include access controls that prevent unauthorized persons from coming into contact with computing resources, various fire and water control mechanisms, hot and cold sites for use in alternative-site processing, and off-site backup storage facilities. The technical issues include fault-tolerance mechanisms (e.g., hardware redundancy, disk mirroring, and application checkpoint restart), electronic vaulting (i.e., automatic backup to a secure, off-site location), and access control software to prevent unauthorized users from disrupting services. The administrative issues include access control policies, operating procedures, contingency planning, and user training.
Although not obviously an important initiative, adequate training of operators, programmers, and security personnel can help avoid many computing mistakes that result in the loss of availability. In addition, availability can be restricted if a security officer accidentally locks up an access control database during routine maintenance, thus preventing authorized users access for an extended period of time. Considerable effort is being devoted to addressing various aspects of availability. Another sign that availability is a primary concern is that increasing investments are being made in disaster recovery planning combined with alternative-site processing facilities. Investments in antiviral products are escalating as well; denial of service associated with computer viruses, Trojan horses, and logic bombs is one of today’s major security problems.
ii. Evaluate types of security risks to organizations, detailing which are more likely to occur and what the potential consequences to an organization are.
In the case of deception, these are some of the ways in which it can be controlled:
Concealment – valuable data is concealed within ordinary-looking files, and honeypots and facades are set up to divert attackers from the real assets.
Camouflage – Obscure your infrastructure by making it a moving target, changing addresses, infrastructure topologies, and available resources daily.
Feints – Use defensive feints to pretend to succumb to one form of attack in order to conceal a second, less obvious defence; this is also known as nested deception.
Info: Honeypot is a computer security mechanism set to detect, deflect, or counteract attempts at unauthorized use of information systems.
Security controls are safeguards or countermeasures to avoid, detect, counteract, or minimize security risks to physical property, information, computer systems, or other assets.
They can be classified by several criteria. For example, according to the time that they act, relative to a security incident:
• Before the event, preventive controls are intended to prevent an incident from occurring e.g. by locking out unauthorized intruders;
• During the event, detective controls are intended to identify and characterize an incident in progress e.g. by sounding the intruder alarm and alerting the security guards or police;
• After the event, corrective controls are intended to limit the extent of any damage caused by the incident e.g. by recovering the organization to normal working status as efficiently as possible.
According to their nature, for example:
• Physical controls e.g. fences, doors, locks and fire extinguishers;
• Procedural controls e.g. incident response processes, management oversight, security awareness and training;
• Technical controls e.g. user authentication (login) and logical access controls, antivirus software, firewalls;
• Legal and regulatory or compliance controls e.g. privacy laws, policies and clauses.
A similar categorization distinguishes controls involving people, technology and operations/processes.
In the field of information security, such controls protect the confidentiality, integrity and/or availability of information – the so-called CIA triad.
Systems of controls can be referred to as frameworks or standards. Frameworks can enable an organization to manage security controls across different types of assets with consistency.
1. Administrative – These are the laws, regulations, policies, practices and guidelines that govern the overall requirements and controls for an Information Security or other operational risk program. For example, a law or regulation may require merchants and financial institutions to protect and implement controls for customer account data to prevent identity theft. The business, in order to comply with the law or regulation, may adopt policies and procedures laying out the internal requirements for protecting this data, which requirements are a form of control.
2. Logical – These are the virtual, application and technical controls (systems and software), such as firewalls, antivirus software, encryption and maker/checker application routines.
3. Physical – Whereas a firewall provides a “logical” key to obtain access to a network, a “physical” key to a door can be used to gain access to an office space or storage room. Other examples of physical controls are video surveillance systems, gates and barricades, the use of guards or other personnel to govern access to an office, and remote backup facilities.
All three of these elements are critical to the creation of an effective control environment. However, these elements do not provide clear guidance on measuring the degree to which the controls mitigate the risk. Instead, the Simple Risk Model utilizes an alternative set of elements that provide a better means of weighting the level of mitigation:
• Preventive – These are controls that prevent the loss or harm from occurring. For example, a control that enforces segregation of responsibilities (one person can submit a payment request, but a second person must authorize it), minimizes the chance an employee can issue fraudulent payments.
• Detective – These controls monitor activity to identify instances where practices or procedures were not followed. For example, a business might reconcile the general ledger or review payment request audit logs to identify fraudulent payments.
• Corrective – Corrective controls restore the system or process back to the state prior to a harmful event. For example, a business may implement a full restoration of a system from backup tapes after evidence is found that someone has improperly altered the payment data.
Of the three types of controls, preventative controls are clearly the best, since they minimize the possibility of loss by preventing the event from occurring. Corrective controls are next in line, since they minimize the impact of the loss by restoring the system to the point before the event. However, the restoration procedure may result in some degree of loss, since the restoration procedure may lead to the unavailability of systems and applications along with possible lost productivity, customer dissatisfaction, etc. The least effective form of control, but the one most frequently used, is detective controls – identifying events after they have happened. Depending on how soon the detective control is invoked after an event, a business may uncover a loss long after there is any opportunity to limit the amount of damages. In the Proof-of-Concept application, the Control is weighted by whether it is a preventative, detective or corrective control.
One other valuable distinction to be made with controls is whether they are manual or automated. A business can implement manual controls to minimize the chance of fraudulent payments, such as requiring an administrator and a manager to manually sign the applicable paperwork to indicate that the transaction was authorized and approved. As an alternative, the business could automate these controls by introducing a computer program with logical access, segregation of duties and maker/checker controls.
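The automated maker/checker control described above can be sketched as a simple queue that refuses to let the same person both submit and authorize a payment. The class and names below are hypothetical illustrations, not any particular product’s API:

```python
class PaymentQueue:
    """Enforces segregation of duties: submitter and approver must differ."""

    def __init__(self):
        self.pending = {}   # request_id -> (maker, amount)
        self.approved = []

    def submit(self, request_id, maker, amount):
        self.pending[request_id] = (maker, amount)

    def authorize(self, request_id, checker):
        maker, amount = self.pending[request_id]
        if checker == maker:
            # Preventive control: one person cannot complete a payment alone.
            raise PermissionError("maker cannot approve own request")
        self.approved.append((request_id, amount))
        del self.pending[request_id]

q = PaymentQueue()
q.submit("REQ-1", maker="alice", amount=500)
q.authorize("REQ-1", checker="bob")   # succeeds: a second person approves
print(q.approved)                     # [('REQ-1', 500)]
```

The same check attempted by the maker herself raises an error, which is precisely the preventive behaviour the manual signature process is meant to guarantee.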
The Simple Risk Model also assesses whether a Control is Effective and Efficient:
• Effective – Effectiveness measures whether the Control provides an acceptable level of risk mitigation to the organization. A Control may exist (for example, the organization maintains a Policy requirement that all employees must change their password every 30 days), but its value is diminished if it is not properly implemented (few employees are aware of the requirement and there is measurable evidence that passwords are not being changed).
• Efficient – Efficiency measures the cost of maintaining the Control compared to the potential loss if the Control were to fail. This is a cost/benefit analysis where Controls are ideally structured to yield a positive return on investment.
Controls
In order to measure the effect from a Control failure, you need to correlate the Scope of the Control (does the Control mitigate the Risk from the loss of Confidentiality, Integrity or Availability) with the Impact of the process (how important is a loss of Confidentiality, Integrity or Availability to the process). If a Control addresses a loss of Availability, but under Impact Availability is rated as a low risk element, the overall risk from the failure of the Control is likely Low. Conversely, if the Impact assessment rated Availability as a high risk area, then the failure of the Availability Control would have a significant effect on the organization. As a result, the Simple Risk Model factors in the correlation between the Scope of the Control and the Scope of the Impact.
We also need to consider the challenges or limitations that the Control presents to a Threat:
1. Complexity – How difficult is it to exploit the Control? Does the exploit require significant resources (i.e. experience, training, money, technology, planning etc.), which would create a disincentive for most Threats? In effect, the greater the complexity involved in breaking the Control, the less likely the Control will be exploited.
2. Access – What level of access to the control is required for an exploit to be successful? Is the Control freely accessible on the Internet or is the Control protected within a guarded data center? How many people (potential Threats) could access the Control with the resources reasonably available to them?
3. Privilege – Assuming the Threat can overcome the challenges of Complexity and Access, what level of Privilege will the Threat receive? For example, a hacker (the Threat) may be able to run a script (an exploit) to gain access to an Internet site. The severity of the exploit would vary greatly based on the level of authority or privilege that the hacker would gain. If the exploit only allows the Hacker to run reports on the system, it would have a lower severity than if the exploit gave the Hacker system administrator access with the power to change or delete code, monitor user activity, etc.
The Types of External Events should be prioritized based on:
• Frequency of occurrence. For example, what is the frequency of hurricanes in your area? If the frequency is greater than “extremely remote”, what is the probability based on each of the categories in the Saffir-Simpson Scale (what is the possibility that within the next x years the building will be exposed to a Category 4 hurricane)?
• Duration of the outage measured as the period during which business processes will be unavailable due to an Event. Availability is measured as a function of time (how long could the process be unavailable to the business before the outage would cause a significant loss). For example, if the business concluded that there was a reasonable possibility that a Category 4 hurricane could impact the building, the following factors would need to be addressed:
o What type of damage would occur (will the building have power, communications and water, will the roads be open, will public transportation be available, etc.).
o Based on this damage, how long would the process be unavailable to the business? If it was determined that the building in which the process is performed would be a total loss, how long would it take to rebuild the office or find another suitable building to resume business? If the building was intact, but all the windows would need to be replaced, how long would that process take (considering that suitable window material likely would be in short supply). Note, at this point in the assessment process you should not take into account any mitigating controls available through the business continuity plan. The value of a business continuity plan in mitigating risk is addressed as part of the Control assessment. For example, the BCP may have a provision to move the operations to a backup site in another state. The Duration of an outage from a Category 4 hurricane should be determined within the context of the Threat assessment without factoring in the BCP. Then as part of the Control assessment stage, the BCP should be evaluated on how effectively it addresses this specific type of External Event.
• Loss to capital assets (damage to buildings, technology, furniture, etc.) and business resumption expenses. While the primary focus in assessing External Events is on the lack of availability of critical business processes, some External Events can also cause significant damage to capital assets that are not directly related to the performance of a process. For example, a Category 4 hurricane could force the business to suspend processing for at least a month while it scrambles to repair damages to the building or find an alternate location. Aside from the process related losses, such as loss of sales and reputation, the business would also incur significant expenses related to repairing or replacing the building or increased rental costs at the alternate location. Alternatively, the business might conclude that the building could adequately withstand a Category 4 hurricane and not require significant repair costs. However, such a hurricane could cause significant damage to the local infrastructure (no power or communications or employees would be unable to travel to the office). In this case there would still be significant process related losses, but minimal losses associated with capital assets. Since process related losses are already identified as part of the Cost stage of the risk assessment process, we need to also factor in capital asset losses and business resumption expenses separately.
Simple Risk Model
The fundamental principles of risk are commonly accepted:
Risk is a function of the potential cost of a harmful or negative event and the probability that the event will occur. Cost is based on the average expected losses and related expenses over a stated period (usually one year). Probability is a function of the vulnerabilities (defects in existing controls) and the threats (people or external events that could act on these vulnerabilities).
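The relationship above (risk as a function of potential cost and probability) is often expressed as an annualized loss expectancy, and the efficiency cost/benefit question follows directly from it. A sketch with purely hypothetical figures:

```python
def ale(single_loss_expectancy, annual_rate_of_occurrence):
    """Risk = potential cost of the event x probability it occurs in a year."""
    return single_loss_expectancy * annual_rate_of_occurrence

def control_roi(ale_before, ale_after, annual_control_cost):
    """Positive result: the control is efficient (mitigation exceeds its cost)."""
    return (ale_before - ale_after) - annual_control_cost

# Hypothetical figures: a breach costing 200,000, expected once every 4 years
# without a control, and once every 20 years with it.
before = ale(200_000, 0.25)   # 50,000 expected loss per year, uncontrolled
after = ale(200_000, 0.05)    # 10,000 expected loss per year, controlled
print(control_roi(before, after, annual_control_cost=15_000))  # 25000.0
```

Here the control removes 40,000 of annual expected loss for a cost of 15,000, so it clears the efficiency test; a control costing 50,000 a year would not.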
There is a lack of objective data: Due to the scarcity of reliable historical data and the constantly changing nature of technology and the business environment, it is exceedingly difficult to derive accurate quantitative results in the various operational risk disciplines (information security, business continuity planning, technology, back office operations, etc.). Currently, there is no simple, objective and comprehensive methodology for assessing operational risk.
We need a risk model that compensates for the lack of data: The Simple Risk Model addresses these challenges by adding a level of granularity to the risk principles to compensate for the lack of historical data and adding a repeatable and quantifiable methodology to compensate for the changing business and technology environments.
(bsewall, 2008)
Identification of weaknesses as an approach to countering risk.
1. Lack of historical data
o The operational risk community, for the most part, lacks a sufficient amount of reliable historical data on losses that could be used to predict future losses. By comparison, the insurance industry has an abundance of historical information that it uses to price premiums. Your car insurance company, for example, has enough data to make a relatively accurate prediction of the number of cars that will be stolen annually in any given zip code. But an information security specialist has no comparable method of predicting the probability that their business’s primary computer system will suffer a major breach in the next year. This leaves the IS specialist to fall back on “gut feel” and personal experience as the foundation for their risk assessment decisions.
2. Technology and business practices constantly change
o Part of the reason that there has been little success in building a reliable database of op risk losses is that the value of any such database would be undermined by the constantly evolving nature of technology and business processes. Simply put, what was true in op risk yesterday may be questionable tomorrow.
3. No common methodology
o There are thousands of op risk assessment tools and methodologies available in the community, but none that is commonly accepted. In addition, those tools that have staked out some level of prominence are (a) so resource-intensive that they are unusable in the workplace and (b) focused on only a limited portion of the components in the risk equation. More importantly, these tools tend to treat areas such as information security and business continuity planning as completely independent risk methodologies and, as a result, fail to leverage the overarching elements of operational risk. There simply is no single tool you can go to that covers all the bases of risk and produces a result that is understood and accepted within the community.
o It should be noted that there are several robust, formal risk assessment methodologies available, such as OCTAVE, ISO/IEC 27001, and COSO. Any business in the process of creating or overhauling their operational risk program should consider such methodologies as the foundation for their program. However, care should be taken to minimize the possibility that the implementation of the methodology degrades into a checkbox exercise. These methodologies are resource intensive and require significant training and experience in risk to manage effectively. Without such training and experience, risk managers in the field will likely resort to blindly filling in the forms, instead of considering the risks. That is, in part, why I created the Simple Risk Model. The Model does not replace these methodologies, but it does try to simplify the learning and communication process so that management can better understand the underlying concepts of the methodologies and better weigh the results.
4. Everyone is an expert
o This is probably the most insidious challenge to operational risk. Every one of us considers ourselves a risk expert. Our survival at the end of each day is a testament to our ability to handle the risks that life has thrown at us. And if we are expert at managing the risks in our daily lives, why should we waste time developing a disciplined and objective understanding of operational risk? We would rather deal with risk in a haphazard, experiential manner guided by our individual experiences, prejudices and emotions. We are guided by “gut feel,” not logic, when we respond to risk in the workplace. It is this reliance on subjective reasoning that is the greatest barrier to moving the operational risk community towards a more structured and objective approach to risk.
Hackers
Keeping your network safe
Despite vulnerabilities, new digital solutions can improve operations, enhance the customer experience and boost the bottom line. It’s not necessary or cost-effective to put non-payment solutions on a separate physical network to isolate them from cardholder data.
These six measures can help secure cardholder information while allowing normal network data flow in your restaurant:
1. Maintain a strong firewall. The PCI data security standards prescribe firewalls for compliance. Make sure your firewall is hardened and is supported by virus protection software.
2. Conduct regular scans of your network. The best way to determine whether your systems have been compromised is to scan them regularly for vulnerabilities. For relatively low annual fees, a security vendor will remotely scan all of your external system access points to determine if any are vulnerable to intrusion. This service is analogous to having a regular pest control inspection to identify infestations. Use a reputable, professional company to conduct these electronic scans regularly.
3. Limit remote access. Many restaurants leave their firewalls open to outside entry by managers working remotely or vendors who routinely perform maintenance on systems. Create strong passwords instead of using the default codes, and change them often. Similarly, always change default firewall settings to allow only essential access, and limit remote access to secure methods such as VPN.
4. Ensure all credit card data is encrypted. If you have older POS equipment that sends raw credit card data to a back-office server, it may be time to upgrade. Modern, secure POS systems encrypt credit card data as soon as a card is swiped, and they immediately send that data to the payment processor without temporarily storing data. Double-check your POS system to make sure it complies with PCI standards.
5. Segment your network. For example, make sure your POS data traffic is separate from your Wi-Fi system, security cameras, digital menu boards and other connections. If you want to enable managers to connect to the POS via Wi-Fi, connect them through a virtual LAN that separates authorized traffic into a security zone.
6. Keep your software updated. Manufacturers frequently update operating systems and POS software to tighten security and eliminate weaknesses that hackers can exploit. Make sure you download the latest operating system patches and keep all POS software up to date.
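As a small illustration of the PCI principle behind measure 4 — never exposing a full card number outside the secure payment path — the sketch below masks a primary account number so that at most the first six and last four digits remain visible (the maximum PCI DSS permits for display). This is an illustrative helper, not a substitute for encrypting card data at the point of swipe:

```python
def mask_pan(pan: str) -> str:
    """Mask a primary account number (PAN), leaving at most the first
    six and last four digits visible. Illustrative sketch only."""
    digits = pan.replace(" ", "")
    if len(digits) < 13:
        raise ValueError("PAN too short")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))  # 411111******1111
```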
Addressing these issues is a smart step to help you protect your customers’ data, your reputation and the integrity of your payment card processing environment.
Preventing Denial of Service Attacks
With a web application firewall you can mitigate DoS attacks: dotDefender inspects your HTTP traffic and checks each packet against rules that allow or deny protocols, ports, or IP addresses, stopping web applications from being exploited.
Architected as plug & play software, dotDefender provides optimal out-of-the-box protection against DoS threats, cross-site scripting, SQL Injection attacks, path traversal and many other web attack techniques.
The reasons dotDefender offers such a comprehensive solution to your web application security needs are:
• Easy installation on Apache and IIS servers
• Strong security against known and emerging hacking attacks
• Best-of-breed predefined security rules for instant protection
• Interface and API for managing multiple servers with ease
• Requires no additional hardware, and easily scales with your business
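To illustrate the kind of rule-based filtering a WAF performs, here is a generic sketch of deny rules applied to incoming requests. The patterns and blocked address are my own illustrative examples, not dotDefender's actual rule engine, rule set, or API:

```python
import re

# Illustrative deny rules of the kind a WAF might apply (assumed, generic).
DENY_PATTERNS = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),  # SQL Injection signature
    re.compile(r"(?i)<script\b"),              # cross-site scripting
    re.compile(r"\.\./"),                      # path traversal
]
BLOCKED_IPS = {"203.0.113.7"}  # example address from a documentation range

def allow_request(source_ip: str, path_and_query: str) -> bool:
    """Return True if the request passes every deny rule."""
    if source_ip in BLOCKED_IPS:
        return False
    return not any(p.search(path_and_query) for p in DENY_PATTERNS)

print(allow_request("198.51.100.2", "/search?q=menu"))                # True
print(allow_request("198.51.100.2", "/item?id=1 UNION SELECT pwd"))   # False
```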
There are many different ways that an attacker can launch a Denial of Service attack. They range from simply unplugging a server from the network (if the attacker has physical access) to coordinating large armies of zombie computers to launch a large-scale distributed attack against the target using:
• Buffer overflows in the application functions
• Malformed data to raise unexpected exceptions
• Exploited race conditions in multi-threaded systems
• Heavy-duty SQL queries via web forms and “spamming” them with requests, e.g., inserting % characters within search query fields
• SQL Injection attacks executing recursive CPU-intensive queries
• The end users’ web browsers to overload the application with parallel requests via persistent / reflected Cross-Site Scripting attacks
• Overly-complex regular expressions within search queries
• Excessively large files uploaded to the server
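A common mitigation for the request-flooding tactics above is rate limiting. The token-bucket sketch below is my own generic illustration (capacity and refill rate are assumed values), not a technique prescribed by the source:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: each request consumes one
    token, and tokens refill at a fixed rate, so bursts beyond the
    bucket's capacity are rejected."""
    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=5, refill_per_sec=1)
results = [bucket.allow() for _ in range(7)]
print(results)  # first 5 allowed, the burst beyond capacity rejected
```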
Most commonly, the following tactics are used in a DoS attack: