1. A cloud environment exists when a company or organization uses resources or applications that are hosted on the Web (the cloud). The entire workspace lives in the cloud: the developer environment, test environment, build environment, and all related environments. Because of the cloud infrastructure, all resources are easily shared, and the cloud workspace can increase efficiency at work. Clients who host their websites on virtual servers can tap into these services on demand, based on the traffic their websites receive, and pay only for what they use. In a cloud environment, a network of servers is used, and clients can pull data from data centers in different locations. The technological advantages offered by the cloud are as follows-
a. Automatic updates- Customers receive the latest upgrades for their applications as soon as they are released. Immediate updates push the latest features to workers, thereby increasing their productivity. This compares favorably with home-grown software, which may see releases only once a year. Clients do not have to waste their time maintaining the software.
b. Always available- With the infrastructure provided by cloud providers, the connection is always up. As long as employees have internet access, they can reach their work from anywhere in the world and get it done at any time. Nowadays some applications work offline as well.
c. Increased and improved collaboration- Teams can access, edit, and share documents and other work-related material anytime, anywhere. A cloud-based workflow can improve and accelerate product development, as every worker can see the collaboration happen in real time.
d. Flexibility- Cloud infrastructure can be scaled up, scaled down, or scaled out based on the consumer's requirements. Growing businesses often have fluctuating bandwidth demands, and cloud capacity can be easily scaled to accommodate them.
e. Security- Because all the data is stored in the cloud, there is less risk of it falling into the wrong hands even if a machine is tampered with; the data on any particular machine can even be remotely wiped.
The economic advantages offered by the cloud are as follows-
a. Capital-expenditure free- Cloud computing reduces the high costs incurred on expensive hardware. With cloud infrastructure, consumers can adopt a pay-as-you-go subscription model, and businesses can have zero in-house server storage and application requirements. In the absence of on-premise infrastructure, businesses can expect significant cost savings.
b. Reliability- Most cloud providers guarantee 24/7/365 service with 99.99% uptime. If any server fails, applications and other services can be moved to a different server.
c. Less environmental impact- Companies that use cloud infrastructure can improve their green credentials, as they tend to have a smaller impact on the environment and do not leave oversized carbon footprints.
2. There are three main layers in cloud computing. They are as follows-
a. IaaS- Infrastructure as a Service
b. PaaS- Platform as a Service
c. SaaS- Software as a Service
A. IaaS- This refers to the hardware, network equipment, and servers that hosting providers offer to consumers. This layer holds all the necessary components for cloud computing; it is the hardware layer. Amazon AWS is an example of IaaS. This layer provides fundamental resources to the higher-level layers, and these resources can be categorized into data storage, computational resources, and communications. The target customers for this layer are software developers, as this layer holds the fundamental resources of the cloud infrastructure.
B. PaaS- This is the middle layer of the cloud infrastructure, used by developers to build applications, programs, and other tools. PaaS generally uses resource-based allocation: developers focus on developing their tools while the required scaling and other performance concerns are handled by the PaaS provider. The target customers for this layer are also software developers.
C. SaaS- This is the layer visible to the end users of the cloud, where the user consumes the service provider's offering. It is usually web-based and can be accessed anywhere on any device. This is the top layer of the stack, and the largest and most accessible layer of cloud computing. End users such as large institutions, businesses, and individual consumers are the target customers for this layer.
3. SaaS for SMB- The following can be offered as specific software solutions to small and medium businesses-
a. Social media marketing/Email marketing- Small businesses can use email marketing tools to connect with customers. Social networks, which are currently in trend all around the world, can be used to keep consumers informed about the services the business provides. Email marketing can be a very useful tactic to engage consumers, as such tools offer attractive templates and simple interfaces, and can also be used to track and analyze customer behavior.
b. Conduct meetings remotely- To boost employee productivity within a small business, it is essential to offer a good work-life balance. Employees should be given the freedom to work remotely, and video conferencing tools can be used to conduct meetings and keep everyone connected and working toward the same goals.
c. Centralized document storage- With business increasingly done in a mobile environment, employees should be encouraged to store all documents in a centralized cloud location. A cloud-based workflow can improve and accelerate product development, as every worker can see collaboration happen in real time. Documents in the cloud can also be automatically synced with mobile devices.
d. Expand sales with e-commerce- If the small business plans to sell its products online, it can consider building an online shopping website. Cloud applications are easily scalable based on consumer demand.
If the business decides to move to the cloud infrastructure on its own, several risks come along with it. For small to medium businesses, moving to the cloud is a big change, bringing concerns such as security, data access, and learning new processes and techniques for running the business. As a cloud service provider, we can address these concerns and provide the business with excellent training and customer support when they sign up for our services. As a reputed cloud service provider, we have the resources to provide strong and extensive security measures, and we can follow up with employees to provide step-by-step training in the new methods. The business need not worry about maintaining the SaaS; that is handled by the cloud service provider. By outsourcing SaaS to a third party, the small business can focus on maximizing productivity with the new infrastructure and getting comfortable with cloud-based apps.
4. A data center is a facility composed of networked computers that businesses and organizations use to process, organize, and store huge amounts of data in a client/server architecture.
Data center as a service (DCaaS) is a way to provide offsite physical data center infrastructure to clients. In DCaaS, the provider rents or leases servers and networking resources to clients, and can customize the resources based on each client's unique needs. A client that is unable to expand its own data center can take advantage of DCaaS and access all the resources through a WAN. The benefits clients can avail include an enhanced physical environment, cash flow savings, server reliability, and scaling.
The physical requirements for DCaaS are as follows-
a. Site selection- Data centers should always be set up in the most appropriate places. DCaaS providers should assess current and local conditions before building a data center, including factors such as fiber capacity, energy availability, and other safety concerns.
b. Multiple locations- The cloud provider should ensure that data centers are set up at multiple locations. In case of any failure, the client can take advantage of replication between sites. This aspect falls under disaster recovery.
c. Support- Daily use of the data center requires a remedial team at hand in case of any disruption to services. Experienced, on-site technical personnel should be available 24/7 to assist with preventive maintenance and respond to emergencies.
The logical requirements for DCaaS are as follows-
a. Robust provision of network services- With the increase in virtualization and growing data demands, the DCaaS provider needs to ensure that communication between data centers and businesses is handled effectively.
b. Strict compliance with standards- The DCaaS provider should offer multiple distribution paths serving the IT equipment. All equipment must be dually powered and fully compatible with the topology of the architecture.
The security requirements for DCaaS are as follows-
a. Security zones- DCaaS provider should restrict the access to the Data Centers. Only authorized personnel should be allowed in certain areas of the data center.
b. Closed-circuit monitoring systems- Surveillance technology should be installed at the data center, and activity at the entrances, exits, and equipment areas should be monitored 24/7. In addition to access control and monitoring systems, damage prevention technologies should be provided: fire and water emergency detection systems installed near equipment zones can help prevent significant damage.
c. Physical barriers- The DCaaS provider should ensure that the data center is well guarded by fences and reinforced walls for extra protection from outside agents. It is necessary to assess all the vulnerabilities a data center is susceptible to.
5. Multi-tenancy can be defined as the provision of a single version of software by the service provider to all its consumers, where each consumer is called a tenant. Each tenant has its own view of the application it uses or administers, and these views can be customized as if each tenant had a dedicated instance of the software. Tenants are restricted to their own application and can individually customize it. In a multi-tenant architecture, different customers' data exists on the same infrastructure and runs against the same shared instance of the software; however, the data is logically segregated. The environment is secured using multiple types of segmentation, such as hypervisor-based segmentation and database-based segmentation. The features that can be customized by a tenant are as follows-
a. User interface- The tenants can personalize each aspect of the user interface of their application.
b. Business Process- The tenants can define all the rules, workflows, logic of the business processes.
c. Data model- Tenants can work with the application's data structures: they can edit, rename, and extend the databases and database schemas at the backend of their application.
d. Access Control- Tenants can create different access groups and define access roles based on different roles, users and user groups.
Multi-tenant applications are much more complex than single-tenant applications. In a multi-tenant application, multiple customers, organizations, and businesses run on the same infrastructure and databases to take advantage of the price and performance benefits, in addition to economies of scale.
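The logical segregation described above can be sketched in a few lines of Python. The class and field names here are hypothetical, chosen only to illustrate how a shared store scopes every query to a tenant:

```python
# Minimal sketch of logical data segregation in a multi-tenant store.
# All tenants share one data structure, but every read is scoped to a
# tenant_id, so one tenant can never see another tenant's rows.
# (Class and field names are hypothetical, for illustration only.)

class MultiTenantStore:
    def __init__(self):
        self._rows = []  # shared storage for all tenants

    def insert(self, tenant_id, record):
        # Tag every row with its owner; this tag enforces segregation.
        self._rows.append({"tenant_id": tenant_id, **record})

    def query(self, tenant_id):
        # Reads are always filtered by tenant_id -- the logical boundary.
        return [r for r in self._rows if r["tenant_id"] == tenant_id]

store = MultiTenantStore()
store.insert("acme", {"name": "Alice"})
store.insert("globex", {"name": "Bob"})
print(len(store.query("acme")))  # acme sees only its own row
```

Real multi-tenant databases enforce the same idea with row-level security or per-tenant schemas rather than an explicit filter in application code.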
The five main components of multi-tenancy are-
a. Security Isolation
b. Performance isolation
c. Availability isolation
d. Administration isolation
e. On-the-fly customization
The advantages of Multitenancy are as follows-
a. Decrease in costs- Because infrastructure is shared, multi-tenancy reduces processing and memory overhead by splitting it across all tenants. In addition, software licensing costs can be cut, as only one software license needs to be purchased.
b. Easier data mining- The data from all customers is stored in a single database, so trends in customer usage can be easily identified.
c. Streamlined release management- With frequent software upgrades, multi-tenancy eliminates the need to install updates on every server: the package is uploaded to a single server and pushed out to all systems.
6. Scaling can be defined as decreasing or increasing the capacity of resources, namely RAM, file system, CPU, bandwidth, etc., for enhanced performance. Vertical scaling enables users to increase performance on demand. Scaling up means adding resources, such as CPUs or memory, to a single computer. Virtualization technology can be used effectively in vertical scaling, as it provides extra resources for the hosted OS and application modules. The different mechanisms used for scaling up cloud services are-
a. Automated scaling listener- This is a mechanism used to monitor and track communications between cloud services and cloud service consumers for dynamic scaling. These listeners are usually deployed near the firewall, from where they can track the workload status. Workloads can be dictated by the volume of requests generated by the consumers of the cloud services. Example- the Cisco Asset Recovery web app. For this application, multiple copies of the web app run simultaneously to cover the volume of customers trying to return their products. These copies are hosted on AWS EC2 instances, each handling customer requests. Automated scaling of this web app manages the launching and termination of the EC2 instances on Cisco's behalf. Cisco has defined criteria that predetermine when the automated scaling listener kicks in; with its help, customer traffic can be analyzed and, based on that traffic, capacity can be smoothly added or removed for each tier.
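The decision logic such a listener applies can be sketched as follows. The capacity numbers and function name are illustrative assumptions, not Cisco's or AWS's actual parameters:

```python
# Hedged sketch of an automated scaling listener's decision logic:
# monitor request volume per interval and decide how many instances
# to run, within provider-style min/max bounds.
# All thresholds and names here are hypothetical.

def desired_instances(requests_per_min, per_instance_capacity=100,
                      min_instances=1, max_instances=10):
    # Round up: 250 requests/min at capacity 100 needs 3 instances.
    needed = -(-requests_per_min // per_instance_capacity)
    return max(min_instances, min(needed, max_instances))

print(desired_instances(250))  # -> 3 (scale out)
print(desired_instances(40))   # -> 1 (scale in to the minimum)
```

A real listener would feed a metric stream (e.g. request counts from the firewall or load balancer) into logic like this and then call the provider's API to launch or terminate instances.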
Load balancing can be defined as efficiently distributing incoming traffic among various backend servers. Load balancing automatically routes traffic to the available instances and availability zones. A load balancer is a runtime agent that balances the workload across two or more resources to increase performance and capacity. Load balancers can be specialized for various distribution functions such as-
a. Asymmetric distribution
b. Workload prioritization
c. Content Aware distribution
The two main reasons why load balancing is essential are-
a. High availability- A cloud provider needs to ensure that there are at least two backend servers for high availability. The load balancer ensures that if one of the servers fails, traffic is routed to the other servers.
b. Centralized control point- With the help of a load balancer, the traffic route can be changed during the deployment phase of any additional rules. Load balancer provides the ability to change how the service is implemented without exposing the changes to the front-end users.
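The routing behavior described in these two points can be illustrated with a minimal round-robin balancer that skips unhealthy backends (the server names are hypothetical):

```python
import itertools

# Sketch of a round-robin load balancer that skips unhealthy backends,
# illustrating both traffic distribution and high-availability routing.

class RoundRobinBalancer:
    def __init__(self, servers):
        self.servers = servers
        self.healthy = set(servers)
        self._cycle = itertools.cycle(servers)

    def mark_down(self, server):
        # A health check would call this when a backend stops responding.
        self.healthy.discard(server)

    def route(self):
        # Advance the rotation until a healthy backend is found.
        for _ in range(len(self.servers)):
            s = next(self._cycle)
            if s in self.healthy:
                return s
        raise RuntimeError("no healthy backend available")

lb = RoundRobinBalancer(["srv-a", "srv-b", "srv-c"])
lb.mark_down("srv-b")
print([lb.route() for _ in range(4)])  # srv-b is skipped
```

Production load balancers add the content-aware and weighted policies listed above on top of this basic rotation.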
7. An SLA or Service Level Agreement can be defined as a contract between a service provider and a cloud service consumer which defines the type of service the provider will deliver to the consumer. The SLA serves as a blueprint detailing the security specifications, performance, and uptime statistics to be expected from the cloud service provider. For IaaS, consider a cloud server instance provided to a small business. The following SLAs can be offered-
a. Service levels will be –
i. Server and Network Availability
ii. Latency
iii. Response Delay for emergency incidents
All service levels are constantly tracked by the monitoring tool.
Server availability- The hardware providing the cloud servers is always available and constantly responding to the monitoring tool. A service level failure occurs when the service level target is not met. The service level target is 99.999% availability of the cloud servers.
Network availability- The network components, which include the load balancers, routers, and switches, are always available. A service level failure occurs when the service level target is not met. The service level target is 99.999% availability of the network components.
Latency- Latency is the time taken by a data packet to travel between cloud servers on the same network. A service level failure occurs when the service level target is exceeded. The service level target is less than or equal to 1 ms.
Response delay for emergency incidents- This is the time between when an emergency is reported and when the service provider contacts the consumer about the incident. A service level failure occurs when the service level target is exceeded. The service level target is less than or equal to 30 minutes.
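The arithmetic behind availability targets like these is worth making explicit: a "five nines" (99.999%) target allows only a few minutes of downtime per year.

```python
# Convert an availability percentage into its yearly downtime budget.

def downtime_minutes_per_year(availability_pct):
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes
    return minutes_per_year * (1 - availability_pct / 100)

print(round(downtime_minutes_per_year(99.999), 2))  # ~5.26 min/year
print(round(downtime_minutes_per_year(99.99), 1))   # ~52.6 min/year
```

This is why the IaaS targets above (99.999%) are stricter than the 99.99% commonly quoted for SaaS uptime.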
For SaaS, the SLA offered can be as follows-
Availability- 99.99% uptime for the application
Performance- A maximum response time will be guaranteed during 99.99% of uptime.
Security- All the consumer data will be encrypted and securely stored in the backend servers
Location of data- The data will be replicated and stored at multiple locations. In case of disaster, the data can be pulled from the other backend servers.
Access to the data- Data can be retrieved from the service provider in a readable format
Dispute mediation process- Consumers should be able to escalate disputes to the service provider.
Portability of data- The consumer should be able to smoothly move the data from one provider to the other.
For third-party products, the SLAs can include the same terms as for SaaS, with the addition of integration and database migration.
Database migration- Data would be migrated from the client servers and transitioned to the cloud servers without any data loss
Integration- A more robust integration would be provided between the SaaS and the IaaS.
8. The Security threats for any kind of cloud computing are as follows-
a. Data breaches- Because of the vast amount of data stored in the cloud, cloud environments are highly susceptible to data breaches. The severity of the potential damage depends on the sensitivity of the data exposed. A company database contains a lot of information relevant to its customers, trade secrets, and other intellectual property. If this data is breached, the company may face lawsuits, criminal charges, and monetary fines.
b. Compromised customer credentials- If the data in the cloud is not encrypted, it is susceptible to theft through phishing. Hackers can extract customer credentials from the database and use them for illegal purposes. During the Anthem breach, almost 80 million records were exposed, resulting in stolen user credentials. Companies should adopt multi-factor authentication systems to prevent such thefts.
c. Data loss- Data loss can occur when a server fails and there is no backup server, and it can have catastrophic consequences. Data loss can also result from human error, and some malware and viruses can cause data destruction; this is a big threat to cloud providers.
d. Insecure APIs- Hackers and malicious attackers can target the APIs to compromise the integrity of the enterprise customers. A hacker can use the same token used by a customer to access any service and manipulate the data.
e. Malicious insiders- This is one of the biggest threats for a cloud service provider. Because insiders have complete access to company resources, it is essential to have proper security measures that track the activity of each employee who deals with confidential data. If cloud service providers do not follow correct security guidelines, customer data is highly susceptible to being leaked.
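The credential-protection point in (b) above can be illustrated with Python's standard library: if only salted hashes are stored, a leaked database does not directly expose passwords. This is a minimal sketch; the PBKDF2 parameters are illustrative, not a tuned recommendation:

```python
import hashlib
import hmac
import os

# Sketch of salted credential hashing: even if the database leaks,
# attackers see only salts and digests, never the passwords themselves.

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)  # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("s3cret")
print(verify_password("s3cret", salt, digest))  # True
print(verify_password("wrong", salt, digest))   # False
```

Multi-factor authentication then adds a second proof (e.g. a one-time code) on top of the password, so a stolen hash or phished password alone is not enough.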
The security threats in a normal computing environment are as follows-
a. Rogue security software- This kind of software can lure the users into clicking and downloading malicious software. This can be harmful for the computer and the data stored in it. The hackers can use such software to gain access into personal data stored on a computer.
b. Computer worm- A computer worm is a software program that can copy itself from one computer to another without any human interaction. Because worms replicate in great volume and at great speed, they can infect multiple computers in a very short period of time; the Conficker worm infected almost 9 million computers in 4 days.
c. Spam- Unwanted junk emails containing advertisements can infect computers if the malicious links inside the body of that email are clicked. They can expose the computers to malware and are a threat to the mail servers.
d. Rootkit- A rootkit grants administrator-level access to a network of computers. Rootkits can be used by hackers to exploit loopholes in applications and spread spyware.
e. Phishing- Hackers can phish for data remotely in the guise of email messages. For example, a hacker can lure you into giving up personal information, bank credentials, or your Social Security number by pretending your bank is updating its database and providing a link where you can enter all of the above information.
We cannot say definitively that the security threats for either kind of computing are greater or lesser. Cloud computing has its own threats that it is susceptible to regardless of all the security measures; similarly, normal internet computing has threats that cannot be completely avoided.
However, with the progress in technology, cloud computing is becoming more secure than normal computing, as cloud service providers offer disaster recovery plans and load balancing that can be triggered in case of emergencies. In normal internet computing, data is more prone to loss and the recovery options are limited, so we can reasonably conclude that, with current technology, cloud computing is somewhat more secure than normal internet computing.
9. A failover system is used to increase the reliability and availability of IT resources by using clustering technology to provide redundant implementations. It acts as a backup operational mode in which the functions of a system component are assumed by a secondary component when the primary becomes unavailable due to failure or downtime. It is mostly used to make a system fault tolerant and constantly available, and it can span more than one geographical region so that each location hosts a redundant implementation of the same IT resource. The two types of arrangements provided to ensure continuity are as follows-
a. Companion-server failover (active-active)-
Two Adaptive Servers are configured as companion servers, each loaded with an independent workload. These companions run on the primary and secondary nodes, respectively, and function as individual servers until one fails over.
The secondary companion takes over the network components and connections from the primary companion when failover happens.
During failback, the primary companion takes the network connections back.
Clients can connect to the secondary companion to submit unfinished transactions during failover.
b. Single-server failover (active-passive)-
A single Adaptive Server runs on either the primary or the secondary node: on the primary node before a failover and on the secondary node after failover.
When the system fails over, the Adaptive Server and its associated resources are relocated and restarted on the secondary node.
Failback is not required; if desired, relocating the Adaptive Server back to the primary node can be considered.
During failover and failback, client connections are routed back to the Adaptive Server.
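The client-connection side of failover can be sketched as a simple retry against the secondary node; `connect()` here is a hypothetical stand-in for a real database driver call:

```python
# Sketch of client connection failover: the client first tries the
# primary node and, if the connection fails, reconnects to the
# secondary so unfinished transactions can be resubmitted there.

def connect(node, available):
    # Stand-in for a real driver call; raises when the node is down.
    if not available[node]:
        raise ConnectionError(f"{node} is down")
    return f"connected to {node}"

def connect_with_failover(primary, secondary, available):
    try:
        return connect(primary, available)
    except ConnectionError:
        # Failover path: route the client connection to the secondary.
        return connect(secondary, available)

status = {"primary": False, "secondary": True}  # primary has failed
print(connect_with_failover("primary", "secondary", status))
# prints "connected to secondary"
```

Real high-availability drivers implement the same pattern with a retry list of nodes and transparent reconnection.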
10. The cloud services offered by the major cloud service providers can be compared as follows-
a. Amazon AWS is a comprehensive cloud computing platform provided by Amazon.com. Amazon has diversified its offerings in order to reduce outages and ensure the strength of the system. AWS's offerings include the following-
• Cloud Drive
• Cloud Search
• Simple Storage Service
• DynamoDB
• Elastic Compute Cloud (EC2)
All the offerings are billed per usage.
b. Microsoft Azure is Microsoft's public cloud computing platform. It provides various services for analytics, computing, storage and networking. Some of the products offered by MS Azure are as follows-
• Web And Mobile
• Data Storage
• Hybrid integration
• Internet of Things
c. Google offers the Google Cloud Platform. Google cloud services can be accessed by cloud administrators and other IT professionals over the public internet. The core cloud computing services offered by Google are-
• Google Compute Engine
• Google Cloud Storage
• Google Container Engine
• Google App Engine
The Cloud services can be compared as follows-
• Compute
AWS allows users to configure virtual machines using pre-defined machine configurations or custom AMIs. The consumer gets to select the size, memory, power, capacity, and number of VMs, and can also choose the regions and availability zones in which to launch them. AWS also provides load balancing and auto scaling.
Google Compute Engine lets consumers launch virtual machines in various regions and groups. Google also provides live migration of VMs, faster persistent disks, and instances with more cores than AWS.
MS Azure users choose a Virtual Hard Disk (VHD), equivalent to Amazon's AMI, to create a VM. A VHD can be pre-defined by Microsoft or customized by the user. Users need to specify the number of cores and the amount of memory.
• Storage and databases
AWS provides users with temporary storage that is allocated when an instance is started and destroyed when the instance is terminated. AWS also provides block storage that can be attached to any instance, as well as object storage through the S3 service. It also supports NoSQL databases and big data workloads.
Google Cloud Platform provides both temporary storage and persistent disks. Google Cloud Storage is used for object storage, and relational databases are supported through Google Cloud SQL.
Azure uses ephemeral storage or page blobs for VM-based volumes. Both relational and NoSQL databases are supported by Azure through Windows Azure Table.
• Pricing
AWS charges its consumers by rounding up the number of hours used; the minimum usage is 1 hour. There are three pricing models provided by Amazon-
On demand- Customers pay based on the use
Reserved- Customers reserve instances and pay an upfront cost
Spot- Customers can bid for any extra available capacity
Google charges for instances by rounding up the number of minutes used, with a minimum usage of 10 minutes. Sustained-use discount pricing can apply when a particular instance runs for a large percentage of the month.
Azure offers short term commitments with discounts. It charges the customers by rounding up for the number of minutes on demand.
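To make the difference between these billing granularities concrete, here is a small sketch comparing the hour-rounded and minute-rounded models described above for the same 95-minute workload (the hourly rate is made up):

```python
import math

# Compare the billing granularities described above for one workload.
# The $0.10/hour rate is a made-up illustration, not a real price.

def aws_cost(minutes, hourly_rate):
    # As described: usage rounds up to whole hours, minimum 1 hour.
    hours = max(1, math.ceil(minutes / 60))
    return hours * hourly_rate

def gce_cost(minutes, hourly_rate):
    # As described: per-minute billing with a 10-minute minimum.
    billed_minutes = max(10, math.ceil(minutes))
    return billed_minutes * hourly_rate / 60

print(aws_cost(95, 0.10))            # 95 min billed as 2 hours -> 0.2
print(round(gce_cost(95, 0.10), 4))  # 95 min billed as minutes -> 0.1583
```

For short-lived or bursty instances, the rounding rule can matter more than the headline hourly rate.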
• Networking
Amazon uses Virtual Private Cloud (VPC) to allow users to create an isolated network of VMs in the cloud. Using a VPC, users can create network gateways, subnets, and private IP tables.
A Google Compute Engine instance is based on a single network, with pre-defined network addresses and gateway addresses.
Azure uses Virtual Network similar to Amazon's VPC and works in a similar pattern.