
Essay: Design of a virtual assistant system

Subject area: Computer science essays. Published: 16 October 2022; last modified: 22 July 2024.

Our project enables users to do many things with a single application: checking the weather, ordering a pizza, booking a hotel or a flight, finding the lyrics of a song, or looking up the meaning of a word together with an example sentence. It also keeps users in touch with what is happening in the real world through handy features such as news. Users can make small talk with the bot, and all of its features are reachable through a persistent menu beside the text entry box. Furthermore, because the bot is a Facebook Messenger virtual assistant, it can chat with multiple users at the same time.

“The need of conversational agents has become acute with the widespread use of personal machines with the wish to communicate and the desire of their makers to provide natural language interfaces” (Wilks, 1999). Just as people use language for human communication, people want to use their language to communicate with computers. Zadrozny et al. (2000) agreed that the best way to facilitate Human Computer Interaction (HCI) is by allowing users “to express their interest, wishes, or queries directly and naturally, by speaking, typing, and pointing”. This was the driver behind the development of the virtual assistant. A virtual assistant system is a software program that interacts with users using natural language. Different terms have been used for such a system: machine conversation system, virtual agent, dialogue system, and chatterbot.

The purpose of a virtual assistant system is to simulate a human conversation; the virtual assistant architecture integrates a language model and computational algorithms to emulate informal chat between a human user and a computer using natural language. Initially, developers built virtual assistants for fun and used simple keyword-matching techniques to find a match for a user input, as in ELIZA (Weizenbaum, 1966, 1967). The seventies and eighties, before the arrival of graphical user interfaces, saw rapid growth in text and natural-language interface research, e.g. Cliff and Atwell (1987) and Wilensky et al. (1988). Since that time, a range of new virtual assistant architectures has been developed, such as MegaHAL (Hutchens, 1996), CONVERSE (Batacharia et al., 1999), ELIZABETH (Abu Shawar and Atwell, 2002), HEXBOT (2004) and ALICE (2007). With improvements in data-mining and machine-learning techniques, better decision-making capabilities, the availability of corpora, and robust linguistic annotation/processing tools and standards such as XML and its applications, virtual assistants have become more practical, with many commercial applications (Braun, 2003).

3.1 EXISTING SYSTEM

Online software tools containing virtual assistants already exist, but the scope of these applications is very constrained and limited. Embedding additional intelligence into the messaging services of virtual assistants has not been widely adopted.

DISADVANTAGES:

• No single standalone application

• Lack of memory storage allocation

• Processing on client side

• Limitations by configuration

• Separate application for each service

3.2 PROPOSED SYSTEM

This project is aimed at developing an intelligent system that users can access online through internet messaging. Its primary function is reporting the weather forecast. It uses contextual analysis to interpret information and gradually learns over time.

ADVANTAGES

• It’s a standalone application.

• Proper memory allocation without any wastage

• Processing on server side.

• No limitations by configuration.

• Single application for all uses.

3.3 SOFTWARE USED:

3.3.1 NODE JS

Node.js is a server-side platform built on Google Chrome’s JavaScript engine (the V8 engine). Node.js was developed by Ryan Dahl in 2009; at the time of writing, its latest version was v0.10.36. The definition of Node.js supplied by its official documentation is as follows: “Node.js is a platform built on Chrome’s JavaScript runtime for easily building fast and scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.” Node.js is an open-source, cross-platform runtime environment for developing server-side and networking applications. Node.js applications are written in JavaScript and can be run within the Node.js runtime on OS X, Microsoft Windows, and Linux.

Node.js also provides a rich library of various JavaScript modules which simplifies the development of web applications using Node.js to a great extent.

3.3.1.1 Features of Node.js

Following are some of the important features that make Node.js the first choice of software architects.

● Asynchronous and Event Driven − All APIs of the Node.js library are asynchronous, that is, non-blocking. It essentially means a Node.js based server never waits for an API to return data. The server moves to the next API after calling it, and Node.js’s event notification mechanism helps the server get a response from the previous API call.

● Very Fast − Being built on Google Chrome’s V8 JavaScript Engine, Node.js library is very fast in code execution.

● Single Threaded but Highly Scalable − Node.js uses a single-threaded model with event looping. The event mechanism helps the server respond in a non-blocking way and makes it highly scalable, as opposed to traditional servers that create a limited number of threads to handle requests. Node.js uses a single-threaded program, and the same program can serve a much larger number of requests than traditional servers such as Apache HTTP Server.

● No Buffering − Node.js applications never buffer any data. These applications simply output the data in chunks.

● License − Node.js is released under the MIT license.

Following are the areas where Node.js is proving itself as a perfect technology partner.

● I/O bound Applications

● Data Streaming Applications

● Data Intensive Real-time Applications (DIRT)

● JSON APIs based Applications

● Single Page Applications

3.3.1.2 Local Environment Setup

To set up your environment for Node.js, you need the following two pieces of software on your computer: (a) a text editor and (b) the Node.js binary installables.

Text Editor

A text editor will be used to type your program. Examples include Windows Notepad, the OS Edit command, Brief, Epsilon, Emacs, and vim or vi.

The name and version of the text editor vary across operating systems. For example, Notepad is used on Windows, while vim or vi can be used on Windows as well as Linux or UNIX.

The files you create with your editor are called source files and contain program source code. The source files for Node.js programs are typically named with the extension ".js".

Before starting your programming, make sure you have one text editor in place and you have enough experience to write a computer program, save it in a file, and finally execute it.

The Node.js Runtime

The source code written in a source file is simply JavaScript. The Node.js interpreter is used to interpret and execute your JavaScript code.

The Node.js distribution comes as a binary installable for SunOS, Linux, Mac OS X, and Windows operating systems with the 32-bit (386) and 64-bit (amd64) x86 processor architectures.

Following section guides you on how to install Node.js binary distribution on various OS.

Node Package Manager (NPM) provides two main functionalities −

● Online repository for Node.js packages/modules, which is searchable on search.nodejs.org

● Command-line utility to install Node.js packages and to do version management and dependency management of Node.js packages.

NPM comes bundled with Node.js installables after version v0.6.3. To verify this, open a console, type the following command, and see the result −

$ npm --version
2.7.1

There is a simple syntax to install any Node.js module −

$ npm install <Module Name>

Now you can use this module in your js file as following −

var express = require('express');

By default, NPM installs any dependency in local mode. Here, local mode refers to installing the package in the node_modules directory inside the folder where the Node application is present. Locally deployed packages are accessible via the require() method. For example, when we installed the express module, it created a node_modules directory in the current directory and installed the express module there. Alternatively, you can use the npm ls command to list all locally installed modules.

Globally installed packages/dependencies are stored in a system directory. Such dependencies can be used by the CLI (command-line interface) functions of any Node.js application, but cannot be imported using require() in a Node application directly. Now let's try installing the express module using global installation.

$ npm install express -g

This will produce a similar result but the module will be installed globally. Here, the first line shows the module version and the location where it is getting installed.

package.json is present in the root directory of any Node application/module and is used to define the properties of a package. Let's open the package.json of the express package present in node_modules/express/.

Attributes of Package.json

● name − name of the package

● version − version of the package

● description − description of the package

● homepage − homepage of the package

● author − author of the package

● contributors − name of the contributors to the package

● dependencies − list of dependencies. NPM automatically installs all the dependencies mentioned here in the node_module folder of the package.

● repository − repository type and URL of the package

● main − entry point of the package

● keywords − keywords
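
As a sketch, a hypothetical package.json for this project might combine these attributes as follows (all names and values are invented for illustration):

```json
{
  "name": "weather-bot",
  "version": "1.0.0",
  "description": "A Facebook Messenger virtual assistant",
  "homepage": "https://example.com/weather-bot",
  "author": "Jane Developer",
  "main": "index.js",
  "keywords": ["chatbot", "weather", "messenger"],
  "dependencies": {
    "express": "^4.0.0"
  }
}
```

NPM reads the dependencies section of this file and installs each listed package into the node_module folder of the package.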

Before creating an actual "Hello, World!" application using Node.js, let us see the components of a Node.js application. A Node.js application consists of the following three important components −

● Import required modules − We use the require directive to load Node.js modules.

● Create server − A server which will listen to client's requests, similar to Apache HTTP Server.

● Read request and return response − The server created in an earlier step will read the HTTP request made by the client which can be a browser or a console and return the response.

Creating Node.js Application

Step 1 – Import Required Module

We use the require directive to load the http module and store the returned HTTP instance into an http variable as follows −

var http = require("http");

Step 2 – Create Server

We use the created http instance and call the http.createServer() method to create a server instance, and then we bind it to port 8081 using the listen method associated with the server instance. Pass it a function with the parameters request and response, and write the sample implementation to always return "Hello World".

The above code is enough to create an HTTP server which listens, i.e., waits for a request on port 8081 of the local machine.

Step 3 – Testing Request & Response

Let’s put steps 1 and 2 together in a file called main.js and start our HTTP server by running node main.js.

Express Overview

Express is a minimal and flexible Node.js web application framework that provides a robust set of features for developing web and mobile applications. It facilitates the rapid development of Node-based web applications. Following are some of the core features of the Express framework −

● Allows setting up middleware to respond to HTTP requests.

● Defines a routing table which is used to perform different actions based on the HTTP method and URL.

● Allows dynamically rendering HTML pages based on arguments passed to templates.

Installing Express

Firstly, install the Express framework using NPM so that it can be used to create a web application from the node terminal.

$ npm install express --save

The above command saves the installation locally in the node_modules directory and creates a directory express inside node_modules. You should install the following important modules along with express −

● body-parser − This is a node.js middleware for handling JSON, Raw, Text and URL encoded form data.

● cookie-parser − Parse Cookie header and populate req.cookies with an object keyed by the cookie names.

● multer − This is a node.js middleware for handling multipart/form-data.

Express application uses a callback function whose parameters are request and response objects.

Following is a very basic Express app which starts a server and listens on port 3000 for connection.

This app responds with Hello World! for requests to the homepage. For every other path, it will respond with a 404 Not Found.

● Request Object − The request object represents the HTTP request and has properties for the request query string, parameters, body, HTTP headers, and so on.

● Response Object − The response object represents the HTTP response that an Express app sends when it gets an HTTP request.

You can print req and res objects which provide a lot of information related to HTTP request and response including cookies, sessions, URL, etc.

Git is a distributed revision control and source code management system with an emphasis on speed. Git was initially designed and developed by Linus Torvalds for Linux kernel development. Git is free software distributed under the terms of the GNU General Public License version 2. A version control system (VCS) is software that helps software developers work together and maintain a complete history of their work.

Listed below are the functions of a VCS:

● Allows developers to work simultaneously.

● Does not allow overwriting each other’s changes.

● Maintains a history of every version.

Following are the types of VCS:

● Centralized version control system (CVCS).

● Distributed/Decentralized version control system (DVCS).

Distributed Version Control System

Centralized version control system (CVCS) uses a central server to store all files and enables team collaboration. But the major drawback of CVCS is its single point of failure, i.e., failure of the central server. Unfortunately, if the central server goes down for an hour, then during that hour, no one can collaborate at all. And in the worst case, if the disk of the central server gets corrupted and a proper backup has not been taken, you will lose the entire history of the project. Here, the distributed version control system (DVCS) comes into the picture.

DVCS clients not only check out the latest snapshot of the directory but they also fully mirror the repository. If the server goes down, then the repository from any client can be copied back to the server to restore it. Every checkout is a full backup of the repository. Git does not rely on the central server and that is why you can perform many operations when you are offline. You can commit changes, create branches, view logs, and perform other operations when you are offline. You require network connection only to publish your changes and take the latest changes.

Advantages of Git

● Free and open source Git is released under the GPL open source license. It is available freely over the internet. You can use Git to manage proprietary projects without paying a single penny. As it is open source, you can download its source code and also modify it according to your requirements.

● Fast and small As most operations are performed locally, Git gives a huge benefit in terms of speed. Git does not rely on a central server, so there is no need to interact with a remote server for every operation. The core part of Git is written in C, which avoids the runtime overheads associated with other high-level languages. Though Git mirrors the entire repository, the size of the data on the client side is small, which illustrates Git's efficiency at compressing and storing data on the client side.

● Implicit backup The chances of losing data are very rare when there are multiple copies of it. Data present on any client side mirrors the repository, hence it can be used in the event of a crash or disk corruption.

● Security Git uses a cryptographic hash function, SHA-1 (Secure Hash Algorithm 1), to name and identify objects within its database. Every file and commit is checksummed and retrieved by its checksum at the time of checkout. This implies that it is impossible to change a file, date, commit message, or any other data in the Git database without Git detecting the change.

● No need of powerful hardware In case of CVCS, the central server needs to be powerful enough to serve requests of the entire team. For smaller teams, it is not an issue, but as the team size grows, the hardware limitations of the server can be a performance bottleneck. In case of DVCS, developers don’t interact with the server unless they need to push or pull changes. All the heavy lifting happens on the client side, so the server hardware can be very simple indeed.

● Easier branching In CVCS, creating a new branch copies all the code to the new branch, which is time-consuming and inefficient; deletion and merging of branches in CVCS is likewise complicated and slow. Branch management with Git, by contrast, is very simple, because Git uses a cheap-copy mechanism: it takes only a few seconds to create, delete, and merge branches.

General workflow is as follows:

● You clone the Git repository as a working copy.

● You modify the working copy by adding/editing files.

● If necessary, you also update the working copy by taking other developers' changes.

● You review the changes before commit.

● You commit changes. If everything is fine, then you push the changes to the repository.

● After committing, if you realize something is wrong, then you correct the last commit and push the changes to the repository.

Shown below is the pictorial representation of the work-flow.
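
In command form, the same workflow can be sketched as follows, run against a throwaway local "remote" so the example is self-contained (all repository and file names are hypothetical):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare shared.git        # stand-in for the team's server repository
git clone -q shared.git work         # 1. clone the repository as a working copy
cd work
git config user.email dev@example.com
git config user.name "Dev"
echo "console.log('hi');" > app.js   # 2. modify the working copy
git add app.js                       # 3. review and stage the change
git commit -q -m "add app.js"        # 4. commit the change
git push -q origin HEAD              # 5. push the changes to the repository
```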

A Git repository contains the history of a collection of files starting from a certain directory. The process of copying an existing Git repository via the Git tooling is called cloning. After cloning a repository, the user has the complete repository with its history on his local machine. Of course, Git also supports the creation of new repositories.

If you want to delete a Git repository, you can simply delete the folder which contains the repository. If you clone a Git repository, by default, Git assumes that you want to work in this repository as a user. Git also supports the creation of repositories targeting the usage on a server.

● bare repositories are supposed to be used on a server for sharing changes coming from different developers. Such repositories do not allow the user to modify files locally or to create new versions of the repository based on those modifications.

● non-bare repositories target the user. They allow you to create new changes through modification of files and to create new versions in the repository. This is the default type which is created if you do not specify any parameter during the clone operation.

A local non-bare Git repository is typically called local repository.

A local repository provides at least one collection of files which originate from a certain version of the repository. This collection of files is called the working tree. It corresponds to a checkout of one version of the repository with potential changes done by the user.

The user can change the files in the working tree by modifying existing files and by creating and removing files. A file in the working tree of a Git repository can have different states. These states are the following:

● untracked: the file is not tracked by the Git repository. This means that the file was never staged nor committed.

● tracked: committed and not staged

● staged: staged to be included in the next commit

● dirty / modified: the file has changed but the change is not staged

After doing changes in the working tree, the user can add these changes to the Git repository or revert these changes.

After modifying your working tree you need to perform the following two steps to persist these changes in your local repository:

● add the selected changes to the staging area (also known as index) via the git add command

● commit the staged changes into the Git repository via the git commit command

This process is depicted in the following graphic.
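
The file states and the add/commit cycle described above can be traced in a scratch repository (file names are hypothetical):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email dev@example.com
git config user.name "Dev"
echo "first draft" > notes.txt       # untracked: never staged nor committed
git add notes.txt                    # staged: included in the next commit
git commit -q -m "add notes"         # tracked: committed, nothing staged
echo "second draft" >> notes.txt     # dirty/modified: changed but not staged
git status --short                   # shows " M notes.txt"
```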


The git add command stores a snapshot of the specified files in the staging area. It allows you to incrementally modify files, stage them, modify and stage them again until you are satisfied with your changes.

Some tools and Git users prefer the term index over staging area. Both terms mean the same thing.

After adding the selected files to the staging area, you can commit these files to add them permanently to the Git repository. Committing creates a new persistent snapshot (called a commit or commit object) of the staging area in the Git repository. A commit object, like all objects in Git, is immutable. The staging area keeps track of the snapshots of the files until the staged changes are committed.

For committing the staged changes you use the git commit command.

If you commit changes to your Git repository, you create a new commit object in the Git repository.

3.3.3 HEROKU

Heroku is a cloud Platform-as-a-Service (PaaS), supporting several programming languages, that is used as a web application deployment model. Heroku, one of the first cloud platforms, has been in development since June 2007, when it supported only the Ruby programming language; it now supports Java, Node.js, Scala, Clojure, Python, PHP, and Go. For this reason, Heroku is said to be a polyglot platform, as it lets the developer build, run, and scale applications in a similar manner across all these languages. Heroku was acquired by Salesforce.com in 2010.

Heroku was initially developed by James Lindenbaum, Adam Wiggins, and Orion Henry to support projects that were compatible with the Ruby platform known as Rack. The prototype development took around six months. Later on, Heroku faced drawbacks because of a lack of market customers, as many app developers used their own tools and environments. In January 2009, a new platform was launched, built almost from scratch after a three-month effort. In October 2009, Byron Sebastian joined Heroku as CEO. On December 8, 2010, Salesforce.com acquired Heroku as a wholly owned subsidiary. On July 12, 2011, Yukihiro "Matz" Matsumoto, the chief designer of the Ruby programming language, joined the company as Chief Architect, Ruby. That same month, Heroku added support for Node.js and Clojure. On September 15, 2011, Heroku and Facebook introduced Heroku for Facebook. At present, Heroku supports MongoDB and Redis databases in addition to its standard PostgreSQL.

The name "Heroku" is a merger of "heroic" and "haiku". The Japanese theme is a nod to Matz for creating Ruby. The creators of Heroku did not want the name of their project to have a particular meaning, in Japanese or any other language, and so chose to invent a name.

Applications run on Heroku use the Heroku DNS server to direct requests to the application domain (typically "applicationname.herokuapp.com"). The application containers, or dynos, are spread across a "dyno grid" which consists of several servers. Heroku's Git server handles application repository pushes from permitted users.

The working can be summarized into two major categories:

Deploy

● The main contents of a deployment are the source code, related dependencies if they exist, and a Procfile for the process commands.

● The application is sent to Heroku using either of the following: Git, GitHub, Dropbox, or via an API.

● There are packages which take the application along with all its dependencies and the language runtime, and produce slugs. These are known as buildpacks and are the means for the slug compilation process.

● A slug is a combination/bundle of the source code, built dependencies, the runtime, and compiled/generated output of the build system which is ready for execution.

● Next is the Config vars which contain the customizable configuration data that can be changed independently of the source code.

● Add-ons are third party, specialized, value-added cloud services that can be easily attached to an application, extending its functionality.

● A release is a combination of a slug (the application), config vars and add-ons.

● Heroku maintains a log known as the append-only ledger of releases the developer makes.

Runtime

● The main units which provide the run environment are the dynos, which are isolated, virtualized Unix containers.

● The application’s dyno formation is the total number of currently-executing dynos, divided between the various process types the developer has scaled.

● The dyno manager is responsible for managing dynos across all applications running on Heroku.

● Applications that use the free dyno type will sleep after 30 minutes of inactivity. Scaling to multiple web dynos, or a different dyno type, will avoid this.

● One-off Dynos are temporary dynos that run with their input/output attached to the local terminal. They’re loaded with the latest release.

● Each dyno gets its own ephemeral filesystem with a fresh copy of the most recent release. It can be used as a temporary scratchpad, but changes to the filesystem are not visible to other dynos.

● Logplex automatically collates log entries from all the running dynos of the app, as well as other components such as the routers, providing a single source of activity.

● Scaling an application involves varying the number of dynos of each process type.
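Scaling, for example, is a single operation in the Heroku CLI; `ps:scale` and `ps` are real commands, though the process names and counts below are illustrative:

```shell
$ heroku ps:scale web=2 worker=1   # run two web dynos and one worker dyno
$ heroku ps                        # inspect the current dyno formation
```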

A more detailed description of the architecture covers the following:

● Define the application: An application consists of its source code together with a description of its dependencies; Heroku's build framework converts these into a runnable application. The dependency mechanisms vary across languages: for Ruby the developer uses a Gemfile, in Python a requirements.txt, in Node.js a package.json, in Java a pom.xml, and so on.
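For a Python application, for instance, the dependency manifest is a plain requirements.txt at the repository root; the packages and version pins below are purely illustrative:

```
flask==2.3.3
gunicorn==21.2.0
requests==2.31.0
```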

● Knowing what to execute: Developers don’t need to make many changes to an application in order to run it on Heroku. One requirement is informing the platform as to which parts of the application are runnable. This is done in a Procfile, a text file that accompanies the source code. Each line of the Procfile declares a process type — a named command that can be executed against the built application.
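A Procfile for a hypothetical Python web application might look like this; the process names (web, worker) and the commands themselves are assumptions for illustration:

```
web: gunicorn app:app --bind 0.0.0.0:$PORT
worker: python worker.py
```

Each line pairs a process type with the command Heroku should run for dynos of that type.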

● Deploying applications: Deployment on Heroku is primarily done through Git. The local Git repository in which the application was created gets a new remote, typically named heroku, so deploying the application amounts to running the git push command.

There are many other ways of deploying applications too. For example, developers can enable GitHub integration so that each new pull request is associated with its own new application, which enables all sorts of continuous integration scenarios. Dropbox Sync lets developers deploy the contents of Dropbox folders to Heroku, or the Heroku API can be used to build and release apps.

Deployment, then, is about moving the application from a local system to Heroku.
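A typical first deployment uses real Git and Heroku CLI commands; the commit message is arbitrary, and the app name Heroku generates will differ each time:

```shell
$ git init && git add . && git commit -m "first version"
$ heroku create            # provisions an app and adds a remote named "heroku"
$ git push heroku master   # uploads the code and triggers the build
```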

● Building applications: The mechanism for the build is usually different for different languages, but follows the consistent pattern of retrieving the specified dependencies, and creating any necessary assets (whether as simple as processing style sheets or as complex as compiling code). The source code for the application, together with the fetched dependencies and output of the build phase such as generated assets or compiled code, as well as the language and framework, are assembled into a slug.

● Running applications on dynos: Applications on Heroku are run using a command specified in the Procfile, on a dyno that has been preloaded with a prepared slug (more precisely, with the release, which extends the slug with configuration variables and add-ons).

It\’s like running dyno as a lightweight, secure, virtualized Unix container that contains the application slug in its file system. Heroku will boot a dyno, load it with the slug, and execute the command associated with the web process type in the Procfile. Deploying a new version of an application kills all the currently running dynos and starts new ones (with the new release) to replace them, preserving the existing dyno formation.

● Configurations: Configuration is kept not within the code but in a separate place outside the source, so it can be customized independently of the code currently being run. The configuration for an application is stored in config vars. At runtime, all of the config vars are exposed as environment variables so they can be easily read programmatically. A Ruby application deployed with an ENCRYPTION_KEY config var, for example, can access it by calling ENV["ENCRYPTION_KEY"]. All dynos in an application have access to exactly the same set of config vars at run-time.
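The same environment-variable mechanism works in any language. A minimal Python sketch, reusing the ENCRYPTION_KEY name from the example above:

```python
import os

# On Heroku, a config var set with `heroku config:set ENCRYPTION_KEY=...`
# is exposed to every dyno as an ordinary environment variable.
def get_encryption_key():
    # Return None rather than raising if the var has not been configured.
    return os.environ.get("ENCRYPTION_KEY")

print("key configured" if get_encryption_key() else "key missing")
```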

● Releases: The combination of slug and configuration is called a release. Every time a new version of an application is deployed, a new slug is created and a new release is generated. Because Heroku keeps a store of the previous releases of the application, it is easy to roll back and deploy a previous release. A release, then, is the mechanism behind how Heroku lets the developer modify the configuration of the application (the config vars) independently of the application source (stored in the slug): the release binds them together. Whenever the developer changes a set of config vars associated with the application, a new release is generated.
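The release ledger and rollback are exposed through real Heroku CLI commands; the version numbers and the LOG_LEVEL config var below are illustrative:

```shell
$ heroku releases                     # append-only list: v1, v2, v3, ...
$ heroku config:set LOG_LEVEL=debug   # any config change creates a new release
$ heroku rollback v2                  # redeploy an earlier release
```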

● Dyno manager: The dyno manager maintains and operates the dynos it creates. Because Heroku manages and runs applications, there is no need to manage operating systems or other internal system configuration. One-off dynos can be run with their input/output attached to the local terminal; these can also be used to carry out admin tasks that modify the state of shared resources, for example database configuration, perhaps periodically through a scheduler.
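One-off dynos are started with the real `heroku run` command; the admin task shown here is a hypothetical example:

```shell
$ heroku run bash                      # interactive one-off dyno
$ heroku run python manage.py migrate  # hypothetical one-off admin task
```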

● Add-ons: Dynos do not share file state, and so add-ons that provide some kind of storage are typically used as a means of communication between dynos in an application. For example, Redis or Postgres could be used as the backing mechanism in a queue; dynos of the web process type then push job requests onto the queue, and dynos of the queue process type pull job requests from it. Add-ons are associated with an application, much like config vars, and so the earlier definition of a release needs to be refined: a release of the application is not just the slug and config vars; it is the slug and config vars together with the set of provisioned add-ons.
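The web/queue pattern described above can be sketched in a few lines of Python; here an in-process deque stands in for the Redis or Postgres add-on, and the function and job names are illustrative:

```python
from collections import deque

# Shared backing store (a stand-in for a Redis or Postgres add-on).
job_queue = deque()

def web_dyno_enqueue(job):
    # A web dyno pushes a job request onto the shared queue.
    job_queue.append(job)

def queue_dyno_work():
    # A queue dyno pulls job requests until the queue is empty.
    done = []
    while job_queue:
        done.append(job_queue.popleft())
    return done

web_dyno_enqueue("resize-image-42")
web_dyno_enqueue("send-email-7")
print(queue_dyno_work())  # → ['resize-image-42', 'send-email-7']
```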

● Logging and monitoring: Heroku treats logs as streams of time-stamped events, and collates the streams produced by all of the processes running in all dynos, as well as the Heroku platform components, into Logplex, a high-performance, real-time system for log delivery. Logplex keeps only a limited buffer of log entries, for performance reasons.
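Logplex's limited buffer behaves like a fixed-size ring: once it is full, the oldest entries are discarded as new ones arrive. A Python sketch of that behaviour (the class, capacity, and entry format are all assumptions for illustration):

```python
from collections import deque

class LogBuffer:
    """Keeps only the most recent maxlen entries, like Logplex's buffer."""
    def __init__(self, maxlen=1500):
        self.entries = deque(maxlen=maxlen)

    def collate(self, source, line):
        # Merge an entry from any dyno or router into a single stream.
        self.entries.append(f"{source}: {line}")

    def tail(self, n):
        return list(self.entries)[-n:]

buf = LogBuffer(maxlen=3)
for i in range(5):
    buf.collate("web.1", f"request {i}")
print(buf.tail(3))  # → ['web.1: request 2', 'web.1: request 3', 'web.1: request 4']
```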

● HTTP routing: Heroku’s HTTP routers distribute incoming requests for the application across the running web dynos, using a random selection algorithm to load-balance HTTP/HTTPS requests. The routers also support multiple simultaneous connections, as well as timeout handling.
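The random-selection policy is simple enough to sketch. This Python fragment (the dyno names and route function are illustrative, not Heroku's implementation) picks a web dyno uniformly at random for each request:

```python
import random

def route(request, web_dynos):
    # Dispatch the request to one running web dyno, chosen uniformly
    # at random, mirroring the router's selection policy.
    dyno = random.choice(web_dynos)
    return dyno, request

dynos = ["web.1", "web.2", "web.3"]
chosen, _ = route("GET /", dynos)
print(chosen)  # one of web.1, web.2, web.3
```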

When you create an app on Heroku, it deploys to the Cedar Stack, an online runtime environment that supports apps built in Ruby, Java, Node.js, Scala, Clojure, Python and PHP: all the programming languages that Heroku supports.

The current version of the Cedar Stack is Celadon Cedar. It supports hundreds of thousands of developer apps. When you deploy a new app, Heroku assigns it a unique name based on a natural theme, like “calm-springs3345” or “desolate-cliffs1221”. When it comes to your app, think of Heroku as home to a vast array of virtual computers, or “instances,” that can be powered up and down. Heroku calls these instances dynos; these are lightweight containers that each run a single command for your app. In my experience as a beginner building apps that only perform one action, I’ve never had more than one dyno per app.

It turns out that a lot of apps require the same actions. Heroku keeps developers from reinventing the wheel with the Addon Store, which provides actions you can assign to dynos for free or, sometimes, a fee. I am using a free addon called Heroku Scheduler, which prompts my apps to become active once every hour.
