Systems design is the process of defining the architecture, modules and data for a system to satisfy specified requirements. It can also be seen as the application of systems theory to product development. It describes the basic system design goals, functionality and architecture. It may include a high-level description of the approach used to develop the design, as well as high-level descriptions of the system hardware, software, security and other components. Depending on the complexity of the system, it may also include component or contextual diagrams of the system and its components.
The UML has become the standard language in object-oriented analysis and design. It is widely used for modeling software systems and is increasingly used for designing non-software systems and organizations.
User Interface Design:
We are redefining how we interact with machines and how they interact with us. Advances in AI have made new human-to-machine and machine-to-human interactions possible. Traditional interfaces are becoming simpler, more abstract and hidden; they become an ambient part of everything. The best UI is no UI.
Everyone is getting in the game, but few have cracked the code. We must change the way we think.
Cross-train your team: Our roles as technologists, UX designers, copywriters and designers have to change. What and how we build, scrolling pages, buttons, taps and clicks, is based on aging concepts. These concepts are familiar, proven and will remain useful. But we need a new user interaction model for devices that “listen”, “feel” and “talk” to us.
No decision trees: More of the UI is starting to reside behind the scenes. On one hand, this means more responsibility to create transparent experiences, which tend to be based on hidden rules and algorithms. On the other hand, it gives us incredible latitude for creating open-ended experiences in which only important information is presented to the user.
Your interface is showing: We have started talking to our machines, not with commands, menus and quirky key combinations, but using our own human languages. Natural language processing has seen incredible advances, and we finally don’t need to be a machine to talk to one. We chat with the latest chatbot, search using Google Voice or talk to Siri. The accuracy of speech recognition has improved to an impressive 97 percent.
Contextual awareness: For an invisible UI to become a reality, the system needs to know more about the user and the situation. Contextual awareness today is somewhat limited. For example, when asking for directions via Google Maps, the system knows your current location and will return a different result if you are in New York versus California.
It covers how you organize, manage and manipulate the data. Data Design is concerned with how the data is represented and stored within the system. Finally, Process Design is concerned with how data moves through the system, and with how and where it is validated, secured and transformed as it flows into, through and out of the system. At the end of this phase, documentation describing the three sub-tasks is produced and made available for use in the next phase.
Physical design, in this context, does not refer to the tangible physical design of an information system. To use an analogy, a personal computer's physical design involves input via a keyboard, processing within the CPU, and output via a printer, monitor, etc. It does not concern the actual layout of the tangible hardware, which for a PC would be a monitor, CPU, motherboard, hard drive, modems, etc. It involves a detailed design of the user and product database structures and processing.
System Design Specification:
The System Design Specification (SDS) is a document that contains all of the information needed to develop the system. Unlike the SRS, the target audience of this document is the implementation and testing teams, so the SDS is written at a more technical level. This document does not restrict a team to a specific format; it merely sets up guidelines showing what is required. Within each section of the document, students are encouraged to use their past experience and organizational abilities to come up with the best method of describing their particular system.
The major sections of the SDS should be:
1. Introduction: The introduction should be written during the architectural design phase. It should provide an overview of the system, including a short description of each of the major functions or operations to be provided by the system.
2. Project Scope: This should be written during the architectural design phase. It should restate the system objectives and project constraints. Also include in this section a discussion of any other limitations that the developers of the system need to know in order to perform their duties.
3. Design Description: It contains the final, verified versions of the graphical representations of the architectural design. These diagrams should be derived from the system diagram developed in the System Description phase.
4. Component and Process Design: It should contain a brief narrative which describes the function of each hardware component and software process shown on the structure charts. The format for this section will be adapted from that of the process description of the SRS.
5. Data Design: It should consist of the final, verified version of the system data dictionary. This dictionary should be derived from that developed in the Data Flow Description of the SRS. It should be adapted during the architectural design phase and substantially enhanced during the detailed design phase.
6. User Displays and Output Reports: It should be completed during the architectural design phase. It will contain descriptions of all the input and output displays available in the system.
7. System Files: This should be completed during the architectural design phase. Here, a file may be a logical software file or a physical hardware file. Each file used by the system should be documented with this information.
8. Prototype Description: This section defines the prototype that the team will implement and test during the second quarter of the course. It should be completed during the architectural design phase.
9. Prototype Test Procedures: This section should develop guidelines for completely testing the prototype described above. Functional tests, which verify that implemented processes are consistent with their original specifications, should be included.
10. Prototype Analysis: The prototype should be chosen to illustrate a concept, verify a critical design decision or measure the degree of accomplishment of key objectives. After the prototype has been implemented and tested, the team should discuss in this section what has been learned from the implementation.
11. System Development Schedule: It will be set up during the architectural design phase and updated during the detailed design and prototype testing phases. It should also contain a revised project schedule.
12. Special Notes: This section will be completed during the prototype testing phase.
The genetic algorithm (GA), a subfield of Evolutionary Algorithms, is a fast-growing area of Artificial Intelligence inspired by the process of natural evolution. To solve a problem with a genetic algorithm, a set of stochastic operators is applied iteratively to a population of candidate solutions, known as individuals, in order to obtain better solutions.
Genetic algorithms are great for finding solutions to complex search problems. They are often used in fields such as engineering to create high-quality products, thanks to their ability to search through a huge combination of parameters to find the best match. For example, they can search through different combinations of materials and designs to find the combination that results in a stronger, lighter and overall better final product. They can also be used to design computer algorithms, to schedule tasks, and to solve many other optimization problems. Genetic algorithms are based on the process of evolution by natural selection observed in nature. They essentially replicate the way in which life uses evolution to find solutions to real-world problems. Although genetic algorithms can be used to find solutions to complicated problems, they are themselves pretty simple to use and understand.
How It Works:
As we now know, genetic algorithms are based on the process of natural selection. That means they take the fundamental properties of natural selection and apply them to whatever problem we're trying to solve.
The basic process for a GA is:
1. Initialization - Create an initial population. This population is randomly generated and can be any desired size, from only a few individuals to thousands.
2. Evaluation - Each member of the population is evaluated and we calculate a 'fitness' for that individual. The fitness value reflects how well the individual meets the desired requirements. These requirements could be simple, 'faster algorithms are better', or more complex, 'stronger materials are better but they shouldn't be too heavy'.
3. Selection - We want to constantly improve our population's overall fitness. Selection helps us do this by discarding the bad designs and keeping only the best individuals in the population. There are a few different selection methods, but the idea is the same: make it more likely that fitter individuals will be selected for the next generation.
4. Crossover - During crossover we create new individuals by combining aspects of our selected individuals. We can think of this as mimicking how sexual reproduction works in nature. The hope is that by combining certain traits from two or more individuals we will create an even 'fitter' offspring which inherits the best traits from each of its parents.
5. Mutation - We need to add a little randomness to our population's genetics, because otherwise every combination of solutions we could create would already be in our initial population. Mutation typically works by making very small changes at random to an individual's genome.
6. And repeat! - Now we have our next generation, with which we can start again from step two, repeating until we reach a termination condition.
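The six steps above can be sketched in a few lines of Python. The task here (OneMax: maximize the count of 1-bits in a bitstring), the population size, the mutation rate and all other parameter values are illustrative assumptions, not part of any particular GA library:

```python
import random

def run_ga(genome_len=20, pop_size=30, generations=100, mutation_rate=0.02):
    """Minimal GA for OneMax: evolve a bitstring toward all 1s."""
    # 1. Initialization: a random population of bitstrings.
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # 2. Evaluation: fitness is simply the number of 1-bits.
        pop.sort(key=sum, reverse=True)
        if sum(pop[0]) == genome_len:          # termination condition
            break
        # 3. Selection: keep the fitter half as parents (elitist).
        parents = pop[:pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            # 4. Crossover: one-point crossover of two random parents.
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]
            # 5. Mutation: flip each bit with a small probability.
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        # 6. Repeat with the next generation.
        pop = parents + children
    return max(pop, key=sum)
```

Keeping the parents in the next generation (elitism) is one design choice among many; it guarantees the best solution found so far is never lost to an unlucky crossover or mutation.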
1. A robot may disrupt the environment:
The first two risks identified by the researchers from Google and their colleagues are related to poor specification of the main objective. The first is what they call “Avoiding Negative Side Effects”: specifically, how to avoid environment-related problems caused by a robot while it is accomplishing its mission.
2. The machine can cheat:
The second risk for AI-based machines is reward hacking, where the reward is the success of the goal. The quest for the reward must not turn into a game in which the machine tries to obtain it by any means. This is a difficult problem to solve, as an AI can interpret a task and the environment it meets in many different ways. One of the ideas in the article is to withhold information so that the program does not have perfect knowledge of how the reward is obtained, and thus does not seek a shorter or easier route to it.
3. The AI is programmed to do something devastating:
Autonomous weapons are artificial intelligence systems that are programmed to kill an enemy. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off”, so humans could plausibly lose control of such a situation. This risk is present even with narrow AI, but grows as levels of AI intelligence and autonomy increase.
4. The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal:
This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.
5. Scalable oversight:
The third risk is called scalable oversight. The more complex the goal, the more the AI will have to validate its progress with its human referent, which would quickly become tiresome and unproductive. How can the robot accomplish certain stages of its mission by itself, while knowing to seek approval in situations it does not know how to interpret? Example: tidy and clean the kitchen, but ask what to do with the saucepan on the stove. The idea is to simplify the cooking task as much as possible so that the robot gets to the point without disturbing you during your nap every time.
6. How much independence can you give to an AI?
The next identified problem is the safe exploration of AI. How much independence can you give an AI? The whole point of artificial intelligence is that it can make progress by experimenting with different approaches, evaluating the results, and keeping the most relevant scenarios to achieve its objective. The suggested solution is to train these AIs in simulated environments in which their empirical experiments will not create any risk of accident.
7. Will AI adapt to change?
The next problem is robustness to distributional shift, or how to adapt to change. How can we be confident that an AI recognizes its situation and behaves properly when it is in a very different environment from the one in which it was trained? Clearly, we would not want a robot trained to wash the floor with detergent products to apply the same technique when asked to clean the rest of the home.
How AI Can Be Used in Cyber Crime
Cybersecurity firms and researchers are using AI to fight back against such crimes. As IDG notes, to block malware, one firm is using a subset of AI known as machine learning, which uses algorithms to detect patterns and can then predict outcomes and potentially operate autonomously. The effort has involved creating models based on malware samples to determine whether activity on computers is normal or not. AI will be used to rapidly characterize malware that rapidly changes its form as it attacks. AI will also help us track down who is responsible for attacks and identify what further information is needed to draw conclusions. AI systems are also able to detect attacks directly: one such system, called AI2, could “detect 85% of attacks, which is roughly 3 times better than previous benchmarks, while also reducing the number of false positives” by a factor of 5.
1. Unemployment. What happens after the end of jobs?
2. Inequality. How do we distribute the wealth created by machines?
3. Humanity. How do machines affect our behaviour and interaction?
4. Artificial stupidity. How can we guard against mistakes?
5. Racist robots. How do we eliminate AI bias?
6. Security. How do we keep AI safe from adversaries?
7. Evil genies. How do we protect against unintended consequences?
8. Singularity. How do we stay in control of a complex intelligent system?
9. Robot rights. How do we define the humane treatment of AI?
Test Case Design:
REVIEW ON APPLICATION OF ARTIFICIAL INTELLIGENCE TECHNIQUES IN SOFTWARE TESTING:
One of the software engineering areas with a more prolific use of artificial intelligence techniques is software testing. Test data are generated either with static methods or dynamic methods. Static methods include domain reduction and symbolic execution, and dynamic methods include random testing, the local search approach, the goal-oriented approach, the chaining approach and the evolutionary approach. Static methods suffer from a number of problems when handling indefinite arrays, loops, pointer references and procedure calls, whereas dynamic test data generation avoids these problems. AI techniques used for test data generation include the AI Planner approach, Simulated Annealing, Tabu Search, Genetic Algorithms and ACO. Being a robust search method in complex spaces, the genetic algorithm has been applied to test data generation, and the evolutionary approach has attracted burgeoning interest since then. Researchers have applied this AI technique to find the most critical path clusters in a program in order to improve software testing efficiency. The approach suffers from a disadvantage regarding the dynamic aspect of testing, as the stopping criteria used cannot specify the actual number of generations; in some cases the tester exits based on waiting time, while the stopping criterion is not satisfied. Generating test data automatically and identifying infeasible paths reduces the testing cost, time and effort. Work has also been done on the automation of test cases using the Tabu search algorithm on complex programs under test with a large number of input variables; there, Tabu search is used for the generation of structural software tests. The authors present the use of Tabu search with Dijkstra's algorithm, a greedy approach, to provide an efficient path with maximum code coverage and minimum cost.
Software testing takes a large portion of software project resources, so a reduction in cost and time at this stage is of great help for the software development process. Test case and test data generation is a key problem in software testing, and its automation improves efficiency and effectiveness and reduces the high cost of software testing. The generation of test data using random, symbolic and dynamic approaches is not sufficient to generate an adequate amount of test data. Other problems, such as failure to recognize infinite loops and inefficiency in generating test data for complex programs, make these techniques unsuitable for generating test data. Experiments have shown that test data generation based on search techniques like genetic algorithms reduced the cost of software testing by more than 75%, and when random test data generation is compared with an approach based on genetic search, genetic search visibly outperforms random test generation. Randomly generated test data do not give a good test data set. The quality of test data produced by a GA is higher than the quality of test data produced randomly, because the algorithm can quickly direct the generation of test cases to the desirable range. GAs are also useful in reducing the time required for lengthy testing by generating meaningful test data.
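The GA-based test data generation described above can be sketched as follows. The program under test, its target branch and all parameter values here are hypothetical, chosen purely for illustration; the fitness function uses the branch-distance measure that is standard in search-based testing (0 when the target branch is taken, otherwise how far the predicate is from being true):

```python
import random

def function_under_test(x, y):
    """Hypothetical program under test with a hard-to-reach branch."""
    if 2 * x == y + 100:                       # target branch to cover
        return "target"
    return "other"

def branch_distance(x, y):
    # Branch-distance fitness: 0 means the target branch is taken;
    # larger values mean the predicate is "further" from being true.
    return abs(2 * x - (y + 100))

def ga_generate_test_data(pop_size=40, generations=300):
    # Individuals are candidate inputs (x, y) for the function under test.
    pop = [(random.randint(-500, 500), random.randint(-500, 500))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: branch_distance(*ind))
        if branch_distance(*pop[0]) == 0:
            return pop[0]                      # inputs that cover the branch
        parents = pop[:pop_size // 2]          # selection (elitist)
        children = []
        while len(parents) + len(children) < pop_size:
            (x1, _), (_, y2) = random.sample(parents, 2)
            child = (x1, y2)                   # crossover: swap coordinates
            if random.random() < 0.8:          # mutation: small perturbation
                child = (child[0] + random.randint(-20, 20),
                         child[1] + random.randint(-20, 20))
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda ind: branch_distance(*ind))
```

Because the fitness gradient guides the search toward inputs that satisfy the branch predicate, the GA typically finds covering inputs far faster than purely random generation, which mirrors the experimental results reported above.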