This report provides a formal analysis of each aspect of the International Student Database System project, taking into consideration the experience gained and evaluating the tools and techniques applied during the project. It also briefly evidences familiarity with alternative approaches to those taken, supported by references.
Introduction to project
The International Student Database System project was undertaken as part of BSc (Hons) Computing studies at Leeds Beckett University. The aim of the project is to design, develop, and evaluate a new database system to allow the Study Abroad and Exchange Programme department to manage student records at Leeds Beckett University. Prior to implementation and development of the International Student Database, literature research and other requirements capture tools and techniques were put into practice to gather and produce the requirements and specification documents.
The Study Abroad and Exchange Programme office works with over 100 partner universities worldwide with which Leeds Beckett University has exchange agreements, and handles a number of inbound and outgoing students each semester. Students can spend one or two semesters in Leeds or abroad, studying or working towards a faculty-approved learning study plan whilst earning credit towards their degree course. (Leeds Beckett University, n.d.)
The Study Abroad and Exchange Programme office decided to migrate the existing system to a DBMS solution to support the office in managing students’ records. The core of the project was to design and implement a database system that allows the office to manage student data easily. (Lazerevski, 2014)
The main deliverable of the project is a database management system designed, developed, and implemented to support the Study Abroad and Exchange Programme office in managing students’ data. The office already has an existing system: a spreadsheet that is sent to the Study Abroad office. When students arrive they start changing their modules, and keep changing them, which means the spreadsheet must be updated in three different locations to keep the records correct. With a database system that selected users could access and update, the office could run reports and see up-to-date records instead of changing them in different locations. (ibid)
The major objectives of the project were, initially, to specify and plan all the stages of the project within the Software Development Life Cycle for a relational database system. The second step was to interview the client to identify the International Student Database System requirements, then to search the literature and discover the most relevant resources for the research area. A further objective was to fully understand and identify the system requirements for the International Student Database System, and the next step was to use the Rapid Application Development methodology as a guide whilst undertaking every stage of the project. The last four stages included designing and developing the International Student Database RDBMS based on database design approaches; choosing the appropriate technologies (software, hardware, applications) and ensuring they were available throughout the year for regular use in delivering the project; writing an evaluation report; and testing the system against the requirements specification.
This section deals with the literature review, a piece of work that discusses, describes, summarises, evaluates, and clarifies the chosen field. It helps to present an understanding of the field and sets the foundation for the project using the literature obtained during the research.
Data quality Review
Data has played a vital role in companies and businesses for decades. Studies reveal that, with the help of relational database applications such as customer management systems, very useful data and knowledge can be obtained from large quantities of data. (Lin, 2012) On the other hand, investigation shows there are many problems and issues that cause failure; for instance, a poor system design can be the key reason for poor data within a system. Many organisations are therefore concerned that data quality issues are bringing down their businesses. (Li, 2012) Another key thing to remember is that many companies and businesses are becoming aware that the benefit of their data is restricted by its bad quality. (Suzanne, 2001) To put it another way, good data quality is the key to a successful business system; indeed, it has been argued that some of the data in databases is “inconsistent, incomplete, out-dated, and wrong”. (Binling, 2013)
To be more precise, data is considered high quality if it is fit for its “intended uses in operations, decision making, and planning” (J, 2005). Furthermore, data quality means different things to different users: data could be good enough for one user whereas the same data is of unacceptable quality for another. It is therefore necessary to consider and comprehend all intended uses of data before attempting to measure data quality levels. (FHWA, 2013)
However, investigations show that many businesses and organisations have issues with their data: despite the progress at the application layer, almost all organisations succeed or fail on the quality and consistency of their data. (Baum, 2009)
Having good quality data within an organisation or a database system is considered very important. Data quality is further defined by attributes such as consistency, accuracy, auditability, completeness, and validity. If a system’s data has all these attributes, it is considered good quality data; in other words, these attributes make data appropriate for a particular use. (D, 2013)
Problems with data quality
Data quality issues or errors within an organisation’s system mean the business is incapable of working efficiently and making revenue. There are many issues that can cause data quality problems, including missing data (e.g. missing values in rows and columns for an attribute in a relational database management system), duplicate data, and broken data fields. To understand how to deal with such issues, many experts advise first discovering the root causes and then putting new processes in place to avoid them. (Ching, 2011)
Despite the advancements in technology and widespread automation in data architecture, data is still entered manually into database fields and forms by people. The major problem with this is that people can type the data incorrectly, e.g. pick the wrong entry from a list or type the data into the wrong box. (ibid)
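The defects described above (missing values and duplicate rows) can be detected with simple queries. The sketch below is an illustration only, using Python’s built-in sqlite3 module and a hypothetical students table, not the project’s actual schema:

```python
import sqlite3

# In-memory database with a hypothetical students table for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (student_no TEXT, name TEXT, email TEXT)")
conn.executemany(
    "INSERT INTO students VALUES (?, ?, ?)",
    [("S001", "Ana", "ana@example.com"),
     ("S002", "Ben", None),                # missing email
     ("S001", "Ana", "ana@example.com")],  # duplicated row
)

# Rows with a missing (NULL) email address.
missing = conn.execute(
    "SELECT student_no FROM students WHERE email IS NULL").fetchall()

# Student numbers that appear more than once.
dupes = conn.execute(
    "SELECT student_no, COUNT(*) FROM students "
    "GROUP BY student_no HAVING COUNT(*) > 1").fetchall()

print(missing)  # [('S002',)]
print(dupes)    # [('S001', 2)]
```

Queries like these are a cheap first step in the root-cause analysis the experts recommend, before new processes are put in place.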
Other problems with data
Another issue is that data entry problems might not be entirely accidental. For instance, many people nowadays provide incomplete or even incorrect data to safeguard their privacy. (Governance, 2011)
In such cases, even when the people within an organisation entering the data want to do the right thing, they cannot, because the data supplied to them is already flawed. This is another issue that can lead to data quality problems. (ibid)
A system may also lack controls and data validation; other causes include bad system design and programming.
Solution to data quality problems
Training – since ‘typographical’ errors are caused by untrained staff manually typing data into fields, the solution is for the organisation to make sure the people in charge of entering data are well trained. (A.R. 2011)
Metadata definitions – a metadata list should be provided, “locking down” exactly what the user should enter into a field.
Validation – to prevent wrong data being entered into fields, validation can be put in place to stop users from typing data in the wrong format, or real-time validation tools can be implemented to validate email addresses, phone numbers, and other important data. (ibid)
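As a minimal sketch of the validation idea, the snippet below checks an email address and a phone number before the value would be written to the database. The regular-expression patterns are illustrative assumptions, not production-grade rules:

```python
import re

# Illustrative patterns only; real systems need stricter, locale-aware rules.
EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
PHONE = re.compile(r"^\+?\d[\d\s]{8,14}$")

def validate(field: str, value: str) -> bool:
    """Return True if the value matches the expected format for the field."""
    patterns = {"email": EMAIL, "phone": PHONE}
    return bool(patterns[field].fullmatch(value))

print(validate("email", "student@leedsbeckett.ac.uk"))  # True
print(validate("email", "not-an-email"))                # False
print(validate("phone", "+44 113 812 0000"))            # True
```

Checks like this at the form layer stop badly formatted data at the point of entry, rather than cleaning it up after it has reached the database.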
To conclude, many businesses try to adopt the best way of achieving their goal of data quality improvement within the organisation. A huge number of businesses have come to the conclusion that the consequences of not addressing poor data quality can include a poor business, reduced customer satisfaction, and loss of profit; according to the Gartner Group, it can even result in “total business failures”. (Adelman. S, 2005)
It has been recognised that a database with poor quality data is not reliable for any purpose or business. Research today shows that many organisations run a high risk of poor business performance due to bad data quality, which makes it difficult for them to draw definite conclusions from their data. With this in mind, organisations and businesses should consider having a process and resources in place to check and analyse the quality of their data. (Li, 2012)
Database Normalisation Review
Research was undertaken to reveal the importance of normalisation in the database field. The findings clearly state that the relational database changed the world when it was introduced in 1970. The man who introduced normalisation was Edgar F. Codd, the inventor of the relational model; his initial proposal is now known as First Normal Form. (Wesley, 2015)
As databases regularly grow in size and complexity, it is very important to control and maintain this complexity while decreasing errors and data redundancy. (Mia, 2016) The key solution to these issues is normalisation of the data within the database system. To be more precise, normalisation allows a designer to structure relational database tables so that each table holds only relevant data, data redundancy is reduced, data can be updated efficiently, and the risk of losing data is decreased. (Wyllys, 2002)
In addition, normalisation is considered a formal technique for managing and analysing relations among tables using their primary keys and “functional dependencies” (Codd, 1972). All things considered, the major aim of normalisation is to decrease data redundancy in order to achieve accurate data within the database system. (John, 2009)
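To illustrate the idea, the sketch below (a toy example using Python’s sqlite3 and hypothetical table names, not the project’s schema) splits a denormalised table, in which a partner university’s name is repeated on every student row, into two related tables so the name is stored once and an update touches a single row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Denormalised: the partner university's name is repeated per student row.
conn.execute("""CREATE TABLE student_flat (
    student_no TEXT, name TEXT, university_name TEXT)""")
conn.executemany("INSERT INTO student_flat VALUES (?, ?, ?)", [
    ("S001", "Ana", "University of Sydney"),
    ("S002", "Ben", "University of Sydney"),
    ("S003", "Cai", "McGill University"),
])

# Normalised: each university is stored once and referenced by a key.
conn.execute("CREATE TABLE university (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
conn.execute("""CREATE TABLE student (
    student_no TEXT PRIMARY KEY, name TEXT,
    university_id INTEGER REFERENCES university(id))""")
conn.execute("""INSERT INTO university (name)
                SELECT DISTINCT university_name FROM student_flat""")
conn.execute("""INSERT INTO student
                SELECT f.student_no, f.name, u.id
                FROM student_flat f JOIN university u
                  ON u.name = f.university_name""")

# Renaming a university now updates one row, not every duplicated copy.
conn.execute("UPDATE university SET name = 'McGill' WHERE name = 'McGill University'")
rows = conn.execute("""SELECT s.student_no, u.name FROM student s
                       JOIN university u ON u.id = s.university_id
                       ORDER BY s.student_no""").fetchall()
print(rows)
```

The single-row update propagating to every student via the join is exactly the redundancy reduction and update efficiency that normalisation is claimed to provide.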
On the other hand, there has been an argument about storage space. According to research, storage was quite expensive in the past and people preferred not to spend money on it. Things have since changed through advances in technology: storage is now much cheaper and space is no longer the issue, so more redundancy can be tolerated than in the old days. (Joel, 2013)
Having said that, investigation shows that data normalisation is still necessary even with storage becoming cheaper. In particular, if a company stores the same data in many places it becomes hard to maintain and the maintenance cost increases; it also becomes very complex to determine which copy of the data is correct. (Denny, 2008)
Nevertheless, data normalisation is considered important in the online transaction processing (OLTP) world, where inserts, deletes, and updates happen rapidly. In comparison, data warehouses (DW) carry a huge amount of denormalised data in order to avoid the performance complexities of joins; data in the DW is updated periodically, under controlled conditions. (Michelle, 2008)
Furthermore, recent research indicates that the “database is considered to be the heart of any system” (Poolet, 2008), and investigation shows that if the design is bad then the whole system will be bad, in either performance or effectiveness (Marston, 2004). Most developers would generally agree that a good database design, with normalised data, is the key to a successful business management system.
To conclude, normalisation is a good solution for storage efficiency, and with this in mind it suits OLTP databases that are updated, inserted into, and deleted from rapidly. However, normalisation produces many tables, which makes querying them more complex. (Sybase, 2008)
Review of Technologies
Databases have always played an important role in industry and are used by millions of companies, businesses, hospitals, government offices, libraries, and so on to store and access data. (Johnson, 2015) Over the past decades technology has changed hugely and brought many changes in the area of applications. In the past, people used basic flat files for storing very small amounts of information used for organising data, which allowed users to read the information and edit it manually. Research shows that computerised databases were introduced in the early 1960s. (Keith 2005)
Research indicates that there were two popular data models: a network model called CODASYL and a hierarchical model called IMS. In the 1970s two relational database system prototypes were created. Later, in 1976, the Entity Relationship (ER) model was produced, which made it possible for designers to focus on the data application instead of the logical table structure. (Peter, 2007)
In addition, in the 1980s SQL (Structured Query Language) became the most used query language, and research indicates that the relational database became successful in the market as sales boomed in the database world. (Ezine, 2009)
Moreover, in the early 1990s databases became more popular and were sold at very high prices. Around that time technologies and tools were developed and released such as Oracle Developer, PowerBuilder, and VB; tools like ODBC and Excel/Access were also developed for personal productivity. In the late 1990s, the sharp growth in online business resulted in very high demand for database connectors such as Dreamweaver, Active Server Pages, ColdFusion, Enterprise JavaBeans, and Oracle Developer 2000 (ibid). In addition, with the use of MySQL, Apache, CGI, and similar tools, open source solutions came to the internet, and with the use of “point of sale” technology OLTP and OLAP began to come of age. In the 2000s databases continued to grow: although the internet experienced a decline, database applications kept evolving, new tools and applications were introduced, and recent research indicates that the three leading database companies are Oracle, IBM, and Microsoft. (Jeffery, 2005)
On the other hand, in the area of technology some things stay constant, and databases are one of them; unlike fashions, they do not come and go. When it comes to choosing the right database, not all databases are created equal, which means the correct one has to be picked before committing to it. (Ezine, 2009)
Research indicates that Access databases are incapable of dealing with huge data sets that would be an easy job for SQL Server to manage. There are currently three tiers of databases in the industry: desktop and embedded databases, which are suitable for smaller tasks; “Express” versions, which are good for up to a few GBs of data; and products like SQL Server, Oracle, and DB2, which can cope with as much data as a user can throw at them. Before deciding which one to pick, it is necessary to estimate how much data will be stored in the system; the decision can then be made. (Gungerloy, 2006)
Relational Databases Vs NoSQL
NoSQL is a database management system, like the others discussed here. There are different types of database management systems, such as RDBMS (relational), OLAP (Online Analytical Processing), and NoSQL.
A major issue with relational databases is that they cannot cope with very large amounts of data, and NoSQL was the answer to this issue: NoSQL databases were created because of the limitations of relational databases when it comes to storing huge amounts of data. (Mir, 2013)
NoSQL pursues three main objectives: scalability, performance, and high availability. NoSQL databases are scalable, meaning they can handle large amounts of data as the data keeps growing. Compared with relational databases, NoSQL databases are more scalable, provide better performance, and can address several issues the relational model was not designed to address. (Mang, 2009) NoSQL provides less functionality than an RDBMS: data is structured in RDBMS and OLAP systems, whereas NoSQL can cope with both structured and unstructured data, such as media, videos, data files, blobs, and text messages. NoSQL can be further divided into three categories: key-value stores, tabular stores, and document-oriented stores. NoSQL databases do not support joins, since joins are one source of the scalability problems relational databases have. Furthermore, support for complex transactions spanning multiple records is absent, and there is no constraint support either, so these have to be implemented at the application level. Even though there is no SQL language, NoSQL databases provide other languages that allow users to query the database. (ibid)
However, NoSQL is useful when producing prototypes or fast applications, because there is no need to create the structure in advance, and when constraints and validation are controlled by the application rather than the database. NoSQL should not be used when complex transactions need to be handled, or when joins need to be handled by the database; for instance, when the application is unable to join two sets itself, a relational database is the better choice. Likewise, if constraints and validation are expected to be handled by the database and the application cannot handle them, an RDBMS is the best answer. (Hasan, 2013)
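As a minimal sketch of the key-value style mentioned above, the snippet below models a schemaless store with a plain Python dict: values need no predeclared structure, but any constraint (such as unique student numbers) must be enforced by the application, exactly as discussed. The record keys and fields are hypothetical:

```python
# A toy key-value "store": no schema, no joins, no constraints.
store = {}

def put(key, value):
    store[key] = value  # any structure is accepted; the store validates nothing

def get(key):
    return store.get(key)

# Records for the same kind of entity need not share a structure.
put("student:S001", {"name": "Ana", "modules": ["CS101", "CS102"]})
put("student:S002", {"name": "Ben", "note": "arrives in semester 2"})

# A uniqueness constraint has to live in application code, not the store.
def put_unique(key, value):
    if key in store:
        raise KeyError(f"{key} already exists")
    put(key, value)

print(get("student:S001")["name"])  # Ana
```

The flexibility is visible (the two student records have different fields), and so is the cost: everything a relational database would enforce for free is now the application’s responsibility.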
Oracle Vs SQL Server
There are many similarities between Oracle and MS SQL Server, but there are also a few differences. The first difference is the language they use: although both database systems use SQL (Structured Query Language), SQL Server uses Transact-SQL, an extension of SQL originally developed by Sybase and used by Microsoft, while Oracle uses PL/SQL. (Stansfield, 2014)
IBM was the first company to develop an RDBMS; however, Oracle made its history in the 1980s by releasing its own RDBMS. Research states that Oracle has definitely led the way: in 2011 Oracle held up to 50% of the RDBMS market. Oracle 2 was the first release, which supported only basic SQL features. (Lee, 2013)
SQL Server was released in the 1990s when Microsoft bought it from Sybase and released version 7.0. Initially the companies worked together to develop the platform to run on IBM systems; however, Microsoft then developed its own OS (Windows) and wanted to create a database management system for it. (ibid)
One of the differences between the two platforms is transaction control. A transaction can be defined as a group of operations treated as a single unit. For example, if a SQL query is written to update a set of records and an error occurs while updating any single row in the set, none of the rows should be updated. However, MS SQL Server by default executes and commits each statement individually, making it impossible to roll back changes if a problem occurs part-way through.
Oracle, on the other hand, treats each connection as a new transaction: once queries are executed, the changes exist only in memory and nothing is committed until a COMMIT statement is issued. A further difference between Oracle and SQL Server is how they manage objects. In SQL Server, all objects (e.g. tables, procedures, and views) are organised by database name, and users are created with logins that grant access to specific databases and the objects within them; each database also has its own private, unshared disk file on the server. In Oracle, all objects are grouped by schemas. Objects are shared between users, although each user can be restricted or limited to certain schemas or tables within the database through privileges and permissions. (Josh, 2014)
To conclude, Oracle and SQL Server are both powerful relational database systems. Even though there are some differences between them, both can be used in equally capable ways. (ibid)
Why Oracle is chosen for the project over other alternatives
In comparison with the alternatives, Oracle was chosen because, firstly, it is considered secure and delivers the best performance, reliability, and scalability when running on Windows, Linux, and UNIX. Secondly, since this project is developed for student management, its content will grow over time: more and more students will be added to the system, so the data will keep growing, and Oracle Apex is therefore the best choice as it can deal with huge amounts of data. Another key point is that it also provides very comprehensive tools and features that let users easily manage transaction processing, content management applications, and BI (Business Intelligence). (McDougall, 2016)
On the other hand, Oracle is the most expensive choice, but it all depends on the type of project and the budget. With this in mind, if the budget is low then the other alternatives could be the better choice as they perform well too; Oracle simply provides more security and better performance along with extra tools and features. (Gould, 2016)
Furthermore, research indicates that Oracle manages memory very well and can cope with complex JOIN operations. Many developers state that Oracle has a great architecture because it manages and organises application data well. Additionally, it provides good performance on complex queries, and features like materialised views and its procedural language make it all worthwhile.
MS Access could have been chosen, as it is one of the most widely used desktop databases in the world. It provides a free MS Access runtime and is considered cheaper than SQL Server and Oracle. The issue with this alternative is that it cannot deal with very large databases; in other words, if a system contains millions of records or more, Access is not the right choice.
To summarise, SQL Server and Oracle are the databases mostly used within companies and businesses; once one of them is up and running, the investment is made, it can be used for custom applications, and it provides great tools and features. (Rep, 2016)
Choosing a correct and relevant SDLC approach will strongly influence the quality of the system produced. Therefore, making the right decision about the SDLC approach is very important before developing the actual system.
Software Development Life Cycle (SDLC)
SDLC can also be referred to as the application development or systems development life cycle. It is used to model software development and information systems, describing a process for planning, designing, developing, implementing, testing, and deploying an information system. This model has a number of stages or phases: requirements capture, design, build, test, and implement. (Dawson, 2015)
The requirements capture phase represents all the activities performed to elicit the requirements from the user, and the documents produced during this phase. The design stage represents the design of the system based on the requirements elicited during requirements capture. Build is the actual coding or development of the system. The test stage checks whether the code is accurate or contains errors. Implement is the eventual installation in the target environment (this can also include evaluation of the system against the company’s or user’s initial requirements). (ibid, 2015)
There is often some overlap between these phases. For instance, if a prototype is developed, it will involve some requirements capture, some design, and some building work simultaneously; during this process, it may be possible to return to the requirements stage to redefine the requirements in light of feedback from the prototype, and then repeat the design and build phases. Moreover, build and test usually overlap considerably: once a piece of code is written it can be tested, then more code written and tested again. (ibid)
Requirements capture is considered the hardest stage of the System Development Life Cycle (SDLC) to perform well. There are various techniques and methods that can be employed to elicit users’ requirements for a system. The output of this phase is a series of documents that clearly state what the system is required to do: the requirements definition, the requirements specification, and the functional specification. (Christian, 2015)
A helpful way of structuring the requirements into a hierarchy is MoSCoW analysis. This comprises ‘Must have’, what the system must include; ‘Should have’, requirements the system should have if possible; ‘Could have’, functionality the system could include if possible and if it does not impact other things; and ‘Won’t have’, things that will not be included in the system, or that could be considered for future improvements but will not be part of this version. (Dawson, 2015)
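The MoSCoW grouping can be captured as a simple data structure that a team works through in priority order. The requirements listed below are hypothetical examples for a project of this kind, not the actual specification:

```python
# Hypothetical requirements grouped under MoSCoW priorities.
requirements = {
    "Must have":   ["store student records", "update module choices"],
    "Should have": ["generate semester reports"],
    "Could have":  ["email notifications"],
    "Won't have":  ["mobile application (future version)"],
}

# Work is planned in priority order: 'Must have' items come first.
for priority in ("Must have", "Should have", "Could have", "Won't have"):
    for req in requirements[priority]:
        print(f"{priority}: {req}")
```

The value of the technique is the ordering: ‘Must have’ items define the minimum viable system, while ‘Won’t have’ items explicitly record scope that has been deferred rather than forgotten.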
The design stage specifies exactly how the system should achieve the requirements captured in the requirements stage. Different methods can be employed when designing a system, e.g. class diagrams, flow charts, pseudocode, Entity Relationship Diagrams (ERDs), and sitemaps.
The build stage comprises the coding and construction. This stage depends on the programming language used, the design methods used, and any coding standards. Test is the final testing of the system, and implementation is the last stage of the System Development Life Cycle: the handover of the system to the company or user. (ibid)
There are different types of Software Development Life Cycle (SDLC) methodologies and approaches used to structure a project from beginning to end without missing a phase. These methodologies provide guidance throughout the project. (Robert, 2014)
Agile

Agile methodology breaks the product into a sequence of short deliveries. Compared to the waterfall methodology, in which delivery can take a long time, the idea behind agile is to deliver working systems to the company or user in a matter of weeks. (Dawson, 2015)
This methodology is considered a very realistic development approach. It creates ongoing releases, each with small, incremental changes from the earlier release, and at every iteration the product is tested. (Robert, 2014)
This method emphasises interaction: developers, customers, and testers work together throughout the project. Since the method depends on customer interaction, the project can head the wrong way if the customer is not clear on the direction. (Robert, 2014)
Extreme programming (XP)
Extreme programming is another software development approach, encompassing many of the ideals of agile methods. It is designed for teams of between 2 and 12 and is considered well suited to student projects. (Dawson, 2015)
Additionally, this approach is best suited to projects in which the requirements are likely to change. It encourages users to be involved with the development process and to contribute a positive influence, and it emphasises teamwork: users, supervisors, and the project team work together towards developing the product or system. (Dawson, 2015)
Waterfall

Waterfall is one of the oldest and most straightforward Development Life Cycle methodologies. The process is to finish one phase, then move on to the next, with no going back; each phase relies on information from the previous stage and has its own project plan. This methodology is easy to understand and simple to manage. However, early delays can throw off the entire project, and since there is little room for revisions once a phase is completed, errors cannot be fixed until the maintenance stage. This model does not work well if flexibility is needed or if the project is long-term and ongoing. (Robert, 2014)
RAD (Rapid Application Development)
RAD, short for Rapid Application Development, is one of the SDLC methodologies. In this method, systems or applications are developed in parallel as prototypes and integrated to make the complete product, enabling faster system delivery. It is one of the popular methodologies that employ different tools and techniques to quickly produce applications. (Tutorial P, 2004)
This model is based on prototyping and iterative development with no detailed up-front planning. In this model, a prototype is a working model that is functionally equivalent to a component of the product. (Tutorial P, 2004)
This model reduces the overall development time through the reusability of similar components.
Why use RAD (Rapid Application Development) approach?
Choosing an appropriate methodology is vital when producing a system or product. The right choice of methodology, properly applied in practice, is necessary to produce a suitable system that will enhance productivity within the organisation. The International Student Database System project requires the design and development of a relational database management system to allow the office to manage students’ records. Comparing the approaches discussed above leads to the conclusion that the most suitable approach for this specific project is RAD (Rapid Application Development), as it is the most likely to meet the expectations of the end users.
In contrast to the ‘Waterfall’ approach, RAD (Rapid Application Development) breaks the process into cycles, each containing a number of phases. For instance, 80% of the system functionality could be completed with 20% of the development effort during the first cycle.
Other improvements and functionality can then be implemented during subsequent cycles. (Jilang. P. 2004)
The benefit of the Rapid Application Development methodology is that it offers greater flexibility for scope changes, allows limitations to be identified earlier in the development process, and delivers the system sooner than the ‘Waterfall’ approach would. (Jilang. P. 2004)
Compared to other methodologies, the RAD approach has many better features and is certainly more suitable here, because Oracle Apex is itself a Rapid Application Development environment. (Frank, 2013)
In addition to this, the ‘MoSCoW’ prioritisation rules associated with Rapid Application Development will be used during user requirements selection.
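As a brief illustration of how MoSCoW prioritisation works in practice, the sketch below groups requirements into Must/Should/Could/Won’t bands and lists them highest priority first. The requirement texts are hypothetical examples for the sketch, not the project’s actual requirements list.

```python
# Illustrative MoSCoW prioritisation of requirements.
# The requirement texts are hypothetical, not the project's actual list.
requirements = {
    "Must": [
        "Store student records in a central database",
        "Record partner institutions and bilateral agreements",
    ],
    "Should": [
        "Run reports on current module selections",
    ],
    "Could": [
        "Convert UK credits to ECTS automatically",
    ],
    "Won't (this release)": [
        "Integrate with the university finance system",
    ],
}

def prioritised(reqs):
    """Return (band, requirement) pairs in MoSCoW order, highest first."""
    order = ["Must", "Should", "Could", "Won't (this release)"]
    return [(band, r) for band in order for r in reqs[band]]

for band, req in prioritised(requirements):
    print(f"{band}: {req}")
```

The value of the technique is that ‘Must’ items define the minimum usable system for the first RAD cycle, while ‘Should’ and ‘Could’ items can slip to later cycles without renegotiating scope.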
Another reason for selecting the RAD approach is that it is associated with dedicated RAD database tools, e.g. Oracle Application Express (Apex). Since this software development environment will be used to implement the RDBMS for the Study Abroad office, the approach is a natural fit. (Balasan, 2009)
RAD Model Diagram (2013)
Product Implementation and Testing
Product and Project Evaluation
The International Exchange Programme allows Leeds Beckett students to study at one of its partner universities in Australia, America, Canada, Macao, Japan, and New Zealand for a semester or a full year. The majority of international partnerships are university-wide and suitable for students across most subject areas. On some courses it would be difficult to incorporate an exchange as part of the degree because of external accreditation issues and professional body requirements. Furthermore, international students are eligible for exchange, but not to their home country, and must ensure they comply with UKVI obligations whilst on exchange, including the compliance requirements of their UK student visa.
Exchanges are open to full-time undergraduate students during the second year (of a three-year course) or third year (of a four-year course). Students must seek approval from their Course Leader before submitting their form; the application cannot be considered without the support of their parent school. It is not possible to go on exchange in the final year, as studies become increasingly specialised and appropriate equivalents are not always available at partner universities.
A student qualifies as an exchange student if they are enrolled at one of Leeds Beckett University’s exchange partners and are nominated by their home institution. If their university does not have a partnership with Leeds Beckett, they can join as a fee-paying Study Abroad student.
The Study Abroad and Exchange Programme administrator would like the system to keep a record of how many students come and go. The office would also like to store information about partner institutions; this partner information is important to record, and partners can be anywhere in the world. If they are in Europe, they operate under an Erasmus contract (bilateral agreement). The Study Abroad office would like to keep information about these bilateral agreements: the contracts are renewed once a year and specify which courses students can come from, which department they will join once they arrive, how many students per year can come from that institution, and how many students from Leeds Beckett can go there. Once a student is interested, the process starts with an exchange form; in other words, once the two institutions have signed a bilateral agreement, exchanges can start happening.
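The partner and agreement information described above maps naturally onto two related tables. The following is a minimal relational sketch in SQLite (the actual system was implemented in Oracle Apex; all table and column names here are illustrative assumptions, not the project’s schema):

```python
import sqlite3

# Minimal relational sketch of partners and bilateral agreements.
# Table/column names are illustrative assumptions, not the project's schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE partner (
    partner_id   INTEGER PRIMARY KEY,
    name         TEXT NOT NULL,
    country      TEXT NOT NULL,
    erasmus_code TEXT                 -- only for European (Erasmus) partners
);

CREATE TABLE bilateral_agreement (
    agreement_id   INTEGER PRIMARY KEY,
    partner_id     INTEGER NOT NULL REFERENCES partner(partner_id),
    year           INTEGER NOT NULL,  -- agreements are renewed annually
    course         TEXT NOT NULL,     -- course students may come from
    department     TEXT NOT NULL,     -- department they join on arrival
    inbound_quota  INTEGER NOT NULL,  -- students per year from the partner
    outbound_quota INTEGER NOT NULL   -- Leeds Beckett students going there
);
""")

conn.execute(
    "INSERT INTO partner VALUES (1, 'Example University', 'Germany', 'D EXAMPLE01')")
conn.execute(
    "INSERT INTO bilateral_agreement VALUES "
    "(1, 1, 2014, 'Computing', 'School of Computing', 4, 4)")

# A report such as "inbound quotas per partner for this year" becomes one query.
row = conn.execute("""
    SELECT p.name, a.inbound_quota
    FROM bilateral_agreement a
    JOIN partner p ON p.partner_id = a.partner_id
    WHERE a.year = 2014
""").fetchone()
print(row)  # ('Example University', 4)
```

Because agreements are renewed annually, keeping a `year` column (rather than overwriting the row) preserves the history of past contracts.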
Furthermore, students coming from abroad can take modules from more than one course and can take more than three modules per semester; they could study five or six if they wish, free of charge. However, if a student is going from the UK to another institution, the Study Abroad office needs some of their personal data, such as bank details and other data relevant to the finance department, e.g. grants and allowances; there is an application form to fill in to obtain that grant. For students coming from another institution, the office does not need to keep information about sponsorship and Erasmus funding. Students coming here must pass all their modules. Each institution has an Erasmus code. When students arrive, the spreadsheet (the existing system) is filtered to show information about a particular student, e.g. which institution they have come from, their name, address, which course they are on, and which modules they have selected.
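The manual “filter the spreadsheet” step described above is, in database terms, a simple query over student records. A minimal sketch of that operation (field names and the sample data are assumptions for illustration):

```python
# Sketch of the "filter the spreadsheet" step as a query over records.
# Field names and sample data are illustrative assumptions.
students = [
    {"name": "A. Example", "institution": "Example University",
     "course": "Computing", "modules": ["Databases", "Networks"]},
    {"name": "B. Sample", "institution": "Sample College",
     "course": "Business", "modules": ["Marketing"]},
]

def find_by_institution(records, institution):
    """Return the records for students from a given institution."""
    return [s for s in records if s["institution"] == institution]

for s in find_by_institution(students, "Example University"):
    print(s["name"], s["course"], s["modules"])
```

In the proposed DBMS this becomes a stored query or report, so the office no longer needs to maintain the same filtered view in three separate spreadsheet locations.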
Additionally, it would be useful to have a notes section so that the Study Abroad office knows which modules students originally selected and what they changed them to, because the learning agreements that the office signs are sent on elsewhere, so the information must always be kept up to date for the institution to receive its funding. In addition to this, credits in the UK and Europe are counted differently; the system needs to convert UK credits into the partner’s scheme.
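The credit conversion mentioned above can be sketched as a pair of functions. This assumes the commonly used ratio of 2 UK (CATS) credits to 1 ECTS credit (120 UK credits per academic year corresponding to 60 ECTS); the exact rule applied by the office would need to be confirmed.

```python
# Sketch of UK-credit <-> ECTS conversion, assuming the common
# 2 UK (CATS) credits = 1 ECTS credit ratio (120 UK credits/year = 60 ECTS).
# The exact conversion rule used by the office is an assumption here.
def uk_to_ects(uk_credits: int) -> float:
    """Convert UK (CATS) credits to ECTS credits."""
    return uk_credits / 2

def ects_to_uk(ects_credits: float) -> int:
    """Convert ECTS credits back to UK (CATS) credits."""
    return int(ects_credits * 2)

print(uk_to_ects(20))   # a typical 20-credit UK module -> 10.0 ECTS
print(ects_to_uk(30))   # a 30-ECTS semester load -> 60 UK credits
```

Storing the conversion in one place like this would also make it straightforward to print both credit values on the learning agreement.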