
Essay: How Artificial Neural Networks Are Improving Real Estate Market Predictions




Today the real estate market has become very popular. Though the near future of real estate is still in question, investors have been hungry for a fast way to play the market or to hedge against their volatile portfolios. Futures contracts have been an extremely popular method of balancing a portfolio in other markets, and real estate is, with a little knowledge, now in the same boat [8]. Futures contracts that trade at a centralized exchange allow market participants more financial leverage and flexibility, and are guaranteed by the exchange, so there is no risk of counterparty default. They are also in and of themselves leveraged investments, which give investors a way to benefit from movements in housing prices as well as the opportunity for a liquid short-term real estate investment. These futures also allow investors to speculate on housing prices with much lower capital requirements [8]. An accurate prediction of house prices is important to prospective homeowners, developers, investors, appraisers, tax assessors and other real estate market participants such as mortgage lenders and insurers [9]. Traditional house price prediction is based on cost and sale price comparison, and lacks an accepted standard and a certification process. An Artificial Neural Network (ANN) is a neurobiologically inspired paradigm that emulates the functioning of the brain based on the way that neurons work, since neurons are recognized as the cellular elements responsible for the brain's information processing [10]. ANN models can detect patterns that relate input variables to their corresponding outputs in complex biological systems for prediction [11].

Methods for improving network performance include finding an optimum network architecture, choosing an appropriate number of training cycles, and trying different input combinations [12]. The availability of a house price prediction model therefore helps fill an important information gap and improves the efficiency of the real estate market [13].

1.1 Problem Statement

An accurate prediction of house prices is important to prospective homeowners, developers, investors, appraisers, tax assessors and other real estate market participants such as mortgage lenders and insurers.

1.2 Aim and Objective

At the end of this thesis, a feed-forward neural network model and a cascade-forward neural network model will each have been applied to predicting real estate prices on a specific dataset. The results obtained from each network model will be compared to find out which model gives the most accurate predictions. The network model with the best result will then be recommended for predicting real estate prices.
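The planned comparison can be sketched in a few lines of Python: score each model's predictions on held-out prices by mean squared error and recommend the more accurate one. All numbers and model labels below are hypothetical placeholders, not results or data from this thesis.

```python
# Hedged sketch of the model comparison: pick the model with the lower
# mean squared error on a held-out test set. All values are made up.
def mse(y_true, y_pred):
    """Mean squared error between true and predicted prices."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

y_test = [250_000, 310_000, 180_000]  # hypothetical sale prices

preds = {
    "feed-forward": [240_000, 300_000, 200_000],     # made-up predictions
    "cascade-forward": [255_000, 305_000, 185_000],  # made-up predictions
}

# Recommend whichever model scores the lower error
best_model = min(preds, key=lambda name: mse(y_test, preds[name]))
```

In practice the same scoring loop would run over the real dataset once both networks are trained.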

1.3 Definition of some key words

1.3.0 Neural Network: Neural networks have a large appeal to many researchers due to their great closeness to the structure of the brain, a characteristic not shared by more traditional systems. In an analogy to the brain, an entity made up of interconnected neurons, neural networks are made up of interconnected processing elements called units, which respond in parallel to a set of input signals given to each. The unit is the equivalent of its brain counterpart, the neuron.

A neural network consists of four main parts:

1. Processing units {uj}, where each uj has a certain activation level aj(t) at any point in time.

2. Weighted interconnections between the various processing units which determine how the activation of one unit leads to input for another unit.

3. An activation rule which acts on the set of input signals at a unit to produce a new output signal, or activation.

4. Optionally, a learning rule that specifies how to adjust the weights for a given input/output pair.

A processing unit uj takes a number of input signals, say a1j, a2j, …, anj, with corresponding weights w1j, w2j, …, wnj, respectively. The net input to uj is given by: netj = SUM(wij * aij)

The new state of activation of uj is given by: aj(t+1) = F(aj(t), netj), where F is the activation rule and aj(t) is the activation of uj at time t. The output signal oj of unit uj is a function of the new state of activation of uj: oj(t+1) = fj(aj(t+1)).
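The formulas above can be sketched directly in Python. The sigmoid used here is one common choice of activation rule, and the example numbers are illustrative assumptions, not values from the text:

```python
import math

def unit_output(inputs, weights):
    """Output of a single processing unit u_j.

    Computes net_j = SUM(w_ij * a_ij), then applies a sigmoid as the
    activation rule F (an assumed choice; the text leaves F abstract).
    """
    net_j = sum(w * a for w, a in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-net_j))  # o_j = f_j(a_j(t+1))

# Three input signals with their corresponding weights (made-up values)
o = unit_output([0.5, -0.2, 0.8], [0.4, 0.7, -0.1])
```

Because the sigmoid squashes any net input into (0, 1), the unit's output is always a bounded signal regardless of the scale of its inputs.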

One of the most important features of a neural network is its ability to adapt to new environments. Therefore, learning algorithms are critical to the study of neural networks.

1.3.1 Feed-Forward Back-Propagation Neural Network: The back-propagation network is one of the most commonly used supervised artificial neural network models. Back-propagation was created by generalizing the Widrow-Hoff learning rule to multiple-layer networks with nonlinear differentiable transfer functions. The back-propagation algorithm consists of two paths: a forward path and a backward path. The forward path creates a feed-forward network by initializing the weights, simulating, and training the network. The network weights and biases are updated in the backward path. Before training an FFBP network, the weights and biases must be initialized, and proper inputs and target outputs are needed for training.
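The two paths can be sketched on a toy problem. Everything here is an illustrative assumption, not the thesis's actual model or housing data: the 2-4-1 architecture, sigmoid transfer function, learning rate, and the logical-AND targets are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised task: learn logical AND (placeholder data, not housing prices)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [0], [0], [1]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Initialize weights and biases before training, as the text requires
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

h = sigmoid(X @ W1 + b1)
out = sigmoid(h @ W2 + b2)
initial_mse = float(np.mean((out - y) ** 2))

lr = 1.0
for _ in range(5000):
    # forward path: simulate the feed-forward network
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward path: propagate the output error back via the chain rule
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

final_mse = float(np.mean((out - y) ** 2))
```

After training, the squared error has dropped well below its value at random initialization, which is the whole point of the backward path's weight updates.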

1.3.2 Cascade-Forward (CF) Back-Propagation Network: CF networks are similar to feed-forward networks, but they include a weight connection from the input to each layer and from each layer to all successive layers. For example, a three-layer network has connections from layer 1 to layer 2, from layer 2 to layer 3, and from layer 1 to layer 3. The three-layer network also has connections from the input to all three layers. The additional connections might improve the speed at which the network learns the desired relationship.
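The extra connections can be sketched for the simplest case of one hidden layer: alongside the usual input-to-hidden and hidden-to-output weights, a cascade-forward pass adds a direct input-to-output weight matrix. The layer sizes and random weights below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=3)  # 3 input features (made-up values)

W_in_hid = rng.normal(size=(3, 5))   # input -> hidden (also in feed-forward nets)
W_hid_out = rng.normal(size=(5, 1))  # hidden -> output (also in feed-forward nets)
W_in_out = rng.normal(size=(3, 1))   # input -> output: the extra cascade connection

h = np.tanh(x @ W_in_hid)
# The output layer sums contributions from the hidden layer AND the raw input
out = h @ W_hid_out + x @ W_in_out
```

The direct path lets the network represent any linear part of the input-output relationship immediately, leaving the hidden layer to model only the nonlinear residue, which is one intuition for why these connections can speed up learning.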

1.4 How Do Neural Networks Differ From Conventional Computing?

To better understand artificial neural computing it is important to know first how a conventional 'serial' computer and its software process information. A serial computer has a central processor that can address an array of memory locations where data and instructions are stored. Computations are made by the processor reading an instruction, as well as any data the instruction requires, from memory addresses; the instruction is then executed and the results are saved in a specified memory location as required. In a serial system (and a standard parallel one as well) the computational steps are deterministic, sequential and logical, and the state of a given variable can be tracked from one operation to another.

In comparison, ANNs are not sequential or necessarily deterministic. There are no complex central processors; rather, there are many simple ones which generally do nothing more than take the weighted sum of their inputs from other processors. ANNs do not execute programmed instructions; they respond in parallel (either simulated or actual) to the pattern of inputs presented to them. There are also no separate memory addresses for storing data. Instead, information is contained in the overall activation 'state' of the network. 'Knowledge' is thus represented by the network itself, which is quite literally more than the sum of its individual components.

1.5 What Applications Should Neural Networks Be Used For?

Neural networks are universal approximators, and they work best if the system you are using them to model has a high tolerance to error. One would therefore not be advised to use a neural network to balance one's cheque book! However, they work very well for:

• capturing associations or discovering regularities within a set of patterns;

• problems where the volume, number of variables, or diversity of the data is very great;

• prediction or estimation of values based on target data or a historical dataset;

• problems where the relationships between variables are only vaguely understood; or

• problems where the relationships are difficult to describe adequately with conventional approaches.

1.6 What Are Their Limitations?

There are many advantages and limitations to neural network analysis and to discuss this subject properly we would have to look at each individual type of network, which isn't necessary for this general discussion. In reference to back-propagational networks however, there are some specific issues potential users should be aware of.

• Back-propagational neural networks (and many other types of networks) are in a sense the ultimate 'black boxes'. Apart from defining the general architecture of a network and perhaps initially seeding it with random numbers, the user has no other role than to feed it input, watch it train, and await the output. In fact, it has been said that with back-propagation, "you almost don't know what you're doing". Some freely available software packages (NevProp, bp, Mactivation) do allow the user to sample the network's progress at regular time intervals, but the learning itself progresses on its own. The final product of this activity is a trained network that provides no equations or coefficients defining a relationship (as in regression) beyond its own internal mathematics. The network 'IS' the final equation of the relationship.

• Back-propagational networks also tend to be slower to train than other types of networks and sometimes require thousands of epochs. If run on a truly parallel computer system this is not really a problem, but if the BPNN is being simulated on a standard serial machine (e.g. a single SPARC, Mac or PC) training can take some time. This is because the machine's CPU must compute the function of each node and connection separately, which can be problematic in very large networks with a large amount of data. However, the speed of most current machines is such that this is typically not much of an issue.

1.7 What Are Their Advantages Over Conventional Techniques?

Depending on the nature of the application and the strength of the internal data patterns, you can generally expect a network to train quite well. This applies to problems where the relationships may be quite dynamic or non-linear. ANNs provide an analytical alternative to conventional techniques, which are often limited by strict assumptions of normality, linearity, variable independence, etc. Because an ANN can capture many kinds of relationships, it allows the user to quickly and relatively easily model phenomena which may otherwise have been very difficult or impossible to explain.
