A review on scheduling and prioritization mechanisms with limited resources under uncertainty

Literature Survey, Bob Elders

Published: 15 October 2019

Introduction

This chapter provides context for the subject of this report. First, some basic concepts around smart charging are introduced, then the role of GreenFlux is described and the main research question is explained.

1-1 Charging infrastructure

Electric Vehicles (EVs) have been the focus of intensive research and development in recent years, as they seem to be a feasible solution to replace traditional CO2-emitting vehicles. Their impact on the grid has been researched ever since the 1980s [1]. With the advancement of research and technology, a steep growth in the demand for EVs is expected. To accommodate this growth, a sustainable large-scale charging infrastructure should be developed to encourage consumers to shift from fossil fuel consuming cars to EVs.

1-2 Smart Charging

Consider an office building with a parking lot for 10 EVs. The EVSEs (Electric Vehicle Supply Equipment) share their connection to the grid with the office building. If multiple employees arrive at the same time in the morning and start to charge their EVs, this creates a huge peak in energy demand. Due to this peak, a large (and expensive) grid connection, or capacity, is required. After a couple of hours the EVs will be fully charged and the energy demand will drop. At that moment only a fraction of the total capacity is used. The uncoordinated charging of EVs can lead to overcurrent in local grids [2].

Smart Charging is the principle that allows EVs to charge through a shared connection to the grid, ensuring that the capacity of the connection is not exceeded while EVs are still provided with the energy they require. The available capacity is distributed in such a way that no upgrade of the grid connection is needed [3]. This way today's infrastructure can continue to be used and thus no costly extensions are needed.

Smart charging consists of two techniques: Load Shifting and Load Balancing.

Load Shifting By charging these EVs not all at once but spread throughout the day, the peak demand can be significantly reduced. This form of balancing the energy demand is called load shifting or peak shaving. Another advantage of this technique is that EVs can be charged when energy is cheap, for example at night. Taking into account the energy consumption of the office, load shifting determines how much capacity is available at any time of the day to charge EVs.

Load Balancing Next, the available capacity must be divided between the connected EVs at each time instant. This is called load balancing, and it can be done either statically or dynamically. Static in this case means that the available capacity is simply divided by the number of EVSEs. Dynamic means that the available capacity is divided based on the number of connected EVs and their respective energy demands.
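The two strategies can be sketched in a few lines of Python. This is a minimal illustration, not GreenFlux code: the function names, the kW units and the water-filling loop in the dynamic variant are assumptions.

```python
def static_balance(capacity_kw, n_evse):
    """Static load balancing: split capacity evenly over all EVSEs,
    whether or not an EV is actually connected."""
    return [capacity_kw / n_evse] * n_evse

def dynamic_balance(capacity_kw, demands_kw):
    """Dynamic load balancing: split capacity over connected EVs only,
    capped at each EV's own demand; surplus is redistributed."""
    alloc = [0.0] * len(demands_kw)
    remaining = capacity_kw
    active = [i for i, d in enumerate(demands_kw) if d > 0]
    # Repeatedly hand out an equal share; EVs that need less return surplus.
    while active and remaining > 1e-9:
        share = remaining / len(active)
        remaining = 0.0
        still_active = []
        for i in active:
            give = min(share, demands_kw[i] - alloc[i])
            alloc[i] += give
            remaining += share - give
            if demands_kw[i] - alloc[i] > 1e-9:
                still_active.append(i)
        active = still_active
    return alloc
```

With 20 kW of capacity and demands of 3, 11 and 11 kW, the dynamic variant yields 3, 8.5 and 8.5 kW: the small EV's surplus is split over the two larger ones.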

1-3 GreenFlux

GreenFlux is an Amsterdam-based company that offers services for EV charging. Its main service is to provide communication between charging stations and a cloud-based back office. Clients use this platform to provide charging services to their own customers. Next to this communication service, GreenFlux offers a smart charging controller that can be installed in charging points. This controller communicates with the GreenFlux Services and Operations Platform (GSOP) to receive instructions on how connected EVs should be charged (smart charging).

1-4 Current Smart Charging Algorithm

GreenFlux's current smart charging algorithm is based on the Round-Robin scheduling algorithm from computing (chapter 2 deals more extensively with round-robin and its variations). It roughly works as follows: the algorithm uses a fixed time interval (or time quantum) of 15 min to allocate a limited resource (in this case power) to different processes (in this case EVs). First the processes are put in a queue. The algorithm selects the first EV from the queue and assigns to it the share it demands. Then the next EV is assigned a share of capacity, and so on, until no more capacity is left. After this time interval the queue rotates: EVs that received the full amount they demanded move to the back of the queue and the algorithm starts over.
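One interval of the behaviour described above can be sketched as follows. This is an illustrative interpretation, not the actual GSOP implementation; the names and the ampere units are assumptions.

```python
from collections import deque

def round_robin_allocate(queue, capacity_a, demands_a):
    """One 15-minute interval: walk the queue front to back, giving each
    EV its full demanded current until the capacity runs out; then rotate
    the fully served EVs to the back of the queue for the next interval."""
    alloc = {ev: 0.0 for ev in queue}
    remaining = capacity_a
    for ev in queue:
        give = min(demands_a[ev], remaining)
        alloc[ev] = give
        remaining -= give
        if remaining <= 0:
            break
    # EVs that received their full demand move to the back of the queue.
    served = [ev for ev in queue if alloc[ev] >= demands_a[ev] > 0]
    next_queue = deque(ev for ev in queue if ev not in served)
    next_queue.extend(served)
    return alloc, list(next_queue)
```

With 32 A of capacity and three EVs each demanding 16 A, the first two EVs charge at full current this interval and move to the back, so the third EV is first in line for the next interval.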

Bob Elders Literature Survey

1-5 Practical Issues

1-5-1 Apparent Power

When Alternating Current (AC) runs through a circuit, electrical components (e.g. transformers) shift the current (I) such that it is no longer in phase with the voltage (V). As a result, the current and voltage reach their peak values at different moments, which means that not all of the electric power can be converted into usable power. The phase shift is denoted as φ. When cos φ = 1 there is no phase shift, whereas at cos φ = 0.5 the phase shift is large. This phenomenon can be seen in the Power Triangle, shown in Figure 1-1. The Apparent Power (S), Reactive Power (Q) and Real Power (P) are related through the following equations:

Figure 1-1: Power Triangle

S = V × I (1-1)
P = V × I × cos φ (1-2)
Q = V × I × sin φ (1-3)

Due to this effect, a higher apparent power is needed to deliver the same amount of usable power. The ratio between real and apparent power is called the power factor:

Power factor = cos φ = P / |S| (1-4)

In practice one wants to use the highest possible power factor. When charging EVs on AC this means that one wants to charge with the largest possible current to maximize charging efficiency. Therefore it makes sense to use a round-robin based charging algorithm that gives maximum current to a few cars in turn, instead of giving all cars some current.
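Equations (1-1) to (1-4) can be checked numerically; the sketch below is illustrative only, and the 230 V / 16 A / 25° figures are made-up values.

```python
import math

def power_triangle(v_rms, i_rms, phi_deg):
    """Equations (1-1) to (1-4): apparent, real and reactive power for a
    given RMS voltage, RMS current and phase shift phi."""
    phi = math.radians(phi_deg)
    s = v_rms * i_rms              # apparent power S = V * I
    p = s * math.cos(phi)          # real power P = V * I * cos(phi)
    q = s * math.sin(phi)          # reactive power Q = V * I * sin(phi)
    return s, p, q, p / s          # the last value is the power factor

# e.g. 230 V, 16 A, 25 degrees of phase shift
s, p, q, pf = power_triangle(230, 16, 25)
```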


1-5-2 Minimum current

1-6 Problem Statement

When an EV connects to the charging station, the following parameters are unknown:

• Which EV type or manufacturer it is

• The state of charge (SoC) of the battery

• When the driver will stop the session

• How fast the EV can charge (single phase or polyphase, 16-32 A)

• Whether the driver has set a timer or other internal rule in the EV battery management system

• Whether the charger is connected to a polyphase network

Some of these can be estimated shortly after the charging session has started; others remain unknown. The algorithm in its current form functions well without this information. However, some of these parameters can be estimated using charging session data that has been collected over the past years. The goal of this project is to determine which estimated parameters can be used to improve the algorithm and which degree of uncertainty is allowed.

1-7 Outline of this report

This research requires thorough knowledge of scheduling algorithms (chapter 2), risk analysis, uncertainty modeling and performance analysis. This report provides a summary of existing studies related to these topics, which will form the basis for the subsequent master thesis project.


Chapter 2

Scheduling

In situations where multiple tasks share a scarce resource (e.g. CPU time or energy), scheduling is the technique that decides which tasks are prioritized. A scheduler arranges tasks and distributes the available resource among them. The purpose of the scheduler is to maximize efficiency, to ensure that no single task is assigned all (or none) of the available resource, and to make sure that tasks are finished before their deadlines. Scheduling is an essential tool to keep a system balanced and stable.

A first distinction can be made between preemptive and non-preemptive scheduling. Preemptive scheduling is a form of scheduling that interrupts tasks before they are completed to allocate a different task first. The uncompleted task is placed back in the waiting queue. Conversely, non-preemptive scheduling is the form of scheduling that assigns tasks to a resource without interruption of current tasks [4].

This chapter discusses several scheduling algorithms, ranging from basic deterministic algorithms to more complex non-deterministic algorithms. Their respective advantages and disadvantages are discussed, as well as their applicability in a smart charging environment.

2-1 Round-robin scheduling

Round-robin is an algorithm originating from computer science, where it is used to schedule computing time on processors. Its principle, however, can be used in other fields of engineering, such as smart charging.

2-1-1 Basic algorithm

The power of RR lies in its simplicity: during a fixed timeslice (or time quantum), capacity (e.g. CPU time) is assigned to a task. At the end of this time interval, the task is paused and the task highest in the queue is next. The former task is placed at the back of the queue. This process repeats until all tasks are completed. A schematic example of this algorithm is given in Figure 2-1.

Figure 2-1: Round-robin scheduling with time quantum of 1, retrieved from …

The round-robin algorithm is considered fair, as each task is assigned an equal amount of capacity. Challenges lie in choosing the correct time quantum size. If the length is chosen too large, the response time is also too large. Conversely, if the length is chosen too small, it results in too many context switches, which in turn leads to larger waiting times [5, 6].
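A minimal simulation of this basic algorithm, assuming all tasks arrive at t = 0 (function and variable names are illustrative):

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate round-robin on tasks that all arrive at t = 0.
    Returns the completion time of each task."""
    remaining = dict(enumerate(burst_times))
    queue = deque(remaining)
    t = 0
    finish = {}
    while queue:
        task = queue.popleft()
        run = min(quantum, remaining[task])
        t += run
        remaining[task] -= run
        if remaining[task] == 0:
            finish[task] = t
        else:
            queue.append(task)  # paused task goes to the back of the queue
    return [finish[i] for i in range(len(burst_times))]
```

With burst times [3, 1, 2] and a time quantum of 1, the completion times are [6, 2, 5]: the short task finishes early, at the cost of extra context switches for the long one.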

2-1-2 Priority based Round-Robin

One variation on the algorithm described above is the so-called Priority Based Round-Robin Algorithm described in [7]. The proposed algorithm creates two separate queues: one for high priority tasks and one for low priority tasks. The algorithm then runs similarly to the traditional algorithm, but selects tasks alternately from both priority queues. This methodology reduces the average waiting time for high priority tasks but may also increase the waiting time for lower priority tasks [7].
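The two-queue scheme of [7] can be sketched roughly as follows. This is an interpretation, not the authors' code; strict alternation between the two queues is assumed.

```python
from collections import deque

def priority_round_robin(high, low, quantum):
    """Priority based round-robin sketch: two queues, tasks picked
    alternately from the high- and low-priority queue.
    high/low: dicts mapping task name to burst time.
    Returns the sequence of (task, slice length) executions."""
    queues = [deque(high.items()), deque(low.items())]
    order, turn = [], 0
    while any(queues):
        q = queues[turn] if queues[turn] else queues[1 - turn]
        name, burst = q.popleft()
        run = min(quantum, burst)
        order.append((name, run))          # slice given to this task
        if burst - run > 0:
            q.append((name, burst - run))  # unfinished: back of its own queue
        turn = 1 - turn                    # alternate between the two queues
    return order
```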

2-1-3 Dynamic time quantum based Round-Robin

Average Burst Time The performance of a round-robin scheduler mainly depends on the length of the time quantum, but the optimal time quantum depends on the length of the jobs in the queue. One way to increase the performance of the round-robin algorithm is to introduce a dynamic time quantum, as described in [6, 8]. The AN algorithm takes the average burst time (time to complete) of the jobs in the queue and uses this as the new time quantum. Using this form of dynamic time quantum reduces the turnaround time, the average waiting time and the number of context switches.

Median Burst Time Similar to the approach from the previous paragraph, it is possible to choose the length of the time quantum based on the median burst time in the queue. Therefore all waiting tasks are ordered in ascending order. The median burst time is found as follows:

x̄ = Y_(n+1)/2 if n is odd
x̄ = (Y_(n/2) + Y_(n/2+1)) / 2 if n is even (2-1)

where x̄ is the median, Yi is the burst time of task i in the ordered list and n is the number of tasks in the queue. This approach drastically reduces the average waiting time, the turnaround time and the number of context switches [9].
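Both dynamic-quantum rules are short in practice; the sketch below is illustrative, with the 1-based indexing of Equation (2-1) mapped onto Python's 0-based lists.

```python
def mean_quantum(burst_times):
    """AN-style rule: the new time quantum is the average burst time."""
    return sum(burst_times) / len(burst_times)

def median_quantum(burst_times):
    """Equation (2-1): the new time quantum is the median burst time
    of the tasks in the queue, ordered ascending."""
    y = sorted(burst_times)
    n = len(y)
    if n % 2 == 1:
        return y[(n + 1) // 2 - 1]               # Y_(n+1)/2 (1-based)
    return (y[n // 2 - 1] + y[n // 2]) / 2       # (Y_n/2 + Y_(n/2+1)) / 2
```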

2-1-4 Deficit Round-Robin

2-2 Fair Queuing

In the previous section the round-robin scheduling algorithm and some of its variations were described. Round-robin is normally considered a fair algorithm since it allocates the same share of the resource to each individual task. However, when some tasks need more of the resource than others, an even distribution might no longer be considered fair. This section introduces ways to divide scarce resources among tasks that require different amounts.

2-2-1 Max-min Fairness

One way to distribute a resource fairly among different tasks is through a Max-Min Fair Share Algorithm (MMFS). This algorithm assigns to small tasks the amount of resource they need and then divides unused resources among the bigger tasks. First check whether the total demand is larger than the available resource, otherwise there is no need for this algorithm. The MMFS algorithm works as follows:

Consider a set of tasks 1, . . . , n, that demand x1, . . . , xn amount of resources. First order the tasks such that x1 ≤ x2 ≤ … ≤ xn. If the total available resource is C, first allocate C/n to the task with the lowest demand. If this is more than task 1 needs, add the surplus to the remaining available resource and continue the process with the remaining tasks. This is called max-min fairness since the algorithm maximizes the minimum amount of resource that is assigned to each task [4].
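The procedure can be sketched as follows. This is a minimal Python sketch: an equal split over the not-yet-served tasks, with any surplus rolled forward, is assumed.

```python
def max_min_fair_share(demands, capacity):
    """MMFS: order tasks by demand, give each the minimum of its demand
    and an equal share of the resource still available; any surplus is
    redistributed over the remaining (larger) tasks."""
    order = sorted(range(len(demands)), key=lambda i: demands[i])
    alloc = [0.0] * len(demands)
    remaining = capacity
    for k, i in enumerate(order):
        share = remaining / (len(order) - k)   # equal split over tasks left
        alloc[i] = min(demands[i], share)
        remaining -= alloc[i]
    return alloc
```

With demands of 2, 4 and 10 units and a capacity of 12, the allocation is 2, 4 and 6: the two small tasks are fully served and the largest task receives everything that is left.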

2-2-2 Max-min Weighted Fairness

In the case that tasks are assigned different priorities, the MMFS algorithm can be extended to the Weighted Max-Min Fair Share Algorithm (WMMFS). The main difference with MMFS is that the resources are divided proportionally to the respective weights: a twice as high weight directly results in twice as many units of resource.

With MMFS, task i is guaranteed to be assigned at least Ci = C/n amount of resource. With WMMFS, task i is guaranteed at least Ci = wi / (w1 + . . . + wn) × C.
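A weighted variant can be sketched along the same lines. This is an illustrative implementation, not taken from [4]; the share offered in each pass is assumed proportional to each task's weight.

```python
def weighted_max_min(demands, weights, capacity):
    """WMMFS sketch: like MMFS, but each pass offers every unsatisfied
    task a share proportional to its weight; a task with twice the
    weight is offered twice as much of the remaining resource."""
    n = len(demands)
    alloc = [0.0] * n
    active = [i for i in range(n) if demands[i] > 0]
    remaining = capacity
    while active and remaining > 1e-9:
        total_w = sum(weights[i] for i in active)
        next_active = []
        handed = 0.0
        for i in active:
            offer = remaining * weights[i] / total_w
            give = min(offer, demands[i] - alloc[i])
            alloc[i] += give
            handed += give
            if demands[i] - alloc[i] > 1e-9:
                next_active.append(i)
        remaining -= handed
        active = next_active
    return alloc
```

When no task is saturated, this reduces to the guarantee above: two tasks with weights 1 and 2 sharing 9 units receive 3 and 6 units respectively.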

2-2-3 Credit-Based Fair Queuing

2-3 Scheduling under uncertainty

The algorithms above assume a fully deterministic situation. In reality, however, this is rarely the case. Numerical values such as (in the case of EV charging) arrival time, departure time and state of charge may not be known exactly or may change during the process. To assess how well an algorithm can handle uncertainties, new performance criteria such as stability and robustness (see chapter 5) have to be introduced. This section discusses several approaches to deal with uncertainty. These can be classified as proactive, reactive or hybrid approaches [10].

2-3-1 Proactive

2-3-2 Reactive

2-3-3 Proactive – reactive

2-3-4 Predictive – reactive

2-4 Resource Scheduling

2-5 Stochastic resource constrained scheduling

2-6 Genetic algorithm scheduling

2-7 Conclusion

Chapter 3

Risk Analysis

Uncertainty is everywhere around us. Generally one cannot be sure about the current state of one's environment or about what is going to happen. A system that handles uncertainty well is considered robust. When making decisions under uncertainty, one has to choose between a number of alternatives using imperfect or unknown information [11].

Especially in the energy industry, many decisions have to be made using uncertain data [12]. In many industries Probabilistic Risk Analysis (PRA) is an important tool used to validate safety claims or to demonstrate the need for improvement [13]. In recent years PRA has been used more and more as a supportive tool in decision making. This chapter introduces the concept of uncertainty along with basic PRA tools. The goal is to model uncertainty mathematically using probability and to use such models to make optimal decisions.

3-1 What is uncertainty?

Uncertainty is that which disappears when we become certain [13]. Let us first distinguish between two types of uncertainty that are relevant within this research [14]:

• Measurement uncertainty is uncertainty about the current state. There is no complete information about some parameters since they cannot be measured exactly or directly.

• Prediction uncertainty is variability in a prediction as a result of uncertain or possibly changing inputs [15].

3-2 Modeling tools + quantification

Multiple methods have been developed to deal with uncertainty [16]. These methods include robust optimization, Information Gap Decision Theory (IGDT), the probabilistic approach, the possibilistic approach and hybrid probabilistic-possibilistic approaches. The common goal of these methods is to investigate the influence of uncertainty in distribution networks (such as charging infrastructure). This section discusses the general principle of these methods. An overview of all methods is shown in Figure 3-1.

Figure 3-1: Uncertainty Modeling Methods, retrieved from [17]

3-2-1 IGDT

Information Gap Decision Theory is a non-probabilistic method for supporting decision making under uncertainty [18]. It was developed by Yakov Ben-Haim in the 1980s. This method can be divided into three parts [17]:

• Uncertainty model

• Robustness model

• Decision-making model

Consider a typical optimization problem:

y = min_d f(X, d) (3-1)
s.t. H(X, d) = 0 (3-2)
G(X, d) ≥ 0 (3-3)

with X the uncertain input parameters, d the decision variables, H the equality constraints, G the inequality constraints and f defining the relation between X and d.

When the uncertain input parameter X is equal to its estimated value (X = X̄), solving Equations 3-1 to 3-3 gives the predicted value of y (= ȳ). However, when X is unknown, IGDT tries to find a solution to the problem that is robust, taking into account the uncertainty in X. The robustness is defined as the maximum uncertainty level that can be sustained while still achieving the desired performance.

Consider the optimization problem from before, now describing a real-life energy system problem. Here f describes the model (the constraints that describe how much energy is purchased from different resources), X contains the uncertain input parameters (the electricity price), y is the output (the total payment for all energy) and d the decision variables (amount of energy per energy source). The robustness of this system can now be defined.

The total payment should be lower than a pre-defined value lc, regardless of how much the prices deviate from their predicted values. Equations 3-1 to 3-3 can now be converted to their robust form:

f(X, d) ≤ lc (3-4)
lc = (1 + ζ) × ȳ (3-5)
H(X, d) = 0 (3-6)
G(X, d) ≥ 0 (3-7)

where ζ describes the deviation from the predicted objective value, caused by estimation errors in X, that is tolerated by the decision maker. Uncertainty in IGDT models can be represented in several ways, but mainly the envelope bound model is used:

X̃ ∈ U(α, X̄) (3-8)
U(α, X̄) = { X : |(X − X̄) / X̄| ≤ α } (3-9)

where α is the uncertainty level of X̄, X̄ is the expected value of X and U(α, X̄) is the set that contains all values of X with a deviation from X̄ that is never greater than αX̄. Both α and X are uncertain.

The robustness of a decision d based on requirement lc, written α̂(d, lc), is defined as the maximum value of α at which the decision maker is sure that the minimum requirements are still fulfilled:

α̂(d, lc) = max α (3-10)
s.t. constraints (3-11)

Decision making is done by finding the set of decision variables d that maximizes the robustness:

max_d α̂(d, lc) (3-12)
s.t. ∀ X ∈ U(α, X̄): (3-13)
f(X, d) ≤ lc (3-14)
lc = (1 + ζ) × ȳ (3-15)
H(X, d) = 0 (3-16)
G(X, d) ≥ 0 (3-17)

3-2-2 Probabilistic approaches

The probabilistic method uses a multivariable function, e.g. y = f(Z). Here Z is an input vector containing multiple uncertain parameters Z = [z1, . . . , zm] with a known Probability Density Function (PDF) for each element [17]. The goal is to estimate the PDF of y.

To clarify, consider the following example: f is a system model that relates the uncertain system inputs in Z, such as power injections by renewables and electric loads, to the output variable y, which contains the total active losses and total operating costs.


Monte Carlo Simulation

Monte Carlo refers to a set of techniques that use repeated simulations to obtain numerical results (e.g. a PDF), using a randomized set of input parameters in every simulation.

Consider the input vector Z = [z1, . . . , zm] with known PDFs. Based on the PDF of each input parameter zi, a sample value zi^e is generated. The output is then calculated as y^e = f(Z^e) (with Z^e = [z1^e, . . . , zm^e]). This process is repeated a large number of times so that the outputs can be used to form an output PDF [17].

For example, suppose one wants to use Monte Carlo simulation to calculate the area of the circle quadrant that lies in the unit square. It is easy to tell that π/4 of the square is covered by the circle quadrant. The Monte Carlo technique would generate a set of randomized points in the unit square and then evaluate for each point whether it lies within the circle quadrant (distance to origin ≤ 1). When a large set of inputs is used, the fraction of points that lie in the circle quadrant approximates π/4. In 1 000 000 experiments there would be a 95 % chance that the number of points in the circle quadrant lies between 784 600 and 786 200, which means that the estimate for π/4 would lie between 0.7846 and 0.7862, close to the real value 0.785398 [19].
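The quadrant experiment can be reproduced in a few lines (an illustrative sketch with one fixed seed, not the exact experiment of [19]):

```python
import random

def quarter_circle_area(n_samples, seed=0):
    """Monte Carlo estimate of the area of the unit-circle quadrant inside
    the unit square: the fraction of random points with distance <= 1
    from the origin approximates pi/4."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits / n_samples

estimate = quarter_circle_area(1_000_000)  # close to pi/4 = 0.785398...
```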

Point Estimate Method

In statistics, the Point Estimation Method (PEM) is used to calculate the moments of a random variable that is a function of other random variables. Moments are quantities used to describe a probability distribution. The method is widely applied in probabilistic power flow and other problems concerning many variables. The PEM is explained briefly below.

Assume a random output Z that is a function of m random input variables through the function F:

Z = F(x1,x2,…,xm) (3-18)

Now consider the l-th random variable xl. The k-th concentration of this variable is an estimate point that is described by a location factor xl,k and a weight factor pl,k. Here the location factor is the k-th point of xl in F and the weight factor is a weighted probability that describes how important xl,k is for the output variable Z.

PEM first determines information on the random input variables. Then for each input variable K concentrations are calculated. Next the function is evaluated using the weighted probability factors. Now statistical information on the output variables can be computed [20].

Advantages of PEM are that it is computationally efficient and easy to implement. A disadvantage is that only the mean and standard deviation of the output are estimated, while no further information about the shape of the PDF is obtained [16].

Scenario Based Modeling

A scenario is one possible realization of the uncertain parameters. It is possible to generate a set of scenarios using the PDF of each uncertain input parameter. The expected value of the output variable y is then calculated as:

y = Σ_{s∈Ωj} πs × f(Zs) (3-19)

where Σ_{s∈Ωj} πs = 1, with πs the probability of scenario s and Zs the realization of the inputs in that scenario [12].
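Equation (3-19) is a plain probability-weighted sum; a minimal sketch follows (the three price scenarios and the load of 10 units are made-up numbers):

```python
def expected_output(scenarios, f):
    """Equation (3-19): expected value of y as the probability-weighted
    sum of the model output over all scenarios.
    scenarios: list of (probability, Z) pairs summing to probability 1."""
    assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9
    return sum(p * f(z) for p, z in scenarios)

# e.g. three electricity-price scenarios for a fixed load of 10 units
cost = expected_output([(0.2, 0.8), (0.5, 1.0), (0.3, 1.5)],
                       lambda price: price * 10)
```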

3-2-3 Interval Based Analysis

The Interval Based Analysis technique was introduced in 1966 [?]. It assumes that uncertain input parameters can be represented by a known interval.

Consider the following function example:

y = f(x1,…,xn) (3-20)

Each uncertain input parameter xi is assumed to lie within a known interval such that lbi ≤ xi ≤ ubi. Interval Analysis finds the bounds of the output parameters. It has been shown in [?] that this technique is suitable to evaluate the effects related to load demand.
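The bound-propagation idea can be sketched as follows. This is a crude illustration, not a proper interval-arithmetic library: it bounds the output by evaluating the function on a grid over the input box, whereas exact interval analysis would propagate [lb, ub] pairs through every operation.

```python
from itertools import product

def interval_bounds(f, boxes, grid=20):
    """Approximate the output interval of f over a box of uncertain
    inputs by evaluating it on a regular grid (includes the endpoints)."""
    axes = [[lb + (ub - lb) * k / (grid - 1) for k in range(grid)]
            for lb, ub in boxes]
    values = [f(*x) for x in product(*axes)]
    return min(values), max(values)

# e.g. total load y = x1 + 2*x2 with x1 in [1, 2] and x2 in [0, 1]
lo, hi = interval_bounds(lambda x1, x2: x1 + 2 * x2, [(1, 2), (0, 1)])
```

For this monotone example the grid contains the corner points, so the bounds [1, 4] are exact; for general functions the grid only approximates the true output interval.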

3-2-4 Robust Optimization

Robust optimization is a field of optimization that focuses on solving optimization problems that are subject to uncertainty. It was introduced in the 1970s by A. L. Soyster [?]. Robust in this context means the ability to cope with uncertain parameters or disturbances. To explain this technique, consider the function below:

z = f(X, y) (3-21)

such that z is linear in the uncertain parameter X and nonlinear in the known y. It is assumed that the PDF of X is unknown. Its uncertainty is represented with an uncertainty set X ∈ U(X), where U(X) is a set that contains all values which X can take.

The maximization of (3-21) is written as:

max_y z = f(X, y) (3-22)
s.t. X ∈ U(X) (3-23)

As z is linear in X, the maximization problem can be rewritten as:

max_y z (3-24)
s.t. z ≤ f(X̃, y) (3-25)
f(X̃, y) = A(y) × X̃ + g(y) (3-26)
X̃ ∈ U(X) = { X : |X − X̄| ≤ X̂ } (3-27)

where X̃ is the uncertain value of X, X̄ is the predicted value of X and X̂ is the maximum deviation of X from X̄.


3-2-5 Hybrid Probabilistic – Possibilistic

3-2-6 Possibilistic

3-3 Uncertainty modeling + risk measurement

3-4 Optimal decision making

3-4-1 Decision tree

3-5 Conclusion

Chapter 4

Uncertainty Modeling

This chapter investigates approaches to take uncertainty into account when scheduling. Some basic terms from the field of statistics are introduced first.

The mean is the average of a set of numbers. The median is the middle number of an ordered set of numbers. The mode is the number from a set that occurs most often.

4-1 Estimating Task Times

4-1-1 Uniform distribution function

4-1-2 Stochastic Activity Durations: The PERT Model

One way to estimate the duration of stochastic tasks is through the PERT model [5]. Originating from the 1950s, it was developed by the US Navy to estimate project durations. The model assumes the following:

• Activity durations are independent of other activities in the project

• The probability density function of du, the duration of task u, can be approximated as a beta distribution:

dF(du) = K (du − a)^α (b − du)^β (4-1)

with a ≤ du ≤ b, a and b location parameters and α and β shape parameters.

• The mean of the beta density function is approximated as:

E[du] ≈ (au + 4mu + bu) / 6 (4-2)

and its variance as:

Var[du] ≈ (bu − au)² / 36 (4-3)

with au an optimistic estimate of the duration, mu the most likely estimate (the mode) and bu a pessimistic estimate.

The corresponding density function is shown in Figure 4-1.

Figure 4-1: PERT density function beta distribution, retrieved from …

The main advantage of modeling using the PERT model is that only three time estimates have to be specified instead of a full probability density function [5].
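The three-point estimate is easy to compute; a minimal sketch (the 1 h / 2 h / 6 h session estimates are made-up numbers):

```python
def pert_estimate(a, m, b):
    """Equations (4-2) and (4-3): mean and variance of a task duration
    from an optimistic (a), most likely (m) and pessimistic (b)
    estimate."""
    mean = (a + 4 * m + b) / 6
    var = ((b - a) / 6) ** 2
    return mean, var

# e.g. a charging session estimated at 1 h / 2 h / 6 h
mean, var = pert_estimate(1, 2, 6)
```

The pessimistic tail pulls the mean (2.5 h) above the most likely value of 2 h, which is exactly the skew the beta distribution is meant to capture.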


Chapter 5

Performance analysis


Chapter 6

Conclusion


Appendix A

The Back of the Thesis

Appendices are found in the back.

A-1 An Appendix Section

A-1-1 An appendix subsection with C++ Listing

//
// C++ Listing Test
//

#include <iostream>

int main() {
    for (int i = 0; i < 10; i++) {
        std::cout << "Ok\n";
    }
    return 0;
}

A-1-2 A MATLAB listing

%
% Comment
%

n = 10;
for i = 1:n
    disp('Ok');
end


Appendix B

Yet Another Appendix

B-1 Test Section (Again?)

Ok, all is well.


Bibliography

[1] G. T. Heydt, “The Impact of Electric Vehicle Deployment on Load Management Strategies,” IEEE Transactions on Power Apparatus and Systems, vol. PAS-102, no. 5, pp. 1253–1259, 1983.

[2] J. Hu, S. You, C. Si, M. Lind, and J. Østergaard, “Optimization and Control Methods for Smart Charging of Electric Vehicles Facilitated by Fleet Operator: Review and Classification,” International Journal of Distributed Energy Resources and Smart Grids, vol. 10, no. 1, pp. 383–397, 2014.

[3] G. Fitzgerald, C. Nelder, J. Newcomb, and J. Lazar, Electric Vehicles as Distributed Energy Resources. Rocky Mountain Institute, 2016.

[4] I. Marsic, Computer Networks, Performance and Quality of Service. Rutgers University, 2013.

[5] E. L. Demeulemeester and W. S. Herroelen, Project Scheduling, a Research Handbook. Kluwer Academic Publishers, 2002.

[6] R. Mohanty, H. S. Behera, K. Patwari, M. Dash, and L. M. Prasanna, “Priority Based Dynamic Round Robin (PBDRR) Algorithm with Intelligent Time Slice for Soft Real Time Systems,” International Journal of Advanced Computer Science and Applications, vol. 2, no. 2, pp. 46–50, 2011.

[7] I. S. Rajput, “A Priority based Round Robin CPU Scheduling Algorithm for Real Time Systems,” International Journal of Computer Science and Information Technologies, vol. 8, no. 3, pp. 475–478, 2017.

[8] A. Noon, A. Kalakech, and S. Kadry, “A New Round Robin Based Scheduling Algorithm for Operating Systems: Dynamic Quantum Using the Mean Average,” International Journal of Computer Science Issues, vol. 8, no. 3, pp. 224–229, 2011.


[9] H. S. Behera, R. Mohanty, and D. Nayak, “A New Proposed Dynamic Quantum with Re-Adjusted Round Robin Scheduling Algorithm and Its Performance Analysis,” International Journal of Computer Applications, vol. 5, no. 5, p. 6, 2011.

[10] T. Chaari, S. Chaabane, N. Aissani, and D. Trentesaux, “Scheduling under uncertainty: Survey and research directions,” 2014 International Conference on Advanced Logistics and Transport, ICALT 2014, pp. 229–234, 2014.

[11] K. Pazek and C. Rozman, “Decision Making Under Conditions of Uncertainty in Agri- culture: a Case Study of Oil Crops,” Poljoprivreda, vol. 15, no. 1, pp. 45–50, 2009.

[12] A. J. Conejo, J. M. Morales, and M. Carrión, Decision Making Under Uncertainty in Electricity Markets. Springer, 2006.

[13] T. Bedford and R. Cooke, Probabilistic Risk Analysis: Foundations and Methods. Cambridge University Press, 2002.

[14] J. M. O’Kane and S. M. LaValle, “Algorithms for Planning under Uncertainty in Prediction and Sensing,” Autonomous Mobile Robots: Sensing, Control, Decision-Making, and Applications, pp. 501–547, 2006.

[15] M. D. McKay, Evaluating Prediction Uncertainty. Division of Systems Technology Office of Nuclear Regulatory Research U.S. Nuclear Regulatory Commission, 1995.

[16] R. H. Zubo, G. Mokryani, H. S. Rajamani, J. Aghaei, T. Niknam, and P. Pillai, “Operation and planning of distribution networks with integration of renewable distributed generators considering uncertainties: A review,” Renewable and Sustainable Energy Reviews, vol. 72, pp. 1177–1198, 2017.

[17] A. Soroudi and T. Amraee, “Decision making under uncertainty in energy systems: State of the art,” Renewable and Sustainable Energy Reviews, vol. 28, pp. 376–384, 2013.

[18] A. Soroudi and M. Ehsan, “IGDT based robust decision making tool for DNOs in load procurement under severe uncertainty,” IEEE Transactions on Smart Grid, vol. 4, no. 2, pp. 886–895, 2013.

[19] M. H. Kalos and P. A. Whitlock, Monte Carlo Methods. Weinheim: WILEY-VCH Verlag GmbH & Co., 2nd rev. ed., 2008.

[20] S. Qiao, P. Wang, T. Tao, and G. B. Shrestha, “Maximizing Profit of a Wind Genco Considering Geographical Diversity of Wind Farms,” IEEE Transactions on Power Systems, vol. 30, no. 5, pp. 2207–2215, 2015.


List of Acronyms

AC Alternating Current

EVSE Electric Vehicle Supply Equipment

GSOP GreenFlux Services and Operations Platform

IGDT Information Gap Decision Theory

PDF Probability Density Function

PEM Point Estimation Method

PRA Probabilistic Risk Analysis


Glossary


 
