I need to write a simulation of CPU scheduling: the operating system must select one of the processes in the ready queue to be executed.
I'm having real trouble understanding what the professor wants from this program. Can anybody here help me develop some pseudocode for the logic, or explain what exactly is required? I actually have some C++ code for a program like this, but I don't understand C++; I'd like to write my own Java code instead.
I need to simulate the following scheduling algorithms.
First Come, First Served (FCFS) scheduling algorithm.
Simulate the system using FCFS scheduling. The FCFS scheduling algorithm is not preemptive (once a job starts, it runs to completion).
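Here's my rough attempt at the FCFS logic in Java, using the PCB record I sketch under "Data" below. I'm charging the full switch overhead X on every dispatch, which roughly matches rules a) and b) in the overhead section but may be more than the assignment requires:

```java
// FCFS sketch: run jobs to completion in arrival order.
// Assumes processes[] is sorted by arrival time and X is the switch overhead.
static void fcfs(PCB[] processes, double X) {
    double clock = 0.0;
    for (PCB p : processes) {
        if (clock < p.arrivalTime) clock = p.arrivalTime; // CPU idles until the next arrival
        clock += X;            // task-switch / dispatch overhead (simplified)
        clock += p.burstTime;  // non-preemptive: runs to completion
        p.completionTime = clock;
    }
}
```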
Shortest Job First (SJF) scheduling algorithm.
Simulate the system using SJF scheduling. The SJF scheduling is not preemptive.
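My guess at the non-preemptive SJF logic: at every dispatch, scan the jobs that have already arrived and pick the one with the shortest burst. A plain list scan should be fine for 200 processes:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Non-preemptive SJF sketch: pick the shortest arrived job at each dispatch.
static void sjf(PCB[] processes, double X) {
    List<PCB> pending = new ArrayList<>(List.of(processes));
    pending.sort(Comparator.comparingDouble(p -> p.arrivalTime));
    double clock = 0.0;
    while (!pending.isEmpty()) {
        final double now = clock;
        PCB next = pending.stream()
                .filter(p -> p.arrivalTime <= now)                  // already arrived
                .min(Comparator.comparingDouble(p -> p.burstTime))  // shortest burst wins
                .orElse(pending.get(0)); // nothing arrived yet: take the earliest arrival
        if (clock < next.arrivalTime) clock = next.arrivalTime;
        clock += X + next.burstTime;     // non-preemptive: run to completion
        next.completionTime = clock;
        pending.remove(next);
    }
}
```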
Shortest Remaining Time First (Preemptive SJF) scheduling algorithm.
Simulate the system using Preemptive SJF scheduling. At any given time, you are to be running the process with the least remaining processing time.
For example, in the following case, I am running process A. Process B arrives, and its burst is shorter than the remaining time of process A. You incur the task-switch overhead to switch to job B.
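So for SRTF, I think the interesting bit is the check at each arrival event. A sketch, assuming `running`, `readyQueue` (ordered by remaining time), `clock`, and `X` are fields of my simulator, `dispatch` is a helper I'd write, and `remainingTime` has already been updated to account for work done up to the current clock:

```java
// SRTF sketch of the preemption decision at an arrival event.
void onArrival(PCB newcomer) {
    if (running != null && newcomer.remainingTime < running.remainingTime) {
        readyQueue.add(running);  // preempted job goes back with its remaining time
        clock += X;               // charge the task-switch overhead, as in the example
        dispatch(newcomer);       // hypothetical helper that starts the new job
    } else {
        readyQueue.add(newcomer); // no preemption: just queue the newcomer
    }
}
```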
Priority Scheduling.
Simulate the system using preemptive priority scheduling.
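If I understand correctly, this has the same shape as the SRTF logic above, just with a different ordering. Something like:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Ready queue ordered by priority, where 0 = highest:
PriorityQueue<PCB> readyQueue =
        new PriorityQueue<>(Comparator.comparingInt((PCB p) -> p.priority));
// ...and in onArrival, preempt when newcomer.priority < running.priority.
```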
Round Robin
Simulate the system with round-robin scheduling, using time quanta of Q=1 and Q=10 (i.e., each of these simulations is run once for each quantum size). When a new process arrives, it always goes to the back of the queue.
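Here's my stab at round robin for one quantum size Q. It admits any jobs that arrived during the slice before re-queueing the preempted job, so newcomers really do go to the back first. I haven't handled the X/2 same-job-resumed rule from the overhead section below, so this charges the full X on every dispatch:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Round-robin sketch for one quantum Q; processes[] assumed sorted by arrival.
static void roundRobin(PCB[] processes, double Q, double X) {
    Deque<PCB> ready = new ArrayDeque<>();
    int nextArrival = 0;
    double clock = 0.0;
    int done = 0;
    while (done < processes.length) {
        // Admit everyone who has arrived by now; new arrivals go to the back.
        while (nextArrival < processes.length
                && processes[nextArrival].arrivalTime <= clock) {
            ready.addLast(processes[nextArrival++]);
        }
        if (ready.isEmpty()) {             // CPU idle: jump to the next arrival
            clock = processes[nextArrival].arrivalTime;
            continue;
        }
        PCB p = ready.pollFirst();
        clock += X;                        // simplified: full overhead per dispatch
        double slice = Math.min(Q, p.remainingTime);
        clock += slice;
        p.remainingTime -= slice;
        // Jobs that arrived during the slice enter the queue before p re-queues.
        while (nextArrival < processes.length
                && processes[nextArrival].arrivalTime <= clock) {
            ready.addLast(processes[nextArrival++]);
        }
        if (p.remainingTime > 0) {
            ready.addLast(p);              // quantum expired: back of the queue
        } else {
            p.completionTime = clock;      // finished
            done++;
        }
    }
}
```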
Data
I need to simulate the arrival of 200 processes into the ready queue to be executed. The processes are generated only once: save their process control blocks (PCBs) and reuse the same processes for each simulation. The records in the queue are the PCBs of the processes.
You should generate your own arrival, burst, and other service times, as well as priorities, for each process. Arrival and burst times should be given in milliseconds. Priorities should be in the range 0 to 7, with 0 representing high priority.
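Here's the PCB record I'm planning to use (the field names are my own, not from the assignment). Since the same 200 processes are reused across all the simulations, I added a reset() for the fields that get mutated:

```java
// A minimal PCB sketch; field names are my own choice.
class PCB {
    final int pid;
    final double arrivalTime;  // absolute arrival time in ms
    final double burstTime;    // total CPU time required in ms
    final int priority;        // 0 (high) .. 7 (low)

    double remainingTime;      // decremented as the process runs
    double completionTime;     // set when the process finishes

    PCB(int pid, double arrivalTime, double burstTime, int priority) {
        this.pid = pid;
        this.arrivalTime = arrivalTime;
        this.burstTime = burstTime;
        this.priority = priority;
        this.remainingTime = burstTime;
    }

    // Each simulation mutates remainingTime/completionTime, so reset between runs.
    void reset() {
        remainingTime = burstTime;
        completionTime = 0;
    }
}
```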
Note that for each simulation, the i-th customer will arrive at the same time and will require the same amount of service time. Thus any difference in performance should be due to the different scheduling algorithms, not to differences in the random numbers.
For test and debugging purposes, you should also provide a sample set of five processes and calculate results manually.
Arrivals
Processes arrive with inter-arrival times. The simulation ends when the 200th customer arrives. Note that the system starts empty, and that the first customer arrives at time T1 (not 0), i.e., after the first inter-arrival time is added to zero.
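For generating the arrivals, I'm assuming exponentially distributed inter-arrival and burst times, since the assignment doesn't pin down a distribution, and the means below are numbers I made up. A fixed seed keeps the same 200 processes across all simulations:

```java
import java.util.Random;

// Sketch: generate the 200 PCBs once, in arrival order.
static PCB[] generateProcesses(long seed) {
    final double MEAN_INTERARRIVAL = 50.0; // ms -- assumed parameter
    final double MEAN_BURST = 30.0;        // ms -- assumed parameter
    Random rng = new Random(seed);
    PCB[] procs = new PCB[200];
    double clock = 0.0;                    // system starts empty
    for (int i = 0; i < 200; i++) {
        // First arrival is at T1 > 0: the first gap is added to zero.
        clock += -MEAN_INTERARRIVAL * Math.log(1.0 - rng.nextDouble());
        double burst = -MEAN_BURST * Math.log(1.0 - rng.nextDouble());
        int priority = rng.nextInt(8);     // 0 = high .. 7 = low
        procs[i] = new PCB(i, clock, burst, priority);
    }
    return procs;
}
```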
Service Times
For each scheduling algorithm, consider two cases: the time X to switch between processes is either 0.0 or 0.10, i.e., each simulation is run twice. This overhead is incurred whenever:
a) There is a task switch from job A to job B
b) A job arrives to an empty system.
c) If the clock interrupts at the end of a quantum and the same job is resumed, the overhead is X/2.
You can consider other logical overhead items.
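My reading of rules a) through c) as a small helper, where X is 0.0 or 0.10 depending on the run:

```java
// Overhead charged when 'next' is dispatched after 'previous' ran (or after idle).
static double switchCost(PCB previous, PCB next, double X) {
    if (previous == next) return X / 2.0; // rule c: quantum expired, same job resumes
    return X;                             // rules a, b: real switch, or arrival to an empty system
}
```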
Output
Please put together a table showing your simulation results in some clear and useful manner.
Compute throughput, the average turnaround time, and the average waiting time for all processes that have completed service and left the system (my attempt at these calculations is sketched right after this list).
Compute the average of the inter-arrival times for all 200 processes and print it.
Compute the average of the service times for all 200 processes and print it.
You should also show the results of all of your simulations using the test data (the sample set of five processes).
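Here's how I'd compute the statistics. I'm assuming turnaround = completion - arrival, waiting = turnaround - burst, and throughput = completed jobs per millisecond of simulated time; someone correct me if the course defines these differently. It also assumes the PCB array is in arrival order and endTime is when the last job departs:

```java
import java.util.Arrays;

// Sketch of the summary statistics from the PCB fields above.
static void report(PCB[] ps, double endTime) {
    double totalTurnaround = 0, totalWaiting = 0;
    for (PCB p : ps) {
        double t = p.completionTime - p.arrivalTime; // turnaround
        totalTurnaround += t;
        totalWaiting += t - p.burstTime;             // waiting = turnaround - burst
    }
    // Arrivals are cumulative sums of the gaps, so the mean gap is lastArrival / n.
    double avgInterArrival = ps[ps.length - 1].arrivalTime / ps.length;
    double avgService =
            Arrays.stream(ps).mapToDouble(p -> p.burstTime).average().orElse(0);
    System.out.printf("throughput        = %.4f jobs/ms%n", ps.length / endTime);
    System.out.printf("avg turnaround    = %.2f ms%n", totalTurnaround / ps.length);
    System.out.printf("avg waiting       = %.2f ms%n", totalWaiting / ps.length);
    System.out.printf("avg inter-arrival = %.2f ms%n", avgInterArrival);
    System.out.printf("avg service       = %.2f ms%n", avgService);
}
```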
Hint.
You should write an event-driven simulation, not a clock- or interval-type simulation. What this means is that your simulator basically cycles through the following loop:
(start loop) -> (get event) -> (reset clock to event time) ->
(update system statistics) -> (update system state) ->
(generate any new events and put them on event list) -> (continue loop).
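And here's my reading of that hint as a Java skeleton (needs a recent JDK for records and arrow-style switch). Event, EventType, clock, and the handleArrival/handleDeparture/handleQuantumExpiry/updateStatistics methods are placeholders I'd still have to fill in:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Placeholder event types; one per thing that can change the system state.
record Event(double time, EventType type, PCB pcb) {}
enum EventType { ARRIVAL, DEPARTURE, QUANTUM_EXPIRY }

// Event list ordered by time: the loop jumps from event to event, never ticks.
PriorityQueue<Event> eventList =
        new PriorityQueue<>(Comparator.comparingDouble(Event::time));

void run() {
    while (!eventList.isEmpty()) {
        Event e = eventList.poll();     // get event (earliest first)
        clock = e.time();               // reset clock to event time
        updateStatistics();             // update system statistics
        switch (e.type()) {             // update system state, and the handlers
            case ARRIVAL        -> handleArrival(e.pcb());        // generate any
            case DEPARTURE      -> handleDeparture(e.pcb());      // new events and
            case QUANTUM_EXPIRY -> handleQuantumExpiry(e.pcb());  // enqueue them
        }
    }
}
```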