Week 6 – CST-334 Operating Systems Module 6 – Concurrency Continued (Semaphores)

This week we continued with concurrency and covered semaphores. Semaphores were invented by Edsger Dijkstra and colleagues. A semaphore is a synchronization primitive: an object with an integer value that is manipulated through two routines, sem_wait() and sem_post(). The semaphore must first be initialized, and the initial value determines its behavior. A binary semaphore behaves like a lock; you get this behavior by initializing the semaphore value to 1. For example, thread1 calls sem_wait() and decrements the value to 0, which allows it to enter the critical section. If thread2 calls sem_wait() while thread1 is still in the critical section, it decrements the semaphore value to -1 and goes to sleep. Once thread1 is done with the critical section, it calls sem_post(), incrementing the value to 0 and waking thread2 so it can enter the critical section.
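Here is a minimal sketch of that pattern using the POSIX semaphore API: a semaphore initialized to 1 protects a shared counter. The two-thread setup and the counter itself are illustrative choices, not details from the post.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex;          /* binary semaphore acting as a lock */
int counter = 0;      /* shared data protected by the semaphore */

void *worker(void *arg) {
    sem_wait(&mutex); /* decrement; sleeps if another thread holds the lock */
    counter++;        /* critical section */
    sem_post(&mutex); /* increment; wakes a waiting thread, if any */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);  /* initial value 1 => behaves like a lock */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);
    sem_destroy(&mutex);
    return 0;
}
```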
Week 5 – CST-334 Operating Systems Module 5 – Concurrency Introduction

This week we covered an introduction to concurrency. So far, we have studied single-threaded programs, but programs can also have multiple threads of execution. A process, for example, can fork a child, but parent and child do not share memory because each has its own copy of code and data. A process can also create multiple threads, which share the same address space and can communicate through global variables, using less memory. Each thread has its own program counter and stack. The advantage of using threads is parallelism: a single process can use multiple CPU cores in parallel to complete tasks. The disadvantage of using threads comes in the form of race conditions. A race condition happens when multiple threads access a section of code that touches shared variables; these sections of code are called critical sections. Multiple threads entering such a section at the same time cause unpredictable results.
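A small sketch of the race condition described above: two threads increment a shared global with no synchronization, so increments can be lost. The loop count is an arbitrary illustrative value.

```c
#include <pthread.h>
#include <stdio.h>

volatile int counter = 0;   /* shared variable touched by the critical section */

void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;          /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but interleaved updates usually lose increments. */
    printf("counter = %d\n", counter);
    return 0;
}
```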
Week 4 – CST-334 Operating Systems Module 4 – Memory Virtualization Continued

This week we continued learning about memory virtualization and were introduced to paging. In contrast to segmentation, where we chop up memory into variable-size pieces, with paging we use fixed-size pieces. The logical memory space is divided into fixed-size units called pages, and the corresponding fixed-size slots in physical memory are called page frames. The size of the virtual address space assigned to each process depends on the architecture; a 32-bit architecture, for example, has a virtual address space of 4 GB (2^32 bytes). Paging does not lead to external fragmentation and allows sparse use of the virtual address space. Along with paging, we learned about the Translation Lookaside Buffer, or TLB. The TLB is part of the MMU and is a hardware cache of popular virtual-to-physical address translations. When a virtual address is referenced, the hardware first checks the TLB to see if the desired address translation is held there.
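To make the page/offset split concrete, here is a minimal sketch of how a paging system divides a virtual address into a virtual page number (VPN) and an offset. It assumes 4 KB pages in a 32-bit address space; the page size and the example address are assumptions, not details from the post.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE   4096u   /* 2^12 bytes per page (assumed) */
#define OFFSET_BITS 12u

int main(void) {
    uint32_t vaddr  = 0x12345678;              /* example virtual address */
    uint32_t vpn    = vaddr >> OFFSET_BITS;    /* which page */
    uint32_t offset = vaddr & (PAGE_SIZE - 1); /* where within the page */
    printf("vaddr=0x%08x -> VPN=0x%05x offset=0x%03x\n", vaddr, vpn, offset);
    return 0;
}
```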
Week 3 – CST-334 Operating Systems Module 3 – Introduction to Memory Virtualization

This week we learned about memory virtualization and how the OS manages memory. The virtual memory system provides the illusion of a large memory address space, which the running program accesses using virtual addresses. We also covered the types of memory, which include the stack and the heap. Allocations and deallocations on the stack are managed implicitly by the compiler: when you return from a function, the compiler deallocates the memory for you. To keep memory allocated beyond a function's return, you have to explicitly allocate it on the heap using malloc(). When you call malloc() successfully, it returns a pointer to the allocated memory space; otherwise it returns NULL. Using malloc() requires the programmer to call free() to release heap memory that is no longer in use. Another topic we covered was address translation, the process by which the OS, with hardware support, controls memory access by translating each virtual address into a physical address.
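A minimal sketch of the malloc()/free() pattern described above: check for NULL on allocation and free the memory when done. The array size and contents are illustrative choices.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int *arr = malloc(10 * sizeof(int));  /* allocate space for 10 ints on the heap */
    if (arr == NULL) {                    /* malloc returns NULL on failure */
        fprintf(stderr, "allocation failed\n");
        return 1;
    }
    for (int i = 0; i < 10; i++)
        arr[i] = i * i;
    printf("arr[9] = %d\n", arr[9]);
    free(arr);                            /* release heap memory no longer in use */
    return 0;
}
```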
Week 2 – CST-334 Operating Systems Module 2 – Process Management

This week’s subject was process management. A process is a running program. In Chapter 4 of OSTEP, we learned how the operating system creates the illusion of multiple processes running at the same time by virtualizing the CPU using a technique called time-sharing. We also learned about the process APIs available in modern systems, such as create, destroy, wait, miscellaneous control, and status. This chapter also taught us about the different process states. A process can be in one of three states: running, ready, or blocked. A process that has control of the CPU is considered to be running. A process that is ready is standing by, able to run, but has not yet been chosen by the OS to run. Lastly, a process is blocked when it has performed an operation that makes it not ready to run, such as initiating an I/O request to a disk. Chapter 5 introduced us to the fork(), exec(), and wait() system calls, which deal with UNIX process creation and control.
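A minimal sketch of that fork()/exec()/wait() pattern: the parent forks a child, the child replaces its image with a new program, and the parent waits for it to finish. Running ls -l in the child is an illustrative choice, not from the post.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* create a near-identical child process */
    if (pid < 0) {
        perror("fork");
        exit(1);
    } else if (pid == 0) {
        /* child: replace this process image with the ls program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");               /* only reached if exec fails */
        exit(1);
    } else {
        wait(NULL);                     /* parent: block until the child exits */
        printf("child %d finished\n", (int)pid);
    }
    return 0;
}
```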