
Showing posts from November, 2024

CST 334 Week 5

     This week, we explored threading, its benefits, and the potential challenges it introduces. One major issue with threading is that multiple threads running concurrently can interfere with each other, especially when they access shared resources. This interference can lead to race conditions, where the outcome depends on how the CPU interleaves the threads: if a context switch lands in the middle of a read-modify-write sequence, the result can be incomplete or inconsistent.

     To address these issues, we discussed several synchronization mechanisms, including spin locks, test-and-set, compare-and-swap, and ticket locks. These mechanisms ensure that only one thread at a time can execute a critical section of code, preventing race conditions and maintaining data integrity. However, each mechanism comes with its own trade-offs. For example, spin locks waste CPU cycles while a thread waits for access, and they can lead to starvation if threads aren't given a fair chance to acquire the lock. ...

CST 334 Week 4

     This week in our course, we focused on memory virtualization and memory management. In our lab, we concentrated on FIFO (First-In-First-Out), a page replacement strategy that uses a queue to manage page entries.

     In FIFO, pages are loaded into memory and placed in a queue. When a page needs to be replaced, the oldest page in the queue is evicted. This approach can lead to inefficient page replacement because even if a page has been recently used, it will still be evicted if it has been in memory the longest.

     In contrast, LRU (Least Recently Used) tracks the usage history of pages. LRU replaces the page that hasn't been used for the longest period of time. Because it takes into account the actual usage patterns of pages, LRU typically provides better performance, whereas FIFO only considers the order in which pages were loaded into memory.

CST 334 Week 3

     This week introduced the concept of memory virtualization, where the operating system maps virtual addresses to physical memory, creating a separate virtual address space for each process. The primary goals of memory virtualization are transparency, efficiency, and protection: Transparency ensures that the virtualization is invisible to the program. Efficiency improves the utilization of physical memory, enabling better memory management. Protection isolates the memory spaces of different processes to prevent interference.

     To implement memory virtualization, the OS uses address translation, where virtual addresses are mapped to physical addresses in RAM. This process is managed by the Memory Management Unit, which ensures both efficiency and protection of memory.

     A key technique in memory virtualization is segmentation, where a program's virtual address space is divided into logical segments. Each segment can vary in size, prov...

CST 334 Week 2

     This week, we learned about processes. A process is essentially a program running in a protected environment managed by the operating system. This environment includes memory for the process’s code, data, and stack, as well as CPU registers for fast computations. To prevent interference between processes, they run in user mode, which limits access to hardware and certain instructions. To interact with the outside world, processes use system calls, which the operating system handles through a special TRAP instruction. The OS also manages process states and performs context switching, allowing multiple processes to share the CPU efficiently.

     We also learned how processes interact with the operating system to perform tasks and manage resources. For example, when a process needs I/O, it may be blocked until the operation is finished, allowing the CPU to handle other tasks. The operating system tracks everything using a data structure called the Proces...