Posts

CST 370 - Week 1 Journal

Introduction to Algorithms
- Core Concepts:
  - An algorithm is a precise sequence of steps for solving a problem or performing a computation.
  - Algorithms help us solve problems efficiently.
  - Key properties of algorithms:
    - well-defined inputs and outputs
    - finite and unambiguous steps
    - effectiveness and termination
- Analysis Framework:
  - Not all correct algorithms are efficient; analysis gives us a way to measure efficiency.
  - Key tools:
    - time complexity
    - Big-O n...
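To make Big-O feel less abstract, here is a tiny sketch I wrote for myself (my own example, not from the lecture) of a linear search, whose worst-case running time grows proportionally to the input size, i.e. O(n):

    #include <stdio.h>

    /* Linear search: in the worst case the loop body runs n times, so the
     * running time is O(n); if the key happens to be first, it is O(1). */
    int linear_search(const int *a, int n, int key) {
        for (int i = 0; i < n; i++) {
            if (a[i] == key)
                return i;           /* found: return the index */
        }
        return -1;                  /* not found after n comparisons */
    }

    int main(void) {
        int a[] = {4, 8, 15, 16, 23, 42};
        printf("index of 23: %d\n", linear_search(a, 6, 23));
        return 0;
    }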

CST 334 - Week 7 Learning Journal

This week's lectures helped me better grasp how operating systems handle input/output and persistent storage. We covered topics such as the differences between block and character devices, how hardware interfaces allow the OS to communicate with I/O devices, and how the OS manages performance through concepts like hard drive transfer rates, I/O scheduling, and RAID, as well as file systems (abstractions, directories, links, volumes, mounts, design, and on-disk data structures). Understanding how these topics connect helped me see that the OS acts as a middle layer that hides hardware complexity while still trying to optimize performance and reliability. The most challenging topics for me were hard drive performance calculations and some of the lower-level on-disk data structures. I understand the general ideas, like how seek time, rotational latency, and transfer time affect access time, but I need more practice applying them to different workloads. The file system abstractions were my "aha" moment for the...
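To practice the performance calculations I found hard, here is a small back-of-the-envelope program (the drive numbers are assumptions I made up, not from the assignment) that adds seek time, rotational delay, and transfer time for one random 4 KB read:

    #include <stdio.h>

    /* Rough disk access-time model for one random 4 KB read.
     * Assumed (hypothetical) drive: 7200 RPM, 4 ms average seek, 100 MB/s transfer. */
    int main(void) {
        double seek_ms     = 4.0;                                /* average seek time */
        double rotation_ms = (60.0 / 7200.0) * 1000.0 / 2.0;     /* half a revolution, ~4.17 ms */
        double transfer_ms = (4096.0 / (100.0 * 1e6)) * 1000.0;  /* 4 KB at 100 MB/s, ~0.04 ms */
        printf("seek %.2f + rotate %.2f + transfer %.2f = %.2f ms per request\n",
               seek_ms, rotation_ms, transfer_ms,
               seek_ms + rotation_ms + transfer_ms);
        return 0;
    }

Seeing that the seek and rotation terms dwarf the transfer term helped me understand why random workloads are so much slower than sequential ones.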

CST 334 - Learning Journal week 6

How Semaphores Work: Semaphores are synchronization tools that control access to shared resources in a concurrent environment. A semaphore holds an integer value that represents the number of permits available. Processes or threads can perform two main operations: wait, which decreases the semaphore and may block the thread if no permits are available, and signal, which increases the semaphore and potentially unblocks a waiting thread. Semaphores act as coordinators; this coordination prevents conflicts, race conditions, and incorrect program behavior. Pros and Cons of Semaphores vs. Other Synchronization Tools: Compared to other tools such as mutexes, condition variables, and monitors, semaphores are flexible - they can enforce both mutual exclusion and general resource counting, which mutexes cannot - but they can also be error-prone, because the programmer is responsible for correct ordering. Monitors encapsulate locking behavior into the language or libr...
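A minimal sketch of the wait/signal idea using POSIX semaphores (a toy example I put together, assuming Linux and gcc -pthread): a semaphore initialized to 2 lets at most two threads hold a permit at once.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t permits;                        /* counts how many permits remain */

    void *worker(void *arg) {
        long id = (long)arg;
        sem_wait(&permits);               /* wait: take a permit, block if none left */
        printf("thread %ld is using the shared resource\n", id);
        sem_post(&permits);               /* signal: return the permit, maybe wake a waiter */
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        sem_init(&permits, 0, 2);         /* start with 2 permits available */
        for (long i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        sem_destroy(&permits);
        return 0;
    }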

CST 334 - Learning Journal Week 5

This week we covered a bunch of topics: threads, pthreads functions, locks, race conditions, critical sections, condition variables, and how hardware plays a role in making this all work. Concurrency is a way to let multiple parts of a program make progress at the same time, improving speed and responsiveness. Understanding the main pthreads functions was not too bad, but the parameters were a bit confusing. It's easy to pass the wrong thing or forget that threads don't automatically share the context you expect them to. It is very important to remember that when you start a thread, you have to pass its arguments carefully or explicitly share data. Locks were easier to grasp than threads. For locks we discussed race conditions, mutual exclusion, and critical sections. This made it easier to understand that when two threads run at the same time, the output is generally different each run unless you add a mutex around the critical section, which fixes it. Condition variables are still c...
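To convince myself that the mutex really fixes the race, I tried a small sketch (my own toy example): two threads increment a shared counter, and locking around the critical section makes the final value come out the same every run.

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *add_many(void *arg) {
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);    /* enter the critical section */
            counter++;                    /* shared update is now atomic w.r.t. the other thread */
            pthread_mutex_unlock(&lock);  /* leave the critical section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, add_many, NULL);
        pthread_create(&b, NULL, add_many, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld (expected 2000000)\n", counter);
        return 0;
    }

Without the lock/unlock calls, the two increments interleave and the printed total usually comes out short and different each run.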

CST 334 - Learning Journal Week 4

This week's material focused on how operating systems manage memory. We talked about paging, address translation, locality, multi-level page tables, swapping, and page replacement policies. Paging is the idea that the OS divides memory into fixed-size chunks so it can manage processes more easily. We practiced translating virtual addresses to physical ones using page tables. This helped me see how the OS creates the illusion that each program has its own private memory space. Average memory access time combines fast hits and slow misses. Multi-level paging came up as a solution to the memory waste of giant single-level page tables. The part I found the hardest was multi-level address translation. I understand the 'tree of page tables' idea, but when I try to manually walk through each level, I lose track of which bits index what. The "aha" moment I had this week was with locality. Once I recognized how often programs loop over arrays or reuse variables, it made s...
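What helped the translation practice stick was writing out the bit math myself. This is a toy setup I made up (14-bit virtual addresses with 64-byte pages), not the class simulator:

    #include <stdio.h>

    /* Hypothetical setup: 64-byte (2^6) pages, so the low 6 bits are the
     * offset within the page and the remaining high bits are the VPN. */
    #define OFFSET_BITS 6
    #define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

    int main(void) {
        unsigned int vaddr  = 0x1234;                 /* example virtual address */
        unsigned int vpn    = vaddr >> OFFSET_BITS;   /* index into the page table */
        unsigned int offset = vaddr & OFFSET_MASK;    /* position within the page */

        /* Pretend the page table maps this VPN to physical frame 7. */
        unsigned int pfn   = 7;
        unsigned int paddr = (pfn << OFFSET_BITS) | offset;

        printf("vaddr 0x%x -> VPN 0x%x, offset 0x%x -> paddr 0x%x\n",
               vaddr, vpn, offset, paddr);
        return 0;
    }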

CST 334 - Learning Journal Week 3

This week’s module covered a wide range of topics, including address spaces, physical and virtual addresses, the base-and-bounds translation scheme, segmentation, memory-management design goals, garbage collection, malloc() and free(), automated editing with sed, build automation with make, and simple awk programs. Even just listing these out feels like a lot, and that pretty much captures how the week felt overall: dense, fast-moving, and conceptually challenging. Address spaces and the distinction between physical and virtual addresses made sense on a surface level: physical addresses correspond to real hardware memory, while virtual addresses are what programs think they’re using. But once we moved into base-and-bounds and segmentation, I started struggling. Writing a sentence or two defining the terms was easy. What I am still struggling with is actually understanding how translation happens: figuring out what gets added, checked, or offset. Simulating the translation process ex...
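For base-and-bounds at least, writing the check out as code helped. This is a minimal sketch with made-up base and bounds values, not the scheme from the homework:

    #include <stdio.h>
    #include <stdlib.h>

    /* Base-and-bounds translation: check the virtual address against the
     * bounds register, then add the base register to get the physical address. */
    #define BASE   0x8000u   /* where this process's address space starts in RAM */
    #define BOUNDS 0x1000u   /* size of the address space (4 KB) */

    unsigned int translate(unsigned int vaddr) {
        if (vaddr >= BOUNDS) {                 /* out of range: the OS would kill the process */
            fprintf(stderr, "segmentation fault at 0x%x\n", vaddr);
            exit(1);
        }
        return BASE + vaddr;                   /* in range: just add the base */
    }

    int main(void) {
        printf("virtual 0x0400 -> physical 0x%x\n", translate(0x0400));
        translate(0x2000);                     /* triggers the bounds check */
        return 0;
    }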

CST 334 - Learning Journal Week 2

Topics Covered:

1. The fork() System Call: fork() is a system call used to create a new process - known as the child - which is a nearly identical copy of the parent process. What clicked for me was how the return values distinguish the two: the parent receives the child’s process ID, and the child receives zero. That simple distinction is what allows the same code to “split” into two executing entities. I also learned that every call to fork() doubles the number of running processes, which explains why two consecutive fork() calls result in four processes total.

2. The exec() System Call: While fork() creates a copy, exec() replaces the current process image with a new program. This was initially confusing - why would you create a process only to immediately overwrite it? The answer is that separating fork() and exec() lets the parent keep running while the child becomes a different program. For example, when a shell executes a command, it first uses fork() to create a child, then the child calls exec() to load the new program, leaving the shell intact in the parent process.

3. Parent vs...
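Here is a small sketch of that shell pattern (my own example program; "ls -l" is just a stand-in command): fork a child, exec inside the child, and wait in the parent.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();
        if (pid < 0) {                        /* fork failed */
            perror("fork");
            exit(1);
        } else if (pid == 0) {                /* child: fork() returned 0 */
            char *argv[] = {"ls", "-l", NULL};
            execvp(argv[0], argv);            /* replace the child with "ls -l" */
            perror("execvp");                 /* only reached if exec fails */
            exit(1);
        } else {                              /* parent: fork() returned the child's PID */
            wait(NULL);                       /* the "shell" waits for the command to finish */
            printf("child %d finished\n", pid);
        }
        return 0;
    }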