Posts

CST 370 - Week 7 Learning Journal

Dynamic Programming: Dynamic programming is a technique for solving problems by breaking them into smaller overlapping subproblems and storing their solutions to avoid repeated work. It relies on two main ideas: overlapping subproblems and optimal substructure. I am trying to focus more on identifying when a problem should use dynamic programming instead of other techniques like divide and conquer. Warshall's Algorithm: Warshall's algorithm computes the transitive closure of a graph, meaning it determines which vertices are reachable from which others. It works by systematically updating a matrix representation of the graph, checking whether a path exists through intermediate vertices. I find it confusing how the matrix updates guarantee that all indirect paths are eventually captured, especially when multiple intermediate nodes are involved. Floyd's Algorithm: Floyd's algorithm finds the shortest paths between all pairs...
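The confusion about indirect paths has a clean answer in the loop invariant: after the outer loop finishes iteration k, the matrix records every path whose intermediate vertices are drawn only from {0, ..., k}, so longer indirect paths are built up one allowed intermediate at a time. A minimal sketch, assuming the graph is given as a 0/1 adjacency matrix:

```python
# Warshall's algorithm: transitive closure of a directed graph.
# A sketch, assuming `adj` is an n x n adjacency matrix of 0s and 1s.
def warshall(adj):
    n = len(adj)
    r = [row[:] for row in adj]  # copy so the input is not modified
    # Invariant: after iteration k, r[i][j] == 1 iff there is a path
    # from i to j whose intermediate vertices all lie in {0, ..., k}.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if r[i][k] and r[k][j]:
                    r[i][j] = 1
    return r
```

For the chain 0 → 1 → 2, the entry r[0][2] only becomes 1 during iteration k = 1, once vertex 1 is permitted as an intermediate. Floyd's algorithm has the same triple-loop shape, with the boolean update replaced by a min over path lengths.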

CST 370 - Week 6 Learning Journal

AVL Trees: AVL trees are a type of self-balancing binary search tree where the height difference (balance factor) between the left and right subtrees of any node is at most 1. When this balance is violated, rotations (left, right, left-right, right-left) are performed to restore it. This ensures that operations like search, insertion, and deletion remain efficient at O(log n). Questions: It is still difficult to determine which rotation (single vs. double) to apply in different imbalance cases; the conditions are so similar that they are hard to tell apart. 2-3 Trees: A 2-3 tree is a balanced search tree where each node can have either two children (2-node) or three children (3-node). Values are stored in a way that maintains sorted order, and the tree remains balanced by splitting nodes when they become too full. This guarantees that all leaves are at the same depth. Heaps and Heapsort:...
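The single-vs-double question comes down to two balance factors: the unbalanced node's, which tells you which side is heavy, and its heavy child's, which tells you whether the child leans the same way (single rotation) or the opposite way (double rotation). A sketch of that decision rule, with illustrative names (`Node`, `height`, `balance`) that are not from the lecture:

```python
# Choosing an AVL rotation from balance factors. A sketch, assuming
# balance(node) = height(left subtree) - height(right subtree).
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def height(node):
    if node is None:
        return -1                       # height of an empty tree
    return 1 + max(height(node.left), height(node.right))

def balance(node):
    return height(node.left) - height(node.right)

def rotation_needed(node):
    """Name the rotation that restores balance at an unbalanced node."""
    b = balance(node)
    if b > 1:   # left-heavy
        # Left child leans left (or is even): single right rotation.
        # Left child leans right: double left-right rotation.
        return "right" if balance(node.left) >= 0 else "left-right"
    if b < -1:  # right-heavy (mirror image of the above)
        return "left" if balance(node.right) <= 0 else "right-left"
    return "none"
```

Inserting 3, 2, 1 produces a left-left shape (single right rotation), while inserting 3, 1, 2 produces a left-right zigzag (double rotation); the only difference the rule sees is the sign of the left child's balance factor.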

CST 370 - Week 5 Learning Journal

Quicksort: An algorithm that sorts an array by selecting a pivot element and partitioning the remaining elements into two subarrays: those less than the pivot and those greater than it. The algorithm then recursively sorts the subarrays. Binary Tree Traversal and Height: Tree traversal refers to the systematic way of visiting all nodes in a binary tree, and the three main depth-first traversal methods are preorder, inorder, and postorder. Each traversal follows a different order of visiting the root and its subtrees, which affects how the tree's data is processed. I also learned that the height of a binary tree is defined as the length of the longest path from the root to a leaf, which directly impacts the efficiency of operations like searching or inserting in certain tree structures. Introduction to Decrease-and-Conquer: A fundamental algorithm design technique where a problem is solved by reducing it to one smaller instance of the same problem and then extending that soluti...
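The quicksort description above can be sketched directly. This version uses the last element as the pivot (a Lomuto-style partition); pivot selection varies between textbook presentations, so treat this as one illustrative choice rather than the definitive one:

```python
# Quicksort: pick a pivot, partition around it, recurse on each side.
def quicksort(arr, lo=0, hi=None):
    if hi is None:
        hi = len(arr) - 1
    if lo < hi:
        p = partition(arr, lo, hi)
        quicksort(arr, lo, p - 1)   # elements less than the pivot
        quicksort(arr, p + 1, hi)   # elements greater than the pivot
    return arr

def partition(arr, lo, hi):
    pivot = arr[hi]                 # last element as the pivot
    i = lo                          # boundary of the "less than" region
    for j in range(lo, hi):
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]  # pivot lands in its final spot
    return i
```

After each partition call the pivot is in its final sorted position, which is why the recursion can skip it entirely.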

CST 370: Learning Journal Week 3

This week we covered brute force and exhaustive search, depth-first search (DFS), breadth-first search (BFS), and an introduction to divide-and-conquer algorithms. Brute force string matching and exhaustive search emphasize solving problems by checking all possible solutions. Brute force string matching works by aligning a pattern with a text and comparing characters one by one until a match is found or all possibilities are exhausted. It is often inefficient for large inputs. The key concept was that brute force guarantees correctness but does not scale well, making it impractical for complex or large datasets. Depth-first search (DFS) is an algorithm used primarily for traversing graphs and trees. DFS works by exploring as far as possible along one path before backtracking. This method is useful for tasks like path discovery, cycle detection, and connectivity analysis. One important takeaway was that DFS uses a stack-based approach, whi...
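The brute-force string matching described above fits in a few lines: slide the pattern across every alignment in the text and compare character by character. A minimal sketch:

```python
# Brute-force string matching: try every alignment of the pattern.
# Returns the index of the first match, or -1 if there is none.
def brute_force_match(text, pattern):
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):      # each possible starting alignment
        j = 0
        while j < m and text[i + j] == pattern[j]:
            j += 1                  # compare character by character
        if j == m:                  # the whole pattern matched
            return i
    return -1
```

The nested loops make the "does not scale well" point concrete: in the worst case every alignment compares almost the whole pattern, giving O(nm) character comparisons.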

CST 370 - Week 2 Journal

Asymptotic Notations: Provide a formal way to describe how an algorithm's running time grows as the input size increases. Instead of focusing on exact execution times, which depend on hardware and implementation details, these notations allow us to compare algorithms based on their growth rates. Big-O notation describes an upper bound, helping us understand the worst-case scenario; Big-Omega gives a lower bound; and Big-Theta captures a tight bound, showing when the upper and lower bounds match. Analysis of Non-Recursive Algorithms: Focuses on determining running time by counting how often a key operation executes. This usually involves analyzing loops and converting them into summation expressions. Nested loops are especially important since they often lead to polynomial time complexities. The basic operation is the one that dominates the running time; the analysis expresses its frequency as a function of input size. Analysis of Recursive Algorithms: Requires a different approach using recurrence relations....
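The counting approach for non-recursive algorithms can be made concrete with a small example of my own (not from the lecture): for two nested loops where the inner index starts after the outer one, the basic operation runs sum over i of (n - i - 1) = n(n-1)/2 times, which is Theta(n^2).

```python
# Counting the basic operation in a pair of nested loops.
# The inner statement is the "basic operation" being counted.
def count_comparisons(n):
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            count += 1          # basic operation: one comparison
    return count
```

Running it for a few values of n confirms the closed form n(n-1)/2, which is the same summation that appears when analyzing selection sort or brute-force pair checking.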

CST 370 - Week 1 Journal

Introduction to Algorithms
- Core Concepts:
    - An algorithm is a precise sequence of steps for solving a problem or performing a computation.
    - Algorithms help us solve problems efficiently.
    - Key properties of algorithms:
        - well-defined inputs and outputs
        - finite and unambiguous steps
        - effectiveness and termination
- Analysis framework:
    - Not all correct algorithms are efficient; analysis gives us a way to measure efficiency.
    - Key tools:
        - time complexity
            - Big-O n...

CST 334 - Week 7 Learning Journal

This week the lectures helped me better grasp how operating systems handle input/output and persistent storage. We covered topics such as the differences between block and character devices, how hardware interfaces allow the OS to communicate with I/O devices, how the OS manages performance through concepts like hard drive transfer rates, I/O scheduling, and RAID, and file systems (abstractions, directories, links, volumes, mounts, design, and on-disk data structures). Understanding how these topics connect helped me see that the OS acts as a middle layer that hides hardware complexity while still trying to optimize performance and reliability. The most challenging topics for me were hard drive performance calculations and some of the lower-level on-disk data structures. I understand the general ideas, like the effects of seek time, rotational latency, and transfer time, but I need more practice applying them to different workloads. The file system abstractions were my "aha" moment for the...
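For the hard drive performance calculations I keep stumbling on, the pieces combine additively: average access time = seek time + average rotational latency (half a rotation) + transfer time for the request. A worked sketch using illustrative numbers of my own, not figures from the lecture:

```python
# Average disk access time = seek + half-rotation latency + transfer.
# All parameters are illustrative assumptions, not lecture values.
def avg_access_time_ms(seek_ms, rpm, transfer_mb_s, request_kb):
    # One rotation takes 60,000 / rpm milliseconds; on average the
    # desired sector is half a rotation away.
    rotation_ms = (60_000 / rpm) / 2
    # Time to actually transfer the requested bytes.
    transfer_ms = (request_kb / 1024) / transfer_mb_s * 1000
    return seek_ms + rotation_ms + transfer_ms
```

For a hypothetical 7200 RPM drive with a 9 ms seek, 100 MB/s transfer rate, and a 4 KB request, the rotational latency alone is about 4.17 ms, and the transfer time is nearly negligible, which is why small random I/O is dominated by seek and rotation while large sequential I/O is dominated by transfer rate.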