This week I got a deeper look into why indexes, which are meant to speed up queries, sometimes don't perform as fast as you'd expect. The article, Slow Indexes, explained that while the initial tree traversal is the efficient part of an index lookup, there are two other steps that can slow things down. First, when there are multiple matches, the database has to follow a chain of leaf nodes to collect all the entries, which takes extra time. Second, for each match it has to access the actual table data, which can mean reading many additional blocks. These extra steps can make a query slow even though the index itself isn't "broken" or "unbalanced." It was interesting to learn that rebuilding the index doesn't actually solve the issue: the performance hit comes from the way the lookup process naturally works, especially when there are many matches to process.
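To make those three steps concrete for myself, I put together a little sketch that counts simulated "block reads" for each phase of an index lookup. The numbers here (tree depth, entries per leaf, one table block per row in the worst case) are my own assumptions for illustration, not anything from the article, but it shows why more matches means more extra reads:

```python
# Toy model of an index lookup, counting "block reads" per phase.
# All numbers are illustrative assumptions, not real database internals.

def index_lookup(num_matches, entries_per_leaf=100, tree_depth=3):
    # Phase 1: tree traversal -- one block read per B-tree level.
    # This cost is tiny and grows only logarithmically with table size.
    traversal_reads = tree_depth

    # Phase 2: follow the leaf-node chain -- one read per leaf block
    # that holds matching entries. More matches -> more leaf blocks.
    leaf_reads = -(-num_matches // entries_per_leaf)  # ceiling division

    # Phase 3: fetch the actual table rows. In the worst case every
    # match lives in a different table block, so this part dominates.
    table_reads = num_matches

    return traversal_reads, leaf_reads, table_reads

for n in (1, 100, 10_000):
    t, l, r = index_lookup(n)
    print(f"{n:>6} matches: traversal={t}, leaf chain={l}, table access={r}")
```

Running it, the traversal cost stays at 3 reads no matter what, while the table-access cost grows linearly with the number of matches. That lines up with the article's point: the slow part was never the tree, so rebuilding the index can't fix it.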
This week I also focused a lot on understanding time complexity and recursive analysis. It was challenging at first to figure out how to break down recursive functions and write the correct recurrence relations. Applying the Master Theorem was especially tough because I had to carefully identify each part of the formula and decide which case applied. I also spent time reviewing the difference between Big O, Big Omega, and Big Theta, which helped me better understand how to describe the efficiency of an algorithm. These concepts took time to click, but working through examples really helped.
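The example that finally made it click for me was merge sort, the standard textbook case (not one of my specific assignments). Splitting into a = 2 subproblems of size n/2 with O(n) merge work gives the recurrence T(n) = 2T(n/2) + O(n); since n^(log_2 2) = n matches f(n) = n, case 2 of the Master Theorem says T(n) = Θ(n log n):

```python
# Standard merge sort: the classic Master Theorem example.
# Recurrence: T(n) = 2*T(n/2) + O(n)
#   a = 2 subproblems, each of size n/b with b = 2,
#   f(n) = O(n) work to merge the halves.
# n^(log_b a) = n^(log_2 2) = n matches f(n),
# so case 2 applies and T(n) = Theta(n log n).

def merge_sort(arr):
    if len(arr) <= 1:               # base case: T(1) = O(1)
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # T(n/2)
    right = merge_sort(arr[mid:])   # T(n/2)
    return merge(left, right)       # f(n) = O(n)

def merge(left, right):
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])         # one side may have leftovers
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7, 3]))  # [1, 2, 3, 5, 7, 9]
```

Writing the recurrence right next to the code, line by line, is what made the theorem feel mechanical instead of mysterious.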