Logarithm basics for DSA Solution In C++/Java/Python/JS
What is a Logarithm?
Description
A logarithm is the inverse operation of exponentiation. It determines the power to which a given base must be raised to obtain a specified number. Mathematically,
if: b^x = y
Then, using logarithms:
logb(y) = x
This is read as "logarithm of y to the base b is x."
Example
- Since 2^3 = 8, we write log2 (8) = 3.
- Since 9^1 = 9, we write log9 (9) = 1.
- The logarithm of 1 in any base is always 0: loga(1) = 0 (for a > 0, a ≠ 1).
- If the base and number are the same, then loga(a) = 1.
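These values are easy to check in code. Here is a quick Python sketch using the standard math module (math.log takes an optional base argument):

```python
import math

print(math.log2(8))    # 3.0, since 2^3 = 8
print(math.log(9, 9))  # 1.0, since 9^1 = 9
print(math.log(1, 5))  # 0.0, log of 1 is 0 in any valid base
```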
Types of Logarithms
- Common Logarithm (Base 10) – Denoted as log(y), where the base is assumed to be 10.
- Natural Logarithm (Base e) – Denoted as ln(y), where e ≈ 2.718.
Undefined Logarithm Values
- loga(0) is undefined since no exponent can make a positive base equal to zero.
- Logarithms of negative numbers are undefined in the real number system.
7 Rules of Logarithms:
| Rule | Formula | Example |
|---|---|---|
| Product Rule | logb(A ⋅ B) = logb(A) + logb(B) | log2(8 × 4) = log2(8) + log2(4) |
| Quotient Rule | logb(A / B) = logb(A) - logb(B) | log2(16 / 4) = log2(16) - log2(4) |
| Power Rule | logb(A^C) = C ⋅ logb(A) | log2(8^3) = 3 ⋅ log2(8) |
| Change of Base | loga(b) = logc(b) / logc(a) | log2(10) = log10(10) / log10(2) |
| Zero Rule | logb(1) = 0 | log5(1) = 0 |
| Identity Rule | logb(b) = 1 | log3(3) = 1 |
| Negative Rule | logb(1/A) = -logb(A) | log2(1/8) = -log2(8) |
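These identities are easy to sanity-check numerically. The following Python sketch verifies a few of the rules above with the standard math module:

```python
import math

# Product rule: log2(8 * 4) == log2(8) + log2(4)
assert math.isclose(math.log2(8 * 4), math.log2(8) + math.log2(4))

# Power rule: log2(8^3) == 3 * log2(8)
assert math.isclose(math.log2(8 ** 3), 3 * math.log2(8))

# Change of base: log2(10) == log10(10) / log10(2)
assert math.isclose(math.log2(10), math.log10(10) / math.log10(2))

print("All checked rules hold.")
```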
Why Do We Care About Logarithms?
Logarithms help simplify multiplicative problems into additive ones.
Real World Example
🔹 Imagine you have 1000 files and you need to organize them into folders.
🔹 If you divide them into 2 folders at each step, how many steps will it take to get to 1 file per folder?
Step by step:
| Step | Files Remaining |
|---|---|
| 1 | 500 |
| 2 | 250 |
| 3 | 125 |
| 4 | 63 |
| 5 | 32 |
| 6 | 16 |
| 7 | 8 |
| 8 | 4 |
| 9 | 2 |
| 10 | 1 |
It takes log2(1000) ≈ 10 steps! Instead of handling 1000 operations, we reduced the problem to just 10.
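We can confirm the step count with a one-line computation (a quick Python sketch; the ceiling accounts for 1000 not being an exact power of 2):

```python
import math

print(math.ceil(math.log2(1000)))  # 10
```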
Logarithms in Algorithm Complexity
If we were given a problem, how would we approach solving it? Sometimes, we can go step by step, one element at a time, but what if there was a smarter way? Imagine you have a huge pile of work and someone tells you that you can complete it in just a fraction of the steps needed by a brute-force approach. Wouldn't that be amazing?
This is exactly what logarithmic time complexity offers!
Before diving into algorithms, let's build our intuition with a simple example.
Suppose we have a number N, and we keep dividing it by 2 until we reach 1. How many times do we need to divide?
Let's start with N=16:
16→8→4→2→1 (It took 4 steps)
Now, let's try N = 32:
32→16→8→4→2→1 (It took 5 steps)
What pattern do we see?
Each time we divide by 2, the number shrinks quickly, and the step count does not grow nearly as fast as N. The larger N is, the more divisions we need, but the growth is slow.
Finding a Mathematical Pattern
We are dividing N by 2 repeatedly until we reach 1. Let’s express this in an equation.
Let’s say we divide N exactly k times before reaching 1:
N/(2^k) = 1
Now, to solve for k, we multiply both sides by 2^k:
N = 2^k
Now, let’s apply a logarithm with base 2 on both sides:
log2(N) = log2(2^k)
By the logarithm rules above, logb(A^c) = c ⋅ logb(A) and logb(b) = 1, so:
log2(N) = k * log2(2)
So, the number of steps required is:
k = log2(N)
Key Takeaway: Whenever we repeatedly divide a number by 2, the number of steps required follows a logarithmic pattern: log(N). This is the essence of logarithmic time complexity.
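To make this concrete, here is a small Python sketch that counts the halving steps and compares the count to log2(N):

```python
import math

def halving_steps(n: int) -> int:
    """Count how many times n can be halved (integer division) before reaching 1."""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

for n in (16, 32, 1024):
    print(n, halving_steps(n), math.log2(n))
# 16 4 4.0
# 32 5 5.0
# 1024 10 10.0
```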
Logarithm basics for DSA Complexity Analysis
An algorithm has logarithmic time complexity O(log N) if each step reduces the problem size by a constant factor (typically by half).
This means that as the input size N grows, the number of steps increases very slowly compared to N.
In other words, the remaining input shrinks exponentially fast with each step, so the total number of steps grows only logarithmically with N.
Now, let’s take this intuition and apply it to a problem. Suppose we are given a sorted array:
Input Array (Sorted): {1, 3, 5, 7, 9, 11, 13, 15, 17}
Task: Find the target (17) element's index if it is present, else return -1. Instead of scanning the entire list from left to right, let's be smart and apply what we learned from dividing by 2!
Solution
We’ll use an approach where we always jump to the middle of the remaining search space and decide which half to explore next:
- Start with the entire array. The first middle element is found at:
Middle index = (0 + 8) / 2 = 4; element at index 4 → 9
- Since 9 is smaller than 17, we now search only the right half of the array: {11, 13, 15, 17}
- Find the new middle element:
Middle index = (5 + 8) / 2 = 6 (rounded down); element at index 6 → 13
- 13 is still smaller than 17, so we now search {15, 17}
- Find the new middle element:
Middle index = (7 + 8) / 2 = 7 (rounded down); element at index 7 → 15
- 15 is still smaller than 17, so we move to the last element: {17}
- Found 17 at index 8!
This is known as Binary Search! We repeatedly divide the problem in half, reducing the search space exponentially, which gives us a time complexity of O(log N).

Logarithm basics for DSA Algorithm
Initialize search boundaries:
- low = 0 (start of array)
- high = length of arr - 1 (end of array)
Loop until low exceeds high:
- Find the middle index: mid = (low + high) // 2
- If arr[mid] == target, return mid (element found).
- If arr[mid] < target, search the right half (low = mid + 1).
- Else, search the left half (high = mid - 1).
If the loop ends without finding the target, return -1.
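Putting these steps together, here is a minimal Python implementation (the function name binary_search is ours; any of the languages in the title would follow the same structure):

```python
def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1 if absent."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2  # middle of the current search space
        if arr[mid] == target:
            return mid           # element found
        elif arr[mid] < target:
            low = mid + 1        # target is larger: search the right half
        else:
            high = mid - 1       # target is smaller: search the left half
    return -1                    # search space exhausted: target not present

print(binary_search([1, 3, 5, 7, 9, 11, 13, 15, 17], 17))  # 8
```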
Why is This Efficient?
Every time, we cut the problem size in half. Instead of checking each element one by one, we make a decision at each step that eliminates half the possibilities. This makes our approach logarithmic: O(log N).
If our array had 1 million elements, instead of checking all 1 million elements, this approach would only require about 20 steps!
Comparing Logarithmic Complexity to Other Complexities
| Complexity | Operations for N = 1,000,000 |
|---|---|
| O(1) | 1 operation |
| O(log N) | ~20 operations |
| O(N) | 1,000,000 operations |
| O(N^2) | 1,000,000,000,000 operations (trillions!) |
Logarithmic complexity is much faster than linear or quadratic complexity for large inputs.
Generalizing Logarithmic Time Complexity
Any problem where the input size shrinks by a fraction in each step tends to have O(log N) complexity. This is commonly seen in:
Searching Algorithms:
- Binary Search (like our example above)
- Binary Search Trees (BST)
Sorting Algorithms: Merge Sort, Heap Sort (Divide & Conquer)
Graph Algorithms: Dijkstra’s Algorithm (Priority Queue operations take O(log N))
Mathematical Computations:
- Fast Exponentiation (Computing a^b mod m efficiently; see the sketch after this list)
- Greatest Common Divisor (GCD) using Euclidean Algorithm
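As one example from this list, here is a minimal sketch of fast (binary) exponentiation: by squaring the base and halving the exponent on each iteration, it computes a^b mod m in O(log b) multiplications:

```python
def fast_pow(a: int, b: int, m: int) -> int:
    """Compute (a ** b) % m in O(log b) time by repeated squaring."""
    result = 1
    a %= m
    while b > 0:
        if b & 1:                    # if the lowest bit of b is set,
            result = result * a % m  # fold the current power into the result
        a = a * a % m                # square the base
        b >>= 1                      # halve the exponent
    return result

print(fast_pow(2, 10, 1000))  # 24, since 2^10 = 1024 and 1024 % 1000 == 24
```

Python's built-in pow(a, b, m) performs the same modular exponentiation.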
Variants of Logarithmic Complexity:
- Simple Log Complexity (log(N)) – Standard logarithmic complexity.
- Double Log Complexity (log(log(N))) – Used in problems involving nested logarithmic growth.
- N log N Complexity (N log(N)) – Found in efficient sorting algorithms like Merge Sort.
- Squared Logarithm Complexity (log^2 (N)) – Used in advanced computations.
- N^2 log N Complexity (N^2 log(N)) – Common in matrix-based algorithms.
- N^3 log N Complexity (N^3 log(N)) – Found in 3D computational models.
- Log of Square Root Complexity (log(√N)) – Occurs in mathematical approximations; note that log(√N) = (1/2) ⋅ log(N), so it is still O(log N).
Conclusion
Understanding logarithms and their applications in computer science is crucial for algorithm design. Logarithmic complexity provides efficient solutions in searching, sorting, and data structure operations. Mastering these concepts helps in writing optimized code for real-world applications.
Learning Tip
- Logarithms simplify exponential equations and optimize complexity.
- Programming languages provide built-in log functions (e.g., math.log2() in Python, std::log2() in C++).
- Problems that shrink by half follow a logarithmic pattern (e.g., Binary Search).
Real World Example
- Efficient Searching – Used in search engines and databases.
- Data Structures – Powers BSTs, Heaps, AVL Trees.
- Algorithm Optimization – Key in Binary Search, Merge Sort, Fast Exponentiation.
- Cryptography & Compression – Essential for RSA encryption, data compression.
Similar Problems
Given an integer n, return true if it is a power of two. Otherwise, return false.
An integer n is a power of two if there exists an integer x such that n == 2^x.
Given an integer n, return true if it is a power of three. Otherwise, return false.
An integer n is a power of three if there exists an integer x such that n == 3^x.
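Both problems can be solved with the repeated-division idea from this article. Here is one possible Python sketch (a simple approach, not the only one; bit tricks also exist for powers of two):

```python
def is_power_of(n: int, base: int) -> bool:
    """Return True if n == base^x for some integer x >= 0."""
    if n < 1:
        return False
    while n % base == 0:  # keep dividing while the base divides n evenly
        n //= base
    return n == 1         # a pure power reduces all the way down to 1

print(is_power_of(16, 2))  # True
print(is_power_of(27, 3))  # True
print(is_power_of(12, 2))  # False
```

Each loop iteration divides n by the base, so the check runs in O(log n) time, exactly the pattern discussed above.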