

Recursion: Time & Space Complexity Analysis - 2

In this article we'll take the example of a problem that we've already solved in a previous article - Fast Exponentiation.

Why?

The honest answer - Because I feel like doing it xD

The more curated answer - I think it'll help you understand how a small change in a recursive function can have a big impact on the time complexity.

Getting into it

First things first, go back to this article and read it. I'm not going to repeat the things already discussed there.

Now, below is the final recursive approach (i.e. Implementation 2) from that article:

Implementation from before
double myPowHelper(double x, int n) {
    if(!n)
        return 1.0;

    double partialAns = myPowHelper(x, n/2);

    // Squaring the partialAns
    partialAns *= partialAns;

    return n % 2 ? partialAns * x : partialAns;
}
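
By the way, if you want to run this helper on your own machine, here's a tiny driver for it. This is just a minimal sketch from my side, assuming the myPowHelper definition above sits in the same file - the myPow wrapper from the original article (the one that handles negative exponents) is deliberately left out.

A quick driver
#include <iostream>

// assumes the myPowHelper definition from above is in this file
int main() {
    std::cout << myPowHelper(2.0, 10) << "\n";   // 2^10 -> prints 1024
    std::cout << myPowHelper(3.0, 5) << "\n";    // 3^5  -> prints 243
    return 0;
}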

Now, what we'll do is slightly change the implementation, keeping the overall logic exactly the same as above.

A slightly different implementation
double myPowHelper(double x, int n) {
    if(!n)
        return 1.0;

    double partialAns = myPowHelper(x, n/2) * myPowHelper(x, n/2);

    return n % 2 ? partialAns * x : partialAns;
}

If you notice, the only change I made is that instead of storing myPowHelper(x, n/2) in the partialAns variable and then squaring it on the next line, I directly stored myPowHelper(x, n/2) * myPowHelper(x, n/2) in the partialAns variable.

Seems very insignificant, right? Well, the implementation difference might be insignificant but the impact is not.

Explaining the difference

As always, I urge you to try thinking about it on your own before reading further. Trust me, it's not that hard to analyse if you take a pen & paper and try to draw a recursion tree.

Okay, if you're here now, you either gave up or have been able to think about it. For the people who gave up, give it 1 more try?

See, the thing is that in the 1st approach, only 1 recursive call to myPowHelper(x, n/2) is made at each step, and that's it. Below is a representation of how the calls will happen:

[Video: how the calls happen in the 1st approach - 0:49]

Whereas, things get a little wild when you analyse the 2nd approach. In that one, we're making the same recursive call twice. The code doesn't know that the result has already been calculated once, so it happily calculates it again xD

It'll simply repeat the same steps in the 2nd recursive call as well. Note that it may seem that since we're making 2 recursive calls instead of 1, the time will simply double xD

That's not it though, because if the number of recursive calls doubles at the root level, there'll be 4 times as many calls on the next level, 8 times as many on the level after that, and so on.

Look at the representation below to better grasp this scenario.

[Video: how the calls happen in the 2nd approach - 1:37]
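
If you don't want to take my word for it, here's a small experiment you can run yourself: both helpers, instrumented with a call counter. The names powV1, powV2 and the calls variable are my own, just for this demo - they're not from the original article.

Counting the recursive calls
#include <iostream>

long long calls = 0;   // global counter, reset before each run

// 1st approach: one recursive call per level
double powV1(double x, int n) {
    ++calls;
    if(!n)
        return 1.0;
    double partialAns = powV1(x, n/2);
    partialAns *= partialAns;
    return n % 2 ? partialAns * x : partialAns;
}

// 2nd approach: the same recursive call made twice
double powV2(double x, int n) {
    ++calls;
    if(!n)
        return 1.0;
    double partialAns = powV2(x, n/2) * powV2(x, n/2);
    return n % 2 ? partialAns * x : partialAns;
}

int main() {
    int n = 1 << 20;   // a big exponent, just to make the gap obvious
                       // (the result overflows to inf, but we only care about the call count)

    calls = 0;
    powV1(2.0, n);
    std::cout << "1st approach: " << calls << " calls\n";   // 22 calls - roughly log2(n)

    calls = 0;
    powV2(2.0, n);
    std::cout << "2nd approach: " << calls << " calls\n";   // ~4.2 million calls - linear in n

    return 0;
}

That 22 vs ~4 million gap is exactly the difference we're about to put a name on.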

Summing it up

The conclusion from the above example is that the time complexity for the 1st approach will be O(log N), whereas it'll increase to O(N) in the 2nd case.

I'm not going to explain how we calculated these time complexities - think it through yourself.
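
That said, if you want a nudge, here's the standard recurrence-style way of looking at it (this bit is my addition, it's not spelled out in the original article):

T(N) = T(N/2) + O(1)      =>  T(N) = O(log N)
T(N) = 2·T(N/2) + O(1)    =>  T(N) = O(1 + 2 + 4 + ... + N) = O(N)

In the 1st approach there's one call per level and about log2(N) levels. In the 2nd, the calls double at every level, and those per-level counts 1, 2, 4, ... add up to roughly 2N.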
💡
The learning: Be extra careful when writing code xD