Understanding the time complexity of the Longest Common Subsequence Algorithm
To understand it properly, look at the diagram below carefully and follow the recursive calls from top to bottom.
Here, xstr = "ABCD"
ystr = "BAEC"
lcs("ABCD", "BAEC") // Here x != y
/ \
lcs("BCD", "BAEC") <-- x==y --> lcs("ABCD", "AEC") x==y
| |
| |
lcs("CD", "AEC") <-- x!=y --> lcs("BCD", "EC")
/ \ / \
/ \ / \
/ \ / \
lcs("D","AEC") lcs("CD", "EC") lcs("BCD", "C")
/ \ / \ / \
lcs("", "AEC") lcs("D","EC") lcs("CD", "C") lcs("BCD","")
| \ / \ | / |
Return lcs("", "EC") lcs("D" ,"C") lcs("D", "") lcs("CD","") Return
/ \ / \ / \ / \
Return lcs("","C") lcs("D","") lcs("","") Return lcs("D","") Return
/ \ / \ / / \
Return lcs("","") Return lcs("", "") Return
| |
Return Return
NOTE: The diagram is just the plain recursion tree; repeated subproblems are marked explicitly instead of being expanded again, so you can follow every call in a single read.
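For reference, here is a minimal sketch of the recursion that the diagram follows (in Python; I am assuming a length-only lcs that compares first characters, which is what the diagram traces):

def lcs(x, y):
    # Base case: an empty string has no common subsequence with anything.
    if not x or not y:
        return 0
    # First characters match: they contribute to the LCS,
    # so recurse on the remainders of both strings.
    if x[0] == y[0]:
        return 1 + lcs(x[1:], y[1:])
    # No match: drop one character from either string and keep the better result.
    return max(lcs(x[1:], y), lcs(x, y[1:]))

print(lcs("ABCD", "BAEC"))  # 2, e.g. "AC" or "BC"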
In the diagram above there are some redundant subproblems, for example lcs("CD", "EC"): it is reached both by deleting "A" from "AEC" in lcs("CD", "AEC") and by deleting "B" from "BCD" in lcs("BCD", "EC"). As a result, such subproblems are evaluated more than once during execution, which drives up the running time of the program. You can also see that every pair generates two calls at the next level until it hits an empty string or a match (x == y).
Therefore, if the lengths of the strings are n and m (the length of xstr is n, the length of ystr is m, and we consider the worst case), the number of calls at the bottom of the recursion is on the order of 2^(n+m). (How? Each non-matching call branches into two new calls, and every call removes one character from one of the two strings, so the recursion can go at most n + m levels deep.) Since n + m is an integer, say N, the time complexity of the algorithm is O(2^N), which is not efficient for larger values of N.
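If you want to see this blow-up yourself, here is a quick counting experiment (just a sketch: the global counter and the deliberately non-matching inputs are only for illustration):

calls = 0

def lcs(x, y):
    global calls
    calls += 1  # count every recursive call
    if not x or not y:
        return 0
    if x[0] == y[0]:
        return 1 + lcs(x[1:], y[1:])
    return max(lcs(x[1:], y), lcs(x, y[1:]))

for n in (2, 4, 6, 8, 10):
    calls = 0
    lcs("A" * n, "B" * n)  # worst case: the characters never match
    print(n, calls)        # the count explodes roughly like 2^(n+m)

The exact numbers are not the point; what matters is that every extra character multiplies the amount of work.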
Therefore, we prefer the dynamic-programming approach over the plain recursive approach. It reduces the time complexity to O(n*m), which becomes O(n^2) when n == m.
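A minimal bottom-up sketch of that dynamic-programming idea (again in Python, and again assuming we only need the length of the LCS) fills an (n+1) x (m+1) table once, so each subproblem is computed exactly once:

def lcs_dp(x, y):
    n, m = len(x), len(y)
    # dp[i][j] = length of the LCS of the suffixes x[i:] and y[j:]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            if x[i] == y[j]:
                dp[i][j] = 1 + dp[i + 1][j + 1]
            else:
                dp[i][j] = max(dp[i + 1][j], dp[i][j + 1])
    return dp[0][0]

print(lcs_dp("ABCD", "BAEC"))  # 2, same answer as the recursive version

Memoizing the recursive version (for example with functools.lru_cache on an index-based lcs(i, j)) gives the same O(n*m) bound, because each (i, j) pair is then solved only once.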
Even now, if you are having a hard time following the logic, I would suggest drawing the same kind of tree yourself for a smaller example, say xstr = "ABC" and ystr = "EF". I hope that will make it clear.
Any doubts or comments are most welcome.
O(2^n) means the run time is proportional to 2^n for large enough n. It doesn't mean the run time is bad, high, low, or anything specific for a small n, and it doesn't give a way to calculate the absolute run time.
To get a feel for the implication, you should consider the run-times for n = 1000, 2000, 3000, or even 1 million, 2 million, etc.
In your example, assuming that for n = 5 the algorithm takes at most 251 iterations, the O(2^n) prediction is that for n = 50 it would take on the order of 2^(50)/2^(5) * 251 = 2^45 * 251 ≈ 8.8E15 iterations.
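For what it's worth, that scaling estimate is a one-liner to check (the 251 and the exponents are taken straight from the example above):

# Scale the measured count at n = 5 up to n = 50, assuming the work grows like 2^n.
iterations_at_5 = 251
predicted_at_50 = iterations_at_5 * 2 ** (50 - 5)
print(f"{predicted_at_50:.1e}")  # ~8.8e+15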