Big O Notation
If you are into programming, you will surely come across the term Big O. And when you do, the first thoughts that come to mind are: What? Why? Seriously?!
So this is my attempt to present the concept of Big O in simple words and give you a starting point for learning and using the Big O language for your algorithms.
According to Wikipedia: Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity.
Big O is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation.
The Basic Idea 💡: Imagine we have multiple implementations of the same function. How can we determine which one is the best?
e.g., write a function that accepts a string and returns a reversed copy of it.
There are more than ten solutions for the same problem. How do we decide which one is the best? We need a scale.
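For instance, here are three of those solutions, sketched in Swift (the function names are my own):

```swift
// 1. Using the standard library's reversed()
func reverseUsingLibrary(_ input: String) -> String {
    return String(input.reversed())
}

// 2. Building the result by prepending each character
func reverseByPrepending(_ input: String) -> String {
    var result = ""
    for character in input {
        result = String(character) + result
    }
    return result
}

// 3. Swapping characters in place on an array of characters
func reverseBySwapping(_ input: String) -> String {
    var characters = Array(input)
    var left = 0
    var right = characters.count - 1
    while left < right {
        characters.swapAt(left, right)
        left += 1
        right -= 1
    }
    return String(characters)
}
```

All three return the same answer, so "correct" alone can't pick a winner.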
Is It Necessary?
Ans: YESSS!
- It’s important to have a precise vocabulary to talk about how our code performs.
- It’s useful for discussing trade-offs between different approaches.
- When your code slows down or crashes, identifying the inefficient parts helps us find the pain points in our application.
- Less important: it comes up in interviews. 😀
e.g. Suppose we want to write a function that calculates the sum of all the numbers from 1 up to (and including) some number n.
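Here are two possible implementations, sketched in Swift (function names are my own; both assume n ≥ 1):

```swift
// Loop through every number: the work grows with n.
func addUpToLoop(_ n: Int) -> Int {
    var total = 0
    for i in 1...n {
        total += i
    }
    return total
}

// Gauss's formula: the same three operations no matter how big n is.
func addUpToFormula(_ n: Int) -> Int {
    return n * (n + 1) / 2
}
```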
Which one is better? And what does “better” even mean?
- Faster?
- Less memory-intensive?
- More readable?
The problem with time: One should never depend on the execution time of an algorithm.
- Different machines will record different times.
- The same machine will record different times for repeated runs of the same code.
- For fast algorithms, speed measurements may not be precise enough.
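You can see this for yourself with a quick (and deliberately naive) timing sketch; the helper names are my own:

```swift
import Foundation

// The function we want to time (assumes n >= 1).
func addUpTo(_ n: Int) -> Int {
    var total = 0
    for i in 1...n {
        total += i
    }
    return total
}

// Wall-clock timing with Date: simple, but noisy.
func measure(_ work: () -> Void) -> TimeInterval {
    let start = Date()
    work()
    return Date().timeIntervalSince(start)
}

let firstRun = measure { _ = addUpTo(1_000_000) }
let secondRun = measure { _ = addUpTo(1_000_000) }
// Same code, same machine -- yet the two measurements will
// almost never be exactly equal.
```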
If not time, then what?
Rather than counting seconds, which are so variable, let’s count the number of simple operations the computer has to perform!
Counting is hard! Depending on what we count, the number of operations can be as low as 2n or as high as 5n + 2.
But regardless of the exact count, the number of operations grows roughly in proportion to n.
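Counting the operations in the loop version of our sum function makes this concrete (a sketch; the counts in the comments are one reasonable way to tally):

```swift
// Assumes n >= 1.
func addUpTo(_ n: Int) -> Int {
    var total = 0        // 1 assignment
    for i in 1...n {     // i is assigned n times
        total += i       // n additions and n assignments
    }
    return total
}
// Depending on what we choose to count, that's roughly n, 2n,
// or 5n + 2 operations -- but every tally grows in proportion to n.
```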
Big O Notation is a way to formalise fuzzy counting.
It allows us to talk formally about how the runtime of an algorithm grows as the input grows.
Another Big O Definition:
We say that an algorithm is O(f(n)) if the number of simple operations the computer has to do is eventually less than a constant times f(n), as n increases.
- f(n) could be linear (f(n) = n)
- f(n) could be quadratic (f(n) = n²)
- f(n) could be constant (f(n) = 1)
- f(n) could be something entirely different.
Example:
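A linear-time function, sketched in Swift (the name is my own): one pass over the input, so the work grows in direct proportion to the input size, i.e. O(n).

```swift
// O(n): one loop over the input array.
func sumOfSquares(_ numbers: [Int]) -> Int {
    var total = 0
    for number in numbers {
        total += number * number
    }
    return total
}
```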
Another example:
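Two consecutive loops, sketched in Swift (the name is my own): about 2n operations, which still simplifies to O(n).

```swift
// O(n): two separate loops over n, roughly 2n operations.
func countUpAndDown(_ n: Int) -> [Int] {
    var result: [Int] = []
    for i in 0..<n {                                  // n operations
        result.append(i)
    }
    for i in stride(from: n - 1, through: 0, by: -1) { // another n
        result.append(i)
    }
    return result
}
```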
One more example:
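Nested loops, sketched in Swift (the name is my own): an O(n) loop inside an O(n) loop gives n × n operations, i.e. O(n²).

```swift
// O(n²): the inner loop runs n times for each of the n outer iterations.
func allPairs(_ n: Int) -> [(Int, Int)] {
    var pairs: [(Int, Int)] = []
    for i in 0..<n {
        for j in 0..<n {
            pairs.append((i, j))
        }
    }
    return pairs
}
```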
Simplifying Big O Expressions:
Constants don’t matter: O(2n) simplifies to O(n), O(500) to O(1), and O(13n²) to O(n²). Smaller terms don’t matter either: O(n + 10) is just O(n), and O(n² + 5n + 8) is just O(n²).
Big O Shorthands:
- Variable assignment is constant.
- Accessing an element in an array (by index) or object (by key) is constant.
- In a loop, the complexity is the length of the loop times the complexity of whatever happens inside the loop.
A few more examples:
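Applying those shorthands, here are two Swift sketches (names are my own; both assume n ≥ 1):

```swift
// The loop runs at most 5 times no matter how big n is: O(1).
func upToFive(_ n: Int) -> [Int] {
    var result: [Int] = []
    for i in 1...min(5, n) {
        result.append(i)
    }
    return result
}

// The loop runs at least n times, so for large n it's O(n).
func atLeastFive(_ n: Int) -> [Int] {
    var result: [Int] = []
    for i in 1...max(5, n) {
        result.append(i)
    }
    return result
}
```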
Space Complexity:
So far we have analysed the runtime of an algorithm as the size of the input increases — that is time complexity.
We can use Big O notation to analyse the space complexity:
- How much additional memory do we need to allocate in order to run the code in our algorithm?
What about the inputs?
- Sometimes you will hear the term auxiliary space complexity, which refers to the space required by the algorithm itself, not including the space taken up by the inputs.
- Unless otherwise noted, when we talk about space complexity, technically we’ll be talking about auxiliary space complexity.
Space Complexity in Swift:
- Most primitives (Bool, Int, Double, etc.) require constant space.
- Strings require O(n) space (where n is the string length).
- Arrays and dictionaries are generally O(n), where n is the length (for arrays) or the number of keys (for dictionaries).
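Two quick Swift sketches of space analysis (names are my own):

```swift
// O(1) space: only two variables, no matter how long the input is.
func sum(_ numbers: [Int]) -> Int {
    var total = 0
    for number in numbers {
        total += number
    }
    return total
}

// O(n) space: the result array grows in proportion to the input.
func doubled(_ numbers: [Int]) -> [Int] {
    var result: [Int] = []
    for number in numbers {
        result.append(2 * number)
    }
    return result
}
```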
Logarithms: Sometimes Big O expressions involve more complex mathematical expressions. One that appears more often than you might like is the logarithm!
What is log? log₂(n) = k means that 2ᵏ = n. As a rule of thumb, the base-2 logarithm of a number roughly measures how many times you can divide that number by 2 before getting a value that is less than or equal to one.
Logarithmic Complexity:
Certain searching algorithms have logarithmic time complexity. Efficient sorting algorithms involve logarithms. Recursion sometimes involves logarithmic space complexity.
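Binary search is the classic example: each comparison halves the remaining search range, so it takes O(log n) time on a sorted array. A minimal Swift sketch:

```swift
// Returns the index of target in a sorted array, or nil if absent.
// Each iteration halves the search range: O(log n).
func binarySearch(_ sorted: [Int], _ target: Int) -> Int? {
    var low = 0
    var high = sorted.count - 1
    while low <= high {
        let mid = (low + high) / 2
        if sorted[mid] == target {
            return mid
        } else if sorted[mid] < target {
            low = mid + 1
        } else {
            high = mid - 1
        }
    }
    return nil
}
```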
Recap:
- To analyze the performance of an algorithm, we use Big O Notation.
- Big O notation can give us a high-level understanding of the time or space complexity of an algorithm.
- Big O notation doesn’t care about precision, only about general trends (linear/quadratic/constant).
- The time or space complexity only depends on the algorithm, not the hardware used to run the algorithm.
- Big O Notation is everywhere!!!✌🏻