
Big-O notation
| Complexity | Class Name | Description | Real-World Example | Graph |
|---|---|---|---|---|
| \(O(1)\) | Constant Time | Always takes the same time regardless of input size | Accessing an element in an array | Flat Line |
| \(O(\log n)\) | Logarithmic Time | Runtime grows slowly with input size | Binary search in a phone book | Slowly Increasing Curve |
| \(O(n)\) | Linear Time | Runtime grows directly proportional to input size | Reading through a list of names in a document | Straight Line |
| \(O(n \log n)\) | Linearithmic Time | Efficient sorting, slower than linear | Efficient sorting algorithms like Merge Sort | Rising Curve |
| \(O(n^2)\) | Quadratic Time | Runtime grows with the square of the input size | Bubble Sort on a long list of elements | Steep Curve |
| \(O(2^n)\) | Exponential Time | Runtime doubles with each additional element | Recursive solutions to combinatorial problems | Exponential Curve |
| \(O(n!)\) | Factorial Time | Runtime grows factorially with input size | Generating all permutations of a list | Extremely Steep Curve |
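As a rough illustration (a sketch of my own, not part of the tutorial), the snippet below tabulates approximate step counts for several of these classes at a few input sizes, making the differences in growth rate concrete:

```python
import math

def steps(n):
    """Approximate step counts for several complexity classes at input size n."""
    return {
        "O(1)": 1,
        "O(log n)": math.ceil(math.log2(n)),
        "O(n)": n,
        "O(n log n)": n * math.ceil(math.log2(n)),
        "O(n^2)": n ** 2,
    }

for size in (10, 100, 1000):
    print(size, steps(size))
```

Doubling the input barely moves the logarithmic count, doubles the linear one, and quadruples the quadratic one.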

Bubbles

[Figure: Bubble Sort flowchart]

Celebrity Heights
Sorting [5, 1, 4, 2, 8] step by step:

First pass:
Compare 5 and 1 → Swap → [1, 5, 4, 2, 8]
Compare 5 and 4 → Swap → [1, 4, 5, 2, 8]
Compare 5 and 2 → Swap → [1, 4, 2, 5, 8]
Compare 5 and 8 → No swap.
List after first pass: [1, 4, 2, 5, 8]

Second pass:
Compare 1 and 4 → No swap.
Compare 4 and 2 → Swap → [1, 2, 4, 5, 8]
Compare 4 and 5 → No swap.
Compare 5 and 8 → No swap.
List after second pass: [1, 2, 4, 5, 8]

Third pass:
Compare 1 and 2 → No swap.
Compare 2 and 4 → No swap.
Compare 4 and 5 → No swap.
Compare 5 and 8 → No swap.
List after third pass: [1, 2, 4, 5, 8]

No swaps occurred during the third pass, so the list is sorted and no further passes are needed.
Write a Python program to perform a bubble sort on [5, 1, 4, 2, 8], producing [1, 2, 4, 5, 8].

```python
# Initialize a list of unsorted items
items = [5, 1, 4, 2, 8]
# Determine the number of items in the list
n = len(items)
# Initialize a boolean variable to track swaps
swap = True
# Continue looping while swaps occur
while swap:
    swap = False  # Reset swap flag
    for i in range(1, n):
        if items[i-1] > items[i]:  # Compare adjacent items
            items[i-1], items[i] = items[i], items[i-1]  # Swap if needed
            swap = True  # Set swap flag to True
# Print the sorted list
print(items)  # Output: [1, 2, 4, 5, 8]
```
Best Case: \(O(n)\) – This occurs when the list is already sorted. In this case, Bubble Sort makes only one pass.
Worst Case: \(O(n^2)\) – This occurs when the list is in reverse order. Every element needs to be compared and swapped.
Bubble sort repeatedly compares adjacent elements and swaps them if they are in the wrong order.
In the worst-case scenario (when the list is reversed), Bubble Sort must compare and swap nearly every pair of elements, performing \((n-1) + (n-2) + \dots + 1\) comparisons across its passes.
This sum can be simplified using the arithmetic series formula: \[ (n-1) + (n-2) + \dots + 1 = \frac{n(n-1)}{2} \]
The simplified complexity expression is: \[ \boxed{T(n) = O(n^2)} \]
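This count can also be checked empirically. The sketch below (my instrumentation, not part of the tutorial) extends the swap-flag version above to count swaps; a fully reversed list of length \(n\) contains every possible inversion, so sorting it requires exactly \(n(n-1)/2\) swaps:

```python
def bubble_sort_swaps(items):
    """Bubble sort with a swap flag that also counts the swaps performed."""
    items = list(items)  # Work on a copy
    swaps = 0
    swapped = True
    while swapped:
        swapped = False
        for i in range(1, len(items)):
            if items[i - 1] > items[i]:  # Compare adjacent items
                items[i - 1], items[i] = items[i], items[i - 1]
                swaps += 1
                swapped = True
    return items, swaps

n = 6
result, swaps = bubble_sort_swaps(list(range(n, 0, -1)))  # Fully reversed input
print(result, swaps)  # [1, 2, 3, 4, 5, 6] with 6*5/2 = 15 swaps
```

Each swap removes exactly one inversion, which is why the worst-case swap count matches the arithmetic series above.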
The Matrix Code
Given two matrices \(A\) and \(B\):
\[ A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} \]
Their product \(C = A \times B\) is:
\[ C = \begin{bmatrix} (1 \times 5 + 2 \times 7) & (1 \times 6 + 2 \times 8) \\ (3 \times 5 + 4 \times 7) & (3 \times 6 + 4 \times 8) \end{bmatrix} = \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix} \]
For each element \(C_{i,j}\) in the resulting matrix: \[ C_{i,j} = \sum_{k=1}^{n} A_{i,k} \times B_{k,j} \]
Where \(A\) is an \(m \times n\) matrix, and \(B\) is an \(n \times p\) matrix.
Write a Python function matrix_multiply that takes two input matrices, A and B, and returns their matrix product.
```python
def matrix_multiply(A, B):
    # Initialize an empty list to store the result of the matrix multiplication
    result = []
    # Create a matrix with the same number of rows as A and columns as B, filled with zeros
    for i in range(len(A)):
        row = []  # Initialize a new row
        for j in range(len(B[0])):  # Iterate over columns of B
            row.append(0)  # Append 0 to represent the initial state
        result.append(row)  # Append the row to the result matrix
    # Perform matrix multiplication
    for i in range(len(A)):  # Loop over rows of A
        for j in range(len(B[0])):  # Loop over columns of B
            for k in range(len(B)):  # Loop over rows of B (and columns of A)
                # Multiply corresponding elements and add to the current cell in result
                result[i][j] += A[i][k] * B[k][j]
    return result  # Return the result matrix

# Example matrices to test the function
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
matrix_multiply(A, B)  # Expected output: [[19, 22], [43, 50]]
```
Given two matrices \(A\) of size \(m \times n\) and \(B\) of size \(n \times p\), their product \(C = AB\) is an \(m \times p\) matrix, where each element is computed as:
\[ C_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj} \]
To compute a single element \(C_{ij}\), we perform \(n\) multiplications and \(n-1\) additions.
Thus, roughly \(2n\) operations per element.
Since there are \(m \times p\) elements in the resulting matrix, the total number of operations is: \[ 2n \times m \times p \]
Dropping constant factors, the complexity becomes: \[ \boxed{T(m,n,p) = O(mnp)} \]
The complexity of multiplying two matrices is cubic in general (for square matrices \(m = n = p\)): \[ \boxed{T(n) = O(n^3)} \]
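As a quick sanity check (an illustrative sketch of mine, not from the tutorial), the triple loop can be instrumented to count scalar multiplications; for an \(m \times n\) matrix times an \(n \times p\) matrix it performs exactly \(mnp\) of them:

```python
def matrix_multiply_counted(A, B):
    """Same triple loop as matrix_multiply, but also counts scalar multiplications."""
    m, n, p = len(A), len(B), len(B[0])
    result = [[0] * p for _ in range(m)]  # m x p matrix of zeros
    mults = 0
    for i in range(m):          # Loop over rows of A
        for j in range(p):      # Loop over columns of B
            for k in range(n):  # Loop over the shared dimension
                result[i][j] += A[i][k] * B[k][j]
                mults += 1      # One scalar multiplication per inner iteration
    return result, mults

C, mults = matrix_multiply_counted([[1, 2], [3, 4]], [[5, 6], [7, 8]])
print(C, mults)  # [[19, 22], [43, 50]] with 2*2*2 = 8 multiplications
```

For square \(n \times n\) matrices the count is \(n^3\), matching the cubic bound above.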
Nautilus shell
```
fibonacci(5)
|
+-- fibonacci(4)
|   |
|   +-- fibonacci(3)
|   |   |
|   |   +-- fibonacci(2)
|   |   |   +-- fibonacci(1)
|   |   |   +-- fibonacci(0)
|   |   +-- fibonacci(1)
|   +-- fibonacci(2)
|       +-- fibonacci(1)
|       +-- fibonacci(0)
+-- fibonacci(3)
    |
    +-- fibonacci(2)
    |   +-- fibonacci(1)
    |   +-- fibonacci(0)
    +-- fibonacci(1)
```
The recursion tree above traces every recursive call made by fibonacci(5). Each call to fibonacci(n) splits into two subproblems: fibonacci(n-1) and fibonacci(n-2).

The Fibonacci function is defined as: \[ F(n) = F(n-1) + F(n-2), \quad F(0)=0, \quad F(1)=1 \]

Since each call to fibonacci(n) creates two additional calls, fibonacci(n-1) and fibonacci(n-2), the time complexity \(T(n)\) satisfies:
\[ T(n) = T(n-1) + T(n-2) + O(1) \]
This recurrence relation closely resembles the Fibonacci series itself. The number of recursive calls made by the algorithm aligns with the Fibonacci numbers. Specifically, the number of calls grows as: \[ T(n) \approx F(n+1) \]
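This call count can be verified empirically. The sketch below is a hypothetical instrumented version of the naive recursive Fibonacci (the counter parameter is my addition, not part of the tutorial):

```python
def fibonacci(n, counter):
    """Naive recursive Fibonacci; counter[0] tracks the total number of calls."""
    counter[0] += 1  # Record this call
    if n < 2:
        return n     # Base cases: F(0) = 0, F(1) = 1
    return fibonacci(n - 1, counter) + fibonacci(n - 2, counter)

counter = [0]
value = fibonacci(5, counter)
print(value, counter[0])  # value 5 computed with 15 recursive calls
```

The 15 calls match the 15 nodes in the recursion tree above; in general the naive version makes \(2F(n+1) - 1\) calls, consistent with the Fibonacci-like growth described.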
Since the Fibonacci numbers grow exponentially, this can be approximated by: \[ F(n) \approx \frac{\phi^n}{\sqrt{5}}, \quad \text{where } \phi = \frac{1 + \sqrt{5}}{2} \approx 1.618034 \text{ (The Golden Ratio)} \]
Thus, the complexity is: \[ T(n) \approx O(\phi^n) \]
The recursive Fibonacci algorithm therefore has exponential time complexity, commonly stated with the looser bound: \[ \boxed{T(n) = O(2^n)} \]
[Source: Big O Notation Tutorial – A Guide to Big O Analysis]
