Asymptotic Complexity

Complexity of Algorithms

The number of steps an algorithm takes is called its running time. The term step refers to a specific underlying machine model; the machine must be able to perform a single step in constant time.

The running time generally depends on the input, in particular on the length n of the input, which is also referred to as the problem size.

Example: The more elements there are to sort, the greater the running time of a sorting algorithm.

With a pre-sorted input sequence, the algorithm may require fewer steps than with an unsorted input sequence of the same length.
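
To make this concrete, the following small Python sketch counts the comparisons of a simple sorting method on a pre-sorted and on a reversed input of the same length. Bubble sort with an early-exit check is used here purely for illustration; the function name and the choice of method are assumptions of this sketch, not part of the text.

    def bubble_sort_steps(xs):
        """Count comparisons of bubble sort with an early-exit check."""
        xs = list(xs)
        steps = 0
        for i in range(len(xs) - 1):
            swapped = False
            for j in range(len(xs) - 1 - i):
                steps += 1
                if xs[j] > xs[j + 1]:
                    xs[j], xs[j + 1] = xs[j + 1], xs[j]
                    swapped = True
            if not swapped:   # no swap in a full pass: already sorted
                break
        return steps

    print(bubble_sort_steps(range(100)))         # pre-sorted: 99 comparisons
    print(bubble_sort_steps(range(100, 0, -1)))  # reversed: 4950 comparisons

On the pre-sorted input the method stops after one pass, while the reversed input of the same length forces the full 4950 comparisons.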

In order to evaluate the algorithm independently of the concrete input, one considers its time complexity. The time complexity is a function T(n) of the problem size n. The value of T(n) is the worst-case running time of the algorithm, i.e. the maximum number of steps the algorithm can execute on any input of length n.

Occasionally one also considers the behavior of T(n) on average (average case), i.e. the number of steps that are required on average over a large number of randomly selected inputs of length n.
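
As an illustration of an average-case measurement, the following Python sketch estimates the average number of probes of a linear search over many randomly chosen inputs of length n. Linear search is chosen here only because its average case is easy to state; all names are illustrative.

    import random

    def linear_search_steps(xs, target):
        # count probes until the target is found
        for steps, x in enumerate(xs, start=1):
            if x == target:
                return steps
        return len(xs)

    n, trials = 1000, 1000
    total = 0
    for _ in range(trials):
        xs = random.sample(range(n), n)   # random permutation of 0..n-1
        total += linear_search_steps(xs, random.randrange(n))
    print(total / trials)  # close to (n + 1) / 2, i.e. about 500.5

The measured average is close to (n + 1)/2, the known average-case value for a successful linear search.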

The exact number of steps that an algorithm executes naturally depends on the specific implementation of the algorithm. For example, an implementation of a sorting algorithm performs not only comparisons and swaps, but also other steps such as incrementing loop counters and the like.

In order to evaluate algorithms independently of the details of the implementation, the time complexity is specified using O-notation. The O-notation reflects only the order of magnitude of the complexity, i.e. whether it is, for example, a linear, quadratic or exponentially growing function. The order of magnitude is given in the form of a complexity class.

It is common to write the functions with their argument, i.e. f(n) ∈ O(g(n)). Here f(n) ∈ O(g(n)) means that there are a constant c > 0 and a threshold n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀.

Example: If f(n) = 2n² + 7n − 10 and g(n) = n², then:

f(n) ∈ O(g(n)),

because with c = 3 and from n₀ = 5 on:

2n² + 7n − 10 ≤ c·n².

One says: the function f(n) lies in O(n²).
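
The constants from the example can be checked numerically. A minimal Python sketch; a finite check like this is of course no proof, but it illustrates the roles of c and n₀:

    def f(n): return 2*n*n + 7*n - 10
    def g(n): return n*n

    c, n0 = 3, 5
    print(all(f(n) <= c * g(n) for n in range(n0, 10_000)))  # True for n >= n0
    print(f(4) <= c * g(4))  # False: below n0 the bound need not hold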

As a graphical illustration, Figure 1 shows a function g(n) that from n₀ on lies below the straight line f(n) = n/2. Hence g(n) ∈ O(n).

Fig. 1: Function g(n) in O(n)

To designate a complexity class, one always specifies the simplest function that is suitable to represent the respective class. In fact, O(2n² + 7n − 10) and O(n²) are the same set, but one chooses O(n²) as the designation.

The set O(n²) is the complexity class of all functions that grow at most quadratically. Correspondingly, O(n) is the set of at most linearly growing functions, O(log(n)) is the set of at most logarithmically growing functions, O(1) is the set of functions bounded by a constant, O(nᵏ) is the set of at most polynomially growing functions, and O(2ⁿ) is the set of functions growing at most exponentially with base 2.
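
The following small Python table illustrates how differently these classes grow:

    import math

    print(f"{'n':>4} {'log2(n)':>8} {'n^2':>7} {'2^n':>12}")
    for n in (1, 2, 4, 8, 16, 32):
        print(f"{n:>4} {math.log2(n):>8.1f} {n*n:>7} {2**n:>12}")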

Example:

10n + 5 log(n) + 8 ∈ O(n)

8n² ∈ O(n²)

65 ∈ O(1)

n¹⁰⁰⁰ ∈ O(2ⁿ)

log₁₀(n) ∈ O(log(n))

With the class of logarithmically growing functions, the base of the logarithm does not matter, because

log_a(n) = c · log_b(n) with c = log_a(b).
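
The base-change rule can be verified directly in Python, here with a = 2 and b = 10:

    import math

    n = 1000
    c = math.log2(10)         # c = log_2(10)
    print(math.log2(n))       # 9.9657...
    print(c * math.log10(n))  # the same value: log_2(n) = c * log_10(n)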

The O-notation abstracts from constant factors.

When we state the complexity of an algorithm, we often write something like: "The algorithm has a complexity of O(n²)." Strictly speaking this is imprecise, for the complexity is a function, while O(n²) is a set of functions. What is meant is: the complexity of the algorithm lies in O(n²), or: the algorithm has a complexity T(n) with T(n) ∈ O(n²).

Since O(n²) is a set, we use the correct notation T(n) ∈ O(n²) and not T(n) = O(n²), as can be read from time to time.

O, Ω and Θ

If, for example, the time complexity of an algorithm is specified as T(n) ∈ O(n²), this means that the algorithm requires at most quadratically many steps. Strictly speaking, it does not mean that the algorithm actually requires quadratically many steps.

It could, for example, also be a linear algorithm, since a linear algorithm also requires at most quadratically many steps. If, for instance, the complexity of the algorithm is T(n) = 10n, then from n₀ = 10 on T(n) ≤ n², so T(n) ∈ O(n²).
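
A quick numerical check of this bound (the function name T is illustrative):

    def T(n): return 10 * n   # the linear algorithm from the example

    print(all(T(n) <= n * n for n in range(10, 10_000)))  # True from n0 = 10 on
    print(T(9) <= 9 * 9)  # False: 90 > 81, the bound starts only at n0 = 10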

A complexity class O(f(n)) can therefore only be used to estimate T(n) from above, as an upper bound. It would make little sense to say, for example, "the algorithm requires at least O(n) steps", because according to the definition of the complexity class O(n) this would mean that the algorithm requires at least at most c·n steps.

Other complexity classes are required for a more precise characterization of T(n). The class Ω(g(n)) contains all functions that grow at least as fast as g(n): f(n) ∈ Ω(g(n)) means that there are a constant c > 0 and a threshold n₀ such that f(n) ≥ c·g(n) for all n ≥ n₀.

Example: If f(n) = 2n² + 7n − 10 and g(n) = n², then:

f(n) ∈ Ω(g(n)),

because with c = 1 and from n₀ = 2 on:

2n² + 7n − 10 ≥ c·n².

The example function f(n) considered above is therefore both in O(n²) and in Ω(n²), i.e. it grows both at most and at least quadratically, and thus exactly quadratically.

The class Θ(f(n)) is used to specify this exact order of magnitude of a function; the letter Θ is the Greek theta.

Definition: Θ(f(n)) = O(f(n)) ∩ Ω(f(n))
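
Combining the two examples above, a short Python sketch checks both bounds at once and thereby illustrates the Θ definition (again a finite check, not a proof):

    def f(n): return 2*n*n + 7*n - 10

    ns = range(5, 10_000)                            # beyond both thresholds n0
    in_O     = all(f(n) <= 3 * n * n for n in ns)    # upper bound with c = 3
    in_Omega = all(f(n) >= 1 * n * n for n in ns)    # lower bound with c = 1
    print(in_O and in_Omega)                         # True: f(n) in Theta(n^2)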

Example: A rough analysis shows that in the worst case insertion sort executes at most n² steps, because the method has to insert each of the n numbers into an already sorted section, and this section consists of at most n numbers. Therefore, for the worst-case time complexity T(n) of insertion sort:

T(n) ∈ O(n²).

A more detailed analysis shows that there is an input for which insertion sort requires at least (n−1)·n/2 steps. Therefore, for the worst-case time complexity T(n) of insertion sort, it also holds that:

T(n) ∈ Ω(n²).

The worst-case time complexity T(n) of insertion sort can thus be characterized by:

T(n) ∈ Θ(n²).
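
The two bounds can be observed experimentally. The following Python sketch counts the comparisons of a straightforward insertion sort implementation, written here for illustration:

    def insertion_sort_steps(xs):
        """Count comparisons made by insertion sort."""
        xs = list(xs)
        steps = 0
        for i in range(1, len(xs)):
            key = xs[i]
            j = i - 1
            while j >= 0:
                steps += 1                # one comparison
                if xs[j] > key:
                    xs[j + 1] = xs[j]     # shift the larger element right
                    j -= 1
                else:
                    break
            xs[j + 1] = key
        return steps

    n = 100
    print(insertion_sort_steps(range(n, 0, -1)))  # 4950 = n*(n-1)/2, worst case
    print(insertion_sort_steps(range(n)))         # 99 = n-1, already sorted

The reversed input triggers the worst case with n·(n−1)/2 = 4950 comparisons for n = 100, matching the Θ(n²) characterization; the pre-sorted input needs only n − 1 comparisons.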

Complexity of Problems

If you have a concrete algorithm for solving a problem, you can estimate how many steps the algorithm needs at most. The complexity of the algorithm represents an upper bound for the complexity of the problem.

The question then arises whether there is perhaps a faster algorithm for solving the problem, or whether there can be one at all.

Often a lower bound can be given for the problem, i.e. one can say how many steps every algorithm must perform at least in order to solve the problem. As with the upper bounds, a specific algorithm or machine model is assumed, so that the term step is well defined.

For example, in order to sort n numbers, every algorithm has to look at each of the numbers at least once. A sequential algorithm that can look at one number per step therefore needs at least f(n) = n steps to sort n numbers. Thus f(n) = n is a lower bound for the sorting problem, with respect to the sequential machine model mentioned.
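
The argument can be illustrated with a deliberately flawed sorter that skips one element (a contrived Python example, of course):

    def lazy_sort(xs):
        # flawed: never looks at the last element
        return sorted(xs[:-1]) + [xs[-1]]

    print(lazy_sort([3, 1, 2]))  # [1, 3, 2] -- not sorted

Since the method never reads the last element, an adversary can always place a value there that breaks the result; hence every correct algorithm must spend at least n steps.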

Here, too, constant factors are not important; it should essentially be irrelevant whether an algorithm can process one, two or ten numbers in one step. And for all too small problem sizes below some n₀, special conditions may apply, so that here, too, only the asymptotic behavior for n ≥ n₀ matters.

Again, only the order of magnitude of the lower bound is of interest, i.e. whether it is, for example, a linear, quadratic or exponential function. The order of magnitude is again expressed by a complexity class.

As we have just seen, the sorting problem has a lower bound in Ω(n). An upper bound lies in O(n²), attained for example by the sorting method insertion sort. The question arises whether the gap between these two functions n and n² can be closed, i.e. whether a sharper lower bound or a sharper upper bound can be found.

In fact, both are the case: the lower bound can be improved to Ω(n·log(n)), at least for methods that are based on comparisons (the lower bound for comparison-based sorting), and the upper bound can likewise be improved to O(n·log(n)), for example with the sorting method heapsort.
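
The n·log(n) behavior of heapsort can be observed by counting comparisons. The Python sketch below realizes heapsort via the standard heapq module and wraps the elements so that every < comparison is counted; the wrapper class and names are assumptions of this sketch:

    import heapq, math

    class Counted:
        """Wraps a value and counts every < comparison."""
        count = 0
        def __init__(self, v):
            self.v = v
        def __lt__(self, other):
            Counted.count += 1
            return self.v < other.v

    def heapsort(xs):
        heap = [Counted(x) for x in xs]
        heapq.heapify(heap)
        return [heapq.heappop(heap).v for _ in range(len(heap))]

    for n in (1_000, 10_000, 100_000):
        Counted.count = 0
        assert heapsort(list(range(n, 0, -1))) == list(range(1, n + 1))
        print(n, Counted.count, round(Counted.count / (n * math.log2(n)), 2))

The last column, the number of comparisons divided by n·log₂(n), stays roughly constant as n grows, which is the empirical signature of O(n·log(n)).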

Definition: Given a problem and an algorithm with time complexity T(n) that solves the problem.

The algorithm is asymptotically optimal if Ω(T(n)) is a lower bound for the problem.

The sorting method heapsort is therefore asymptotically optimal, because it attains the lower bound for the sorting problem.