From dd3dad440c7317cb8a67d07ae14958f2b2e7cbf5 Mon Sep 17 00:00:00 2001
From: Andreas Kapp Lindquist
Date: Wed, 29 Oct 2025 18:09:56 +0100
Subject: report(*): Random fixes

---
 report/report.tex | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/report/report.tex b/report/report.tex
index 7456acb..d8670e2 100644
--- a/report/report.tex
+++ b/report/report.tex
@@ -230,8 +230,8 @@ easier to understand.}
 \caption{Pseudocode for \texttt{array\_maker}}\label{fig:arraymaker}
 \end{algorithm}
 
-\subsection{quick-sort}
-The quick-sort implementation uses \textbf{Hoare partition scheme},
+\subsection{Quicksort}
+The quicksort implementation uses the \textbf{Hoare partition scheme},
 a \textit{two-way partitioning} approach where two pointers scan from
 opposite ends of the array toward the middle. The left pointer advances
 rightward while pointing at elements smaller than the \textit{pivot}, and the right pointer
@@ -257,9 +257,9 @@
 one for generating test data (\textit{generate\_test\_data.sh}),
 one for testing the validity of the output of the program (\textit{test.sh})
 and one for testing the execution time of the program (\textit{benchmark.sh}).
-The script that generates test files, generates files of size 0, 5000, 10000,
-50000, 100000, 500000 and 1000000. (a size of 10, means the file consists of 10
-coordinates). For each $n$ we also create three different kinds of test files.
+The script that generates test files generates files of size $n$ for each
+$n\in\{1000k\mid 0\leq k\leq 100\}$. The size of a file is its number of
+coordinates. For each $n$ we also create three different kinds of test files.
 One where all of the data is random, one where the data is already sorted, and
 one where it is reverse-sorted.\footnote{We also did run it for $n=$ 1 million
 with random, but to get a pretty plot we decided to omit these values, so it was
-- 
cgit v1.2.3-70-g09d2
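The hunk above describes the report's quicksort using the Hoare partition scheme: two pointers scan inward from opposite ends, skipping elements already on the correct side of the pivot and swapping misplaced pairs. A minimal Python sketch of that scheme (the report's actual implementation language and pivot choice are not shown in this patch; picking the first element as pivot is an assumption here):

```python
def hoare_partition(a, lo, hi):
    """Two-way Hoare partition of a[lo..hi]; returns the split index."""
    pivot = a[lo]  # pivot choice is an assumption; the patch does not state it
    i, j = lo - 1, hi + 1
    while True:
        # Left pointer advances rightward past elements smaller than the pivot.
        i += 1
        while a[i] < pivot:
            i += 1
        # Right pointer advances leftward past elements larger than the pivot.
        j -= 1
        while a[j] > pivot:
            j -= 1
        if i >= j:
            return j
        # Both pointers stopped on misplaced elements: swap them.
        a[i], a[j] = a[j], a[i]

def quicksort(a, lo=0, hi=None):
    """Sort a in place using Hoare-partitioned quicksort."""
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = hoare_partition(a, lo, hi)
        quicksort(a, lo, p)       # note: Hoare recursion includes index p
        quicksort(a, p + 1, hi)
```

Note that with Hoare's scheme the returned index is not necessarily the pivot's final position, so the left recursion covers `[lo, p]` rather than `[lo, p - 1]`.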
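The second hunk changes the description of the test-data generator to produce files of every size in $\{1000k \mid 0 \le k \le 100\}$, each in three variants: random, sorted, and reverse-sorted. The actual generator is a shell script (\textit{generate\_test\_data.sh}) not included in this patch; the Python sketch below only illustrates the same size and variant scheme. The coordinate value range, the sort order on pairs, and the seed are assumptions:

```python
import random

def generate_test_data(n, kind, seed=0):
    """Return n coordinate pairs in the given order.

    kind is one of 'random', 'sorted', 'reversed' -- the three variants
    the report generates for each size n. Value range is an assumption.
    """
    rng = random.Random(seed)
    coords = [(rng.randint(0, 10**6), rng.randint(0, 10**6)) for _ in range(n)]
    if kind == "sorted":
        coords.sort()            # lexicographic order on (x, y) pairs
    elif kind == "reversed":
        coords.sort(reverse=True)
    return coords

# File sizes from the patch: 0, 1000, 2000, ..., 100000 coordinates.
sizes = [1000 * k for k in range(101)]
```

For each size the driver would then write all three variants to disk, giving 303 test files in total under this scheme.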