author: Andreas Kapp Lindquist <andkaplin05@gmail.com> 2025-10-28 11:48:38 +0100
committer: mithe24 <mithe24@student.sdu.dk> 2025-10-29 13:49:57 +0100
commit: b68fc9c777dd21d6f9b030c4f3581eda0f5fa7e3 (patch)
tree: 0c2e33671100e5c6faaafbbd43ad89da1e29dc64 /report
parent: f1cc32749fdbeecc76d1f4f28766caf815782d32 (diff)
report(evaluation): Introductory text before figure done
Diffstat (limited to 'report')
-rw-r--r--  report/report.tex  |  99
1 file changed, 49 insertions(+), 50 deletions(-)
diff --git a/report/report.tex b/report/report.tex
index 5b252ba..d412a4e 100644
--- a/report/report.tex
+++ b/report/report.tex
@@ -227,66 +227,65 @@ the inner loop. If j at some point hits 0, the element A[i] is the current smallest
and should be inserted there. This process continues until R15, which holds the index of "i", is greater than the number of coordinates.
\section{Evaluation}
-\begin{itemize}
- \item Test insertion-sort
- \item Test quick-sort
- \item Benchmark insertion-sort vs. quick-sort random data
- \item Benchmark insertion-sort vs. quick-sort sorted data
- \item Benchmark insertion-sort vs. quick-sort unsorted data
-\end{itemize}
To evaluate whether our program works, we created three \textit{shell scripts}:
one for generating test data (\textit{generate\_test\_data.sh}), one for testing
the validity of the program's output (\textit{test.sh}), and one for measuring
the execution time of the program (\textit{benchmark.sh}).
-The script that generates test files, generates files of size $n$ for all $n\in
-\{10000\cdot i \mid 0 <= i <= 10\}$ (a size of 10, means the file consists of
-10 coordinates). For each $n$ we also create three different kinds of test files. Now we can use our \textit{test.sh} script, which runs the
-program with each of the generated test files and compares the output with the
-output from the \textit{builtin} function \textit{sort}.
+The script that generates test files produces files of sizes 0, 5000, 10000,
+50000, 100000, 500000, and 1000000 (a size of 10 means the file consists of 10
+coordinates). For each size we also create three different kinds of test
+files: one where all of the data is random, one where the data is already
+sorted, and one where it is reverse sorted. This way we can observe both the
+average case and the worst case of any algorithm we decide to use. Now we can
+use our \textit{test.sh} script, which runs the program with each of the
+generated test files for every sorting algorithm of the program, and compares
+the output with the output from the \textit{builtin} function \textit{sort}.
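As an illustrative sketch only (not the repository's actual test.sh), a check of this kind could be written as follows. The program name ./sorter, its --algorithm flag, and the test_data/ file naming are assumptions about the interface; the exact sort(1) invocation depends on how the coordinates are formatted.

#!/usr/bin/env bash
# Sketch of a correctness check in the spirit of test.sh (names are assumed).
for file in test_data/*.txt; do
    for alg in isort qsort; do
        # Compare our program's output against the system sort utility.
        if ! diff <(./sorter --algorithm "$alg" "$file") <(sort "$file") >/dev/null; then
            echo "FAIL: $alg on $file"
            exit 1
        fi
    done
done
echo "all tests passed"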
Given that all of the tests pass, we can then run \textit{benchmark.sh}, which times
-how long it takes for out program to run with each test file, and saves the
-result in a \textit{csv} file.
+how long it takes for our program to run with each test file and each algorithm,
+and saves the results in a \textit{csv} file. The results from this \textit{csv}
+file are shown in Figure~\ref{fig:benchmark}.
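To make the benchmarking step concrete, the following is a minimal sketch of a timing loop that produces a CSV with the columns the plot below expects (type, size, real). It is not the repository's benchmark.sh; the program name ./sorter, its --algorithm flag, and the test_data/<kind>_<size>.txt naming scheme are assumptions.

#!/usr/bin/env bash
# Sketch of a benchmark loop in the spirit of benchmark.sh (names are assumed).
echo "type,size,real" > benchmark_results.csv
for kind in random sorted reverse; do
    for size in 0 5000 10000 50000 100000 500000 1000000; do
        file="test_data/${kind}_${size}.txt"
        for alg in isort qsort; do
            # GNU time with -f '%e' prints elapsed wall-clock seconds on stderr.
            real=$( { /usr/bin/time -f '%e' ./sorter --algorithm "$alg" "$file" >/dev/null; } 2>&1 )
            echo "${alg}_${kind},${size},${real}" >> benchmark_results.csv
        done
    done
done

With such a file in place, the commented-out pgfplots code in the figure only needs to be re-enabled and pointed at the CSV path.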
\begin{figure}[H]
- \centering
- \begin{tikzpicture}
-
- \begin{axis}[
- xlabel={size},
- ylabel={seconds},
- ymode=log,
- width=\textwidth,
- legend pos=north west,
- scatter/classes={
- isort_random={mark=*,red},
- isort_reverse={mark=*,green},
- isort_sorted={mark=*,blue},
- qsort_random={mark=*,cyan},
- qsort_reverse={mark=*,magenta},
- qsort_sorted={mark=*,yellow}
- }
- ]
-
- \addplot[
- scatter, only marks,
- scatter src=explicit symbolic
- ]
- table [x=size, y=real, meta=type, col sep=comma]
- {../test/benchmark_results.csv};
- \legend{isort\_random, isort\_reverse, isort\_sorted, qsort\_random, qsort\_reverse, qsort\_sorted}
-
- % \addplot[domain=100:100000, samples=10, thick, densely dotted, black]{x/250000};
- % \addlegendentry{$\mathcal O(\text{size})$}
- %
- % \addplot[domain=100:100000, samples=10, thick, dashed, black]{(x*x)/4000000000};
- % \addlegendentry{$\mathcal O(\text{size}^2)$}
-
- \end{axis}
- \end{tikzpicture}
- \caption{}\label{fig:}
+% \centering
+% \begin{tikzpicture}
+%
+% \begin{axis}[
+% xlabel={size},
+% ylabel={seconds},
+% ymode=log,
+% width=\textwidth,
+% legend pos=north west,
+% scatter/classes={
+% isort_random={mark=*,red},
+% isort_reverse={mark=*,green},
+% isort_sorted={mark=*,blue},
+% qsort_random={mark=*,cyan},
+% qsort_reverse={mark=*,magenta},
+% qsort_sorted={mark=*,yellow}
+% }
+% ]
+%
+% \addplot[
+% scatter, only marks,
+% scatter src=explicit symbolic
+% ]
+% table [x=size, y=real, meta=type, col sep=comma]
+% {../test/benchmark_results.csv};
+% \legend{isort\_random, isort\_reverse, isort\_sorted, qsort\_random, qsort\_reverse, qsort\_sorted}
+%
+% % \addplot[domain=100:100000, samples=10, thick, densely dotted, black]{x/250000};
+% % \addlegendentry{$\mathcal O(\text{size})$}
+% %
+% % \addplot[domain=100:100000, samples=10, thick, dashed, black]{(x*x)/4000000000};
+% % \addlegendentry{$\mathcal O(\text{size}^2)$}
+%
+% \end{axis}
+% \end{tikzpicture}
+    \caption{Benchmarking data for insertion sort and quicksort on random,
+    sorted, and reverse-sorted data.}\label{fig:benchmark}
\end{figure}