How to measure the quality of my code?
The best and most direct way is to check the assembly code generated by your compiler at different optimization levels.
//EDIT
I didn't mention benchmarking, because your question is about comparing two pieces of source code that use different language constructs to do the same job.
Don't get me wrong: benchmarking is the recommended way of assuring general software performance, but in this particular scenario it might be unreliable, because of the extremely small execution time frames basic operations have. Even when you calculate amortized time from multiple runs, the difference may depend too much on the OS and environment and thus pollute your results.
To learn more on the subject I recommend this talk from CppCon; it's actually quite interesting.
But most importantly,
A quick peek under the hood by exploring the assembly code can tell you whether two statements have been optimized into exactly the same code. That might not be so clear from benchmarking alone.
In the case you asked about (if vs. the ternary operator) it should always lead to the same machine code, because the ternary operator is just syntactic sugar for if, and physically it's the same operation.
Analyse the time complexity of the two algorithms. If they seem competitive, benchmark:

1. Develop two programs that solve the same problem, but with a different approach.
2. Provide a sufficiently large input for your problem, so that the timing is not affected by other (OS) overheads.
I have some methods in Time measurements to time code. Example:
#include <stdio.h>
#include <sys/time.h>
#include <time.h>

typedef struct timeval wallclock_t;

void wallclock_mark(wallclock_t *const tptr)
{
    gettimeofday(tptr, NULL);
}

double wallclock_since(wallclock_t *const tptr)
{
    struct timeval now;
    gettimeofday(&now, NULL);
    return difftime(now.tv_sec, tptr->tv_sec)
         + ((double)now.tv_usec - (double)tptr->tv_usec) / 1000000.0;
}

int main(void)
{
    wallclock_t t;
    double s;

    wallclock_mark(&t);
    /*
     * Solve the problem with Algorithm 1
     */
    s = wallclock_since(&t);
    printf("That took %.9f seconds wall clock time.\n", s);
    return 0;
}
You will get a time measurement. Then you solve the problem with "Algorithm 2", for example, and compare these measurements.
PS: Or you could check the assembly code of each approach, for a more low-level comparison.
One of the ways is to use the time function in the bash shell, repeating the execution a large number of times. This will show which is better. Also make a template which does neither of the two, so you know the baseline overhead.
Please take the measurements for many cases and compare averages before drawing any conclusions.
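A minimal sketch of that approach, with `/bin/true` standing in for your two compiled programs (substitute your own binaries) and the shell no-op `:` as the do-nothing template:

```shell
# Time many repeated runs of each program; repetition amortizes
# per-process startup noise. Replace /bin/true with your binaries.
time for i in $(seq 1 100); do /bin/true; done

# Baseline "template" that does no work, to estimate the loop and
# startup overhead you should subtract from both measurements.
time for i in $(seq 1 100); do :; done
```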