My PC has two processors, and I know that each one runs at 1.86 GHz.
I want to measure the clock rate of my PC's processor(s) manually. My idea is simply to compute the quotient between the number of assembly lines a program has and the time my computer takes to execute it, so that I get the number of assembly instructions processed by the CPU per unit of time (this is what I understood a 'clock cycle' to be). I thought of doing it the following way:
- I write a C program and I convert it into assembly code.
- I do:
$ gcc -S my_program.c

which tells the gcc compiler to do the whole compilation process except the last step: transforming my_program.c into a binary object. Thus I have a file named my_program.s that contains the source of my C program translated into assembly code. I count the lines my program has (let's call this number N). I did:
$ nl -ba my_program.s | tail -n 1

and I obtained the following:

1000015 .section .note.GNU-stack,"",@progbits

That is to say, the program has about a million lines of code.
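As a caveat, a .s file also contains assembler directives, labels, comment lines, and blank lines, so a raw line count overstates the number of actual instructions. A minimal filtering sketch (the patterns are an assumption about typical GNU assembler output, not an exhaustive grammar):

```shell
#!/bin/bash
# Count only lines that look like machine instructions in a .s file:
# skip blank lines, comment lines ('#'), directives (start with '.'),
# and labels (an identifier immediately followed by ':').
count_insns() {
    grep -cvE '^[[:space:]]*($|#|\.|[A-Za-z_.$][A-Za-z0-9_.$]*:)' "$1"
}
```

Then `count_insns my_program.s` prints a number closer to the real instruction count than `nl` or `wc -l` would.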
- I do:

$ gcc my_program.c

so that I can execute it. Then I do:

$ time ./a.out

("a.out" is the name of the binary object of my_program.c) to obtain the time (let's call it T) spent running the program, and I obtain:

real	0m0.059s
user	0m0.000s
sys	0m0.004s
I suppose the time T I'm looking for is the first one in the list, the "real" one, because the other two refer to other resources that are running on my system at the very moment I execute ./a.out.
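If you want the measured time in a variable instead of reading it off the screen, bash's built-in time honors the TIMEFORMAT variable (this is bash-specific; a sketch, with sleep standing in for ./a.out):

```shell
#!/bin/bash
TIMEFORMAT='%R'                      # %R = elapsed ("real") time in seconds
# time writes to stderr, so capture stderr and discard the command's stdout;
# replace 'sleep 0.2' with ./a.out for the real measurement.
T=$( { time sleep 0.2; } 2>&1 >/dev/null )
echo "T=$T seconds"
```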
So I have N=1000015 lines and T=0.059 seconds. If I perform the division N/T, I obtain a frequency close to 17 MHz, which is obviously not correct.
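Since bash's own arithmetic is integer-only, the division itself can be done with awk (a sketch using the numbers above):

```shell
#!/bin/bash
N=1000015   # lines counted in my_program.s
T=0.059     # "real" time reported by time, in seconds
awk -v n="$N" -v t="$T" \
    'BEGIN { printf "%.0f lines/s = %.1f MHz\n", n/t, n/(t*1e6) }'
# prints: 16949407 lines/s = 16.9 MHz
```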
Then I thought that maybe the other programs running on my computer and consuming hardware resources (without going any further, the operating system itself) make the processor "split" its processing power, so that the clock appears to run slower, but I'm not sure.
But I thought that if this is right, I should also find the percentage of CPU (or memory) resources my program consumes, because then I could really hope to obtain a well-approximated result for my real CPU speed.
And this leads me to the issue of how to find that 'resource consumption' value for my program. I thought about the

$ top

command, but I discarded it immediately because of the short time my program takes to execute (0.059 seconds): it's not possible to spot by eye any peak in memory usage during such a short interval.
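For a process that short, top is indeed unusable, but bash's time can itself report the CPU share the measured command received, via the %P specifier in TIMEFORMAT (again bash-specific; a sketch, with sleep standing in for ./a.out):

```shell
#!/bin/bash
# %P = (user + sys) / real, as a percentage: the share of one CPU
# that the measured command itself actually got.
TIMEFORMAT='%P'
P=$( { time sleep 0.2; } 2>&1 >/dev/null )
echo "CPU share: ${P}%"
```

A mostly idle process like sleep reports a share near 0%; a compute-bound program reports a share near 100% of one core.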
So what do you think about this? What do you recommend I do? I know there are programs that already do what I'm trying to do, but I prefer to do it with raw bash, because I'm interested in doing it in the most "universal" way possible (which seems more reliable).