Hello, I am not a Linux/Unix expert, so maybe this is a simple question to answer; I am just not sure. I have a server with 8 cores (running CentOS) that executes a batch job once a day for a few minutes. I tried to monitor the CPU usage of each core while the batch job runs, using the top command and then pressing '1'.

There I saw something that seemed really strange to me. Initially the job was executing on Cpu5 (that was obvious because the job uses almost 100% of whatever CPU it runs on), then Cpu5 usage dropped and Cpu1 usage started to rise, until Cpu1 was at 100%. That happened several times, switching to other CPU cores as well (randomly, as far as I can tell).

Can a single job that is not using threads change which CPU core it is executed on? Is that normal? And if so, why does it happen?

Thank you for your responses in advance.

Could it be that the job is not changing the core it executes on, but rather that the indexes in the report from the 'top' command change over time (e.g. the core shown as Cpu1 for a while becomes Cpu5 after a data refresh)?
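One way to rule that out is to stop trusting top's display and ask the kernel directly which CPU the process last ran on: on Linux that is field 39 ("processor") of /proc/<pid>/stat. A minimal sketch, assuming a Linux box; it inspects its own process via "self", and for your batch job you would substitute its real PID:

```python
# Which CPU did a process last run on? (Linux only.)
# Field 39 of /proc/<pid>/stat is "processor", the CPU the task
# last executed on. Here we inspect our own process ("self").

def last_cpu(pid="self"):
    with open(f"/proc/{pid}/stat") as f:
        data = f.read()
    # The comm field (field 2) may contain spaces and parens, so
    # split after the closing ')' rather than naively on whitespace.
    fields = data[data.rindex(")") + 2:].split()
    # fields[0] is field 3 of the file, so field 39 is fields[36].
    return int(fields[36])

print("last ran on CPU", last_cpu())
```

Sampling this in a loop while the job runs will show whether the process genuinely migrates between cores, independently of how top labels them.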

I'd have to know a lot more about the app. Many batch jobs are run as scripts, and for each line in the script the task scheduler may start the new task on another CPU, according to its rules.

If you want to get into this you'll have to know more about the batch job and how it works.

https://www.cs.columbia.edu/~smb/classes/s06-4118/l13.pdf discusses CPU affinity, if you want to lock the job to a single CPU.
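For example, a process can pin itself (or be pinned by a wrapper) through the sched_setaffinity interface, which Python exposes on Linux as os.sched_setaffinity. A sketch, assuming a Linux box; from the shell, `taskset -c 0 ./batch_job` achieves the same thing:

```python
import os

# Show which CPUs the current process is allowed to run on.
# The argument 0 means "this process".
allowed = os.sched_getaffinity(0)
print("allowed CPUs:", sorted(allowed))

# Pin this process to a single core (CPU 0 here), so the
# kernel's load balancer can no longer migrate it elsewhere.
os.sched_setaffinity(0, {0})
print("now pinned to:", sorted(os.sched_getaffinity(0)))
```

Pinning trades away the scheduler's freedom to balance load, so it is usually only worth doing for cache-sensitive workloads or for benchmarking.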

This is a form of load balancing, or system optimization. It is normal behavior and is built into the kernel. In any case, top is not the best tool for what you are trying to do. Check out sar, and read its man page.
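Both top and sar ultimately read the per-CPU counters in /proc/stat, so you can also sample them yourself. A rough sketch, Linux only; the values are cumulative jiffy counts, not percentages:

```python
# Read per-CPU time counters from /proc/stat (Linux only).
# Each "cpuN" line holds cumulative jiffies spent in user,
# nice, system, idle, ... -- the raw data behind top and sar.

def cpu_times():
    times = {}
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("cpu"):
                name, *vals = line.split()
                times[name] = [int(v) for v in vals]
    return times

for name, vals in cpu_times().items():
    print(name, "user =", vals[0], "idle =", vals[3])
```

Taking two samples a second apart and diffing the counters gives per-core utilization, which is essentially what `sar -P ALL 1` reports.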

"This is a type of load-balancing or system optimization." Actualy I was asking if this is happening (you wrote that it does) how and why . Why its load-balancing system optimization to assign a job from one core to other ? Does it have cost ? How ? (the WHY question is more important)

As to the WHY part of your question, you would have to dig into the load balancer used by your Linux kernel. Linux has used many balancers since it first rolled out, and I didn't bother to keep track, since it worked and I didn't need to be that close to the kernel's inner workings. But that doesn't mean you can't investigate which balancer your kernel uses.

As to the cost: I found that more cores reduced total run and startup times, so the load balancer definitely paid for itself.

I remember it was also used to reduce overheating issues. I cannot find the article anymore, but search for Dynamic Thermal Management techniques.

commented: Definitely hot! Impact upon performance is minimal, but impact (positive) on thermal issues is major. +14
commented: That's hot. +12

When I compile a Linux kernel and use too many cores (my server is a dual-CPU, 8-core machine), it gets very hot. I monitor the temperature with the sensors tool, which also monitors my RAM temperature. It has saved my ass on a number of occasions! :-)