Is swap space in Linux something like reserved RAM?
Something that avoids spikes and calculation lag?

Of course I know I can't use a program that needs 2GB of RAM on a 1.5GB-RAM computer with 2GB of swap.
But if I use a 2GB program and run one calculation that in fact uses 2.5GB of RAM, it won't "lag" or hang around, right?

Because there is not always enough Random Access Memory (RAM) available for compilation processes, it is a good idea to use a small disk partition as swap space. This is used by the kernel to store seldom-used data and leave more memory available for active processes. The swap partition for an LFS system can be the same as the one used by the host system, in which case it is not necessary to create another one.

Say "Linux From Scratch" docs.

Swap space is used as virtual memory. That is, if your system and applications need more memory than you physically have, the swapper will move the least-recently-used memory blocks to the swap space, freeing that physical memory for active processes. This can be expensive, since it entails physical I/O to move the memory blocks from RAM to disc. When your process that is using a lot of memory is done, memory that was swapped out but is still in use by other processes will remain in the swap space until it is needed, and only then will it be swapped from disc back into RAM. This is an "on-demand" operation; i.e., the kernel doesn't touch it until it's needed!
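
If you want to watch this happen on your own machine, you can read the kernel's counters directly. Here's a minimal sketch, assuming a Linux system with the standard /proc/meminfo format ("Key: value kB" per line), that prints the current swap figures:

```python
#!/usr/bin/env python3
# Minimal sketch: read current swap usage from /proc/meminfo on Linux.

def meminfo():
    """Parse /proc/meminfo into a dict of {field: kilobytes}."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            # Values are reported in kB; take the first numeric field.
            info[key] = int(rest.split()[0])
    return info

m = meminfo()
used_kb = m["SwapTotal"] - m["SwapFree"]
print(f"Swap total: {m['SwapTotal'] / 1024:.0f} MiB")
print(f"Swap used:  {used_kb / 1024:.0f} MiB")
```

If "Swap used" stays near zero while you work, the kernel simply hasn't needed to evict anything yet; that is the on-demand behaviour described above.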

So, it may "lag" (show some latency), but unused memory won't "hang around" if that is what you are asking. Myself, I usually allocate a swap partition at least as large as my physical memory, and sometimes larger "just in case". Example: on my personal workstation, I have 8GB of RAM, but I have a 16GB swap partition. This allows me to have up to 24GB of memory for those rare occasions when I may need it for some memory-intensive computations.

Also note that the Linux system aggressively caches frequently used file system data, assuming that you are going to ask for it again. That avoids the need to access it on disc, since physical I/O is VERY expensive compared to RAM access. Since I do a lot of disc I/O, almost 4.5GB of my RAM is currently being used as disc cache. A lot of this is directory information plus recently used video files. So an initial search for a file across my set of disc drives and arrays may take a while, but subsequent searches take less than a second to look through about 10TB of disc. The operating system will flush this cached data as necessary to make room for running programs. Again, it will only flush the least-recently-used data, assuming that recently used data will be wanted again. That's not always true, but it's a good bet in the long-term view.
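
You can see that cache on your own machine the same way; here is a sketch that sums the "Cached" and "Buffers" fields of /proc/meminfo (roughly the "buff/cache" column that the free command shows):

```python
#!/usr/bin/env python3
# Sketch: report how much RAM the kernel is currently using for the
# file system cache, summed from /proc/meminfo (values in kB).

cache_kb = 0
with open("/proc/meminfo") as f:
    for line in f:
        key, _, rest = line.partition(":")
        if key in ("Cached", "Buffers"):
            cache_kb += int(rest.split()[0])

print(f"File system cache: {cache_kb / (1024 * 1024):.1f} GiB")
```

That memory isn't lost: the kernel drops the least-recently-used cache pages automatically whenever running programs need the RAM.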
