OS - Arch Linux
I assume it could support more than 512GB of RAM.
Motherboard - Intel® X79 Express Chipset
According to this page, the X79 supports 64GB of RAM @ 1600MHz.
CPU - Intel® Core™ i7-4960X
According to this page, the i7-4960X supports 64GB of RAM @ 1866MHz.

The OS, motherboard, and CPU all support 64GB, with 1600MHz as the highest clock common to all three. Can I just install four 16GB modules at 1600MHz and expect it to work, or is there something else I have to check?

All 4 Replies

I see no reason why this wouldn't work. You picked the chipset that is recommended for the given processor, and it supports the given memory clock rate as well as the 64GB total capacity. As for Linux, the kernel has supported terabytes of RAM for a decade at least, and currently supports about 128TB of RAM and 4096 cores (and running more than a million processes at once), if I'm not mistaken (remember, Linux is still primarily a server-grade operating system, and thus can support really beefed-up systems).
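
If you want to verify after installing, check what the kernel itself reports. Here is a minimal sketch in C that prints the MemTotal line from /proc/meminfo (the same figure that free shows); nothing exotic is assumed, just a standard Linux /proc:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* /proc/meminfo is the kernel's own view of installed memory. */
        FILE *f = fopen("/proc/meminfo", "r");
        if (!f) { perror("fopen"); return 1; }
        char line[256];
        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, "MemTotal:", 9) == 0) {
                fputs(line, stdout);  /* on a 64GB box: a bit under 67108864 kB */
                break;
            }
        }
        fclose(f);
        return 0;
    }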


How is 4096 cores a limitation for Linux? I can understand that 128TB comes from a 48-bit address, but 4096 is what, a 12-bit number? Why wouldn't they expand it to, for example, 32 bits and get support for about 4.3 billion cores?

The 4096-core limit is built into the kernel. If you want to modify the kernel, you could increase that. There are Linux supercomputers that have lifted that limit considerably - not a task for the noobie kernel hacker... :-)
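
For the curious: the knob lives in the kernel's build configuration. The option name below (CONFIG_NR_CPUS) is real, but the default values vary by kernel version and architecture, so treat the numbers as an assumption to check against your own tree:

    # Kernel .config on x86_64 (defaults are version-dependent)
    CONFIG_NR_CPUS=4096   # hard cap on CPUs the scheduler will manage
    CONFIG_MAXSMP=y       # convenience option that selects the maximum NR_CPUS

The limit is compiled in, not a runtime setting, so you have to rebuild the kernel after changing it.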

As for the 128TB RAM limit, that is probably due to the fact that the x86 architecture is still a segmented one. Each segment covers 48 bits, and 16 bits (of the 64-bit register size) are reserved for the segment. You could in principle use 64K x 128TB of RAM, but you would have to implement the segmentation code in the kernel yourself. Having done this in the deep, dark past of i286 processors, I can testify that it is a real PITA.

If I'm not mistaken, the 48-bit limit on RAM addresses on 64-bit platforms has to do with virtual address spaces. Virtual addressing (for those who don't know) translates the relative logical addresses that processes use for their own memory (program data) into physical addresses in RAM. This is implemented by the CPU architecture (firmware / hardware) and is specified as part of the instruction set, not by the operating system. So, unless you were to implement some complicated workaround scheme (which, as rubberman says, is a real PITA), that is the effective limit. The x86-64 (amd64) instruction set basically makes virtual addresses 48 bits long, which limits addressing to 256TB total (and the Linux kernel reserves half of that for kernel-space code, leading to the 128TB limit). Unless the OS imposes further limitations, the supported amount of RAM depends mostly on the instruction set and the size of its virtual address space.
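
To make the arithmetic concrete, here is a small C sketch that derives the 256TB and 128TB figures from the 48-bit width (assuming the classic x86-64 paging scheme described above):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        const unsigned vaddr_bits = 48;             /* x86-64 virtual address width */
        const uint64_t total = 1ULL << vaddr_bits;  /* 2^48 bytes of virtual space */
        const uint64_t user_half = total / 2;       /* Linux splits it user/kernel */

        printf("total virtual space: %llu TiB\n",
               (unsigned long long)(total >> 40));      /* prints 256 */
        printf("user-space half:     %llu TiB\n",
               (unsigned long long)(user_half >> 40));  /* prints 128 */
        return 0;
    }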

I found this list of limits for kernel 3.0.10:

https://www.suse.com/products/server/technical-information/#Kernel

The limit there is 64TB, but apparently even the people at SUSE have not been able to put together a system with that much RAM.

There are Linux supercomputers that have lifted that limit considerably - not a task for the noobie kernel hacker... :-)

Aren't supercomputers using very different instruction sets, like SPARC, System z, or PowerPC? These tend to be much more scalable when it comes to the number of CPU cores and the amount of RAM. I think those instruction sets can support petabytes of RAM (as, for example, in that SUSE table for IA-64 and PPC64).

How is 4096 cores a limitation for Linux?

I have no specific knowledge of why that limitation exists. I assume, like other limits, that it's just about the number of bits used to identify each core (4096 is 12 bits). You have to understand that there are trade-offs in these kinds of things. The kernel or CPU architecture needs identifiers for specific threads, specific cores, specific addresses, etc., and it has to allow for a certain maximum number of unique values for each. They cannot simply say "let's support a maximum of everything" because that would mean a lot of unused identifiers. For example, what's the point of supporting 4 billion cores if nobody uses more than a couple of thousand (e.g., on servers)? What's the point of supporting exabytes of RAM on an instruction set that is mostly used in PCs that rarely have more than a dozen gigabytes? As an illustration, if you need numbers to identify (1) the process, (2) a memory address, and (3) the core being used, it would be a huge waste to use 3 separate 64-bit registers for that; it's much more efficient to use 1 or 2 64-bit registers split into bit-fields limited to "sensible" values, freeing up registers or cache memory for other purposes, such as actually computing things, which is what a computer is for.
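
As a toy illustration of that bit-field packing (the field names and widths here are invented for the example; they are not what any real kernel uses):

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical: pack three identifiers into one 64-bit word
     * instead of burning three full registers on them. */
    struct packed_id {
        uint64_t core : 12;  /* 2^12 = 4096 distinct cores       */
        uint64_t pid  : 22;  /* ~4.2 million process IDs         */
        uint64_t addr : 30;  /* offset within some memory region */
    };

    int main(void) {
        struct packed_id id = { .core = 4095, .pid = 12345, .addr = 678 };
        printf("all three fit in %zu bytes\n", sizeof id);  /* 8 bytes */
        printf("core=%u pid=%u addr=%u\n",
               (unsigned)id.core, (unsigned)id.pid, (unsigned)id.addr);
        return 0;
    }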
