Please clarify what you mean. Are you talking about numeric values up to 2^64, or data set sizes up to 2^64 bytes? If the former, you need a 64-bit data type (assuming these are unsigned integer values), which in modern C/C++ is "unsigned long long". Even that can only represent numbers up to 2^64 - 1, and any addition or multiplication whose result exceeds that limit silently wraps around (overflows). If you need results beyond 2^64 - 1, you need a specialized arbitrary-precision ("bignum") library that can manage unbounded values... In the second case (data set size), that is simply impractical - 2^64 bytes is 16 exabytes, which would be about 16 million 1TB discs... So I don't think that's what you mean either.
So, let's go back to the most likely scenario (trying to read your mind) - you want to perform calculations on integers up to 2^64 (more or less). Use unsigned long long variables; modern compilers and systems deal with that just fine. If you need floating point, some compilers also offer a wider "long double" type (80-bit extended precision on most x86 compilers, 128 bits on a few platforms) that can hold larger exact integers. A regular double is only 64 bits, and once you subtract the sign and exponent bits only 53 bits of mantissa remain, so it cannot represent every integer up to 2^64 exactly - it starts skipping integers above 2^53.
Ok. Now it is becoming clear! You have a project to determine prime numbers up to 64 bits in size. That is clearer. Start with a Sieve of Eratosthenes to build a reasonably sized array of about 10,000 elements - on today's computers that takes microseconds. Then you develop a recursive function that does a binary search of the potential answer domain; after all, a proper factor of a number can be no more than half its size, and in fact a composite number must have a prime factor no greater than its square root. Determining primality is basically checking that a number has no factors other than 1 and itself. Since this is probably a school problem, I won't go into it further, except to say that I wrote such a program/function originally over 25 years ago on an old IBM PC (a Compaq Plus, actually) - a 4.77MHz processor with only 512KB of RAM - and it could determine whether a 64-bit number was prime in less time than it took to get my finger off of the Enter key. :-) In any case, with a base sieve of 10,000 elements you can find the answer in 5 or fewer searches (recursive function calls). FWIW, if you build a sieve array of 1,000,000 entries, your recursion depth is 3 or less. I'm not sure whether that would be faster, but I suspect not; my analysis 25 years ago was that 10,000 entries was a good trade-off of space vs. time.