does that apply to samba logins (does the .bashrc file get read during a samba login...)?
Yes.
I have compiled and linked my code, written in C and C++ and calling certain Windows APIs, into a Win32 .dll using MinGW in Code::Blocks running on Windows XP. My code has also been compiled and linked to create a Win32 .exe, which works successfully while communicating serially from the PC to a motherboard through either physical port COM1 or COM2, as selected by a .ini file.
I have some open source C# code, whose GUI identifies the available physical or virtual COM ports and allows selection of any available COM port for serial port communications. I can use the .ini file to use the selected COM port in the Win32 .dll. I can then call the Win32 .dll from the rewritten C# code using a P/Invoke call, and use MonoDevelop on Ubuntu Linux to compile and link my code to create an executable which runs on Linux and Mac OS X.
This sounds like an incredibly convoluted setup to do something extremely simple. From what I understand, you need a little GUI dialog that will list the available serial ports and allow the user to select one. That's fewer than 100 lines of code, and at most a few hours of work to write, assuming no familiarity at all with the needed APIs. Why go through all this trouble?
Would the GUI element allowing the identification and selection of physical or virtual COM ports and the Win32.dll GUI element still work as on Windows?
…
I think the most common solution is to use on-the-fly encryption of the backup file-system. Tools like dm-crypt (part of the Linux kernel for quite a while now) and TrueCrypt allow you to mount an encrypted file-system, just like you mount an HDD partition, but protected by an encryption key. The physical data itself remains encrypted all the time, but it gets transparently encrypted and decrypted as the user (with permission) reads or writes to the file system. From the perspective of the user, after the authentication has passed, it just looks like a normal file system (i.e., it is a kernel-space device mapper). So, obviously, backup software like rsync can run on top of it too. It is quite trivial to set this up and all the tools involved are standard-issue Linux tools; here are some very simple instructions. Here's another.
As for the transmission, well, rsync can use SSH as a remote shell to the backup machine, and all communications are encrypted, so this is a non-issue.
Samba shares can also run on top of an encrypted device.
The basic procedure in both cases (rsync or a Linux-based Samba server) is the same: after you have set up an encrypted partition (instructions in the links given), you just put the few commands to mount the encrypted drive into the .bashrc file and the few commands to unmount it into the .bash_logout file, and you do so for each user account that might log in to …
My very first language was RapidQ (a freeware clone of Visual Basic). But very quickly after that, it was Delphi (Object Pascal) which I played around with for quite some time, while flirting with the idea of moving to C++, which I eventually did. But all throughout, I've also experimented with or used many other languages, too many to enumerate (mainly C, Fortran, Matlab, LabView, Visual Basic, HTML, GLSL, Java / C#, inline Assembly, database languages (SQL, XQuery, etc.), scripting languages (Python, Bash, Batch, etc.), etc.). After all this, my language of choice is always C++ (as many of you know), for virtually everything that is "real programming", in other words, excluding: file-hauling tasks that are done with scripts (usually bash), and then all the documentation-related tasks (LaTeX documents / presentations / posters, HTML web-pages, etc.).
Why anyone praise SCIENCE before GOD?
I praise science because it's awesome (even though careless fools can use it to do bad things).
I don't praise God because there isn't a shred of evidence that such a being exists. I don't worship Santa Claus either.
Why anyone praise SCIENCE before GOD?
Well, if science is just another part of God's "wonderful creation", then get on board with the plan and concentrate your efforts on science; don't waste time praising God. He would probably prefer to see you getting busy fulfilling His plan for the fruition of science.
Praising science has a practical purpose, i.e., to promote it, to encourage people to learn about it and develop their critical thinking, and to participate in the advancement of society. The last 3-4 centuries have been the most fruitful; let's keep it up!
Praising God has the effect of lending credence to many of the corrupt leaders who lead the "flock", those who twist some vague old scriptures into any meaning that serves them. Praising God reinforces the idea that it's good to believe in things without evidence, leaving the door wide open to charlatans and propagandists. Praising God sends the message to your fellow humans that you do not care to work with them, to discuss with them, to reason with them, and to compromise with them for a better world for all, but that you'd rather blindly obey some rigid cosmic law very incompetently handed down by …
I don't mind that there are lots of "inferior" programmers, in the sense that their knowledge is limited to being able to push the right buttons here and there to get the job done without much understanding of either the technical details or the theoretical aspects of things. I don't mind that because we need a lot of those people, after all, the bulk of all the software being developed worldwide is absolutely trivial to write (e.g., mobile apps, web-pages, etc.), I mean, trivial in the sense that it doesn't require deep technical know-how or fancy out-of-this-world algorithms.
The problem is, whether you're interested in learning to push buttons for a living, or in learning the deep technical know-how, or in creating the next fancy out-of-this-world algorithms, what do you do? You enroll in a CS degree. This is unsustainable. It creates introduction-to-everything curriculums that try to mix all of these things into one, and the people who graduate from those CS degrees are led to believe that they are now experts at all these things when they are at best a novice at one of them (and they often have an attitude: big talkers, little doers). And as Schol-R-LEA said, the people who actually turn out to be really brilliant are those who went out of their way to really learn the subject, and they often have a lot of trouble getting that recognized as they are stuck in a crowd of average button-pushing programmers with basically the same …
:'( but how will i live without him ? he is my heart ! to whom can i tell my feelings ? my parents are also so impatient. his ambitions, care, love , he himself i gonna miss.
I'm sad to hear that. I think you ought to tell him what you just said there. I had a feeling from your original post that this was more of a cry of pain than anything else. Your grandpa will live on in the memories you share and in the things he inspired in you. Cherish every moment you have left with him, but don't be afraid, everything's gonna be alright. Talk to him about it; after all, I'm sure he has faced plenty of deaths of loved ones in his own life.
@imBaCodes:
Its not no one was looking for a cure . Its Because that illness does not exist in the past centuries.
Why do we have this desease nowadays? its because of Science noot all things discovered by science
You really must be joking, or insane. It's true that many illnesses did not exist in past centuries, that is, when life expectancy was around 25-30 years. Since the advent of medical science (mostly germ theory, hygiene, vaccines, and penicillin), life expectancy has almost tripled. Many of the old-age illnesses didn't exist because very few people made it to that age, and those that did died of "old age" or "natural death", which are just umbrella terms …
I second deceptikon on that, grab some screen-shots and some UML graphs, print them out, and pop them into a nice-looking sleeve that you can carry to the interview. Hand them to the interviewers at the start of the interview as a "here are some examples of my past projects" such that they can either look them through right away and inquire about them, or they can look through them during the interview as a visual aid.
There really isn't enough room on a resume to start elaborating too much on projects, and you can't really do them justice without visual aids like screen-caps and UML-like diagrams.
you guys must learn java programing language because i think aproximate one billion devices are running java...
Since most JVMs and most components of Java's standard library implementations are actually programmed in C / C++, and most OSes are programmed in C / C++ / ASM, the number of devices running Java, and for that matter, the amount of Java code running, can only be a small fraction of the C / C++ code running out there. What's left of your argument now? These kinds of appeals and bullet-points are meaningless drivel, just points that salespeople make for the pigeons. Critical thinking is an important skill; I would suggest perfecting your skills in that domain first.
It's also somewhat of a post hoc fallacy because there's an implicit assumption that interest in Java is the cause of a "great future".
I totally agree. Most great programmers in the world have at least some experience with Java (as I have, even though I barely ever need to read or write Java code); does that mean that their great skills or success come from their knowledge of Java? I doubt it. Even if you are programming in Java all day and are really good and successful with it, who says you wouldn't do just as well if you happened to be programming in some other language? This is like saying that most Nobel prize winners are American or British, so it must mean that speaking English makes you more …
The language standard and the standard library are developed as a negotiation between implementors (who need to comply with it) and users (who want features from it). If the demand from users for a particular feature is very strong, and all implementors have little or no problem providing it (they often provide it already, just not in a standardized way), then it can go into the standard.
In this case, there isn't much need for unbuffered streams. Any kind of serious reading / writing operations involving files (or, equivalently, cin / cout) will need to be buffered. And there are only very rare occasions (mostly "toy" console programs) where an unbuffered input would be useful (mostly in making the program nicer, but usually not critical to its operation).
Then, how can you specify that the stream should be unbuffered? There are always buffers everywhere. There are usually hardware-cache chips at both ends of a SATA connection, there is buffering done by the operating system on most file I/O operations (and terminal / console I/O is usually implemented via files, or virtual files), and heck, there is also multiple levels of caching (L1, L2, L3) between RAM and the CPU. In this kind of environment, what is "unbuffered"?
Do you mean that there isn't buffering done in user-space (e.g., under the hood of iostream or printf / scanf)? But that still doesn't eliminate the buffering done in kernel-space, in the CPU architecture, and at the hardware level. So, even if the standard had …
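For what it's worth, the closest you can come to asking for "unbuffered" is to disable the user-space buffer. A minimal sketch (note: the effect on cout is implementation-defined, and setvbuf must be called before any I/O on the stream):

#include <cstdio>
#include <iostream>

int main() {
    std::setvbuf(stdout, NULL, _IONBF, 0); // C way: turn off stdio buffering on stdout.
    std::cout.rdbuf()->pubsetbuf(0, 0);    // C++ way: request an unbuffered stream buffer.
    std::cout << "possibly unbuffered output" << std::endl;
    return 0;
}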
An IDE has a number of practical advantages. And they also depend largely on what kind of projects you're working on.
When you're just "writing code": An IDE helps you with:
When you're working on a bigger project: An IDE helps you with:
When compiling and testing the code: An IDE helps you with:
I like it.
Standard file-streams are buffered for efficiency. They read data in advance from the file and temporarily store it in a buffer which is emptied as you extract data from it. So, just use std::ifstream, which has an underlying std::filebuf accessible through rdbuf(). Normally, the buffering is set to be a reasonably good trade-off between not loading too much file data into memory and having enough in-memory data to deliver it in a timely fashion to the program reading it. However, if you want to manually adjust the size of the buffer, you can use the following code (note that the buffer must be handed to the filebuf before the file is opened for it to take effect portably):
#include <cstddef>
#include <fstream>

const std::size_t buf_size = 32768;   // choose some buffer size.
char* my_buffer = new char[buf_size]; // create a buffer of that size.
std::ifstream in_file;
in_file.rdbuf()->pubsetbuf(my_buffer, buf_size); // give the buffer to the filebuf,
                                                 // before opening the file.
in_file.open("C:\\Some\\Path\\To\\file.txt");
// .. read data from 'in_file'
in_file.close();
delete[] my_buffer;
My brother's first computer was a 286 (Intel 80286). Soon after he got it, he bricked it with an accidental DOS command. He wasn't much of a fan of computers after that. He works construction now.
Can I have my $800.00 back now?
You forgot to adjust for inflation. In present-value money, that hard-drive cost you about $2,000.
The trick is just to get a window handle for whatever surface you want to draw on, that is, a valid HWND handle that points to where you want to draw. This can be done easily in Qt. I assume most other GUI libraries would provide a similar thing. It is difficult for GUI libraries to maintain any built-in support for Direct3D due to the high instability of the Direct3D library; once you use D3D, you enroll yourself in years of future maintenance work to keep up with the whims of Microsoft and to hand-code backward compatibility. But, leaving the politics behind, all you need to do is grab the HWND handle of some dummy GUI object (like a Panel, or QWidget) and write a few bits of code to handle resizing and forwarding mouse inputs, as seen in that link for Qt. The rest is just normal D3D.
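For illustration, here is a minimal sketch of the Qt side of this (my assumption being a Windows build of Qt, where a widget's native window ID can be treated as an HWND):

#include <QWidget>
#include <windows.h>

// Sketch: grab the native handle of a dummy widget to use as a D3D render target.
HWND get_render_target(QWidget* render_surface) {
    // On a Windows build, QWidget::winId() yields the native window handle;
    // pass this to D3D, e.g., as the output window of the swap chain.
    return (HWND)render_surface->winId();
}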
I'm not sure, but I'm inclined to think that the indices i and j should be declared within the for-loop statement. Otherwise, each parallel thread will be using the same variable, which is not going to go well. I would try this:
#pragma omp parallel
{
  #pragma omp for // note: 'omp for', not 'omp parallel for', inside a parallel region
  for (std::size_t i = 0; i < pars[1]; i++) // vector<int> pars(some size), pars[1] > 0
  {
    for (std::size_t j = 0; j < pars[1]; j++)
    {
      //...
    } //end j
  } //end i
} //end pragma omp parallel
I'm a bit younger than you guys... The first (real) computer we had was a Pentium MMX 200 MHz with, I believe, a blazing-fast 8 MB 3D accelerator card! So, my first experiences were more along the lines of playing one-on-one death-matches of Duke Nukem 3D over the dial-up modem (i.e., punch in the telephone number on the computer, dial up, and have your friend (and his parents) know to have his computer "answer" the call), and then talking about it over the phone after the game was over. Things have changed a lot since then, but it's also very much the same (but easier and faster).
My parents also had a black-and-white Macintosh Classic (for office work, with, wait for it... Microsoft Word!). But it did have a black-and-white version of Shufflepuck Café, I think I played that game for hours on end (getting a glimpse of the Princess' cleavage was good motivation to try and defeat her!).
I would say this isn't something that you would do in C++. There are many system administration tools that would help you implement this as a script that is scheduled to run periodically and upon connection of any new "directorate".
For example, the rsync utility would be perfect for the job. Just have it scheduled as a cron task or with any other job scheduler.
And if you use shared network drives, then you shouldn't even need to do this at all.
This is really a system administration question.
You don't understand: the instruction #pragma omp parallel for literally means that the for-loop following it will be split up into a number of segments that run in parallel. For example, if you do this:
#include <iostream>
int main() {
#pragma omp parallel for
for(int i = 0; i < 100; i++)
std::cout << i << std::endl;
return 0;
}
Instead of printing 0 1 2 3 4 ... it might print something like 0 25 50 75 1 26 51 76 2 ... (it's probably going to be more random than that). This is because the for-loop will be split into, let's say, 4 threads that execute a segment each, e.g., one thread does [0, 24], another does [25, 49], and so on, all in parallel. And this is just one of many different instructions you can use.
Also, how is OpenMP better/different from standard 'Threads'? (STD::thread)
Standard threads are good for multi-threading, but that's not the same thing as parallel processing. If you were to take a for-loop and split it up into many segments that run in parallel using standard threads, you would have quite a bit of work on your hands (and the code would look nothing like a simple for-loop anymore). With OpenMP, it's just a one-line instruction and the compiler does the rest. Multi-threading is for running different concurrent tasks on different threads, while parallel processing is generally for distributing one big and repetitive task among a …
This certainly exists; the main one that I know of is OpenMP. Pretty much all modern compilers support OpenMP. It is really easy to use: you just put some #pragma statements at the appropriate places, and you configure the number of threads to use (in relation to your number of cores) either in the compilation options (e.g., -fopenmp for GCC) or in the code itself. This is pretty much the minimal example (a parallel for-loop):
int main(int argc, char *argv[]) {
const int N = 100000;
int i, a[N];
#pragma omp parallel for
for (i = 0; i < N; i++)
a[i] = 2 * i;
return 0;
}
I believe there are also other similar tools out there, but OpenMP is by far the most popular. But, of course, the Cadillac of development tools for this purpose is Intel's Parallel Studio.
I disconnected the CMOS battery for quite a while, twice, and the problem won't go away.
You should try the "Clear CMOS" jumper on your motherboard; it resets the BIOS settings to their factory defaults, which is usually more thorough than unplugging the battery. And if that doesn't solve your problem, I doubt that flashing in the BIOS you downloaded is going to work any better.
Fast forward, put a new HDD but it won't accept any windows installation disk. Vista, XP, 7, nothing. It just ignores them everytime and ends telling me no OS was detected.
...
I downloaded the BIOS from gateway but I can't make it run now that the computer runs with linux.
A quick Google search revealed that the Gateway NV53 model has a BIOS flashing procedure that does not require any operating system to even exist on the failed computer (which is smart!). Just read the manual (see the BIOS recovery section). You need a Windows computer to create the recovery USB device, but the actual BIOS flashing procedure is done from that bootable USB device and does not require that you boot up into Windows. I believe that in what you have downloaded from Gateway, you will find a "Crisis Recovery Disk" (or CRISDISK), which is a small bootable environment that will flash your BIOS upon booting from it. Follow the instructions carefully.
This is a good question. Iterators are awesome, but with the implementation of a custom iterator often come a number of non-trivial issues, especially when you're pushing the envelope.
My first recommendation would be to get well acquainted with the different iterator concepts, if you haven't done so already. They are grouped in a logical fashion and it is important to understand the requirements and behavior you need to fulfill.
But I'd like to make it compatible with the STL so that I get access to tons of neat functions like find_if() and so on.
Aligning yourself with standard libraries and idiomatic, modern C++ is certainly a great idea. You should nevertheless be careful and make sure you are not bending your library or the standard library in ways that are too awkward.
Now, before someone starts to say "oh, but combinations is in such and such library...", yeah, maybe, but I have a bunch of those for many other mathematical objects.
I can't help myself, I must point you to the Boost.Iterator library, in case you weren't aware of it yet. They do have a permutation_iterator which you might consider, at least, for inspiration.
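To make the design questions below concrete, here is a minimal hypothetical sketch (not from any library) of a forward-iterator-like class that owns its "current combination" and returns it by reference from operator*:

#include <cstddef>
#include <vector>

// Iterates over the k-combinations of {0, ..., n-1} in lexicographic order.
class combination_iterator {
  public:
    combination_iterator(std::size_t n, std::size_t k) : n_(n), comb_(k), done_(false) {
        for (std::size_t i = 0; i < k; ++i)
            comb_[i] = i; // start at the first combination: {0, 1, ..., k-1}
    }
    // Dereference: a reference to the internally-stored current combination.
    const std::vector<std::size_t>& operator*() const { return comb_; }
    // Increment: advance to the next combination, or flag the end.
    combination_iterator& operator++() {
        std::size_t k = comb_.size();
        std::size_t i = k;
        while (i > 0 && comb_[i - 1] == n_ - k + (i - 1))
            --i;                                    // right-most index that can still grow
        if (i == 0) { done_ = true; return *this; } // past the last combination
        ++comb_[i - 1];
        for (std::size_t j = i; j < k; ++j)
            comb_[j] = comb_[j - 1] + 1;            // reset the tail to the smallest values
        return *this;
    }
    bool done() const { return done_; }
  private:
    std::size_t n_;
    std::vector<std::size_t> comb_;
    bool done_;
};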
What would this mean? Probably change next to ++it, but then what is an iterator? does it hold the "current combination" on its own, and calling *it returns a reference to current_combination?
That's an option. Or, it could manufacture the "current combination" when it is …
When I grew up, in the school yard, we used to play soccer only during the winter. The summer was for basketball, kickball, dodgeball, and street hockey, and the winter was for hockey, soccer and football.
Playing soccer in winter time (and outside) is pretty nice actually; I don't know why it isn't more popular. And I certainly don't see any problem with playing under a light snowfall as shown in the linked video, just like they do for football. It's certainly nicer than playing in the rain, and with very little risk of lightning, making it even safer. As for slipping, it's no worse than a wet pitch.
Whoever schedule this game should be kick in the A88.
I don't think you can blame the guy who scheduled the game several months in advance for not being able to make a weather forecast that even the biggest super-computers in the world aren't able to do. And when the snow came, for the reasons mentioned above, I see no reason to cancel the game.
I was reading some book and the author for some reason said that classes just store pointers to the location of all datamembers
That's the way it is in reference-semantics languages like Java or C#. This is not true for C++ (which has value-semantics); either the author is not talking about C++ or he knows nothing about C++. In reference-semantics languages, the stack doesn't exist (it's abstracted away) and automatic variables don't exist either. All memory is managed by a virtual machine, so it's all essentially heap-allocated memory, each data member is also individually heap-allocated, and there's an extra indirection (reference) to everything you do (although sophisticated virtual machines try to optimize away some indirections and some heap-allocations).

This fundamental difference leads to completely different styles of programming, so be very careful about mixing books / personal knowledge of Java/C# with C++. It's a bit like trying to learn snowboarding by taking skiing classes (or vice versa): they are kind of similar, and knowing one helps for learning the other, but if you try to face forward (as you would while skiing) while on a snowboard, all you'll do is fall on your face. If you are transitioning from Java to C++, keep that in mind, and don't blame the shape of the snowboard for your falls; just learn to ride it in the direction it's meant to go.
Can functions even be on the heap?
Functions do not occupy any memory on a per-object basis; only the data members do. Functions are made up of executable code and occupy memory in the code section of the program's memory. They exist in one place for the whole program, and require zero memory per object. And no, they cannot be on the heap; even if you tried, you cannot have executable code in the free store (as far as I know, and any attempt to do so is definitely a very unusual hack).
I understood the class is on the heap but why are all the data-members on the heap too if they are not allocated with new?
I think you seem a bit confused about what the difference between the stack and the heap is. You seem to think of those two things as sort of the same thing (free-store), which is incorrect, so let me just spell that out to make sure the foundation is solid.
The stack is a static chunk of memory that is given to the program (at startup) to allow it to create local (automatic) variables within functions. We generally say that the stack grows as functions get called (entered) and shrinks when they return, which is a pretty accurate way to picture it. Think of the stack as a large static array (usually between 1 MB and 8 MB), and you start with a pointer sptr to the start of that array. You enter …
I would just chip in another vote for Qt. I've had nothing but pleasant experiences working with this library and its tools (drag-drop designer, IDE, qmake and cmake add-on); it's been smooth sailing all the way, the feature set is very complete and stable, and applications look good, and the same, everywhere (OS-wise). Even for Windows-only work, I would go for Qt, hands down (with maybe only Borland's VCL beating it, but it's too outdated now).
First, a bit of vocabulary. Given a class, an object of that class will contain data members and base-class objects (the instance of its base-classes), these two things are referred to as sub-objects.
So, the fundamental thing to understand here is that, from a memory perspective, an object and the collection of all its sub-objects are one and the same thing. So, if you have this simple class:
struct Class1 {
int a;
int b;
};
Then, an object of class Class1 has two sub-objects, i.e., two integers. In memory, an object of class Class1 is two integers. If the object is on the stack, it is two integers on the stack. If it is on the heap (free-store), it is two integers on the heap.
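You can verify this directly; here is a tiny demonstration (padding aside, the object is exactly its sub-objects, and the first member lives at the object's own address):

#include <iostream>

struct Class1 {
    int a;
    int b;
};

int main() {
    Class1 c;
    std::cout << sizeof(Class1) << std::endl;              // typically 2 * sizeof(int)
    std::cout << ((void*)&c == (void*)&c.a) << std::endl;  // prints 1: 'a' lives inside 'c'
    return 0;
}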
As for the Foo class, then, in memory, an object of that class is a std::string. If the Foo object is on the stack, so are all its sub-objects, because they are one and the same. Sub-objects are just the sub-divisions of the memory that the object occupies. So, sub-objects always reside in the same memory (stack vs. heap) as the object they are a part of.
It's that simple. There is no indirection here. When you access a data member of an object, you are not being redirected somewhere else in memory, you are merely accessing a specific part within that object. This is one of the crucial aspects of value-semantics. In reference-semantics languages (like Java/C# and many others), everything is …
how did Tony figure his out?
I guess something along the lines of:
Get your public IP:
$ lynx -dump ifconfig.me | grep 'IP Address'
Get its location:
$ lynx -dump http://www.ip-adress.com/ip_tracer/?QRY=<insert-ip-here> | grep address | egrep 'city|state|country|latitude|longitude'
I think you misunderstood; Tony was just making a point about the lack of precision of IP-address geo-location. For example, your IP location is:
IP address country: United States
IP address state: California
IP address city: Ladera Ranch
IP address latitude: 33.5680
IP address longitude: -117.6328
So, you can judge for yourself how accurate that is.
I guess it would make more sense to have the highlighting occur on the "Watched Articles" at the top of the page. I like the idea.
Writing a GUI application in C++ must involve one of many GUI libraries out there. As Tinnin mentioned, some possibilities are SFML or SDL, and in that same vein there is the Win32 API. All these options are very low-level. They can be useful for making a computer game where the window is basically just a window with a 2D/3D display.
If you want a more complete GUI, with menus, buttons, list-views, etc., you need a more elaborate GUI library. One of the front-runners, and the most used library and toolset for this, is Qt. This library is very complete, well-designed, and easy to use. And it works on all platforms. Another, less popular, alternative is wxWidgets.
In the Microsoft world, they have pretty much given up on providing a modern C++ GUI library; you have to move to .NET in order to use WinForms. Technically, you could use C++/CLI, which is a hybrid between C++ and C#, but I wouldn't recommend it; if you want to do .NET, just use C# (which, in a nutshell, is almost identical to Java).
For me, these characters seem to appear when I do ALT + SPACE. I sometimes get those because I use a French-Canadian keyboard where the {} and [] brackets (and a few others) require the ALT key and are often followed or preceded by a space; so the ALT + SPACE combination happens once in a while and leaves those \302 and \240 stray characters, and the compiler gets angry when it sees them. The character pair \302\240 is the "NO-BREAK SPACE" from UTF-8 encoding (302 and 240 are indeed octal numbers). The character \302\206 is probably something similar (all \302 combinations are weird silent UTF-8 things); you can try to see what key combination generates it on your system, and you might also find a way to disable that key combination too. (Question: what OS is this? I suspect a Linux distro or a Mac, given the native use of UTF-8 to encode keystrokes.)
The IDE that I am using (KDevelop, which uses the Kate editor) highlights those characters when they occur, so I can immediately correct it. Normally, these two characters, together, appear as one space (or nothing at all), and if your editor doesn't highlight it, it will go unnoticed. So, you can try and get an editor that will highlight those stray characters (or see what you can do with the settings of your current editor). Or, you can write a simple character-replacement script that will find those character combinations and replace them with a …
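For instance, here's a minimal sketch of such a filter (my assumption: you want to turn each stray pair, 0xC2 0xA0 in hex, into a plain space; it reads stdin and writes stdout):

#include <cstdio>

int main() {
    int c;
    while ((c = std::getchar()) != EOF) {
        if (c == 0xC2) {              // first byte of a possible no-break space
            int c2 = std::getchar();
            if (c2 == 0xA0) {
                std::putchar(' ');    // replace the pair with a plain space
                continue;
            }
            std::putchar(c);          // not a no-break space; keep both bytes
            if (c2 == EOF) break;
            std::putchar(c2);
        } else {
            std::putchar(c);
        }
    }
    return 0;
}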
creating SomeFunction(int a) of Derived should have no effect on SomeFunction(string b) of Base and no one will think otherwise.
What if the function in the base class is Base::SomeFunction(const char* a) and there's a call to the function like this: my_derived_object.SomeFunction("foo");? There are all sorts of situations in which overloading can have surprising effects; things can get ambiguous, or unintended overloads can be selected instead of the one you expect. You always have to be aware of that and be careful when overloading functions. If there is great code-distance between different overloads, it makes things worse and can cause bugs that are hard to find in a large code-base. The solution of hiding the base class functions, with the option of explicitly pulling them up into the derived class' scope, is not a perfect solution, but it helps, and when weighing the pros and cons, it seems like the better way to go. That's all.
The actual problem is not the compiler, but the system libraries you need to link to. This is why you get all these linker errors. So, what is needed is an installation of GCC with the two sets of libraries, i.e., for 32-bit and 64-bit systems. Try the TDM-GCC releases; these are reliable releases, ahead of the game, and they do provide dual compilation releases (compiler + both sets of libraries).
The CSV format just means Comma-Separated Values. A typical CSV file would look like this:
x, x2, x3
1.0, 1.0, 1.0
2.0, 4.0, 8.0
3.0, 9.0, 27.0
4.0, 16.0, 64.0
It's that simple. Just the values, separated with commas. Excel can also import any other similar thing, like values separated by spaces or tabs. Here's some simple code to generate a CSV:
#include <fstream>

int main() {
    std::ofstream file_out("my_test.csv");
    file_out << "x, x2, x3" << std::endl;
    for (double x = 1.0; x < 10.1; x += 1.0)
        file_out << x << ", " << (x * x) << ", " << (x * x * x) << std::endl;
    return 0;
}
since we are software engineering students at the University of Waterloo in first year we are still learning all the tedious linked list implementations
Keep in mind that linked-lists are theoretically nice and make for good programming exercises in computer science classes, but they are far from being all that useful in practice. Cases where a linked-list is the better choice are few and far between. Can't use them if you want random-access. Shouldn't use them if you want to do frequent traversals. Shouldn't use them if you need to sort the data, or keep it sorted, or do searches. An array of pointers is almost always preferable to a linked-list if the objects are expensive to copy (and cannot be moved, which is very rare) and/or you need them to have a persistent address in memory, and the array-of-pointers comes with the added bonus of less overhead and random-access capability.
This leaves you with very few application areas, most of which end up requiring a hybrid structure (e.g., unrolled linked-list, linked-arrays, cache-oblivious lists, etc., or with an efficient small-object allocator), or have the data distributed over modules or computer-nodes in a linked topology, or in a medium where data mobility is very difficult (e.g., a file-system). The argument also extends to other kinds of linked-structures (linked-trees, adjacency-list graphs, etc.), although in many cases the alternatives are not as easy or convenient to implement and don't pay off as much. But for everyday simple tasks (for which you …
I already created a version that just stored numbers as "before decimal point [unsigned array]" and "after decimal point [unsigned array]".
The decimal point? That's untenable. Are you saying that in order to store the number 1e30 or 1e-30 you need to store a long array of zeros followed by a 1? That's insane. You should look at the way floating-point numbers are stored for real, and emulate that at arbitrary precision (instead of fixed). Where the decimal point falls is completely arbitrary, and has no bearing on the precision nor the storage requirement.
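To see the model to emulate, here's a small illustration using the standard frexp / ldexp functions: a floating-point number is just a mantissa and a separate binary exponent, so 1e-30 costs no more storage than 1.0:

#include <cmath>
#include <cstdio>

int main() {
    double x = 1e-30;
    int exp;
    double mant = std::frexp(x, &exp);  // x == mant * 2^exp, with 0.5 <= mant < 1
    std::printf("%g = %.17g * 2^%d\n", x, mant, exp);
    std::printf("reconstructed: %g\n", std::ldexp(mant, exp)); // back to 1e-30
    return 0;
}

An arbitrary-precision float is the same idea with a big-integer mantissa and a separate (small) binary exponent.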
Second, I must assume that you're doing base-2 arithmetic. Base-2 arithmetic is what the computer understands and works with well. So, I assume you mean "decimal" just in a colloquial sense, not in the sense that you use base-10 arithmetic. Don't work against what is natural for the processor.
The issue is that it is both inefficient and incomplete.
Yeah, that's gonna be a problem. Doing this is really hard, see the GMP and MPFR libraries.
I would like to incorporate complex numbers into the system (I am not, however going to incorporate quartic numbers). I am also going to incorporate vectors and matrices.
That shouldn't be a problem at all. Once you have a good arbitrary precision class, with all the required operator overloads and standard function overloads (sin, tan, log, exp, ldexp, frexp, etc...), you should be able to use std::complex …
This discussion might enlighten you. You find this behavior strange, but understand that the inverse behavior could lead to results just as strange, if not stranger, considering that it would go against the more general scoping rules in C++. There are maintainability issues with allowing functions in the base-class to overload those of derived classes. That's a general maintainability rule: things in a broader, more general scope should not change the behavior of code in a narrower sub-scope. This is why local variables take precedence over global variables of the same name; the same goes between namespace scopes, and between derived and base class scopes.
Think about this. Say you're part of a team of developers working on some big library / application. Your job is to work on class "Derived", and some other developer that you barely know works on class "DeepBase", which is somewhere a few inheritance levels up from the class you're working on. One day, that developer pushes a small change to the DeepBase class' code, and the next day you wake up with the team screaming at you because the code in the Derived class broke the last nightly build. Now, you're stuck plowing through the code to find the source of the problem, and then, after a lot of painstaking research and debugging, you realize that a call to a function of the Derived class was being re-routed (through overloading) to a newly added function in the DeepBase class. This is a maintenance …
I guess the endorsement system can't keep up with the rate at which your "friends" in California register phony accounts and endorse you (i.e., most of your endorsements are from people from California, who registered in the last month, and barely posted anything themselves). You're not exactly coming to this issue with clean hands, are you?
This may be irrelevant but i also noticed that i recieved reps from a user who had 7 rep points and my rep points didn't go up
I guess you're speaking of this post and the rep by this user. 7 rep-points is really low (and coincidentally, the 7 reps you gave him), and it also depends on other factors (post count, time since registration, etc.); that's why this user (from California, registered 2 hours ago...) has a "Power to Affect Someone's Reputation Positively" of 0.
I don't know what you're doing or what you're hoping for, but it sure doesn't smell very good.
This is not a bug; it is the expected behaviour. There is a rule in C++ that overloading of member functions is limited to a single class scope. Basically, calling SomeFunction on a Derived object causes the compiler to look for overloads within the Derived class only. If I compile your code with Clang (which has superior warning messages compared to GCC), I get the following messages:
overload_inheritance_test.cpp:13:10: warning: 'Derived::SomeFunction' hides overloaded virtual function [-Woverloaded-virtual]
bool SomeFunction(const int& A, int& B) const { return false; }
^
overload_inheritance_test.cpp:6:18: note: hidden overloaded virtual function 'Base::SomeFunction' declared here
virtual bool SomeFunction(const int& A) const = 0;
^
overload_inheritance_test.cpp:24:43: error: too few arguments to function call, expected 2, have 1
std::cout << "Hey: " << SomeFunction(5) << std::endl;
~~~~~~~~~~~~ ^
overload_inheritance_test.cpp:13:5: note: 'SomeFunction' declared here
bool SomeFunction(const int& A, int& B) const { return false; }
^
The error is the same as GCC's (unresolved overload), as expected from any standard-compliant compiler, but what is of interest here is the warning preceding it. That basically sums it up: any function in a derived class with the same name as a function in the base-class will hide that base-class function from overload-resolution, which can be useful in some circumstances, but is mostly annoying. I don't really know what the reason is for this rule, but I imagine it has some technical ramifications, otherwise the standard committee wouldn't have imposed it.
The way to fix it is to use the using statement, as …
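For completeness, here is a minimal sketch of that fix (using, for the sake of a self-contained runnable example, a non-pure base function):

#include <iostream>

struct Base {
    virtual bool SomeFunction(const int& A) const { return true; }
    virtual ~Base() { }
};

struct Derived : public Base {
    using Base::SomeFunction; // pull Base's overloads back into this scope
    bool SomeFunction(const int& A, int& B) const { return false; }
};

int main() {
    Derived d;
    std::cout << "Hey: " << d.SomeFunction(5) << std::endl; // now resolves to Base's version
    return 0;
}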
First impression, it seems right to me
I beg to differ.
Computer Systems Technology – Networking
You should write "Computer Systems Technology and Networking" (without quote marks); the dash is confusing, as it looks like a separation between clauses (in the grammatical sense) and breaks the flow of the sentence. Making a major grammatical blunder is not a great way to start a cover letter.
I am a student at St. Clair College studying Computer Systems Technology and Networking looking for a student job as a Network Administrator Assistant.
You should avoid using the same word three times in one sentence. You're a "student", we get it. Also, the term "student job" is not really appropriate; maybe "internship", "employment", "experience in the workplace", etc... For example, consider this re-phrasing:
"As a student in Computer Systems Technology and Networking at St. Clair College, I am eager to gain experience in the workplace."
Or, a bit more forward:
"As a student in Computer Systems Technology and Networking at St. Clair College, I believe I am ready for a challenging internship in a high-standard company. For that reason, I immediately considered Bell Canada and ..."
My interest in working with Bell Canada lead me to the Jobs@Bell website, I noticed Bell Canada offers an internship placement in network technology and IT that I am interested in pursuing.
Be selective about the information you put in a cover letter, you only have a few sentences on average …
would you please show me how to use rand_MAX for entire code?
You could just create a global function like this:
#include <cstdlib>

double drand() {
    return ((double) rand()) / RAND_MAX;
}
That function returns a uniformly-distributed random floating-point value between 0 and 1.
As Jim said, you must see the CPU as an interpreter of machine instructions. It takes in instructions and executes the required actions. In order to have stability over time, it is most practical to have a standard set of instructions that all CPUs (or a class of CPUs) can be made to understand / respond to. Those standard sets of instructions are called Instruction Set Architectures, or usually just "instruction sets". Familiar instruction sets include x86, x86-64, ARM, AVR, PowerPC, and many extensions like SIMD (SSE, SSE2, SSE3..). And many instruction sets are compatible (mostly backward compatible) to provide more stability. Most PC CPUs today will implement either x86 (32bit) or x86-64 (64bit), plus many SIMD extensions. Then, GPUs might support even more SIMD-like extensions (btw, SIMD is for doing many floating-point operations in parallel, which is very useful for 3d graphics). Most other less common instruction sets are for embedded devices or very big computers (servers or super-computers), due to the special needs these environments have.
The instruction set is essentially the language you need for talking to a processor. It is a very simple language, as in, every individual instruction is kind of trivial. It goes a bit like this:
Take the number 2
Take the number 3
Add them together
Give me that number
Now take the number 4
Multiply them together
Keep the result
Multiply again
Give me that number
which would probably execute roughly this code:
int c = 2 + 3;
int …
The dynamic allocation is unnecessary (and leaking); you could simply do this:
int i;
for (i = 0; i < n; i++) {
Neutron neutron;
initNeutron(&neutron);
int steps = 0;
do {
fprintf(file, "%f\t%f\n", neutron.r[0], neutron.r[1]);
++steps;
} while (step_succeeded(&neutron));
fprintf(file,"\n");
step_sum += steps;
if (neutron.absorbed)
++absorbed;
if (neutron.escaped)
++escaped;
}
Another problem is that you translated this line of C++ code:
r = vector<double>(2, 0);
into these lines of C code:
this->r[0] = 2;
this->r[1] = 0;
You misinterpreted what the C++ code meant. The vector was being constructed with the second version of the constructors (see here), which takes the length of the vector as the first argument (2) and then the value with which to fill the vector (0). This means the C code should be:
this->r[0] = 0;
this->r[1] = 0;
Then, another big issue is with the random number generator. In the C++ code, you have this line:
double theta = 2 * pi * rg.rand();
which takes a random number in the range [0,1] (produced by rg.rand()) and multiplies it by 2*pi in order to get a random angle between 0 and 2*pi. In your C code, you have:
double theta = 2 * pi * rand();
which might look like the exact same thing to the untrained eye, but they are entirely different. The C rand() function generates a random integer number between 0 and RAND_MAX (which is usually a …
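Presumably, the intended fix is to normalize the result of rand() to the [0,1] range first:

double theta = 2 * pi * ((double) rand() / RAND_MAX);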
Is there good and bad type of compiler? or wording of the question is incorrect!
Good / bad is too vague; it depends too much on what you want: faster code, faster compilations, wider platform support, better value for your money, better debugging tools, stability, standard compliance, etc...
The main C++ compilers are GCC (open-source, from GNU), ICC (proprietary, from Intel), MSVC (proprietary (but free), from Microsoft), and Clang/LLVM (open-source, from Apple, Google, and others). There are also a few others, like C++Builder (Borland -> Embarcadero) and IBM's XL compiler. Each of them has different qualities and marketing points. Also, most are split between a front-end (that parses the code) and a back-end (that generates the executable code), and those can be crossed (e.g., GCC front-end with LLVM back-end, or Clang front-end with GCC back-end, or EDG front-end with ICC back-end). And then, there are the standard library implementations, which are mainly the GNU implementation (libstdc++), the LLVM implementation (libc++), and the Dinkumware implementation (proprietary; used by Microsoft, IBM, Borland, Comeau, and many others).
A broad-brush classification, in my opinion and limited knowledge, would be this: (X > Y: X is better than Y)
Faster code:
ICC >> GCC > Clang > MSVC
Faster compilation:
MSVC > Clang > ICC > GCC
Wider platform support:
GCC > ICC > Clang >>> MSVC
Debugging tools:
ICC > MSVC >> GCC = Clang
Stability:
ICC > GCC >> Clang > MSVC
Standard compliance:
GCC > …
As the titles says I wanna know what are versions of C++ other than Standard C++ and what are the differents between those version if any!
The first "version" has to be the ARM (around 1990-1991). This wasn't an official, formal standard document, but the later editions were essentially the working draft for the standard in 1998. This pre-98 version of C++ is what is generally referred to as "pre-standard C++". The language changed quite a bit from 1985 to 1998, so it cannot be defined in clear terms what exactly was or wasn't in it. It basically goes a bit like this. The core addition to C from the start was classes, that is, the extension of C-style "struct" to include member functions, inheritance, the private/public/protected scopes, and all those kinds of classic object-oriented programming features, and probably also things like const
, references, function overloading, etc., the things that weren't part of C but were well established in other new languages of the time. Then, during the 90s, the more exciting stuff was introduced, such as exceptions, templates, namespaces, and, of course, the main components of the standard library that we could hardly live without today, like STL containers and algorithms, and the IO-stream library. Another noticeable difference between standard and pre-standard C++ is the fact that pre-standard headers had the .h
extension (as in #include <iostream.h>
or <math.h>
, as opposed to today's <iostream>
and <cmath>
), and the elements (classes and functions) of the standard library were …
There are a few observations that I must make before saying anything else.
First of all, your Control class is a resource-holding class, and should be designed as such. So far, your class is ill-conceived if it is to be holding a resource (i.e., tied to an active window). You have to ask some fundamental questions like: Should the class be copyable? Should the class be moveable? Or neither?
Second, the dynamic polymorphism that you are trying to imbue the Control/Button classes with seems very awkward to me. It seems like a clear case of not observing the golden rule of OOP inheritance: "Inherit, not to reuse, but to be reused". I see two telltale signs: (1) your Control class seems to want to serve two purposes, to be a base-class with some set of virtual functions and to be a provider of some basic functionality as the care-taker of a window-resource; and (2) the inheritance is not oriented towards polymorphic services being provided but rather implementation details being shared (e.g., the "Dispose()" function). The main focus of an OOP inheritance scheme should be to establish (through the base classes) a hierarchical set of functionality provided by the descendants (derived classes); if a few innocuous data members (directly tied to functionality) show up in the base classes, it's no big deal, and the same goes for a few helper (protected) member functions. But these commonly useful data members or member functions should never be the reason for the inheritance …
I will use "malloc" to reserve a space for the ErrLog structure and assign the pointer value same as the char array. Is this ok or just use vector?
If you're going to use dynamically allocated memory to store your error logs, then you'd be better off using an STL container like vector or list. That's for sure. I was assuming you wanted to avoid dynamically allocating memory during the formation of the error log, due to the fact that it creates stochastic and long latency between the formation of the error log and being able to catch it or respond to it. In embedded systems, this is often a requirement when hard-real-time constraints exist. If you don't have such constraints, then you definitely should consider using C++ exceptions which will be far more convenient and faster than that error-log scheme. By creating polymorphic exceptions, the whole problem of determining what you should include in your error-log struct will go away, since you can create custom exception classes for any given situation with whatever data members are needed for that specific situation.
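For illustration, here is a minimal sketch of such a hierarchy (the class names and data members here are hypothetical, just to show the idea of per-situation data):

#include <exception>
#include <string>

class io_error : public std::exception {
  public:
    explicit io_error(const std::string& where) : where_(where) { }
    virtual ~io_error() throw() { }
    virtual const char* what() const throw() { return "I/O error"; }
    const std::string& where() const { return where_; }
  private:
    std::string where_;
};

// A more specific situation simply adds the data members it needs:
class sensor_timeout : public io_error {
  public:
    sensor_timeout(const std::string& where, int channel)
        : io_error(where), channel_(channel) { }
    int channel() const { return channel_; }
  private:
    int channel_;
};

A catch site can then catch an io_error generically, or a sensor_timeout specifically, without any fixed one-size-fits-all error-log struct.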
I have some idea on the member and would like to get some suggestion to add / optimize the data structure :
One thing certainly pops to mind: include a pointer to the next error log. The reason for this is not linked-list-style linking, but that it is often the case that you want to catch an error which causes the broader function to fail and forward a broader error log out for that function, which then causes its caller to fail, etc... In other words, you want to nest the errors such that you can trace back the errors from top to bottom. It is really annoying when an error pops up from the depths of the code without any idea of the stack-trace behind it (from where and for what reason was that deep function called and failed).
So that, every module reporting the error log and store in a data structure directly instead of keep return a false from function.
Since you are posting this in the C++ forum, I must ask why you can't use exceptions? I know that embedded folks are not big fans of C++ exceptions and prefer lighter-weight alternatives (which lead to smaller code and deterministically short latency, but also produce slower code). In any case, the mechanism you are proposing is heavier, less flexible and slower than C++ exceptions, so you might consider using them instead of developing your own custom error reporting mechanism. But, of …
Partitioning / sorting algorithms generally require a strict weak ordering. Meaning that it cannot be symmetric in the sense of less-or-equal on one side and greater-or-equal on the other side, as it is in your pseudo-code, because it would mean that there are "equal" elements that could go on either side. Normally, you partition by less-than on one side and not less-than on the other. And, obviously, in that picture 5 is not less-than the pivot (5), which means that it should go on the "greater-than-or-equal" side of the partitioning. In other words, here is the corrected pseudo-code:
//Partition array from A[p] to A[r] with pivot A[p]
//Result: A[i] ≤ A[j] for p ≤ i ≤ q and q + 1 ≤ j ≤ r
x = A[p]
i = p - 1
j = r + 1
while true // infinite loop, repeat forever
repeat j = j - 1
until !( x < A[j] ) // i.e., stop when A[j] <= x; a strict 'A[j] < x' here could run off the array when the pivot is the minimum
repeat i = i +1
until !( A[i] < x )
if i < j
swap A[i] <-> A[j]
else return j
You can see how the above also has the advantage of relying only on one comparison operator (less-than), which is typical of most implementations. Most generic implementations allow you to specify any strict weak ordering comparison function (e.g., less-than, greater-than, lexicographic ordering (e.g., alphabetic order), etc.), for example, the C++ std::sort function.
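For example, here is a quick sketch with std::sort and a custom strict-weak-ordering comparison (my own toy example: ordering strings by length; note the strict less-than, with no "or equal"):

#include <algorithm>
#include <string>
#include <vector>

bool shorter(const std::string& a, const std::string& b) {
    return a.size() < b.size(); // strict: returns false for "equal" elements
}

int main() {
    std::vector<std::string> v;
    v.push_back("pear");
    v.push_back("apple");
    v.push_back("fig");
    std::sort(v.begin(), v.end(), shorter); // result: fig, pear, apple
    return 0;
}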